2307.02683
The Postdoc Accord in Theoretical High Energy Physics
We present the results of a survey meant to assess the opinion of the high-energy physics theory (HET) community on the January 7th postdoc acceptance deadline - specifically, whether there is a preference to shift the deadline to later in January or February. This survey, which served for information-gathering purpose only, is part of a community conversation on the optimal timing of an acceptance deadline and whether the community would be better served by a later date. In addition, we present an analysis of data from the postdoc Rumor Mill, which gives a picture of the current hiring landscape in the field. We emphasize the importance of preserving a universal deadline, and the current results of our survey show broad support for a shift to a later date. A link to the survey, frequently asked questions, a running list of supporters, and next steps can be found on our companion web page.
Djuna Croon, Patrick J. Fox, Roni Harnik, Simon Knapen, Mariangela Lisanti, Lina Necib, Tien-Tien Yu
2023-07-05T23:02:24Z
http://arxiv.org/abs/2307.02683v1
# The Postdoc Accord in Theoretical High Energy Physics ###### Abstract We present the results of a survey meant to assess the opinion of the high-energy physics theory (HET) community on the January 7th postdoc acceptance deadline--specifically, whether there is a preference to shift the deadline to later in January or February. This survey, which served for information-gathering purpose only, is part of a community conversation on the optimal timing of an acceptance deadline and whether the community would be better served by a later date. In addition, we present an analysis of data from the postdoc Rumor Mill, which gives a picture of the current hiring landscape in the field. We emphasize the importance of preserving a universal deadline, and the current results of our survey show broad support for a shift to a later date. A link to the survey, frequently asked questions, a running list of supporters, and next steps can be found here. ## 1 Introduction Since 2007, the majority of the high-energy physics theory (HET) community has adopted an agreement in which most institutions have pledged to set postdoc acceptance deadlines no earlier than January 7th [1]. This agreement has been critical for establishing a standard of equity and fairness in hiring and recruiting practices, allowing several generations of postdocs to make life and career decisions with the maximum amount of information at their disposal. Through our own experiences, as well as interactions with community members, several challenges associated with the specific choice of January 7th as the deadline have become apparent. Specifically, the weeks between December 25th and January 7th are highly stressful for postdocs still waiting for offers. In much of the world, this timing coincides with the Christmas & New Year holiday period, during which many are away from work. This means that once an applicant receives an offer, they have more difficulty getting in touch with their senior colleagues to solicit advice on their options. These issues are further compounded for applicants who balance family commitments like child-care and elder-care during the holiday season. On the hiring side, the lack of administrative support during this time can make it more difficult to respond promptly as initial offers are being declined, or if applicants have questions which require input from Human Resources (HR). The holiday break, moreover, means that colleagues are often scattered across the globe, making it difficult to find enough time to have thoughtful discussions about the candidates, especially for second and third round offers, which often need to happen on short notice. The January 7th deadline also increasingly conflicts with established deadlines in the astroparticle and cosmology communities, which have a growing overlap with the HET community. The American Astronomical Society's (AAS) policy, adopted in 1988 and later reaffirmed in 2003 and 2006, sets the common acceptance deadline to February 15th [2], which means that astroparticle applicants often have to make important career decisions with incomplete information. Similarly, the mathematics community has also reached an agreement with a deadline of February 6th [3]. To gather a broader picture on how our community at large views these issues, we conducted an online survey. We here report on results of this survey, which collected data from May 3 to June 13, 2023, and briefly comment on next steps. This note is organized as follows. In Sec. 
2, we present the results of an analysis of the high-energy physics (HEP) theory Rumor Mill data, which shows when offers are typically extended, accepted and declined, and for how long candidates typically consider an offer. In Sec. 3, we describe the results of the survey, and then we summarize some of the write-in feedback in Sec. 4. For the latter section, we paraphrase all responses, to ensure individual respondents cannot be identified. We conclude in Sec. 5. The raw survey data, with all identifiable information and write-in comments redacted, is publicly available for others to analyse [4]. ## 2 Postdoc Rumor Mill Data Analysis To understand the current state of affairs, we analysed the data from the postdoc Rumor Mill from the last six years (2018 to 2023) [5]. We found no significant differences between the datasets on a year-by-year basis, even during the COVID era, and therefore group all years together to increase the statistics of the sample. We state up front that this data is inherently incomplete, as not all applicants post to the Rumor Mill, or only post partial information. We point out the instances where this may bias the reading of the results. The distributions of the offers extended and offers accepted are shown in Fig. 1. Most offers are extended about two weeks before January 7th, with a significant second peak on and right after January 7th. As expected, there is a large peak in offers accepted on January 7th. Curiously, there is also a minor but significant peak on January 1st. One must keep in mind, however, that there is often a delay between the acceptance of an offer and it appearing on the Rumor Mill, which introduces a systematic bias. The cumulative distributions reveal that the market converges relatively quickly after January 7th. Specifically, the 90% and 95% quantiles of the "offers accepted" distribution are at 13.6 and 24.8 days after January 7th, respectively. However, it is plausible that applicants receiving and accepting offers well after January 7th are less likely to post to the Rumor Mill. On the other hand, the expected delay between accepting an offer and having it appear on the Rumor Mill would skew the true distribution towards faster convergence. One may also wonder how long applicants typically hold on to those offers they end up declining. This distribution is shown in Fig. 2. This question is however most relevant for _first round_ offers, as the consideration time for later offers is typically not determined by the candidates themselves, but by the time remaining to the January 7th deadline. On the other side of the spectrum, there are institutions who usually extend offers earlier (_e.g._ CERN), and it is expected that those candidates hold on to offers longer as they wait to get a view of their full range of options. To remove these sources of bias, we only consider offers extended in December in Fig. 2. On average, applicants consider the offers they end up declining for 12 days. Naturally, this time window is getting shorter as the offer is extended closer to the January 7th deadline. The right-hand panel of Fig. 2 shows that a number of offers only get declined _after_ January 7th, even if they were extended well in December. We consider it unlikely that all those instances received an extension of the deadline by the institution making the offer. It is more likely that in the majority of those instances the actual posting to the Rumor Mill itself was delayed. 
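The quantile and consideration-time statistics quoted above can be reproduced from a transcription of the Rumor Mill entries with a few lines of code. The following is a minimal sketch, not the analysis script actually used: the record structure, field names, and the three example entries are hypothetical, and real Rumor Mill data would first have to be transcribed from the site.

```python
from datetime import date
from statistics import quantiles

# Hypothetical records transcribed from the Rumor Mill for one hiring cycle:
# the date an offer was extended, the date it was resolved, and the outcome.
offers = [
    {"extended": date(2023, 12, 20), "resolved": date(2024, 1, 7),  "outcome": "accepted"},
    {"extended": date(2023, 12, 12), "resolved": date(2024, 1, 2),  "outcome": "declined"},
    {"extended": date(2024, 1, 9),   "resolved": date(2024, 1, 20), "outcome": "accepted"},
]

DEADLINE = date(2024, 1, 7)   # acceptance deadline of the decision year

# Days after January 7th at which offers were accepted (negative = before the deadline).
accept_lags = sorted((o["resolved"] - DEADLINE).days
                     for o in offers if o["outcome"] == "accepted")

# 90% and 95% quantiles of the acceptance-date distribution, as quoted in Sec. 2.
q = quantiles(accept_lags, n=20, method="inclusive")  # cut points in 5% steps
print("90% quantile:", q[17], "days after Jan 7;  95% quantile:", q[18], "days after Jan 7")

# Consideration time of declined offers, restricted to offers extended in December
# to reduce the biases discussed in the text (early CERN-style offers, late-round offers).
hold_times = [(o["resolved"] - o["extended"]).days
              for o in offers
              if o["outcome"] == "declined" and o["extended"].month == 12]
print("mean consideration time of declined December offers:",
      sum(hold_times) / len(hold_times), "days")
```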
Figure 1: Distributions **(top row)** and cumulative distributions **(bottom row)** of the dates at which offers were extended **(left column)** and accepted **(right column)**, relative to January 7th, for the years 2018 to 2023. The total sample size for the offers is 1906; the total sample size for the accepted offers is 1025. Figure 2: **(left)** Number of days applicants hold on to offers before declining them. **(right)** Correlation between time at which the offer is extended and time at which it is declined. In both plots, only offers extended in the month of December are shown. The sample size for these figures is 326. ## 3 Community Survey To better understand the needs of the community, we conducted a survey through a Google form. The survey was advertised in the following ways:

* Contributed talk at Theory Frontier P5 meeting on May 4th 2023 [6]
* Conference talks: PONT 2023, PLANCK 2023, Pheno 2023
* APS division of Particles and Fields newsletter in May 2023
* Snowmass Theory Frontier mailing list
* BSM PANDEMIC mailing list
* EUCapt mailing list
* UK Cosmo and Hi-Phi (UK HEP) mailing lists
* Lattice mailing lists: [email protected], [email protected]
* Neutrino community: Invisibles mailing list, [email protected], Neutrino theory platform at CERN
* Japanese HEP community mailing list
* Indian HEP community mailing list
* Former signatories of the 2007 accord
* Personal networks and targeted email solicitations.

We are still collecting responses and feedback, and invite any additional suggestions for mailing lists and community outreach. ### Demographics As of June 13, 2023, we have received 588 responses, of whom 530 chose to identify themselves. All the named respondents were verified to be members of the HET community. We can therefore be confident that the survey was not significantly biased by malevolent actors and/or bots. The demographic distribution of the respondents is summarized in Tab. 1 and Fig. 3. For the geographical information, respondents were asked to indicate where they are currently based as opposed to their region of origin. The options provided were "North-America," "Europe (including UK)," "Asia," "Oceania," "Middle East" and "Central and South America," in addition to a write-in option. For subfield, the given options were "Formal Theoretical Physics," "Phenomenology/ Beyond the Standard Model," "QCD (including Lattice, SCET, etc.)," "Cosmology" and "Astrophysics," as well as a write-in option. Respondents were allowed to indicate more than one choice on this question, in which case we included them in all categories they indicated. A fraction chose to provide a write-in answer; these were often slight variations on the categories above. In those instances, we included the respondents in the one or more categories considered closest. Finally, for the seniority category, the respondents were asked to choose between "faculty" and "graduate student or postdoc," with no write-in option. We consider the overall distribution of respondents to be a reasonably good reflection of the community. However, it is possible that some sectors of the community did not receive as much advertising as others and that we are thus missing a systematic effect. To address this concern, the survey will remain open until **July 21st, 2023**, and we hope this arXiv posting (which is cross-listed broadly) will encourage additional responses from anyone who, to date, was unaware of this ongoing exercise. 
Our results will be updated accordingly, and we will post them on a companion web page [4].

| **Category** | **Respondents** |
|---|---|
| Total | 588 |
| Faculty | 384 |
| Grad & Postdoc | 204 |
| North America | 297 |
| Europe | 242 |
| Oceania | 18 |
| Asia | 18 |
| Middle East | 8 |
| Central/South Amer. | 5 |
| Pheno | 331 |
| Formal | 179 |
| QCD | 95 |
| Cosmology | 150 |
| Astrophysics | 99 |
| Other | 6 |

Table 1: Summary of the distribution of survey respondents. Note that respondents could select more than one category for their subfield. The “Other” category includes gravity and quantum information science. Figure 3: Distribution of respondents sorted by several identifiers: **(left)** geographical location, where “other” covers Central and South America, as well as the Middle East; **(middle)** self-identified subfield(s), where “other” covers gravity and quantum information science; and **(right)** seniority. Note that respondents could select more than one subfield.

### Results The respondents were asked to rate four options for possible deadline dates: "January 7 (current deadline)," "Mid-January (around January 15)," "End-of-January (around January 30)," and "February 15 (astronomy deadline)." These options were chosen after extensive informal interaction across the community, which indicated little support for bringing the deadline either before the winter break or into March. The results for this question are shown in Fig. 4. A large majority of respondents indicate that the current January 7th deadline is either "not preferred" (399 respondents, 68%) or "administratively impossible" (44 respondents, 7%). The degree of preference increases for later dates, with the February 15th option being the most preferred. The "End-of-January" choice was, by a small margin, the least disliked; it had the smallest contribution of "not preferred" (83 respondents, 14%) and "administratively impossible" (9 respondents, 1.5%). For the February 15th option, 86 respondents (15%) indicated "not preferred" and 22 respondents (3.7%) chose "administratively impossible." There is no significant difference when breaking the data down according to seniority. These qualitative trends hold true also among the 10% of respondents that chose to remain anonymous, but with milder hierarchies. For the same range of options, Fig. 5 shows the first choice of the respondents for a joint deadline. A majority prefers the mid-February option, followed by the end-of-January and middle-of-January options. The current January 7th deadline is the least preferred among the options proposed, by a rather wide margin. The results are again similar when broken down by seniority. Fig. 6 shows the first-choice option, broken down according to geographic region. The two largest groups, Europe and North America, are broadly consistent with one another. The responses from Asia are more mixed, and the January 7th deadline appears to enjoy more support than it does in Europe and North America. This may be because of the Lunar New Year, which tends to fall at the end of January or mid-February. There appears to be overwhelming support in Oceania to move away from the early January deadline, likely because it falls in their summer break. For both Oceania and Asia, however, the sample size is small, and one should exercise caution in over-interpreting these results. 
For South/Central America (5 respondents) and the Middle East (8 respondents) the sample size was too small to draw any meaningful conclusions. When broken down according to subfield (see Fig. 7), the trends described above broadly hold, with one exception. The support for February 15th is much greater among those working on astroparticle physics and/or cosmology. This is not surprising, given the better match with the astrophysics deadline on February 15th. Figure 4: Level of preference among respondents for a January 7, Mid-January, End-of-January, or February 15 acceptance deadline. Respondents ranked their choice on a scale of: Preferred, Neutral, Not Preferred, and Administratively Impossible. Results are shown for all respondents **(left)**, as well as broken down by faculty **(middle)** and junior (grad/postdoc) **(right)**. Finally, it is important to note that community unity on the acceptance deadline appears to be of critical importance for many. This was already evident from our many private interactions with colleagues before launching the survey, and the survey questions were therefore deliberately and carefully formulated with this concern in mind. To measure this sentiment to some extent, we asked respondents to indicate whether they agree with the statement: _"If a new agreement is drafted, I would prefer it to include a clause stating that the agreement only go into effect after a large fraction of the institutions which signed the original 2007 agreement have signed on."_ An overwhelming majority responded in the affirmative, as shown in Fig. 8, indicating that the _existence_ of a joint deadline outweighs the particular deadline date for a large fraction of the community. Figure 5: Respondents’ first choice for the postdoc acceptance deadline, among the four choices proposed: January 7, Mid-January, End-of-January, and February 15. Results are shown for all respondents **(left)** as well as broken down by faculty **(middle)** and junior (grad/postdoc) **(right)**. Figure 6: First choice for the acceptance deadline, among the options proposed, broken down by geography and seniority. The geographic regions correspond to North America (N Amer.), Europe, Asia, and Oceania. We caution the reader that the number of respondents in Asia and Oceania is less than a tenth of that in North America and Europe. Results for the Middle East and Central/South America are not shown given the very small number of respondents from those regions. Figure 7: First choice for the acceptance deadline, broken down by subfield and seniority. See Tab. 1 for the number of respondents included in each subfield. ## 4 Common Concerns We received a total of 138 write-in comments, which we categorised broadly in the following manner:

* Concerns related to the time it takes for visas/work permits to be issued
* Dates of fellowship decisions
* Alignment with other fields/subfields
* Concerns about CERN
* Holidays in early January
* Concern about long gap between application deadlines and decision deadline
* The need to have "best practices"
* School/child care registration concerns
* Making sure the deadline does not fall on a weekend
* General statement of support

We note that not all of these categories are decisively positive or negative; for example, we received comments in support of a later deadline because of Epiphany on January 6th, and comments against it because of Lunar New Year. 
We summarise some of the particular issues brought up, without attempting to draw a conclusion from them. **Visa Issues:** We received 13 comments about various concerns related to visa applications. Particular examples that were highlighted were the H1B visa process in the US and non-EU visa processes in Europe. For example, two respondents deemed it impossible to obtain a work-permit for non-EU citizens after February 15th in time for a September/October start, due to Belgian immigration rules. In addition, some comments expressed the concern that postponing the deadline could result in potential disadvantages for vulnerable groups. Ensuring equal opportunities for candidates from countries without visa waivers is of utmost importance. Moreover, it is crucial to consider the situation of international applicants who do not receive offers in the initial round and consequently face a waiting period before commencing the offer/negotiation process. Figure 8: Respondents’ opinion as to whether a clause should be included in any new accord that requires the agreement to only go into effect after a large fraction of the original institutions that signed the 2007 accord have signed the new one. Responses are divided by seniority. **Schooling and Family Issues:** We received 5 comments about issues related to schooling of postdocs' children and other family-related issues. It was emphasized that deadlines should be arranged in a way that allows recipients sufficient time to discuss their options with their families and advisors. One important reason for this is the challenge associated with finding daycare spots for young children. Another highlighted issue is the proximity of the public-school enrollment deadline to the postdoc acceptance deadline. Moreover, many school districts require families to secure housing before allowing enrollment. A further reason for this is that partners of applicants also require adequate time to find employment in the new location. As one respondent said, the overarching focus of any postdoc agreement should be on providing flexibility and options to recipients, taking into consideration the profound impact of deadlines on their personal, professional, and family lives. **Holiday Schedules:** We received 22 comments generally related to how the postdoc hiring season overlaps with the holiday schedule, making it hard for faculty to communicate with administrators, postdocs to communicate with mentors, and everyone to be able to have time with their family. School and daycare closures make working over the holidays difficult for people with caring responsibilities. It was also noted that the schedule for holidays over the December-February period is very location dependent. For instance, countries where Epiphany is a publicly celebrated holiday are still on holiday until January 7th, with staff often absent for a little longer. Institutions in the Southern hemisphere are out-of-sync with their Northern hemisphere counterparts. Additionally, their long summer break spans December and January. This issue is further complicated by those holiday schedules that are based on the lunar calendar and shift around annually. It was also noted by some respondents that _application_ deadlines should not be shifted back by any substantial amount. A related issue that was brought up in 2 comments is that a fixed date for the deadline occasionally falls on a weekend. 
This problem could be removed by defining the deadline not as _e.g._ January 30th but instead as, _e.g._, the last Wednesday in January. However, even this has complications since, for instance, the third Monday in February is a holiday (Presidents' Day) in the US. **Concern about CERN:** We received 4 comments about CERN offers, which typically go out early. A few participants expressed the belief that CERN also imposes a response deadline prior to January 7th, thereby not abiding by the existing accord. This is incorrect, as CERN has always respected the accord. Moreover, the following statement was offered by Gian Giudice on behalf of CERN: "CERN has an early application deadline for theory postdocs because the selection process requires a first stage done at the individual national level. For this reason, CERN has been making postdoc offers as early as November. However, CERN has _always_ respected the January 7th response deadline, and intends to respect any new deadline that will be agreed upon by the physics community. CERN welcomes the proposal to shift the January 7th deadline to a later date. In order to adapt better to such changes, CERN is considering moving its current application deadline to a later date, closer to those of other institutions." **Best Practices:** We had 6 comments advocating for the development of a set of "best practices" for postdoc hiring in the field. These included a _minimum_ amount of time that a candidate should be allowed between an offer and a deadline. This would be especially important for offers made after the deadline in the postdoc accord. Suggestions of 1 or 2 weeks were made. It was also suggested that applicants should be strongly encouraged to never hold more than 2 or 3 offers simultaneously, for any extended period, and that postdoc advisors/mentors should encourage this behavior. Some respondents furthermore emphasized that Academic Jobs Online (AJO) [7] provides a central application site and that it would be helpful for applicants as well as letter writers if all institutions utilized it, rather than using their own bespoke interfaces. Finally, it was pointed out that while interviewing candidates to determine how well-matched they are to an institution is beneficial, candidates should not be asked to make a commitment before receiving an offer. **Fellowship Outcomes:** A deadline for regular postdoc decisions after the deadlines associated with prestigious fellowships (_e.g._ Marie Sklodowska-Curie Actions) allows applicants with outstanding applications for these fellowships to make decisions with full knowledge of their options. It is also beneficial to institutions since it removes the risk of their position being vacated shortly after acceptance. This may be of particular importance if the fellowship is cross-disciplinary and the leading HET candidate is not selected because they have accepted a non-fellowship position. Tab. 2 contains a list of multi-disciplinary fellowships known to us, with their respective deadlines. We will keep an updated list of multidisciplinary fellowships on our web page [4] and invite the community to submit such fellowships to us at [email protected]. **Alignment:** There were 20 comments about alignment with other deadlines. 
Several respondents identified the need for alignment between postdoc application deadlines and the timing of permanent job/research assistant professor/fellowship searches to prevent star candidates from turning down multiple postdoc offers for permanent positions. Others focused on the coordination of deadlines between fields, for instance for interdisciplinary applicants working in both particle physics and either astrophysics or mathematics. Maximising alignment favors the deadlines in February over those in January.

| Fellowship | Acceptance deadline |
|---|---|
| Berkeley Miller | Typically the third week of January |
| Harvard Society of Junior Fellows | End of January |
| Marie Sklodowska-Curie Actions | Offers made mid/late February |
| MIT Pappalardo | Currently January 7th |
| NASA Hubble | February 15th |
| Princeton PCTS | No official acceptance deadline; informal encouragement to respond by early January |
| UC Presidential and Chancellor | Offers at the end of February |
| Los Alamos fellowship | Offers early February |

Table 2: List of special fellowships, known to us, and their deadlines. The current January 7th deadline for the Pappalardo fellowship is specifically set to align with the existing HET deadline.

## 5 Conclusion We conducted a community survey regarding the desirability of January 7th as the common postdoc offer acceptance deadline. We will continue to collect responses on our survey until **July 21st, 2023** and will update this note if we observe a significant shift in the results. As of June 13, 2023, 588 physicists have participated in the survey, broadly representing the various subfields, geographical areas and levels of seniority present in the HET community. The January 7th deadline is broadly disliked, and later options such as end-of-January or mid-February gather a significantly greater degree of support. In addition, the later options are in line with the established deadlines of the mathematics and astrophysics communities. Community unity on a common deadline is widely considered to be very important, both among respondents to the survey and from private conversations with colleagues. Moreover, a number of important concerns were brought up, both with respect to the existing January 7th deadline as well as with potential alternatives. We believe that the results of this survey, as they are in July 2023, indicate that a revision of the January 7th common deadline is favored by the community, provided that most of the concerns raised can be addressed adequately, keeping in mind that no choice, including the status quo, can alleviate all concerns perfectly. In the next few months, we therefore aim to draft an updated accord, compile answers to frequently asked questions, and assemble a list of "best practices" for the postdoc hiring process, which will be independent of the accord itself. Our aim is to have a new accord completed and ratified before the Fall 2023 application cycle. These will appear on the following webpage: [https://het-postdoc-accord.github.io/information/](https://het-postdoc-accord.github.io/information/) The website contains frequently asked questions and a periodically-updated list of supporters. In the meantime, we welcome further responses to the survey until **July 21st, 2023** from those who have not yet responded. We also continue to welcome further comments, concerns, and suggestions, which can be sent to [email protected]. 
## Acknowledgments We are grateful to Felix Yu for providing us with the raw data from the Postdoc Rumor Mill. We thank the many colleagues who gave us informal feedback on the deadline issue itself, how to proceed prudently and on the design of the survey. We are particularly grateful to Robert McGehee, Nathaniel Craig, Gian Giudice, Joachim Kopp, Rachel Houtz, and Zhen Liu for providing exceptionally detailed input and/or for bringing additional subtleties to our attention.
2308.12215
The Challenges of Machine Learning for Trust and Safety: A Case Study on Misinformation Detection
We examine the disconnect between scholarship and practice in applying machine learning to trust and safety problems, using misinformation detection as a case study. We survey literature on automated detection of misinformation across a corpus of 248 well-cited papers in the field. We then examine subsets of papers for data and code availability, design missteps, reproducibility, and generalizability. Our paper corpus includes published work in security, natural language processing, and computational social science. Across these disparate disciplines, we identify common errors in dataset and method design. In general, detection tasks are often meaningfully distinct from the challenges that online services actually face. Datasets and model evaluation are often non-representative of real-world contexts, and evaluation frequently is not independent of model training. We demonstrate the limitations of current detection methods in a series of three representative replication studies. Based on the results of these analyses and our literature survey, we conclude that the current state-of-the-art in fully-automated misinformation detection has limited efficacy in detecting human-generated misinformation. We offer recommendations for evaluating applications of machine learning to trust and safety problems and recommend future directions for research.
Madelyne Xiao, Jonathan Mayer
2023-08-23T15:52:20Z
http://arxiv.org/abs/2308.12215v3
# [SoK] The Challenges of Machine Learning for Trust and Safety: A Case Study on Misinformation Detection ###### Abstract We examine the disconnect between scholarship and practice in applying machine learning to trust and safety problems, using misinformation detection as a case study. We systematize literature on automated detection of misinformation across a corpus of 270 well-cited papers in the field. We then examine subsets of papers for data and code availability, design missteps, reproducibility, and generalizability. We find significant shortcomings in the literature that call into question claimed performance and practicality. Detection tasks are often meaningfully distinct from the challenges that online services actually face. Datasets and model evaluation are often non-representative of real-world contexts, and evaluation frequently is not independent of model training. Data and code availability is poor. Models do not generalize well to out-of-domain data. Based on these results, we offer recommendations for evaluating machine learning applications to trust and safety problems. Our aim is for future work to avoid the pitfalls that we identify. ## 1 Introduction Online services face a daunting task: There is an unceasing deluge of user-generated content, on the order of hundreds of thousands of posts per minute on certain well-known social media platforms [259]. Some of that content is false, hateful, harassing, extremist, or otherwise problematic. How can platforms reliably and proactively identify these "trust and safety" issues? Machine learning has proven an attractive approach in the academic literature, leading to large bodies of scholarship on misinformation detection, toxic speech classification, and other core trust and safety challenges [146, 341, 342]. The conceptual appeal of machine learning is that it could address the massive scale of user-generated content on large platforms and the capacity constraints of small platforms. Recent work claims impressive performance statistics, and that near-term deployment is practical. As a case in point, OpenAI recently announced that it was deploying a content moderation system based on GPT-4 to detect problematic content online; according to the company, performance on detection tasks exceeds that of human moderators with "modest training." Even so, OpenAI's head of safety systems admitted that human advisors would still be necessary to "adjudicate borderline cases" [339]. This accords with our finding that, in practice, trust and safety functions at online services remain largely manual--driven by user reports and carried out by human moderators. Additionally, thresholds (e.g., detection precision or recall) for determining where and when automated processes require human intervention are underspecified [336, 337]. In this work, we investigate the significant disconnect between scholarship and practice in applications of machine learning to trust and safety problems. Our project is inspired by recent research that has identified shortcomings in machine learning applications for many problem domains, including information security [315, 320]. We use misinformation detection as a case study for trust and safety problems, because the topic has recently generated a rich literature with diverse methods and claimed successes. Misinformation detection also has substantive complexities that are common for trust and safety problems: linguistic and cultural nuance, sensitivity to context, and rapidly evolving circumstances. 
Evaluating responses to misinformation is societally urgent. The veracity of published news is vitally important to democratic processes, public health, and press integrity. This was particularly evident in 2016, during the U.S. presidential election, and again in 2020, in the first year of the COVID-19 pandemic. During those years, false and misleading information was rampant on well-known social media platforms [287, 316, 317, 319]. In response to public outcry over coordinated misinformation campaigns during the 2016 presidential election, Twitter and Facebook announced new measures in 2017 to counteract the spread of misinformation, including automated misinformation detection methods that would surface potentially false rumors for further vetting by third-party fact-checking organizations [323, 324, 325]. These machine learning approaches do not appear to have been successful. Leaked Facebook documents from the time of the COVID-19 pandemic revealed that the platform's "ability to detect VH [vaccine hesitancy] comments is bad in English--and basically non-existent elsewhere" [336]. Employees acknowledged that automated interventions--including machine learning classifiers--were grossly under-equipped to counter vaccine-related misinformation. Meanwhile, in the academic literature, publications routinely report optimistic performance figures for machine learning models to detect misinformation. In the literature review that we conduct for this work, among publications
2310.07334
Multi-Scale Dynamics of the Interaction Between Waves and Mean Flows: From Nonlinear WKB Theory to Gravity-Wave Parameterizations in Weather and Climate Models
The interaction between small-scale waves and a larger-scale flow can be described by a multi-scale theory that forms the basis for a new class of parameterizations of subgrid-scale gravity waves (GW) in weather and climate models. The development of this theory is reviewed here. It applies to all interesting regimes of atmospheric stratification, i.e. also to moderately strong stratification as occurring in the middle atmosphere, and thereby extends classic assumptions for the derivation of quasi-geostrophic theory. At strong wave amplitudes a fully nonlinear theory arises that is complemented by a quasilinear theory for weak GW amplitudes. The latter allows the extension to a spectral description that forms the basis of numerical implementations that avoid instabilities due to caustics, e.g. from GW reflection. Conservation properties are discussed, for energy and potential vorticity, as well as conditions under which a GW impact on the larger-scale flow is possible. The numerical implementation of the theory for GW parameterizations in atmospheric models is described, and the consequences of the approach are discussed, as compared to classic GW parameterizations. Although more costly than the latter, it exhibits significantly enhanced realism, while being considerably more efficient than an approach where all relevant GWs are to be resolved. The reported theory and its implementation might be of interest also for the efficient and conceptually insightful description of other wave-mean interactions, including those where the formation of caustics presents a special challenge.
Ulrich Achatz, Young-Ha Kim, Georg Sebastian Voelker
2023-10-11T09:24:52Z
http://arxiv.org/abs/2310.07334v1
# Multi-Scale Dynamics of the Interaction Between Waves and Mean Flows: From Nonlinear WKB Theory to Gravity-Wave Parameterizations in Weather and Climate Models ###### Abstract The interaction between small-scale waves and a larger-scale flow can be described by a multi-scale theory that forms the basis for a new class of parameterizations of subgrid-scale gravity waves (GW) in weather and climate models. The development of this theory is reviewed here. It applies to all interesting regimes of atmospheric stratification, i.e. also to moderately strong stratification as occurring in the middle atmosphere, and thereby extends classic assumptions for the derivation of quasi-geostrophic theory. At strong wave amplitudes a fully nonlinear theory arises that is complemented by a quasilinear theory for weak GW amplitudes. The latter allows the extension to a spectral description that forms the basis of numerical implementations that avoid instabilities due to caustics, e.g. from GW reflection. Conservation properties are discussed, for energy and potential vorticity, as well as conditions under which a GW impact on the larger-scale flow is possible. The numerical implementation of the theory for GW parameterizations in atmospheric models is described, and the consequences of the approach are discussed, as compared to classic GW parameterizations. Although more costly than the latter, it exhibits significantly enhanced realism, while being considerably more efficient than an approach where all relevant GWs are to be resolved. The reported theory and its implementation might be of interest also for the efficient and conceptually insightful description of other wave-mean interactions, including those where the formation of caustics presents a special challenge. ## I Introduction With horizontal and vertical wavelengths down to at most 1km and 100m respectively, mesoscale atmospheric waves such as internal gravity waves (GWs) will not all be simulated explicitly by operational climate models within the foreseeable future. However, without taking their influence into account, climate models miss essential circulation aspects even on the planetary scale [1; 2; 3]. Hence they must be parameterized, requiring a solid theory for the interaction between the waves and a mean flow. Corresponding studies have led in recent years to the emergence of a new class of GW parameterizations (GWP). The present review gives an overview of these developments, from theoretical investigations to the implementation of a new GWP into a state-of-the-art climate model. For this purpose we first review the dynamics of large-amplitude, locally monochromatic GWs, then discuss spectra of weak-amplitude GWs, next describe the GW impact in general on the mean flow, touch on conservation properties, and finally sketch the resulting numerical developments. ## II Large-amplitude locally monochromatic waves Fundamental studies of the interaction between GWs and the mean flow have been performed by [4; 5; 6; 7] and have more recently been extended by [8; 9; 10] to consider a wider range of atmospheric stratification, higher GW harmonics, and to give a deepened account of the conditions for GW impacts on the mean flow. Whatever the GW amplitudes, we consider GWs in interaction with a synoptic-scale flow in a hydrostatic reference atmosphere, with profiles \(\overline{\theta}(z),\overline{\rho}(z),\overline{\pi}(z)\) of potential temperature, density, and Exner pressure, respectively, that only depend on altitude \(z\). We also assume an \(f\)-plane with a constant Coriolis parameter. 
One can assume that the ratio between Coriolis frequency \(f\) and Brunt-Väisälä frequency \(N=\sqrt{(g/\overline{\theta})d_{z}\overline{\theta}}\), with \(g\) the gravitational acceleration, is \(f/N=\mathcal{O}[\epsilon^{(5-\alpha)/2}]\), where \(\alpha=0,1\) denotes moderately strong or weak stratification, and \(\epsilon=\mathcal{O}(1/10)\) is a small parameter. Observations [11; 12] indicate that, in the spectrum of GWs, most energy is carried by the waves that are in scale just below the synoptic scale, which we assume to be resolved by climate models of interest. A careful analysis then shows that, with \(T_{00}\) a typical atmospheric temperature, representative horizontal and vertical length scales and a representative time scale of such GWs are \[(L_{w},H_{w})=\left[\epsilon^{(2+\alpha)/2},\epsilon^{7/2}\right]\sqrt{RT_{00}}/f\qquad T_{w}=1/f \tag{1}\] whence the wind scales are obtained in the large-amplitude case as advective wind scales \((U_{w},W_{w})=(L_{w},H_{w})/T_{w}\). Waves of such amplitude are close to locally inducing static instability, i.e. negative vertical derivatives of total potential temperature, which would cause them to break. The corresponding mean-flow synoptic scales are \((L_{s},H_{s},T_{s})=(L_{w},H_{w},T_{w})/\epsilon\), i.e. \(\epsilon\) is our scale-separation parameter. Note that the synoptic-scale wind scales are \((U_{s},W_{s})=(U_{w},W_{w})\) so that \(U_{s}/(L_{s}f)=\epsilon\), i.e. \(\epsilon\) also is the Rossby number of the synoptic-scale flow [9; 10]. The next step is to use the wave wind, length, and time scales, and \(T_{00}\), to non-dimensionalize the equations of motion \[D_{t}{\bf u}+f{\bf e}_{z}\times{\bf u} = -c_{p}\theta\nabla_{h}\pi \tag{2}\] \[D_{t}w = -c_{p}\theta\partial_{z}\pi-g\] (3) \[D_{t}\theta = 0\] (4) \[D_{t}\pi+\frac{R}{c_{V}}\pi\nabla\cdot{\bf v} = 0 \tag{5}\] leading to \[\varepsilon^{2+\alpha}\left(D_{t}{\bf u}+f_{0}{\bf e}_{z}\times{\bf u}\right) = -\frac{c_{p}}{R}\theta\nabla_{h}\pi \tag{6}\] \[\varepsilon^{7}D_{t}w = -\frac{c_{p}}{R}\theta\partial_{z}\pi-\epsilon\] (7) \[D_{t}\theta = 0\] (8) \[D_{t}\pi+\frac{R}{c_{V}}\pi\nabla\cdot{\bf v} = 0 \tag{9}\] where \(D_{t}=\partial_{t}+{\bf v}\cdot\nabla\) indicates the material derivative, \({\bf u}\) and \(w\) are the horizontal and vertical components of the total wind \({\bf v}\), respectively. \(c_{p}\) and \(c_{V}=c_{p}-R\) are the specific heat capacities at constant pressure and volume, respectively, with \(R\) the ideal gas constant of dry air. \(f_{0}=1\) is a non-dimensional placeholder for the Coriolis frequency. 
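As a quick numerical illustration of Eq. (1), the scales can be evaluated for representative mid-latitude values. The sketch below is not from the review itself; the parameter values \(T_{00}=300\) K, \(f=10^{-4}\) s\(^{-1}\), \(R=287\) J kg\(^{-1}\) K\(^{-1}\) and \(\epsilon=1/10\) are assumptions chosen for illustration.

```python
import math

# Representative mid-latitude values (illustrative assumptions).
R = 287.0       # J kg^-1 K^-1, ideal gas constant of dry air
T00 = 300.0     # K, typical atmospheric temperature
f = 1.0e-4      # s^-1, Coriolis parameter
eps = 0.1       # scale-separation parameter, eps = O(1/10)

for alpha, label in [(0, "moderately strong stratification"), (1, "weak stratification")]:
    L_w = eps ** ((2 + alpha) / 2) * math.sqrt(R * T00) / f   # horizontal wave scale, Eq. (1)
    H_w = eps ** 3.5 * math.sqrt(R * T00) / f                 # vertical wave scale, Eq. (1)
    T_w = 1.0 / f                                             # wave time scale, Eq. (1)
    U_w, W_w = L_w / T_w, H_w / T_w                           # advective wind scales
    print(f"alpha={alpha} ({label}):")
    print(f"  L_w ~ {L_w/1e3:6.1f} km, H_w ~ {H_w:6.0f} m, T_w ~ {T_w/3600:4.1f} h,"
          f" U_w ~ {U_w:4.1f} m/s, W_w ~ {W_w*100:4.1f} cm/s")
    # Synoptic scales are a factor 1/eps larger (and the synoptic Rossby number is eps).
    print(f"  L_s ~ {L_w/eps/1e3:6.0f} km, H_s ~ {H_w/eps/1e3:4.1f} km, T_s ~ {T_w/eps/3600:5.1f} h")
```

With these assumed numbers the \(\alpha=0\) case gives horizontal and vertical wave scales of roughly 300 km and 1 km and a wave time scale of a few hours, i.e. waves just below the synoptic scale, consistent with the discussion above.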
We then introduce slow variables \(({\bf X},T)=\varepsilon({\bf x},t)\), and insert into the non-dimensional equations the WKB expansions for a superposition of a hydrostatic reference atmosphere, a synoptic-scale flow, and a wave field with its higher harmonics, \[{\bf v} = \sum_{j=0}^{\infty}\varepsilon^{j}{\bf V}_{0}^{(j)}({\bf X},T) \tag{10}\] \[+\Re\sum_{\beta=1}^{\infty}\sum_{j=0}^{\infty}\varepsilon^{j}{\bf V}_{\beta}^{(j)}({\bf X},T)e^{i\beta\phi({\bf X},T)/\varepsilon}\] \[\theta = \sum_{j=0}^{\alpha}\varepsilon^{j}\overline{\Theta}^{(j)}(Z)+\varepsilon^{1+\alpha}\sum_{j=0}^{\infty}\varepsilon^{j}\Theta_{0}^{(j)}({\bf X},T)\] (11) \[+\varepsilon^{1+\alpha}\Re\sum_{\beta=1}^{\infty}\sum_{j=0}^{\infty}\varepsilon^{j}\Theta_{\beta}^{(j)}({\bf X},T)e^{i\beta\phi({\bf X},T)/\varepsilon}\] \[\pi = \sum_{j=0}^{\alpha}\varepsilon^{j}\overline{\Pi}^{(j)}(Z)+\varepsilon^{1+\alpha}\sum_{j=0}^{\infty}\varepsilon^{j}\Pi_{0}^{(j)}({\bf X},T)\] (12) \[+\varepsilon^{2+\alpha}\Re\sum_{\beta=1}^{\infty}\sum_{j=0}^{\infty}\varepsilon^{j}\Pi_{\beta}^{(j)}({\bf X},T)e^{i\beta\phi({\bf X},T)/\varepsilon}\] where \(\overline{\Theta}^{(j)}\) and \(\overline{\Pi}^{(j)}\) are due to the reference atmosphere, all terms proportional to the phase factors \(\exp i\beta\phi/\varepsilon\) are contributions from the wave (subscript \(\beta=1\) for the basic wave, and \(\beta\geq 2\) for its \(\beta\)th higher harmonic), and the rest constitutes the synoptic-scale part (subscript 0). Both the wave amplitudes and the synoptic-scale flow are only slowly varying in space and time, as are the local wavenumbers \(\beta{\bf k}=\beta\nabla\phi/\varepsilon=\beta\nabla_{\bf X}\phi=\beta\left({\bf e}_{x}\partial_{X}+{\bf e}_{y}\partial_{Y}+{\bf e}_{z}\partial_{Z}\right)\phi\) and frequencies \(\beta\omega=-\beta\partial_{t}\phi/\varepsilon=-\beta\partial_{T}\phi\). The leading orders of the expansions follow from a dimensional analysis of the basic equations. For the synoptic-scale flow they agree in the weak-stratification case \(\alpha=1\) with standard quasigeostrophic scaling [e.g. 13]. Inserting (10) - (12) into (6) - (9) and sorting by terms with equal powers in \(\varepsilon\) and in the phase factor \(\exp i\phi/\varepsilon\) one finds from the leading orders in \(\varepsilon\) that the mean flow is to leading order horizontal, and that it is in hydrostatic and geostrophic balance, i.e. \[W_{0}^{(0)} = 0 \tag{13}\] \[\partial_{Z}\Pi_{0}^{(0)} = \frac{R/c_{p}}{\overline{\Theta}^{(0)}}\left[B_{0}^{(0)}-\alpha(\overline{\Theta}^{(\alpha)}/\overline{\Theta}^{(0)})^{2}\right]\] (14) \[f_{0}{\bf e}_{z}\times{\bf U}_{0}^{(0)} = -\frac{c_{p}}{R}\overline{\Theta}^{(0)}\nabla_{{\bf X},h}\Pi_{0}^{(0)} \tag{15}\] with \(B_{0}^{(0)}=\Theta_{0}^{(0)}/\overline{\Theta}^{(0)}\) a non-dimensional synoptic-scale buoyancy, and \(\nabla_{{\bf X},h}={\bf e}_{x}\partial_{X}+{\bf e}_{y}\partial_{Y}\) the horizontal gradient operator in the slow spatial variables. The leading-order results for the wave field are that frequency and wave number satisfy a dispersion relation \(\omega=\Omega({\bf k},{\bf X},T)\) for either geostrophic modes or gravity waves, i.e. \[\Omega({\bf k},{\bf X},T)=\left\{\begin{array}{ll}{\bf k}\cdot{\bf U}_{0}^{(0)},&\mbox{GM}\\ {\bf k}\cdot{\bf U}_{0}^{(0)}\pm\sqrt{\frac{N_{0}^{2}k_{h}^{2}+f_{0}^{2}m^{2}}{\varepsilon^{4}k_{h}^{2}+m^{2}}},&\mbox{GW}\end{array}\right. 
\tag{16}\] where the dependence on \({\bf X}\) and \(T\) enters via that of the mean-flow leading-order horizontal wind \({\bf U}_{0}^{(0)}\) and the non-dimensional Brunt-Väisälä frequency \(N_{0}^{2}=d_{Z}\overline{\Theta}^{(\alpha)}/\overline{\Theta}^{(0)}\). The components of the wavenumber vector are defined so that \({\bf k}=k{\bf e}_{x}+l{\bf e}_{y}+m{\bf e}_{z}\), and \(k_{h}=\sqrt{k^{2}+l^{2}}\) is the absolute magnitude of the horizontal wavenumber \({\bf k}_{h}=k{\bf e}_{x}+l{\bf e}_{y}\). The dispersion relations entail the eikonal equations \[\left(\partial_{T}+{\bf c}_{g}\cdot\nabla_{\bf X}\right)\omega = \partial_{T}\Omega \tag{17}\] \[\left(\partial_{T}+{\bf c}_{g}\cdot\nabla_{\bf X}\right){\bf k} = -\nabla_{\bf X}\Omega \tag{18}\] for the development of wave number and frequency, with \({\bf c}_{g}=\nabla_{\bf k}\Omega=\left({\bf e}_{x}\partial_{k}+{\bf e}_{y}\partial_{l}+{\bf e}_{z}\partial_{m}\right)\Omega\) the local group velocity. The wave amplitudes satisfy the polarization relations \[{\bf U}_{\beta}^{(0)} = \frac{\beta^{2}{\bf k}_{h}\hat{\omega}-if_{0}{\bf e}_{z}\times\beta{\bf k}_{h}}{\beta^{2}\hat{\omega}^{2}-f_{0}^{2}} \tag{19}\] \[\times\left(1-\varepsilon^{4}\frac{\beta^{2}\hat{\omega}^{2}}{N_{0}^{2}}\right)\frac{B_{\beta}^{(0)}}{i\beta m}\] \[W_{\beta}^{(0)} = \frac{i\beta\hat{\omega}}{N_{0}^{2}}B_{\beta}^{(0)}\] (20) \[\frac{c_{p}}{R}\overline{\Theta}^{(0)}\Pi_{\beta}^{(0)} = \left(1-\varepsilon^{4}\frac{\beta^{2}\hat{\omega}^{2}}{N_{0}^{2}}\right)\frac{B_{\beta}^{(0)}}{i\beta m} \tag{21}\] with \(\hat{\omega}=\omega-{\bf k}\cdot{\bf U}_{0}^{(0)}\) the intrinsic frequency. One also finds that for GWs \(B_{\beta}^{(0)}=0\) if \(\beta>1\), and only for \(j>0\) one can have \(B_{\beta}^{(j)}\neq 0\) if \(\beta>1\). Hence GW higher harmonics do not contribute to the leading order, but potentially only to the next-to-leading order. This is not the case for the GM higher harmonics, but in the following we ignore this option and only consider wave fields without GM contribution. A central result from the next-order terms in \(\epsilon\) is the wave-action conservation equation \[\partial_{T}\mathcal{A}+\nabla_{\mathbf{X}}\cdot({\bf c}_{g}\mathcal{A})=0 \tag{22}\] for the GW wave-action density \(\mathcal{A}=E_{gw}/\hat{\omega}\), with \[E_{gw}=\frac{\overline{R}^{(0)}}{2}\left(\frac{|\mathbf{U}_{1}^{(0)}|^{2}}{2}+\epsilon^{4}\frac{|W_{1}^{(0)}|^{2}}{2}+\frac{1}{N_{0}^{2}}\frac{|B_{1}^{(0)}|^{2}}{2}\right) \tag{23}\] the GW energy density, where \(\overline{R}^{(0)}(Z)\) is the leading-order reference-atmosphere density, decaying strongly from the ground to higher altitudes. The derivation of this result uses the geostrophic and hydrostatic equilibrium (14) - (15) of the synoptic-scale flow. 
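As a concrete illustration of the GW branch of the dispersion relation and of the group velocity that advects wave properties in the eikonal equations (17) - (18), the following minimal sketch evaluates both in dimensional variables (the form given later in Eq. (60)), obtaining \({\bf c}_{g}=\nabla_{\bf k}\Omega\) by centred finite differences. The values of \(N\), \(f\), the background wind, and the wavenumbers are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Illustrative mid-latitude parameters and wavenumbers (assumptions, not from the text).
N, f = 2.0e-2, 1.0e-4          # Brunt-Vaisala and Coriolis frequencies [1/s]
U, V = 10.0, 0.0               # synoptic-scale horizontal wind [m/s]
k, l, m = 2*np.pi/50e3, 0.0, -2*np.pi/3e3   # wavenumbers [1/m]

def omega(k, l, m, branch=+1):
    """GW branch of the dispersion relation, Omega = k_h.U +/- omega_hat (cf. Eq. (60))."""
    kh2 = k**2 + l**2
    omega_hat = np.sqrt((N**2 * kh2 + f**2 * m**2) / (kh2 + m**2))
    return k*U + l*V + branch*omega_hat

def group_velocity(k, l, m, branch=+1, d=1e-9):
    """Group velocity c_g = (dOmega/dk, dOmega/dl, dOmega/dm), here by centred differences."""
    return np.array([(omega(k+d, l, m, branch) - omega(k-d, l, m, branch)) / (2*d),
                     (omega(k, l+d, m, branch) - omega(k, l-d, m, branch)) / (2*d),
                     (omega(k, l, m+d, branch) - omega(k, l, m-d, branch)) / (2*d)])

cg = group_velocity(k, l, m)
omega_hat = omega(k, l, m) - k*U - l*V
print(f"intrinsic frequency omega_hat = {omega_hat:.2e} 1/s  (f = {f:.1e}, N = {N:.1e})")
print("group velocity (cgx, cgy, cgz) =", cg, "m/s")
```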
For the leading-order contributions to the synoptic-scale flow one obtains the prognostic equations \[(1-\alpha)\left(\partial_{T}+\mathbf{U}_{0}^{(0)}\cdot\nabla_{\mathbf{X},h}\right)\Pi_{0}^{(0)}\] \[+\frac{R}{c_{V}}\frac{\overline{\Pi}^{(0)}}{\overline{P}^{(0)}}\nabla_{\mathbf{X}}\cdot\left(\overline{P}^{(0)}\mathbf{V}_{0}^{(1)}\right)=0 \tag{24}\] \[\left(\partial_{T}+\mathbf{U}_{0}^{(0)}\cdot\nabla_{\mathbf{X},h}\right)\Theta_{0}^{(0)}+W_{0}^{(1)}\overline{\Theta}^{(0)}N_{0}^{2}\] \[=-\frac{1}{\overline{R}^{(0)}}\nabla_{\mathbf{X},h}\cdot\mathbf{T}\] (25) \[\left(\partial_{T}+\mathbf{U}_{0}^{(0)}\cdot\nabla_{\mathbf{X},h}\right)\mathbf{U}_{0}^{(0)}+f_{0}\mathbf{e}_{z}\times\mathbf{U}_{0}^{(1)}\] \[=-\frac{c_{p}}{R}\overline{\Theta}^{(0)}\nabla_{\mathbf{X},h}\Pi_{0}^{(1)}\] \[\quad-\frac{c_{p}}{R}\left[\alpha\overline{\Theta}^{(\alpha)}+(1-\alpha)\Theta_{0}^{(0)}\right]\nabla_{\mathbf{X},h}\Pi_{0}^{(0)}\] \[\quad-\frac{1}{\overline{R}^{(0)}}\nabla_{\mathbf{X}}\cdot\mathbf{M}+\frac{1-\alpha}{\overline{P}^{(0)}}f_{0}\mathbf{e}_{z}\times\mathbf{T} \tag{26}\] where \(\overline{P}^{(0)}=\overline{R}^{(0)}\overline{\Theta}^{(0)}\) is the leading-order mass-weighted potential temperature of the reference atmosphere. While there is no direct GW impact on the synoptic-scale Exner pressure, GW entropy-flux and momentum-flux convergences appear in the prognostic equations for mean-flow potential temperature and horizontal momentum. With \(\hat{\mathbf{c}}_{g}=\nabla_{\mathbf{k}}\hat{\omega}=\mathbf{c}_{g}-\mathbf{U}_{0}^{(0)}\) being the intrinsic group velocity, the entropy fluxes are \[\mathbf{T}=\frac{\overline{R}^{(0)}}{2}\Re\left(\mathbf{U}_{1}^{(0)}\Theta_{1}^{(0)\ast}\right)=\mathbf{e}_{z}\times\mathbf{k}_{h}\mathcal{A}\,\hat{c}_{gz}\frac{N_{0}^{2}f_{0}}{\hat{\omega}^{2}-f_{0}^{2}}\overline{\Theta}^{(0)} \tag{27}\] and the momentum-flux tensor \[\mathbf{M}=\frac{\overline{R}^{(0)}}{2}\Re\left(\mathbf{V}_{1}^{(0)}\mathbf{U}_{1}^{(0)\ast}\right) \tag{28}\] has the elements \[M_{11} = k\mathcal{A}\hat{c}_{gx}\frac{\hat{\omega}^{2}}{\hat{\omega}^{2}-f_{0}^{2}}+l\mathcal{A}\hat{c}_{gy}\frac{f_{0}^{2}}{\hat{\omega}^{2}-f_{0}^{2}} \tag{29}\] \[M_{12}=M_{21} = k\mathcal{A}\hat{c}_{gy}=l\mathcal{A}\hat{c}_{gx}\] (30) \[M_{22} = l\mathcal{A}\hat{c}_{gy}\frac{\hat{\omega}^{2}}{\hat{\omega}^{2}-f_{0}^{2}}+k\mathcal{A}\hat{c}_{gx}\frac{f_{0}^{2}}{\hat{\omega}^{2}-f_{0}^{2}}\] (31) \[M_{31} = \frac{k\mathcal{A}\hat{c}_{gz}}{1-f_{0}^{2}/\hat{\omega}^{2}}\] (32) \[M_{32} = \frac{l\mathcal{A}\hat{c}_{gz}}{1-f_{0}^{2}/\hat{\omega}^{2}} \tag{33}\] The last GW term in the horizontal momentum equation (26) is the so-called elastic term that appears only at moderately strong stratification, i.e. for \(\alpha=0\). It also vanishes in the absence of rotation. Note that the results above represent a fully nonlinear theory where the nonlinear advection terms turn out not to contribute to the leading orders of the wave equations because the solenoidality property \[\mathbf{k}\cdot\mathbf{V}_{\beta}^{(0)}=0 \tag{34}\] of the wave velocity field, following directly from the leading-order wave part of the Exner-pressure equation, eliminates self-advection of the wave field to leading order. ## III A spectrum of weak-amplitude waves The eikonal equations (17) and (18) can lead to a breakdown of the local monochromaticity assumed in the derivation of the wave-action equation (22), e.g. 
if an initial locally monochromatic GW field has a spatial dependence of wave number so that neighboring regions have group velocities leading to caustics where rays cross. This calls for a theory describing the dynamics of the superposition of several wave fields. However, because of the nonlinear advection terms, the superposition of wave fields with strong amplitudes close to breaking, as assumed above, does not even allow the derivation of dispersion and polarization relations anymore. In the locally monochromatic case the advection terms do not contribute to the leading orders of the wave equations, because of the solenoidality property (34). In the case of a superposition of several wave fields, however, mutual advection of the different wave fields only does not contribute to the leading order of the equations if the wave amplitudes are sufficiently weak, i.e. when the wave fields in (10) - (12) are multiplied by a factor \(\varepsilon^{n}\) with, e.g., \(n=1\), termed _weakly nonlinear_ case or \(n=2\), the _quasilinear_ case. One sets \[{\bf v} = \sum_{j=0}^{\infty}\varepsilon^{j}{\bf V}_{0}^{(j)}({\bf X},T) \tag{35}\] \[+\varepsilon^{n}\Re\sum_{\beta}\sum_{j=0}^{\infty}\varepsilon^{j}{ \bf V}_{\beta}^{(j)}({\bf X},T)e^{i\phi_{\beta}({\bf X},T)/\varepsilon}\] \[\theta = \sum_{j=0}^{\alpha}\varepsilon^{j}\overline{\Theta}^{(j)}(Z)+ \varepsilon^{1+\alpha}\sum_{j=0}^{\infty}\varepsilon^{j}\Theta_{0}^{(j)}({\bf X },T)\] (36) \[+\varepsilon^{n+1+\alpha}\Re\sum_{\beta}\sum_{j=0}^{\infty} \varepsilon^{j}\Theta_{\beta}^{(j)}({\bf X},T)e^{i\phi_{\beta}({\bf X},T)/\varepsilon}\] \[\pi = \sum_{j=0}^{\alpha}\varepsilon^{j}\overline{\Pi}^{(j)}(Z)+ \varepsilon^{1+\alpha}\sum_{j=0}^{\infty}\varepsilon^{j}\Pi_{0}^{(j)}({\bf X },T)\] (37) \[+\varepsilon^{n+2+\alpha}\Re\sum_{\beta}\sum_{j=0}^{\infty} \varepsilon^{j}\Pi_{\beta}^{(j)}({\bf X},T)e^{i\phi_{\beta}({\bf X},T)/ \varepsilon}\] where the index \(\beta\) indicates different wave fields. Those could also include higher harmonics of another wave field, but due to the weak wave amplitudes these do not contribute to all relevant orders. Take first a locally monochromatic wave field, where the wave amplitudes are non-zero only for a single \(\beta\). Inserting (35) - (37) into the equations of motion (6) - (9), sorting by terms with equal powers in \(\varepsilon\), and separating wave part and mean-flow part by averaging over scales sufficiently longer than the wave scales one retrieves the mean-flow results (13) - (15). From the leading-order wave contributions one again obtains the dispersion relations \(\omega_{\beta}=\Omega({\bf k}_{\beta},{\bf X},T)\) for either geostrophic modes or gravity waves, entailing also the eikonal equations, i.e. \[\left(\partial_{T}+{\bf c}_{g,\beta}\cdot\nabla_{\bf X}\right) \omega_{\beta} = \partial_{T}\Omega \tag{38}\] \[\left(\partial_{T}+{\bf c}_{g,\beta}\cdot\nabla_{\bf X}\right){ \bf k}_{\beta} = -\nabla_{\bf X}\Omega \tag{39}\] With the replacements \(\beta({\bf k},\hat{\omega})\rightarrow({\bf k}_{\beta},\hat{\omega}_{\beta})\) one also obtains the polarization relations (19) - (21). 
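To make the caustic problem mentioned at the start of this section concrete, the following minimal sketch (an illustration, not from the text; all parameter values are assumptions) traces two upward-propagating rays in a resting, uniformly stratified column. The lower ray has a smaller vertical wavenumber and therefore a larger vertical group velocity, so it overtakes the upper ray; where the rays cross, the wavenumber field becomes multivalued and the single-wave WKB description breaks down, which is exactly what motivates the spectral formulation.

```python
import numpy as np

N, f = 2.0e-2, 1.0e-4   # constant stratification and f-plane (illustrative values)

def cgz(kh, m):
    """Vertical group velocity d(omega_hat)/dm of a gravity wave, constant N, no wind."""
    om_hat = np.sqrt((N**2 * kh**2 + f**2 * m**2) / (kh**2 + m**2))
    return -m * kh**2 * (N**2 - f**2) / (om_hat * (kh**2 + m**2)**2)

# Two upward-propagating rays: the lower one has a smaller |m| and hence a larger c_gz.
kh = 2*np.pi / 50e3
z1, m1 = 10.0e3, -2*np.pi/3.0e3     # lower, fast ray
z2, m2 = 12.0e3, -2*np.pi/1.5e3     # upper, slow ray

c1, c2 = cgz(kh, m1), cgz(kh, m2)
t_cross = (z2 - z1) / (c1 - c2)      # rays cross where z1 + c1*t = z2 + c2*t
z_cross = z1 + c1 * t_cross
print(f"c_gz(lower) = {c1:.2f} m/s, c_gz(upper) = {c2:.2f} m/s")
print(f"rays cross after {t_cross/3600:.1f} h at z = {z_cross/1e3:.1f} km -> caustic,"
      " the wavenumber field becomes multivalued there")
```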
In the _quasilinear case_ \(n=2\) one obtains from the next-order terms in \(\varepsilon\) the wave-action conservation equation \[\partial_{T}{\cal A}_{\beta}+\nabla_{\bf X}\cdot\left({\bf c}_{g,\beta}{\cal A}_{\beta}\right)=0 \tag{40}\] for the GW wave-action density \({\cal A}_{\beta}=E_{gw,\beta}/\hat{\omega}_{\beta}\), with \[E_{gw,\beta}=\frac{\overline{R}^{(0)}}{2}\left(\frac{|{\bf U}_{\beta}^{(0)}|^{2}}{2}+\varepsilon^{4}\frac{|W_{\beta}^{(0)}|^{2}}{2}+\frac{1}{N_{0}^{2}}\frac{|B_{\beta}^{(0)}|^{2}}{2}\right) \tag{41}\] the energy density of the GW field indicated by \(\beta\). One then observes that leading and next-order wave terms giving the results above are strictly linear in the wave fields. Hence the superposition (35) - (37) is a solution as well, with each component satisfying its own set of eikonal equations (38) and (39) and wave-action equation (40). Defining for this superposition the spectral wave-action density \[{\cal N}\left({\bf X},{\bf k},T\right)=\sum_{\beta}{\cal A}_{\beta}\left({\bf X},T\right)\delta\left[{\bf k}-{\bf k}_{\beta}\left({\bf X},T\right)\right] \tag{42}\] one can show from the eikonal equations and the wave-action equations that it satisfies the conservation equation \[\partial_{T}{\cal N}+\nabla_{\bf X}\cdot\left({\bf c}_{g}{\cal N}\right)+\nabla_{\bf k}\cdot\left(\dot{\bf k}{\cal N}\right)=0 \tag{43}\] where we have introduced the more compact notation \(\dot{\bf k}=-\nabla_{\bf X}\Omega\). This equation indicates that the integral of phase-space wave-action density over the total phase-space volume is conserved. Moreover, \({\bf c}_{g}=\nabla_{\bf k}\Omega\) and the definition of \(\dot{\bf k}=-\nabla_{\bf X}\Omega\) imply that the six-dimensional phase-space velocity is non-divergent, \[\nabla_{\bf X}\cdot{\bf c}_{g}+\nabla_{\bf k}\cdot\dot{\bf k}=0 \tag{44}\] Hence the conservation equation (43) can also be written \[\partial_{T}{\cal N}+{\bf c}_{g}\cdot\nabla_{\bf X}{\cal N}+\dot{\bf k}\cdot\nabla_{\bf k}{\cal N}=0 \tag{45}\] In contrast to wave-action density \({\cal A}=\sum_{\beta}{\cal A}_{\beta}=\int d^{3}k{\cal N}\) in position space, the spectral wave-action density is conserved along rays in phase space satisfying \(d_{T}({\bf X},{\bf k})=({\bf c}_{g},\dot{\bf k})\). We also note that the superposition (42) allows for an arbitrary number of components with arbitrary wavenumbers each, so that effectively spectral wave-action density can also be a truly continuous function of wavenumber. In the _weakly nonlinear case_ \(n=1\) the wave-action equation is to be supplemented by the effect of wave-wave interactions, of either GWs with GWs or GWs with GMs. To the best of our knowledge, the corresponding scattering integrals have only been worked out fully for Boussinesq dynamics without mean flow [14; 15], and first steps for Boussinesq dynamics with non-vanishing mean flows have been taken by [16]. In the atmospheric context they have so far been simply ignored. Both in the quasilinear and in the weakly nonlinear case the prognostic equations for the mean flow are still (24) - (26), but now the contributing fluxes are due to the full spectrum, i.e. 
\[{\bf T}=\varepsilon^{2n}\int d^{3}k{\bf c}_{z}\times{\bf k}_{h}\,{\cal N}\,{ \cal\hat{c}}_{gz}\frac{N_{0}^{2}f_{0}}{\partial^{2}-f_{0}^{2}}\overline{ \Theta}^{(0)} \tag{46}\] and the momentum-flux tensor has the elements \[M_{11} = \varepsilon^{2n}\int d^{3}k\left(\frac{k{\cal N}\,{\cal\hat{c}}_{ gx}}{1-f_{0}^{2}/\partial^{2}}+\frac{l{\cal N}\,{\cal\hat{c}}_{gy}}{\partial^{2} /f_{0}^{2}-1}\right) \tag{47}\] \[M_{12}=M_{21} = \varepsilon^{2n}\int d^{3}kk{\cal N}\,{\cal\hat{c}}_{gy}= \varepsilon^{2n}\int d^{3}k{\cal N}\,{\cal\hat{c}}_{gx}\] (48) \[M_{22} = \varepsilon^{2n}\int d^{3}k\left(\frac{l{\cal N}\,{\cal\hat{c}}_{ gy}}{1-f_{0}^{2}/\partial^{2}}+\frac{k{\cal N}\,{\cal\hat{c}}_{gx}}{\partial^{2}/f_{0}^{2}-1}\right)\] (49) \[M_{31} = \varepsilon^{2n}\int d^{3}k\frac{k{\cal N}\,{\cal\hat{c}}_{gz}}{1-f_ {0}^{2}/\partial^{2}}\] (50) \[M_{32} = \varepsilon^{2n}\int d^{3}k\frac{l{\cal\hat{c}}_{gz}}{1-f_{0}^{2}/ \partial^{2}} \tag{51}\] Note that these fluxes only contribute to leading order in the large-amplitude case. It is known, however, that their effects accumulate over longer times so that they cannot be ignored. The cleanest way to handle this would be to introduce a correspondingly slow time scale. For simplicity we avoid this step and just keep the wave impacts on the mean flow as outlined above. In the following we will use (46) - (51) also for the monochromatic large-amplitude case, where it is understood that \(n=0\) and that \(\mathcal{N}(\mathbf{X},\mathbf{k},T)=\mathcal{A}(\mathbf{X},T)\delta\left[ \mathbf{k}-\mathbf{k}(\mathbf{X},T)\right]\), with \(\mathbf{k}\) as an argument of \(\mathcal{N}\) not depending on \(\mathbf{X}\) and \(T\) anymore. ## IV Wave impact on the balanced mean flow Because of the geostrophic and hydrostatic equilibrium (14) - (15) the synoptic-scale mean flow satisfies an extended quasigeostrophic theory. Using these equilibrium conditions, one can derive from the prognostic equations (24) - (26) a single prognostic equation \[\left(\partial_{T}+\mathbf{U}_{0}^{(0)}\cdot\nabla_{\mathbf{X},h }\right)P_{0}^{(0)}\] \[=-\partial_{X}\left(\frac{1}{\overline{R}^{(0)}}\nabla_{\mathbf{ X}}\cdot\mathcal{H}\right)+\partial_{Y}\left(\frac{1}{\overline{R}^{(0)}} \nabla_{\mathbf{X}}\cdot\mathcal{G}\right) \tag{52}\] for the quasigeostrophic potential vorticity (QGPV) \[P_{0}^{(0)} = \nabla_{\mathbf{X},h}\left(\frac{c_{p}}{R}\frac{\overline{ \Theta}^{(0)}\Pi_{0}^{(0)}}{f_{0}}\right) \tag{53}\] \[+\frac{f_{0}}{\overline{R}^{(0)}}\partial_{Z}\left[\frac{ \overline{R}^{(0)}}{N_{0}^{2}}\partial_{Z}\left(\frac{c_{p}}{R}\overline{ \Theta}^{(0)}\Pi_{0}^{(0)}\right)\right]\] with \(\mathcal{G}=\int d^{3}k\mathbf{\hat{e}}_{g}k\mathcal{N}\) and \(\mathcal{H}=\int d^{3}k\mathbf{\hat{e}}_{g}l\mathcal{N}\) the fluxes of the zonal and meridional components of pseudomomentum \(\mathbf{p}_{h}=\int d^{3}k\mathbf{k}_{h}\mathcal{N}\). Inverting QGPV yields the leading-order synoptic-scale Exner-pressure fluctuations \(\Pi_{0}^{(0)}\) whence one can obtain, using geostrophic and hydrostatic equilibrium, the leading-order horizontal wind and potential-temperature fluctuations of the synoptic-scale flow. QGPV is forced by the vertical curl of the pseudomomentum-flux convergences, and in the absence of waves it is conserved. ## V Summary of the results in dimensional form The practitioner needs the results above in their dimensional form. 
These are as follows: A re-dimensionalization of the GW dispersion relation in (70), by the substitutions \[\hat{\omega} \rightarrow \hat{\omega}T_{w}=\hat{\omega}/f \tag{54}\] \[\overline{\Theta}^{(\alpha)} \rightarrow \left\{\begin{array}{ll}\overline{\theta}/T_{00}&\mbox{if}\, \alpha=0\\ \left(\overline{\theta}/T_{00}-\overline{\Theta}^{(0)}\right)/\varepsilon& \mbox{if}\,\alpha=1\end{array}\right.\] (55) \[Z \rightarrow \varepsilon_{Z}/H_{w}\] (56) \[(k,l,m) \rightarrow [L_{w}(k,l),H_{w}m]\] (57) \[f_{0} \rightarrow f/f\] (58) \[\mathbf{U}_{0}^{(0)} \rightarrow \langle\mathbf{u}\rangle/U_{w} \tag{59}\] leads to the dimensional GW dispersion relation \[\hat{\omega}^{2}=(\omega-\mathbf{k}_{h}\cdot\langle\mathbf{u}\rangle)^{2}= \frac{N^{2}k_{h}^{2}+f^{2}m^{2}}{k_{h}^{2}+m^{2}} \tag{60}\] with \(N^{2}=(g/\overline{\theta})d_{z}\overline{\theta}\), and the trivial reformulation \[\hat{\omega}_{\beta}^{2}=(\omega_{\beta}-\mathbf{k}_{h,\beta}\cdot\langle \mathbf{u}\rangle)^{2}=\frac{N^{2}k_{h,\beta}^{2}+f^{2}m_{\beta}^{2}}{k_{h,\beta }^{2}+m_{\beta}^{2}} \tag{61}\] for the weakly nonlinear and quasilinear case with a superposition of spectral components. Substituting \[\left(\mathbf{U}_{\beta}^{(0)},W_{\beta}^{(0)},B_{\beta}^{(0)},\Pi_{\beta}^{(0 )}\right)\rightarrow\left(\frac{\mathbf{u}_{\beta}^{\prime}}{U_{w}},\frac{w_ {\beta}^{\prime}}{W_{w}},\frac{\theta_{\beta}^{\prime}}{\varepsilon^{1+\alpha }\overline{\theta}},\frac{\pi_{\beta}^{\prime}}{\varepsilon^{2+\alpha}}\right) \tag{62}\] one obtains from (19) - (21) the dimensional polarization relations \[\mathbf{u}_{\beta}^{\prime} = \frac{\mathbf{k}_{h,\beta}\hat{\omega}_{\beta}-if\mathbf{e}_{z} \times\mathbf{k}_{h,\beta}}{\hat{\omega}_{\beta}^{2}-f^{2}}\left(1-\frac{\hat{ \omega}_{\beta}^{2}}{N^{2}}\right)\frac{b_{\beta}^{\prime}}{im_{\beta}} \tag{63}\] \[w_{\beta}^{\prime} = \frac{i\hat{\omega}_{\beta}}{N^{2}}b_{\beta}^{\prime}\] (64) \[c_{p}\overline{\theta}\pi_{\beta}^{\prime} = \left(1-\frac{\hat{\omega}_{\beta}^{2}}{N^{2}}\right)\frac{b_{\beta }^{\prime}}{im_{\beta}} \tag{65}\] where \(b_{\beta}^{\prime}=g\,\theta_{\beta}^{\prime}/\overline{\theta}\) is the dimensional buoyancy of the \(\beta\)th GW component. The corresponding results for the locally monochromatic large-amplitude case are obtained from (63) - (65) by dropping the \(\beta\)-index. Likewise, we obtain for the locally monochromatic case and for the quasilinear spectral case, respectively, the dimensional GW wave-action equations \[\partial_{t}\mathcal{A}+\nabla\cdot\left(\mathbf{c}_{g}\mathcal{A}\right)=0 \qquad\partial_{t}\mathcal{A}_{\beta}+\nabla\cdot\left(\mathbf{c}_{g,\beta} \mathcal{A}_{\beta}\right)=0 \tag{66}\] where \(\left(\mathbf{c}_{g},\mathbf{c}_{g,\beta}\right)=\left(\nabla_{\mathbf{k}} \omega,\nabla_{\mathbf{k}}\omega_{\beta}\right)\) is the GW group velocity, and \(\left(\mathcal{A},\mathcal{A}_{\beta}\right)=\left(E_{w}/\hat{\omega},E_{w, \beta}/\hat{\omega}_{\beta}\right)\) the GW wave action, with \[\left(E_{w},E_{w,\beta}\right)=\frac{\overline{\rho}}{2}\left(\frac{|\mathbf{v} ^{\prime}|^{2}}{2}+\frac{|\mathbf{v}^{\prime}|^{2}}{2N^{2}},\frac{\left|\mathbf{v }_{\beta}^{\prime}\right|^{2}}{2}+\frac{\left|b_{\beta}^{\prime}\right|^{2}}{2N^ {2}}\right) \tag{67}\] the wave energy. \(\overline{\rho}\) is the reference-atmosphere density. 
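To make the dimensional relations concrete, the following minimal sketch evaluates the dispersion relation (60), the polarization relations (63) - (65), and the resulting wave energy (67) and wave action \({\cal A}=E_{w}/\hat{\omega}\) for a single GW component. All numerical values (Coriolis parameter, stratification, reference density, wavenumbers, buoyancy amplitude) are illustrative assumptions.

```python
import numpy as np

f, N = 1.0e-4, 2.0e-2                 # Coriolis parameter and buoyancy frequency [1/s]
rho_bar = 1.0                         # reference-atmosphere density [kg/m^3]
k, l, m = 2*np.pi/1.0e5, 0.0, -2*np.pi/3.0e3   # wavenumbers [1/m]
b_amp = 1.0e-2                        # buoyancy amplitude b' [m/s^2]

kh2 = k**2 + l**2
omega_hat = np.sqrt((N**2*kh2 + f**2*m**2) / (kh2 + m**2))    # intrinsic frequency, eq. (60)

# polarization relations (63)-(65), complex amplitudes; e_z x k_h = (-l, k)
common = (1.0 - omega_hat**2/N**2) * b_amp / (1j*m)
u_h = (np.array([k, l])*omega_hat - 1j*f*np.array([-l, k])) / (omega_hat**2 - f**2) * common
w = 1j*omega_hat/N**2 * b_amp                                  # eq. (64)
cp_theta_pi = common                                           # eq. (65)

# wave energy (67) and wave action A = E_w / omega_hat (the w' contribution is tiny here)
E_w = rho_bar/2 * (np.sum(np.abs(u_h)**2)/2 + np.abs(w)**2/2 + abs(b_amp)**2/(2*N**2))
A = E_w / omega_hat
print(f"omega_hat = {omega_hat:.2e} 1/s, E_w = {E_w:.3e} J/m^3, A = {A:.3e} J s/m^3")
```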
The wave-action equations together with the dimensional forms \[\left(\partial_{t}+\mathbf{c}_{g}\cdot\nabla_{\mathbf{x}}\right) \mathbf{k} = -\nabla_{\mathbf{x}}\Omega \tag{68}\] \[\left(\partial_{t}+\mathbf{c}_{g,\beta}\cdot\nabla_{\mathbf{x}} \right)\mathbf{k}_{\beta} = -\nabla_{\mathbf{x}}\Omega \tag{69}\] of the eikonal equations (17) and (38), where \[\Omega(\mathbf{k},\mathbf{x},t) = \mathbf{k}\cdot\langle\mathbf{u}\rangle(\mathbf{x},t)\pm\sqrt{\frac{ N^{2}(z)k_{h}^{2}+f^{2}m^{2}}{k_{h}^{2}+m^{2}}} \tag{70}\] leads to the spectral wave-action equations \[\partial_{t}\mathcal{N}+\nabla_{\mathbf{x}}\cdot\left(\mathbf{e}_{g} \mathcal{N}\right)+\nabla_{\mathbf{k}}\cdot\left(\dot{\mathbf{k}}\mathcal{N} \right) = 0 \tag{71}\] \[\partial_{t}\mathcal{N}+\mathbf{e}_{g}\cdot\nabla_{\mathbf{x}} \mathcal{N}+\dot{\mathbf{k}}\cdot\nabla_{\mathbf{k}}\mathcal{N} = 0 \tag{72}\] for the spectral wave-action density \[\mathcal{N}\left(\mathbf{x},\mathbf{k},t\right)=\sum_{\beta}\mathcal{A}_{ \beta}\left(\mathbf{x},t\right)\delta\left[\mathbf{k}-\mathbf{k}_{\beta}\left( \mathbf{x},t\right)\right] \tag{73}\] in the quasilinear case and \(\mathcal{N}\left(\mathbf{x},\mathbf{k},t\right)=\mathcal{A}\left(\mathbf{x}, t\right)\delta\left[\mathbf{k}-\mathbf{k}\left(\mathbf{x},t\right)\right]\) in the locally monochromatic large-amplitude case. Here as well we have defined \(\mathbf{k}=-\nabla_{\mathbf{x}}\Omega\). In deriving (72) from (71) one again exploits the non-divergence \[\nabla_{\mathbf{x}}\cdot\mathbf{e}_{g}+\nabla_{\mathbf{k}}\cdot\dot{\mathbf{ k}}=0 \tag{74}\] of the phase-space velocity. The GW impact on the synoptic-scale mean flow, indicated by angle brackets \(\left\langle\dots\right\rangle\), is captured within synoptic scaling by supplementing the entropy equation by GW entropy-flux convergence, \[\left(\partial_{t}+\left\langle\mathbf{v}\right\rangle\cdot\nabla\right) \left\langle\theta\right\rangle=-\nabla\cdot\left\langle\mathbf{u}^{\prime} \theta^{\prime}\right\rangle \tag{75}\] and the horizontal-momentum equation by GW momentum-flux convergence and the elastic term \[\left(\partial_{t}+\left\langle\mathbf{v}\right\rangle\cdot \nabla\right)\left\langle\mathbf{u}\right\rangle+f\mathbf{e}_{z}\times\left \langle\mathbf{u}\right\rangle\] \[=-c_{p}\langle\theta\rangle\nabla_{h}\langle\pi\rangle-\frac{1}{ \overline{\rho}}\nabla\cdot\left(\overline{\rho}\langle\mathbf{v}^{\prime} \mathbf{u}^{\prime}\rangle\right)+\frac{f}{\overline{\theta}}\mathbf{e}_{z} \times\nabla\cdot\left\langle\mathbf{u}^{\prime}\theta\right\rangle \tag{76}\] where the entropy flux can be obtained from the spectral wave-action density via \[\left\langle\mathbf{u}^{\prime}\theta^{\prime}\right\rangle=\int d^{3}k \,\mathbf{e}_{z}\times\mathbf{k}_{h}\,\mathcal{N}\,\mathcal{E}_{gz}\frac{N^{2 }f}{\overline{\phi}^{2}-f^{2}}\,\overline{\overline{\rho}} \tag{77}\] and the mass-specific momentum-flux tensor has the elements \[\left\langle u^{\prime}u^{\prime}\right\rangle = \int d^{3}k\left(\frac{\mathcal{N}\,\mathcal{E}_{gx}}{1-f^{2}/ \overline{\phi}^{2}}+\frac{I\mathcal{N}\,\mathcal{E}_{gy}}{\overline{\phi}^{ 2}/f^{2}-1}\right) \tag{78}\] \[\left\langle u^{\prime}v^{\prime}\right\rangle=\left\langle v^{ \prime}u^{\prime}\right\rangle = \int d^{3}k\,k\,\mathcal{N}\,\mathcal{E}_{gy}=\int d^{3}k\,l \,\mathcal{N}\,\mathcal{E}_{gx}\] (79) \[\left\langle v^{\prime}v^{\prime}\right\rangle = \int d^{3}k\left(\frac{\mathcal{N}\,\mathcal{E}_{gy}}{1-f^{2}/ \overline{\phi}^{2}}+\frac{k\,\mathcal{N}\,\mathcal{E}_{gx}}{\overline{\phi}^ 
{2}/f^{2}-1}\right)\] (80) \[\left\langle w^{\prime}u^{\prime}\right\rangle = \int d^{3}k\frac{k\,\mathcal{N}\,\mathcal{E}_{gx}}{1-f^{2}/ \overline{\phi}^{2}}\] (81) \[\left\langle w^{\prime}v^{\prime}\right\rangle = \int d^{3}k\frac{l,\mathcal{N}\,\mathcal{E}_{gz}}{1-f^{2}/ \overline{\phi}^{2}} \tag{82}\] In equations (75) - (76) the mean flow is understood to be its full expansion \[\left\langle\mathbf{v}\right\rangle = U_{w}\,\sum_{j=0}^{\infty}\varepsilon^{j}\mathbf{V}_{0}^{(j)}( \mathbf{X},T) \tag{83}\] \[\left\langle\theta\right\rangle = T_{00}\left[\sum_{j=0}^{\infty}\varepsilon^{j}\overline{\Theta} ^{(j)}(Z)+\varepsilon^{1+\alpha}\sum_{j=0}^{\infty}\varepsilon^{j}\Theta_{0}^ {(j)}(\mathbf{X},T)\right]\] (84) \[\left\langle\pi\right\rangle = \sum_{j=0}^{\alpha}\varepsilon^{j}\overline{\Pi}^{(j)}(Z)+ \varepsilon^{1+\alpha}\sum_{j=0}^{\infty}\varepsilon^{j}\Pi_{0}^{(j)}(\mathbf{ X},T) \tag{85}\] so that (75) is in the two leading orders in \(\varepsilon\) consistent with (13) and (25), and that (76) is in the two leading orders consistent with (15) and (26). The fluxes (77) - (82) are to be coded into weather-forecast and climate models, but the leading-order synoptic-scale mean-flow dynamics can also be expressed by the QG potential-vorticity equation \[\left(\partial_{t}+\left\langle\mathbf{u}\right\rangle\cdot\nabla_{h}\right)P=- \partial_{x}\left(\frac{1}{\overline{\rho}}\nabla\cdot\mathcal{H}\right)+ \partial_{y}\left(\frac{1}{\overline{\rho}}\nabla\cdot\mathcal{G}\right) \tag{86}\] with \(\mathcal{G}=\int d^{3}k\,\mathbf{\hat{e}}_{g}k\mathcal{N}\) and \(\mathcal{H}=\int d^{3}k\,\mathbf{\hat{e}}_{g}l\,\mathcal{N}\) the fluxes of the zonal and meridional components of GW pseudomomentum \(\mathbf{p}_{h}=\int d^{3}k\,\mathbf{k}_{h}\mathcal{N}\), \[P=\nabla_{h}^{2}\psi+\frac{1}{\overline{\rho}}\frac{\partial}{\partial z}\left( \overline{\rho}\frac{f^{2}}{N^{2}}\frac{\partial\psi}{\partial z}\right) \tag{87}\] and \(\psi=c_{p}\overline{\theta}_{0}\langle\delta\pi\rangle/f\) the streamfunction, where \(\langle\delta\pi\rangle=\varepsilon^{1+\alpha}\Pi_{0}^{(0)}\) is the leading-order synoptic-scale Exner-pressure fluctuations, and \(\overline{\theta}_{0}=T_{00}\overline{\Theta}^{(0)}\) is the leading-order reference-atmosphere potential temperature. The latter depends on \(z\) only in the case with moderately strong stratification (\(\alpha=0\)), while it is a constant in the weakly stratified case (\(\alpha=1\)). The streamfunction also yields the leading-order synoptic-scale horizontal wind via geostrophic equilibrium, \[\left\langle\mathbf{u}\right\rangle=\mathbf{e}_{z}\times\nabla_{h}\psi \tag{88}\] and the leading-order synoptic-scale potential temperature fluctuations \(\langle\delta\theta\rangle=\varepsilon^{1+\alpha}T_{00}\Theta_{0}^{(0)}\), via hydrostatic equilibrium, \[g\frac{\langle\delta\theta\rangle}{\overline{\theta}_{0}}=f\frac{\partial\psi}{ \partial z}+\left\{\begin{array}{ll}g\left(\frac{\overline{\theta}-\overline{ \theta}_{0}}{\overline{\theta}_{0}}\right)^{2}&\mbox{if}\,\alpha=1\\ -N^{2}f\psi/g&\mbox{if}\,\alpha=0\end{array}\right. \tag{89}\] ## VI Conservation properties Neither GW energy is conserved nor is QGPV. The interaction between GWs and synoptic-scale mean flow leads to an exchange between the two so that the corresponding conserved quantity comprises contributions from both components. ### Energy We begin with energy. 
As can be shown in the derivation of the wave-action equation, GW energy \[E_{w}=\int d^{3}k\,\partial\mathcal{N} \tag{90}\] satisfies \[\partial_{t}E_{w}=-\nabla\cdot\mathbf{F}_{w}-\left(\mathbf{e}_{z}\mathcal{G}+ \mathbf{e}_{y}\mathcal{H}\right)\cdot\nabla\left\langle\mathbf{u}\right\rangle \tag{91}\] with \[\mathbf{F}_{w}=\int d^{3}k\mathbf{c}_{g}\hat{\omega}\mathcal{N} \tag{92}\] the wave-energy flux. The last term describes the exchange with the synoptic-scale mean flow. The latter has an energy density \[E_{s}=\frac{\overline{\mathcal{B}}}{2}\left[|\nabla_{h}\mathbf{\psi}|^{2}+\frac{f^{ 2}}{N^{2}}\left(\frac{\partial\mathbf{\psi}}{\partial z}\right)^{2}\right] \tag{93}\] where the first part is the kinetic energy density and the second part is the density of available potential energy. As can be derived from the QGPV equation (86), it obeys \[\partial_{t}E_{s}-\nabla_{h}\cdot[\overline{\mathcal{P}}\psi\hat{ \partial}_{t}\left(\nabla_{h}\mathbf{\psi}\right)]-\partial_{z}\left(\overline{ \mathcal{P}}\psi\frac{f^{2}}{N^{2}}\partial_{z}\partial_{t}\mathbf{\psi}\right)\] \[-\nabla_{h}\cdot(\overline{\mathcal{P}}\psi\langle\mathbf{u} \rangle P)=\partial_{x}\left(\psi\nabla\cdot\mathcal{H}\right)-\partial_{y} \left(\psi\nabla\cdot\mathcal{G}\right)\] \[-\nabla\cdot[\langle\mathbf{u}\rangle\cdot(\mathbf{e}_{x} \mathcal{G}+\mathbf{e}_{y}\mathcal{H})]+(\mathbf{e}_{x}\mathcal{G}+\mathbf{e }_{y}\mathcal{H})\cdot\nabla\langle\mathbf{u}\rangle \tag{94}\] Hence the prognostic equation for the total energy \(E_{s}+E_{w}\), \[\partial_{t}\left(E_{s}+E_{w}\right)-\nabla_{h}\cdot[\overline{ \mathcal{P}}\psi\left(\partial_{t}\partial\nabla_{h}\mathbf{\psi}+\langle \mathbf{u}\rangle P\right)]\] \[-\partial_{z}\left(\overline{\mathcal{P}}\psi\frac{f^{2}}{N^{2}} \partial_{z}\partial_{t}\mathbf{\psi}\right)\] \[=-\nabla\cdot\left[\mathbf{F}_{w}-\mathbf{e}_{x}\psi\nabla \cdot\mathcal{H}+\mathbf{e}_{y}\psi\nabla\cdot\mathcal{G}\right.\] \[\left.\qquad\qquad+\langle\mathbf{u}\rangle\cdot(\mathbf{e}_{x} \mathcal{G}+\mathbf{e}_{y}\mathcal{H})\right] \tag{95}\] contains only flux terms, so that under suitable boundary conditions the volume-integrated total energy is conserved. ### Potential Vorticity For the derivation of a potential-vorticity conservation property one needs a prognostic equation for pseudomomentum \(\mathbf{p}_{h}=\int d^{3}k\,\mathbf{k}_{h}\mathcal{N}\). As can be derived with the help of the wave-action-density equation (71), this prognostic equation is \[\left(\partial_{t}+\langle\mathbf{u}\rangle\cdot\nabla\right) \mathbf{p}_{h}=-\nabla\cdot(\mathcal{G}\mathbf{e}_{x}+\mathcal{H}\mathbf{e}_{ y})-\nabla_{h}\langle\mathbf{u}\rangle\cdot\mathbf{p}_{h} \tag{96}\] The vertical curl of this yields \[\left(\partial_{t}+\langle\mathbf{u}\rangle\cdot\nabla\right) \left(\mathbf{e}_{z}\cdot\nabla\times\mathbf{p}_{h}\right)=-\partial_{x} \nabla\cdot\mathcal{H}+\partial_{y}\nabla\cdot\mathcal{G} \tag{97}\] Comparing with the QGPV equation (86) one obtains the conservation equation \[\left(\partial_{t}+\langle\mathbf{u}\rangle\cdot\nabla\right) \Pi=0\qquad\Pi=P-\mathbf{e}_{z}\cdot\nabla\times\frac{\mathbf{p}_{h}}{ \overline{\rho}} \tag{98}\] for an extension \(\Pi\) of quasigeostrophic potential vorticity that contains contributions from the synoptic-scale flow, that are linear in the synoptic-scale streamfunction, and the negative of the gravity-wave pseudovorticity \(\mathbf{e}_{z}\cdot\nabla\times\mathbf{p}_{h}/\overline{\rho}\), that is nonlinear in the gravity-wave amplitudes. 
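Because the spectral wave-action density is a sum of delta functions in the discrete case, the pseudomomentum integral \(\mathbf{p}_{h}=\int d^{3}k\,\mathbf{k}_{h}\mathcal{N}\) reduces to a sum over components. The following minimal sketch, with a synthetic two-component wave field on an assumed horizontal grid, computes \(\mathbf{p}_{h}\) and the GW pseudovorticity \(\mathbf{e}_{z}\cdot\nabla\times(\mathbf{p}_{h}/\overline{\rho})\) that enters the conserved quantity \(\Pi\) of (98).

```python
import numpy as np

nx, ny, dx = 64, 64, 50.0e3                      # horizontal grid [m] (assumed)
x = np.arange(nx) * dx
y = np.arange(ny) * dx
X, Y = np.meshgrid(x, y, indexing="ij")
rho_bar = 1.0                                     # reference density (assumed)

# two spectral components with horizontally varying wave action (synthetic shapes)
components = [
    {"kh": np.array([2*np.pi/1e5, 0.0]),
     "A":  np.exp(-((X - 1.5e6)**2 + (Y - 1.5e6)**2) / (4e5)**2)},
    {"kh": np.array([0.0, 2*np.pi/2e5]),
     "A":  0.5*np.exp(-((X - 2.0e6)**2 + (Y - 1.2e6)**2) / (3e5)**2)},
]

p_h = np.zeros((2, nx, ny))
for c in components:                              # p_h = sum_beta k_h,beta * A_beta
    p_h += c["kh"][:, None, None] * c["A"]

# pseudovorticity e_z . curl(p_h / rho_bar) = d/dx (p_y/rho) - d/dy (p_x/rho)
pv_wave = (np.gradient(p_h[1] / rho_bar, dx, axis=0)
           - np.gradient(p_h[0] / rho_bar, dx, axis=1))
print("max |wave pseudovorticity|:", np.abs(pv_wave).max())
```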
This conservation property can only be broken by non-conservative effects, e.g. GW sources or dissipative GW breaking. ### Non-Acceleration Of considerable consequence for the properties of GW parameterizations is the direct consequence \[\left(\partial_{t}+\langle\mathbf{u}\rangle\cdot\nabla\right)P=-\left( \partial_{t}+\langle\mathbf{u}\rangle\cdot\nabla\right)\left(\mathbf{e}_{z} \cdot\nabla\times\frac{\mathbf{p}_{h}}{\overline{\rho}}\right) \tag{99}\] of (98). Hence the leading-order synoptic-scale mean flow is not influenced by GWs if GW pseudovorticity is a Lagrangian invariant of the flow. The latter is the case, e.g., if * GW amplitudes and wavenumbers are steady, * GW pseudovorticity does not vary horizontally, and * the GWs are not affected by sources or sinks. Classic GW parameterizations assume steady-state GW fields and they do not take horizontal variations of the GWs fields into account. Hence they rely exclusively on GW sources and GW breaking as processes leading to a GW impact on the resolved flow. ## VII Consequences for GW parameterizations ### Numerical Implementation A numerical implementation of spectral wave-action dynamics could either be done in a Eulerian finite-volume formulation, based on (71), or in a Lagrangian approach starting from (72). In the latter [16, 17, 18, 19] one sub-divides the part of phase space with non-zero wave-action density into rectangular so-called ray volumes. Wave-action density is conserved Figure 1: Illustration of the Lagrangian discretization of the propagation of a phase-space volume, here in the sub-space spanned only by \(m\) and \(z\). It is subdivided into small rectangular ray volumes. Each ray volume propagates with a mean velocity averaged from the phase-space velocities at the center of the upper and lower edge of the ray volume. Different group velocities \(c_{gx}\) at the edges lead to a stretching or squeezing of the ray-volume extent \(\Delta z\) in \(z\)-direction. The extent \(\Delta m\) in \(m\) direction is then adjusted so that the area \(\Delta\Delta m\) is conserved. The procedure in the \(x-k\) and \(y-l\) sub-spaces is the same. Reproduced from J. Muraschko, M. Fruman, U. Achatz, S. Hickel, and Y. Toledo, Quart. J. R. Met. Soc. 141, 676 (2015), with permission along trajectories (called rays) satisfying \[d_{t}(\mathbf{x},\mathbf{k})=(\mathbf{c}_{g},\hat{\mathbf{k}}) \tag{100}\] Hence, following each point of a ray volume along the ray passing through it one obtains its propagation as a time-dependent volume within which wave-action density \(\mathcal{N}\) is conserved. Because of the non-divergence (74) of the phase-space (ray) velocity the volume content of this volume is conserved as well. In the numerical discretization one assumes that each ray volume keeps a rectangular shape, but it is allowed to be stretched and squeezed in a volume-preserving manner. Because (74) holds separately in the two-dimensional spaces spanned by \(x\) and \(k\), \(y\) and \(l\), or \(z\) and \(m\), i.e. \[\partial_{x}c_{gx}+\partial_{\hat{k}}\hat{k} = 0 \tag{101}\] \[\partial_{y}c_{gy}+\partial_{\hat{l}}\hat{l} = 0\] (102) \[\partial_{z}c_{gz}+\partial_{m}\hat{m} = 0 \tag{103}\] this is done so that the areas \(\Delta x\Delta k\), \(\Delta y\Delta l\) and \(\Delta z\Delta m\) are conserved, with \(\Delta x\), \(\Delta y\), \(\Delta z\), \(\Delta k\), \(\Delta l\), and \(\Delta m\) the ray volume extent in the six respective phase-space directions. Fig. 1 illustrates this for the space spanned by \(z\) and \(m\). 
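The following minimal sketch illustrates this Lagrangian propagation in the \(z-m\) sub-space: the ray-volume centre moves with the mean of the vertical group velocities at its upper and lower edges, the extent \(\Delta z\) is stretched or squeezed by their difference, and \(\Delta m\) is rescaled so that the phase-space area \(\Delta z\Delta m\) is conserved. The background profiles, the initial ray volume, and all step sizes are illustrative assumptions.

```python
import numpy as np

f = 1.0e-4                                                # Coriolis parameter [1/s] (assumed)
N = lambda z: 0.015 + 0.005*np.tanh((z - 2.0e4)/1.0e4)    # stratification profile (assumed)
u = lambda z: 10.0*np.tanh(z/1.0e4)                       # background wind [m/s] (assumed)
k, l = 2*np.pi/1.0e5, 0.0                                 # horizontal wavenumbers, constant here

def Omega(m, z):
    kh2 = k**2 + l**2
    return k*u(z) + np.sqrt((N(z)**2*kh2 + f**2*m**2)/(kh2 + m**2))

def cgz(m, z, dm=1e-9):                                   # vertical group velocity dOmega/dm
    return (Omega(m + dm, z) - Omega(m - dm, z))/(2*dm)

def mdot(m, z, dz=1.0):                                   # wavenumber refraction -dOmega/dz
    return -(Omega(m, z + dz) - Omega(m, z - dz))/(2*dz)

# one rectangular ray volume: centre (zc, mc) and extents (dz_ray, dm_ray)
zc, mc = 5.0e3, -2*np.pi/3.0e3
dz_ray, dm_ray = 1.0e3, 2*np.pi/3.0e4
area0 = dz_ray*dm_ray                                     # phase-space area to be conserved
dt = 30.0
for _ in range(1000):
    c_bot = cgz(mc, zc - dz_ray/2)                        # group velocity at the lower edge
    c_top = cgz(mc, zc + dz_ray/2)                        # ... and at the upper edge
    zc += dt*0.5*(c_bot + c_top)                          # centre moves with the mean velocity
    dz_ray += dt*(c_top - c_bot)                          # edges stretch or squeeze the extent
    mc += dt*mdot(mc, zc)                                 # refraction of the carrier wavenumber
    dm_ray = area0/dz_ray                                 # enforce conservation of dz*dm
print(f"zc = {zc/1e3:.1f} km, dz = {dz_ray:.1f} m, dm = {dm_ray:.2e} 1/m")
```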
Volume deformations away from an original rectangular shape are to be represented by splitting a corresponding volume into a sufficiently large number of rectangular ray volumes. GW dissipation is simulated by a classic saturation approach based on [20] and adapted by [18; 21] to spectral wave-action dynamics. A static-instability breaking threshold, with a tuning factor close to 1, is determined by the criterion that, within a resolved-flow volume cell, the constructive interference of all ray volumes can lead to a total GW signal with a negative vertical derivative in potential temperature larger than the positive derivative given by the stratification of the resolved flow. Once this threshold is exceeded turbulence is invoked, causing turbulent viscosity and diffusivity that provide a dissipative right-hand side to the spectral wave-action equation (72) so that the GW field is kept at the threshold of static instability. The interaction between the parameterized GWs and the resolved large-scale flow takes two directions. The contribution of the large-scale winds \(\left\langle\mathbf{u}\right\rangle\) to the group velocity \(\mathbf{c}_{g}\) and to the wavenumber velocity \(\hat{\mathbf{k}}\) influences the development of the wave-action density. In the other direction, GWs influence the resolved flow in its thermodynamics via the divergence of the GW entropy flux (77). They also change its momentum via the divergence of GW momentum flux (78) - (82), and via the elastic term, i.e. the last term in (76). That term can be obtained from the GW entropy-flux convergence as well. The wavenumber integrals in the fluxes are presently estimated to first-order accuracy, by evaluating the integrand at the center of the ray volume and multiplying it by its wavenumber volume. So far implementations have been into finite-volume dynamical model cores for resolved-flow dynamics. In this context the fluxes are then projected onto the finite-volume cell faces and finally used there. It would seem attractive to avoid the projection of the fluxes from the Lagrangian GW model onto the resolved-flow finite volume cells, and also the interpolation of the resolved-flow winds to the location of each ray volume, by a straightforward finite-volume implementation of the spectral wave-action equation (71) in flux form. This alternative has been studied by [18]. It turns out that the Lagrangian approach is computationally much more efficient. Two factors contribute to this: Firstly, in a finite-volume approach, one must span a six-dimensional phase space, with often a substantial fraction of cells not contributing essentially to the GW fluxes. Secondly, GW refraction and reflection are only captured well in the finite-volume approach if the resolution in wavenumber space is excessively high. ### Comparison with Classic GWP The approach described above on the simulation of the interaction between subgrid-scale GWs and a resolved flow, in climate models but also in weather-forecast codes, differs in various regards from the approach presently applied by the weather and climate centers: First, we point out that present-day GW parameterizations use the observation that the GW impact on the resolved-flow QGPV in (86) could also be obtained by dropping the GW entropy-flux convergence from the mean-flow entropy equation (75), by removing the elastic term from the mean-flow momentum equation (76), and by replacing the momentum flux there by the pseudo-momentum flux, i.e. 
by using \[\left(\partial_{t}+\left\langle\mathbf{v}\right\rangle\cdot\nabla\right) \left\langle\mathbf{\theta}\right\rangle=0 \tag{104}\] and \[\left(\partial_{t}+\left\langle\mathbf{v}\right\rangle\cdot \nabla\right)\left\langle\mathbf{u}\right\rangle+f\mathbf{c}_{z}\times\left\langle \mathbf{u}\right\rangle\] \[=-c_{p}\langle\mathbf{\theta}\rangle\nabla_{h}\langle\pi\rangle- \frac{1}{\overline{\beta}}\nabla\cdot\left(\mathcal{G}\mathbf{c}_{x}+\mathcal{H }\mathbf{c}_{y}\right) \tag{105}\] and then starting from there the asymptotic analysis of synoptic-scale flow dynamics. Therefore the common approach is to only force the resolved-flow momentum equation, by GW pseudomomentum-flux convergence instead of GW momentum-flux convergence, and to not have the GW parameterization acting on the thermodynamics. It is computationally simpler and also has a certain conceptual attractiveness because all of the GW impact can be attributed to the pseudomomentum-flux divergence. From the theory, this could be a viable approach, but it assumes that the resolved flow is geostrophically and hydrostatically balanced. Given the general trend to increasingly finer model grids, with also larger-scale unbalanced GWs more and more being part of the resolved flow, this is however an issue that has been investigated by [22]. Indeed they show, in comparisons between idealized GW resolving simulations and coarse-grid simulations with a GW parameterization either using the general (direct) approach or the classic (pseudomomentum) approach, that the former is much more reliable in the simulation of the GW-mean-flow interaction. The second and third aspect relate to simplifications in the implementation that are mostly due to considerations of computational efficiency: Because code parallelization typically works in vertical columns, i.e. different processors are allotted different horizontal locations, it is computationally less expensive to use parameterizations for subgrid-scale processes that do not couple different horizontal positions. In the context here this amounts to ignoring the impact of horizontal mean-flow gradients on the horizontal wavenumbers, i.e. assuming \(\dot{k}=\dot{l}=0\), and to ignoring all horizontal group-velocity components, and hence to replacing the spectral wave-action equations (71) and (72) by \[\partial_{t}\mathcal{N}+\partial_{z}\left(c_{gz}\mathcal{N} \right)+\partial_{m}\left(\dot{m}\mathcal{N}\right) = 0 \tag{106}\] \[\partial_{t}\mathcal{N}+c_{gz}\partial_{z}\mathcal{N}+\dot{m} \partial_{m}\mathcal{N} = 0 \tag{107}\] This neglects horizontal GW propagation, and it also neglects the response of the horizontal wave number to horizontal variations of the resolved flow. Moreover, in this so-called single-column approximation one also neglects horizontal GW fluxes in the GW impact on the large-scale flow, i.e. one approximates (105) by \[\left(\partial_{t}+\left\langle\mathbf{v}\right\rangle\cdot \nabla\right)\left\langle\mathbf{u}\right\rangle+f\mathbf{e}_{z}\times\left\langle \mathbf{u}\right\rangle\] \[=-c_{p}\langle\theta\rangle\nabla_{h}\langle\pi\rangle-\frac{1}{ \bar{\rho}}\partial_{z}\left(\mathcal{G}_{z}\mathbf{e}_{x}+\mathcal{H}_{z} \mathbf{e}_{y}\right) \tag{108}\] Finally, in a further attempt to gain in efficiency, one neglects, in the so-called steady-state approximation, the time dependence of the wave-action density, i.e. 
one uses instead of (106) \[\partial_{z}\left(c_{gz}\mathcal{N}\right)+\partial_{m}\left(\dot{m}\mathcal{ N}\right)=0 \tag{109}\] or, integrating in vertical wavenumber \[\partial_{z}\int dmc_{gz}\mathcal{N}=0 \tag{110}\] Given a lower boundary condition, obtained from some description of the GW sources the latter equation is integrated vertically to obtain a profile of wave-action density that agrees with what one would get, from a steady lower boundary condition, after a period sufficiently long for rays to propagate from the lower boundary to the model top. Effectively this is then an equilibrium profile in agreement with instantaneous GW propagation from the lower boundary to the model top. In practice, one always has a discrete sum of spectral components that are emitted from some parameterized source. Hence one recurs to the representation (73) and solves the single-column and steady-state simplification \[\partial_{z}\left(c_{gz,\beta}\mathcal{A}_{\beta}\right)=0 \tag{111}\] of (66) for each spectral component. As follows from the non-acceleration result discussed in subsection VI.3 the classic approach using the single-column and steady-state approximations, the first neglecting GW horizontal propagation and horizontal GW fluxes, and the second neglecting GW transience, relies exclusively on GW dissipation as a process allowing a GW impact on the resolved flow. The relevance of GW transience has been investigated by [18; 19; 21]. As shown by [18], the resolved-flow impact induced by the breaking of a single GW packet is often described quite incorrectly if GW transience is not taken into account. The consequence of this for GW parameterization in a global climate model has been studied by [19; 21]. The result is that the statistics of simulated GW fluxes, exhibiting in measurements distributions with long tails of strong fluxes, are affected significantly by a steady-state assumption, to the point that the distributions are distorted significantly away from the observational findings. The consequences of the single-column approximation have been investigated in the mean time as well (Volker et al 2023, Kim et al 2023, both to be submitted elsewhere). As with regard to the neglect of wave transience, they appear to be of leading order as well. The horizontal distribution of GW fluxes in the middle atmosphere (above 15km altitude) is very different between the single-column and the general parameterization approach so that also the large-scale winds differ significantly. In the tropics this leads to a significantly changed variability, with the quasi-biennial oscillation [23, e.g.] differing in its period conspicuously between simulations using the two approaches. Hence it appears that the general approach is considerably more realistic. ## VIII Final discussion The multi-scale theory outlined above has led to the development of an extended approach, in weather and climate models, for the parameterization of subgrid-scale GWs. As is clear by now it is considerably more realistic than classic GW parameterizations, where the GW impact has been simplified by focusing on GW pseudomomentum, where GW transience is neglected, and where the effects of horizontal GW propagation and of horizontal GW fluxes are neglected. This comes at a computational prize: Simulations with the new approach are slower, by about an order of magnitude at global-model resolutions tested so far, than those using a classic GW parameterization. 
Yet, they are more realistic, and if one wanted to resolve the GWs that turn out to matter, simulations would be and are [24; 25, e.g.] by many orders of magnitude slower than even this [21]. Hence the approach discussed here seems to be a reasonable compromise between the realism of GW permitting simulations, certainly of their own value as a data source for many studies, and the efficiency of climate simulations using classic GW parameterizations. Moreover, in view of the increasing complexity of ever higher resolved weather and climate models [26] there is an ever-increasing need for a hierarchy of models that help us gain conceptual insight into the complex atmosphere [27]. In this hierarchy the more general approach for GW parameterizations should have its place. Open issues remain, both on the theoretical and on the applied side. One aspect that seems to deserve consideration is the interaction between GWs and the geostrophic vortical mode that is the other constituting component of mesoscale atmospheric dynamics [9; 12; e.g.]. Present approaches for the description of this interaction, using tools of wave-turbulence theory [15], do not take the presence of a leading-order mean flow into account. In the ocean context this is possible, but in the atmosphere mean winds enter to leading order. This also holds for another aspect of relevance, the so-far neglect of GW-GW interactions. It is not clear how relevant this process will be in the end, and to the best of our knowledge this has not been investigated yet. Here as well available theories [14] suffer from the neglect of a mean flow. It would be interesting to obtain a corresponding extension of the theory, e.g. following the route indicated by [16]. Such work would close a conceptual gap that we still seem to have in the weakly nonlinear regime between the quasilinear regime (allowing a spectral approach) and the large-amplitude regime (where only locally monochromatic GW fields are possible). Finally, the handling of GW breaking by the saturation approach is very crude and a theory encompassing GW-turbulence interaction still needs to be derived from the basic equations. Similar considerations hold for the emission of GWs by various processes, be it flow over orography, convection, or emission by jets and fronts, to just name the most often discussed candidates. In all of these instances closed mathematical descriptions are still to be derived from multi-scale approaches, that hopefully would be able to replace present schemes. Finally, we think that in the numerical implementation of the general approach the last word has not been spoken. It would be helpful if experts in numerical mathematics gave it a closer look, especially with regard to accuracy and efficiency. Finally we also want to point out that it might be of interest to consider and extend the tools and techniques outlined here for other challenging problems of wave-mean-flow-interaction theory. The quasi-biennial oscillation of the equatorial zonal-mean zonal winds [23], e.g., is not only due to GWs but also to larger-scale tropical Kelvin and Rossby-Gravity waves. Moreover, Kelvin waves also interact with GWs in the tropics [28]. The interaction between mesoscale GWs, larger-scale tropical waves and the zonal-mean flow could be an interesting problem to be studied using these methods. Another field of application could be caustics in general, e.g. 
in nonlinear acoustics [29], quantum mechanics [30], general relativity [31], or plasma physics [32], where the spectral approach outlined here could help the development of closed and comparatively simple treatments. ###### Acknowledgements. UA thanks the German Research Foundation (DFG) for partial support through the research unit "Multiscale Dynamics of Gravity Waves" (MS-GWaves, grants AC 71/8-2, AC 71/9-2, and AC 71/12-2) and CRC 301 "TPChange" (Project-ID 428312742, Projects B06 "Impact of small-scale dynamics on UTLS transport and mixing" and B07 "Impact of cirrus clouds on tropopause structure"). YHK and UA thank the German Federal Ministry of Education and Research (BMBF) for partial support through the program Role of the Middle Atmosphere in Climate (ROMIC II: QUBICC) and through grant 01LG1905B. UA and GSV thank the German Research Foundation (DFG) for partial support through the CRC 181 "Energy Transfers in Atmosphere and Ocean" (Project Number 274762653, Projects W01 "Gravity-wave parameterization for the atmosphere" and S02 "Improved Parameterizations and Numerics in Climate Models"). UA is furthermore grateful for support by Eric and Wendy Schmidt through the Schmidt Futures VESRI "DataWave" project.
2301.11217
An eXtended Reality Offloading IP Traffic Dataset and Models
In recent years, advances in immersive multimedia technologies, such as extended reality (XR) technologies, have led to more realistic and user-friendly devices. However, these devices are often bulky and uncomfortable, still requiring tether connectivity for demanding applications. The deployment of the fifth generation of telecommunications technologies (5G) has set the basis for XR offloading solutions with the goal of enabling lighter and fully wearable XR devices. In this paper, we present a traffic dataset for two demanding XR offloading scenarios that are complementary to those available in the current state of the art, captured using a fully developed end-to-end XR offloading solution. We also propose a set of accurate traffic models for the proposed scenarios based on the captured data, accompanied by a simple and consistent method to generate synthetic data from the fitted models. Finally, using an open-source 5G radio access network (RAN) emulator, we validate the models both at the application and resource allocation layers. Overall, this work aims to provide a valuable contribution to the field with data and tools for designing, testing, improving, and extending XR offloading solutions in academia and industry.
Diego Gonzalez Morin, Daniele Medda, Athanasios Iossifides, Periklis Chatzimisios, Ana Garcia Armada, Alvaro Villegas, Pablo Perez
2023-01-26T16:53:27Z
http://arxiv.org/abs/2301.11217v1
# An eXtended Reality Offloading IP Traffic Dataset and Models ###### Abstract In recent years, advances in immersive multimedia technologies, such as extended reality (XR) technologies, have led to more realistic and user-friendly devices. However, these devices are often bulky and uncomfortable, still requiring tether connectivity for demanding applications. The deployment of the fifth generation of telecommunications technologies (5G) has set the basis for XR offloading solutions with the goal of enabling lighter and fully wearable XR devices. In this paper, we present a traffic dataset for two demanding XR offloading scenarios that are complementary to those available in the current state of the art, captured using a fully developed end-to-end XR offloading solution. We also propose a set of accurate traffic models for the proposed scenarios based on the captured data, accompanied by a simple and consistent method to generate synthetic data from the fitted models. Finally, using an open-source 5G radio access network (RAN) emulator, we validate the models both at the application and resource allocation layers. Overall, this work aims to provide a valuable contribution to the field with data and tools for designing, testing, improving, and extending XR offloading solutions in academia and industry. Extended Reality, 5G Networks, Offloading, Dataset, Traffic Models ## 1 Introduction Undeniably, the advances in immersive multimedia technologies introduced in the last five years are impressive. Extended reality (XR) technologies, which include virtual (VR) and augmented reality (AR) technologies, have made huge leaps forward both in terms of realism and user interaction [1]. A significant factor in the current turmoil on the topic has undoubtedly been the remarkable interest of related companies such as Meta (formerly Facebook), Microsoft, and Sony. In particular, Meta has decided to focus heavily on the metaverse concept, thus boosting the interest in these technologies, their potential use cases, and related issues [2]. Due to their inherent characteristics, immersive use cases nowadays play a significant role in the development of a great multitude of enabling technologies. The recent interest in XR technologies has led to enormous increases in investment, which have enabled lighter and cheaper devices to reach unprecedented levels of resolution. For example, the Varjo XR-3 head mounted display (HMD) provides a visual resolution of 70 pixels per degree, matching the human eye's resolution. Footnote 1: [https://varojo.com/products/xr-3/](https://varojo.com/products/xr-3/) Advanced XR requires not only ultra-realistic resolution but also the implementation or improvement of other algorithms that aim to enhance user experience [3]. This goal requires complex and computationally expensive algorithms, such as semantic segmentation [4, 5], to run in real time. Therefore, to expand the current limits of XR technologies, HMDs must have access to high-end hardware with powerful graphical processing units for ultra-realistic rendering supported by machine learning (ML) processes. For this reason, advanced XR HMDs such as the Varjo XR-3 are tethered, uncomfortable and expensive. Consequently, next-generation wireless systems, such as 5G and 6G, must support XR technologies, which have, as a whole, quickly become one of the killer use case families [6]. 
The goal, toward this end, is to offload XR heavy processing tasks to a nearby server, or a multi-access edge computing (MEC) platform, to loosen the in-built hardware requirements of XR HMDs while increasing their overall computing capabilities. However, XR offloading is a complex task with extreme requirements in terms of latency and throughput [7, 8], which requires a well-designed and configured network. It is not trivial for developers and researchers to have access to fully developed XR offloading implementations. The current trend is to rely on pre-recorded or modeled traffic data, which are then fed to various simulation environments or actual wireless access network deployments. Pre-recorded traffic traces allow using extremely realistic data with simple use case-agnostic tools, such as tcpreplay 2. On the other hand, traffic models allow the generation of longer traffic traces while providing greater flexibility than pre-recorded traffic data. Even though it is true that the traffic characteristics for each XR use case can be very diverse, thus making it difficult to define a general-purpose model, access to modeled or pre-recorded XR traffic data can considerably accelerate and simplify the testing and prototyping steps. Footnote 2: [https://tcpreplay.appneta.com/](https://tcpreplay.appneta.com/) A number of previous works deal with immersive multimedia traffic capture and modeling or present ready-to-use models. Authors in [9] provide details on specific use cases employing AR and VR and how one can approximately model their behaviors using the models from 5G-PPP [10, 11]. In [12], the authors modeled augmented reality downlink traffic using a classical two-state Markovian process. In [13], a complete framework aimed to model XR application is presented, alongside an accurate statistical analysis and an ad-hoc traffic generator algorithm. Furthermore, the work carried out in [13] has been exploited to create a VR traffic generator framework for the ns-3 simulator [14]. Authors in [15] model 3GPP-compliant traffic cases for next-generation mobile network applications, which include advanced gaming, but no explicit XR case is considered. Generally speaking, the previous works mostly focus on providing models for the downlink traffic. However, as described in [7, 8], advanced XR technologies require multiple complex algorithms to run simultaneously in order to provide the user with a sufficiently high level of interaction, immersiveness, and experience. These algorithms, in many cases, require high-end hardware and therefore, can be considered potential offloading candidates. They require to be continuously fed with the sensor streams captured by the XR HMD, which can be as heavy as or more than ultra-realistic rendered frames. Aligned with this idea, 3GPP has recently included very detailed traffic models for AR and XR in Release 17, differentiated according to the type of data streamed [16]. While the considered VR use cases are still centered only on distributed rendering solutions with a special focus on downlink traffic, AR traffic models also consider complex and heavy uplink traffic. In this work, we aim to provide realistic traffic traces and their associated models for two separate state-of-the-art XR offloading scenarios, both for downlink and uplink. Our goal is to complement and improve the models proposed in [16]. First, the proposed scenarios, complement the ones described in [13, 16]. 
Besides this, we provide the raw data, uploaded to [17], which can be useful for many researchers not only to use it as it is for simulation or prototyping purposes but for generating other models more suitable for their use. We also provide a set of XR traffic models obtained from the traces. Differently from other models in the state of the art, we also model the inter-packet arrival time, which, as we show, can be extremely relevant for XR offloading resource allocation algorithms design. The scenarios under consideration include full XR offloading and egocentric human segmentation, both sitting on the very edge of the current state of the art. Therefore, we believe that our contribution will provide valuable tools to design, test, improve or extend wireless network solutions both in academia and industry. To our knowledge, this is the first work that provides both an accurate traffic dataset and validated models for the mentioned use cases and applications. Our main contributions can be summarized as: * An XR offloading traffic dataset for two different relevant offloading scenarios, captured for multiple streaming resolutions; * XR traffic models obtained from the captured traces, including the inter-packet arrival time, not available in most of the models provided in the state of the art; * A thorough validation of the proposed models using a realistic 5G radio access network (RAN) emulator, showing how an accurate inter-packet arrival time can considerably improve the quality of the models for specific applications. The remaining of this paper is organized as follows: Section 2 summarizes the two reference XR offloading scenarios; Sections 3 and 4 describe the offloading architecture and the traffic capture methodology employed in the use cases, respectively; in Section 5 we focus on the statistical modeling of the cases by using the previously captured traffic; furthermore, in Section 6 we summarize artificial traffic generation with the use of the developed statistical models that we employ, in Section 7, in validation experiments that are carried out by means of simulation, in order to verify the behavioral compliance of the modeled traffic with the real captured one; lastly, final conclusions are drawn in Section 8. ## 2 XR Offloading Scenarios Our goal is to capture a relevant IP traffic dataset for two demanding XR offloading scenarios, that is, full XR offloading (scenario A) and egocentric human segmentation algorithm offloading (scenario B). In scenario A, all the processing but the sensor capture is moved from the XR device to a nearby server. Differently from [16], we consider the VR HMD to be a very light device in charge of only capturing the sensor data. The sensor data are streamed to the server, where they get processed. The sensor info is used to render a new high-definition VR frame which is sent back to the device. This is a very relevant use case for advanced and future networks, which can enable ultra-light and wearable XR devices. In our case, we consider the sensor data to be generated by a stereo camera feed and inertial sensors. The inertial sensors traffic can be neglected as its associated throughput is much lower than the stereo camera feed throughput [7, 8]. This is an extremely demanding use case as the round trip times should lay below the frame update period, i.e., around 11 ms for a device running at 90 Hz. 
While there are some techniques to slightly expand this time budget, such as XR time warp [18], the latency requirements are still tight, especially for ultra-high definition XR scenes rendering, encoding, and transmission. Scenario B focuses on the particular case of egocentric body segmentation [19], since this is a promising state-of-the-art solution for XR applications. The upstream traffic includes the stereo camera traffic while the server is sending back simple binary masks to the device in which the white pixels correspond to the user's body. The received masks are used by the XR device to render only the pixels corresponding to the user's body within the VR scene. While still a demanding offloading use case, the overall requirements are much lower than in Scenario A, since the downlink stream is just composed of single-channel binary masks. ## 3 Offloading Architecture Our offloading architecture, described in [20], relies on two main agents to share data between different peers. On one hand, we have Alga, which connects individual peers. On the other hand, we have Polyp, a data re-router, and replicator, in charge of transmitting the data from one source to one or multiple listening peers. We implemented a publisher-subscriber approach based on topics. When a client subscribes to a topic, Polyp is in charge of re-routing and replicating all the data of the topic toward this client. Similarly, when a client publishes data to a topic, Polyp ensures that these data are transmitted to all the peers subscribed to this topic. Our architecture allows direct communication between end clients without having to use Polyp. Polyp itself is a peer that can subscribe or publish to a topic. Alga is in charge of creating all the necessary connections and transmitting the data. The general representation of our architecture is depicted in Fig. 1. The first version of this offloading architecture, implemented Alga using TCP for IP traffic transmission. Besides, to efficiently avoid TCP disadvantages, we sent each frame separately encoded in JPEG. This architecture served us to use our ML egocentric body segmentation algorithm, running on a nearby server, with a commercial XR HMD, the Meta Quest 23. However, joint JPEG encoding and TCP transmission, while useful in many scenarios due to their associated reliability, as described in [20], were not originally designed to support high throughput and low latency. Therefore, we extended Alga's functionality incorporating H.264 video encoding [21] and RTP (real-time transport protocol) over UDP transmission [22]. To encode the sensor streams in H.264 and pack the data in RTP frames the architecture uses GStreamer 4. Footnote 3: [https://www.meta.com/en/quest/products/quest-2/](https://www.meta.com/en/quest/products/quest-2/) Footnote 4: [https://gstreamer.freedesktop.org/](https://gstreamer.freedesktop.org/) For traffic control reasons and to preserve compatibility with Polyp's in-built functionalities, we need to have control over the individual video frames and attach the metadata associated with them, such as the destination topic, timestamps, etc. This metadata can also be useful for performance analysis or bottleneck detection. To achieve this goal, we use RTP extended headers. Thus, the metadata is added to each video frame as an RTP extended header, which can be decoded and read on the receiving end. This is achieved using GStreamer in-built functionality. Alga's data flow for both TCP and RTP/UDP modes is depicted in Fig. 1. 
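The exact GStreamer pipelines used by Alga, including the RTP extended-header handling, are not reproduced here. As a rough sketch of the encode-packetize-transmit pattern just described, the following Python snippet builds an H.264/RTP-over-UDP sender with a test source standing in for the stereo camera, together with a receiver that simply discards the incoming stream, similar to the capture setup of Section 4. Element choices, bitrate, resolution, and port are assumptions, not the values used to record the dataset.

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

SEND = (
    "videotestsrc is-live=true "                       # stand-in for the stereo camera feed
    "! video/x-raw,width=2560,height=960,framerate=60/1 "
    "! videoconvert "
    "! x264enc tune=zerolatency speed-preset=ultrafast bitrate=20000 "
    "! rtph264pay pt=96 config-interval=1 mtu=1400 "   # RTP packetization
    "! udpsink host=127.0.0.1 port=5600"               # assumed port, localhost transmission
)
RECV = (
    'udpsrc port=5600 caps="application/x-rtp,media=video,encoding-name=H264,payload=96" '
    "! rtph264depay ! fakesink"                        # discard, as in the capture setup
)

receiver = Gst.parse_launch(RECV)
sender = Gst.parse_launch(SEND)
receiver.set_state(Gst.State.PLAYING)
sender.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()   # stop with Ctrl+C; capture the loopback traffic with Wireshark
```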
From the sending peer, the frames are fed to Alga in raw RGB format. Alga injects the raw frame along with its associated metadata into the GStreamer encoding and transmitting pipeline. If there are multiple peers subscribed to the same topic, the traffic is replicated and routed by Polyp, leaving this traffic untouched and just accessing the headers to read the target destination. In both this case and the case of direct traffic transmission between end peers using just Alga, the GStreamer pipeline receives and decodes the RTP frame. Once decoded, the frame can be accessed by the application layer. ## 4 Traffic Capture Methodology As described in Section 3, our offloading architecture implementation has already been tested on a full end-to-end offloading solution using a commercial XR device, the Meta Quest 2. However, we decided to use a high-end laptop to emulate the XR offloading IP traffic for two main reasons: * our architecture is optimized for wireless offloading via WiFi or advanced RAN networks such as 5G. We need to capture the data on the transmitting peer to avoid any overhead introduced by the wireless transmission, traffic routing, congestion, etc. These potential sources of overhead can lead to latencies, jitter, or packet loss which strongly depend on the used configuration, wireless technology, and other external factors. It is out of the scope of this work to model the network behavior and its associated configuration. However, we could not find an efficient manner to capture the IP traffic being transmitted from the XR device. * the Meta Quest 2 is not capable of handling demanding XR offloading use cases due to its limited computation capabilities. Our target is to cover XR offloading use cases which are still not possible with current XR or wireless access points technologies. Following these considerations, all the data were captured using a high-end laptop, with an Intel Core i7-10870H CPU @ 2.20 GHz x 16, and 16 GB of RAM, running Ubuntu 18.04 LTS. The offloading architecture was set up and configured identically to an actual XR offloading deployment. For simplification and as we were not using an actual XR HMD, the IP traffic from each stream (scenario A and scenario B uplink Fig. 1: The proposed offloading architecture strategy and simplified data flow for a general multi-peer scenario (top). Alga data flow for both TCP-JPEG and RTP-H-264 implemented transmission pipelines (bottom). and downlink streams) was captured via separate capture runs. Therefore, we used prerecorded data to capture the IP traffic. According to the scenarios described in Section 2 we recorded data for the following streams: * Uplink stereo camera stream**: This corresponds to the frontal stereo camera data, which are transmitted in both offloading scenarios A and B. The recorded data were obtained from the same stereo camera used in the end-to-end offloading solution of [19]. We recorded a continuous stereo video stream of 2560 \(\times\) 960 resolution at 60 Hz, the maximum supported by the camera. While XR devices are expected to run at rates above 90 Hz, the sensor data are not required to be updated so fast [7, 8]. The prerecorded data had a length of 15 minutes. * Downlink rendered frames**: This corresponds to the immersive frames rendered on the server in scenario A. In this case, we used a high-definition stereo video from a first person video game. The recorded video has a resolution of 3840 \(\times\) 1920 and an update rate of 90 Hz. 
* Downlink segmentation masks**: This corresponds to the binary pixel classification output by the egocentric body segmentation ML algorithm in scenario B. From the stream 1, we estimated the black and white binary single channel masks for each frame using the segmentation algorithm described in [23]. Therefore, the resolution and update rate is the same as in stream 1 (2560 \(\times\) 960 @ 60 Hz). To expand and add extra value to the presented dataset, we downscaled the three streams to different sets of resolutions/update rates that can be useful for potential researchers and applications. A summary of all the resolutions and update rates we used to generate the traffic data is shown in Table I. Each of the resolution/frames per second (FPS) and transmission direction (uplink or downlink) streams was captured separately. To capture the IP traffic, the client reads the individual raw frames, one by one, from the selected stream and sends them using the described architecture. As both the client and server run on the same machine to accelerate and simplify the capture process, we set a streaming client connected to a server, which just discards the incoming packets using GStreamer's _Fakesink_ module to avoid any additional overhead. There is no instance of Polyp and both client and server are directly connected using Alga in H.264-RTP mode: the raw frames are encoded using H.264 and packetized as RTP frames to be transmitted via UDP, in localhost. The IP traffic was captured using Wireshark, generating an individual packet capture (PCAP) file for each capture run. The final simplified capturing setup is depicted in Fig. 2. Each capture run had a duration of 10 minutes, for a total of 110 minutes of data. ## 5 Traffic Modeling In addition to releasing the PCAP files publicly, we made a systematic effort to statistically model the most relevant video streaming IP traffic parameters: i) RTP frame size, defined as the size of each individual RTP frame, ii) inter-frame interval, that is, the time between individual RTP frames and iii) inter-packet interval, i.e., the time between successive packets within an individual RTP frame. These parameters are depicted schematically in Fig. 3. The main goal is to allow potential researchers, in the context of wireless communication systems analysis and evaluation, to generate realistic XR IP traffic, online or offline, based on the models derived. ### _Data Pre-processing_ The PCAP files are large and contain a lot of information that can be useful in future works, such as the transmitted bytes themselves or other relevant metadata. To derive the traffic models, we store the payload, timestamps and the new RTP frame marker bit of the individual IP packets coming into the arbitrary port used for transmission. This bit information is necessary to identify a new frame. Data pre-processing takes place in two steps, as follows. In the first step, we obtain a list of all the captured IP packets, ordered according to their timestamp. 
For each packet, we keep the payload in bytes, the timestamp, and a custom boolean indicator, i.e., a combination of the marker bit and the timestamp separation, which determines whether the IP packet initiates a new RTP frame or not. This first pre-processing step is implemented using Python and the Scapy library (https://scapy.net/) to parse the PCAP file.

In the second step, we go through all the IP packets and group them into individual RTP frames according to the custom boolean indicator. Then we estimate, for each RTP frame, the total size in bytes (RTP frame size), the time between consecutive frames (inter-frame intervals), and the time between consecutive IP packets (inter-packet intervals). These parameters are stored in three separate arrays and saved as an NPY (Python NumPy format) file. These NPY files are the ones used to model the IP traffic. This second step is implemented in Python as well. These two steps are applied to all the captured PCAP files, and the final outputs are stored as individual files to easily identify each capture run.

In Table II we show the basic statistics, i.e., mean value, standard deviation, and 95th percentile, of all the captured data cases for the frame size, inter-frame interval, inter-packet interval, and IP packet size. The packet size information is useful to generate synthetic data from the fitted models.

| Resolution @ FPS | Stream 1 | Stream 2 | Stream 3 |
| --- | --- | --- | --- |
| High | 2560 \(\times\) 960 @ 60 | 3840 \(\times\) 1920 @ 90 | 2560 \(\times\) 960 @ 60 |
| Medium | 1920 \(\times\) 720 @ 60 | 3840 \(\times\) 1920 @ 72 | 1920 \(\times\) 720 @ 60 |
| Low | 1280 \(\times\) 480 @ 60 | — | 1280 \(\times\) 480 @ 60 |

TABLE I: Uplink and downlink resolutions and frame rates used to generate the proposed XR IP traffic dataset.

Fig. 2: Simplified representation of the final setup used to capture the XR offloading IP traffic dataset.

Fig. 3: Simplified representation of IP packets (black arrows) packed into several RTP frames, illustrating the RTP frame size, inter-frame interval, and inter-packet interval times.

### _Prior Data Analysis_

Before taking any modeling decisions, we studied the histograms of the pre-processed data. In particular, we plotted the histograms of all the parameters to be modeled for all the captured data. In Fig. 4 we present examples of the RTP frame size, inter-frame interval, and inter-packet interval histograms for the high-resolution Streams 1 and 3 (at 60 Hz), as well as Stream 2 at 90 Hz. We observe, in all streams, that the inter-frame intervals are evenly distributed around a mean value that coincides with the frame update period corresponding to the selected FPS value. Due to variable rate encoding, which guarantees low latency, the coding rate and the frame size may include peaks and variations. For the 60 Hz captured data this is not an issue, since the encoder is faster than the frame update period for all cases and resolutions. However, for very high resolutions and frame update rates, the coding rate needs to adapt dynamically, resulting in frame size distributions with more than one peak, as shown for Stream 2 in Fig. 4. This also affects the standard deviation of the inter-frame interval, which is reduced in the Stream 2 cases due to the stricter encoding time requirements.
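For concreteness, the two-step pre-processing described in the _Data Pre-processing_ subsection can be sketched with Scapy and NumPy as follows. This is only an illustrative sketch: the file name, UDP port, and the exact new-frame heuristic (here, the RTP marker bit of the previous packet combined with a timestamp-gap threshold) are assumptions and may differ from the scripts actually used to build the dataset.

```python
import numpy as np
from scapy.all import rdpcap, UDP

PORT = 5004            # placeholder: port used for the RTP stream
GAP_THRESHOLD = 2e-3   # placeholder: a gap larger than this also signals a new frame

# Step 1: ordered list of (timestamp, payload size, starts-new-frame flag).
packets = []
prev_ts, prev_marker = None, True
for pkt in rdpcap("capture.pcap"):
    if UDP not in pkt or pkt[UDP].dport != PORT:
        continue
    payload, ts = bytes(pkt[UDP].payload), float(pkt.time)
    if len(payload) < 12:                   # skip anything shorter than an RTP header
        continue
    new_frame = prev_marker or (prev_ts is not None and ts - prev_ts > GAP_THRESHOLD)
    packets.append((ts, len(payload), new_frame))
    prev_marker = bool(payload[1] & 0x80)   # RTP marker bit: MSB of the 2nd header byte
    prev_ts = ts

# Step 2: group packets into RTP frames; compute frame sizes, inter-frame and
# inter-packet intervals.
frame_sizes, inter_frame, inter_packet = [], [], []
size, frame_ts, last_ts = 0, None, None
for ts, length, new_frame in packets:
    if new_frame and size > 0:
        frame_sizes.append(size)
        inter_frame.append(ts - frame_ts)
        size = 0
    if new_frame:
        frame_ts = ts
    elif last_ts is not None:
        inter_packet.append(ts - last_ts)
    size += length
    last_ts = ts

np.save("stream1_high.npy",
        np.array([frame_sizes, inter_frame, inter_packet], dtype=object))
```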
Regarding the potential distributions to model the target parameters, we observe that in both Streams 1 and 3 these parameters can be modeled with unimodal continuous distributions. On the other hand, the distribution of the RTP frame sizes of Stream 2 presents two local maxima. These local maxima are smaller for the higher frame update rate (90 Hz) depicted in Fig. 4. Nevertheless, we decided to model the Stream 2 RTP frame sizes as continuous unimodal distributions as well and check whether they provide a sufficiently good fit before testing multimodal distributions.

In Fig. 4 we can also observe that the inter-packet interval distribution does not seem to be unimodal, since slight changes in convexity appear. However, the inter-packet intervals lie on the order of microseconds, as shown in Table II. On that scale, many external sources can affect the measured value, such as particular operating system operations, Wireshark processing, etc. Again, modeling these possible external factors that can affect the inter-packet intervals is out of the scope of this work. Therefore, we choose to move forward with the simple approach of modeling the inter-packet interval time as a unimodal continuous distribution.

Fig. 4: Histograms (in blue) and cumulative distribution functions (CDF, in red) from the captured data for Stream 1 (top), 2 (middle), and 3 (bottom) for the target parameters: inter-packet interval times (left), inter-frame interval times (center) and frame sizes (right).

**Stream 1 – Uplink stereo camera**

| | Resolution | Mean | Std. Dev. | 95th perc. |
| --- | --- | --- | --- | --- |
| Frame size (bytes) | Low | 34602.44 | 9529.36 | 55735 |
| | Medium | 86149.87 | 19936.04 | 132384 |
| | High | 232084.33 | 28141.99 | 269008 |
| Inter-frame interval (ms) | Low | 16.76 | 0.26 | 17.12 |
| | Medium | 16.76 | 0.50 | 17.53 |
| | High | 16.80 | 2.57 | 21.29 |
| Inter-packet interval (μs) | Low | 3.94 | 6.08 | 17.10 |
| | Medium | 3.53 | 5.47 | 17.27 |
| | High | 4.55 | 11.02 | 6.43 |
| IP packet size (bytes) | Low | 1280.79 | 356.58 | 1428 |
| | Medium | 1364.81 | 244.83 | 1428 |
| | High | 1403.88 | 154.31 | 1428 |

**Stream 2 – Downlink rendered frames**

| | Update rate | Mean | Std. Dev. | 95th perc. |
| --- | --- | --- | --- | --- |
| Frame size (bytes) | 72 Hz | 207968.42 | 122929.70 | 396402 |
| | 90 Hz | 163548.89 | 116837.86 | 339396 |
| Inter-frame interval (ms) | 72 Hz | 13.88 | 0.05 | 13.94 |
| | 90 Hz | 11.11 | 0.04 | 11.17 |
| Inter-packet interval (μs) | 72 Hz | 3.41 | 9.18 | 4.85 |
| | 90 Hz | 3.66 | 9.08 | 6.91 |
| IP packet size (bytes) | 72 Hz | 1400.04 | 171.38 | 1428 |
| | 90 Hz | 1392.66 | 191.91 | 1428 |

**Stream 3 – Downlink segmentation masks**

| | Resolution | Mean | Std. Dev. | 95th perc. |
| --- | --- | --- | --- | --- |
| Frame size (bytes) | Low | 4968.50 | 2175.03 | 7708 |
| | Medium | 8273.98 | 3921.00 | 13970 |
| | High | 24378.90 | 11440.59 | 43458 |
| Inter-frame interval (ms) | Low | 16.76 | 0.20 | 17.05 |
| | Medium | 16.75 | 0.61 | 17.77 |
| | High | 17.10 | 3.30 | 22.44 |
| Inter-packet interval (μs) | Low | 7.01 | 6.17 | 15.04 |
| | Medium | 5.87 | 9.61 | 15.34 |
| | High | 7.54 | 24.80 | 24.63 |
| IP packet size (bytes) | Low | 749.83 | 517.39 | 1428 |
| | Medium | 933.53 | 527.48 | 1428 |

TABLE II: RTP frame size, inter-frame interval, inter-packet interval and individual IP packet size basic statistics from the captured data.

### _IP Traffic Models_

There is a wide range of well-established and commonly used continuous distributions for the parameters under consideration. To find the best candidate distributions that fit our data, we used Python's Scipy library [24]. Scipy is capable of modeling more than 90 different continuous distributions. We decided to fit all the available distributions and evaluate their goodness of fit using the Kolmogorov-Smirnov (KS) test [25]. The KS test quantifies the distance between the empirical CDF \(F_{n}(x)\) of a sample and the fitted CDF of an arbitrary distribution \(F(x)\) as

\[\text{KS}=\sup_{x}|F_{n}(x)-F(x)|, \tag{1}\]

where \(\sup_{x}\) is the supremum of the set of distances across \(x\) values. The lower the KS test value, the better the fit of the candidate distribution to the captured data.

The KS test results of the 15 best-fitted distributions for each parameter and stream type are depicted in Fig. 5, sorted from best (left) to worst (right). We observe that Johnson's \(S_{U}\) distribution [26] obtains the best mean KS value across all the captured data. This distribution was proposed by N. L. Johnson in 1949 and has historically been used in finance. The key characteristic of Johnson's \(S_{U}\) distribution is its flexibility, which originates from its four parameters that allow the distribution to be either symmetric or asymmetric. The probability density function (pdf) is expressed as

\[f(x,\gamma,\delta,\lambda,\varepsilon)=\frac{\varepsilon}{\delta\,m(x,\gamma,\delta)}\,\phi\big\{\lambda+\varepsilon\log\big[k(x,\gamma,\delta)+m(x,\gamma,\delta)\big]\big\}, \tag{2}\]

where

\[k(x,\gamma,\delta)=\frac{x-\gamma}{\delta},\quad m(x,\gamma,\delta)=\sqrt{k(x,\gamma,\delta)^{2}+1}, \tag{3}\]

with \(\gamma\) and \(\delta\) being the location and scale parameters, respectively, \(\lambda\) and \(\varepsilon\) the Johnson's \(S_{U}\) specific shape parameters, and \(\phi(\cdot)\) the pdf of the normal distribution.

By further inspecting Fig. 5, we notice that for the RTP frame sizes the only case in which Johnson's \(S_{U}\) does not provide the best fit is Stream 2 @ 90 Hz, for which the Exponential Normal distribution is the best. However, as we can see in Fig. 6, the practical differences between the two distributions for Stream 2 @ 90 Hz are small. Besides, even if the Johnson's \(S_{U}\) fit is not as accurate as for the other RTP frame size distributions (see Fig. 7), the measured KS values are low enough, with a good fit for the larger packet sizes. Therefore, we decided to model and evaluate the RTP frame sizes using the Johnson's \(S_{U}\) distribution for all the captured data. Similarly, we decided to use Johnson's \(S_{U}\) distribution to model also the inter-frame and inter-packet intervals.
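The fitting and model-selection step can be reproduced with SciPy roughly as follows; the candidate list below is only a small illustrative subset of the distributions that can be swept, and the input array name is a placeholder.

```python
import numpy as np
from scipy import stats

# Placeholder input: one of the pre-processed NPY files (frame sizes are element 0).
frame_sizes = np.load("stream1_high.npy", allow_pickle=True)[0]
frame_sizes = np.asarray(frame_sizes, dtype=float)

candidates = ["johnsonsu", "exponnorm", "norm", "lognorm", "gamma"]  # subset only
results = {}
for name in candidates:
    dist = getattr(stats, name)
    params = dist.fit(frame_sizes)                           # maximum-likelihood fit
    ks = stats.kstest(frame_sizes, name, args=params).statistic
    results[name] = (ks, params)

# The candidate with the lowest KS distance is retained for this parameter.
best = min(results, key=lambda n: results[n][0])
print(best, results[best][0])
```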
The parameters of Johnson's \(S_{U}\) distribution for all the traffic parameters under consideration and all the captured data are summarized in Table VI in the Appendix.

Fig. 5: Best KS-test-scoring models (from left to right) for the target parameters to be modeled: RTP frame sizes (left), inter-frame interval times (center), and inter-packet interval times (right).

Fig. 6: Johnson's \(S_{U}\) and Exponential Normal fitted distributions' probability density functions (PDF) and CDFs for the Stream 2, 90 Hz case.

## 6 Realistic Traffic Generation

Our next goal is to build a tool that allows the generation of realistic XR offloading IP traffic. Such a tool is useful for researchers and application developers to generate and use synthetic data for analysis or to incorporate it into complex link-level or system-level simulations. While other state-of-the-art XR video traffic models [16] only consider the frame size and inter-frame interval for generating synthetic data, we believe that including the inter-packet interval data extends the applicability of our models to a wider range of research efforts. For instance, when designing novel or advanced resource allocation techniques, an accurate inter-packet interval model might be extremely useful and lead to better and more appropriate solutions.

To create synthetic data we have to generate random values from the fitted distributions. Towards this end, we used Scipy's built-in _rvs_ function, which generates random values from a specific distribution. In addition, we need the size of the individual RTP packets. In the real captured data this is not constant, as shown in Table II in terms of the IP packet sizes, especially for Stream 3, since the way the segmentation mask is coded and organized in RTP packets differs from the regular color video streams (1 and 2). In general, the packets of each RTP frame have a fixed size chosen in the encoding/RTP framing pipeline (1442 bytes in our case). The first (including the RTP header) and the last are usually different. Depending on the chosen pipeline and configuration there may be smaller packets in between as well, as in our case. However, as we can observe in the packet size histograms, these phenomena happen rarely. The significant difference between the mean and the maximum packet sizes in low-throughput streams, such as Stream 3, is expected because the number of packets between the first and last within an RTP frame is small (fewer than 5 in the low-resolution Stream 3 case). Therefore, we consider two IP packet size options: i) the mean size value, as in Table II, or ii) the 95th percentile value. We refer to case i) as _Mean Packet_ and case ii) as _Max Packet_.

Once we have the generators and packet sizes, we can easily define a procedure for synthetic realistic IP traffic generation, as described in Alg. 1. For each RTP frame among the \(N_{RTP}\) to be generated, we begin by getting its size \(s_{RTP}\) from the selected RTP generator and by choosing the IP packet size \(s_{IP}\) equal to Max Packet or Mean Packet. Then we compute the total number of packets \(N_{IP}\) simply by dividing \(s_{RTP}\) by \(s_{IP}\). We continue by generating \(N_{IP}\) packets of size \(s_{IP}\), each with a specific timestamp \(t_{s}\). The timestamp is computed by adding a random inter-packet interval \(\Delta t_{IP}\) to the previous packet timestamp. Once all \(N_{IP}\) packets are generated, a new randomly picked inter-frame interval \(\Delta t_{IF}\) is added to the current timestamp. The random values are drawn from the fitted distributions, and the above procedure is repeated for each RTP frame.
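A minimal Python sketch of this generation procedure (Alg. 1) is shown below, using frozen SciPy `johnsonsu` distributions as the three generators. All parameter values are placeholders only; in practice they should be taken from the fitted models for the chosen stream and resolution.

```python
from scipy.stats import johnsonsu

# Frozen generators with PLACEHOLDER parameters (a, b, loc, scale).
size_gen = johnsonsu(1.2, 1.5, loc=2.3e5, scale=3.0e4)    # RTP frame size [bytes]
ifi_gen  = johnsonsu(0.1, 2.0, loc=16.8e-3, scale=5e-4)   # inter-frame interval [s]
ipi_gen  = johnsonsu(0.3, 1.1, loc=4.5e-6, scale=2e-6)    # inter-packet interval [s]
S_IP = 1428                                               # "Max Packet" option [bytes]

def generate(n_rtp, t=0.0):
    """Yield (timestamp, packet_size) pairs following the procedure of Alg. 1."""
    for _ in range(n_rtp):
        s_rtp = max(float(size_gen.rvs()), S_IP)          # 1) draw the RTP frame size
        n_ip = max(int(round(s_rtp / S_IP)), 1)           # 2) number of IP packets
        for _ in range(n_ip):                             # 3) packets within the frame
            yield t, S_IP
            t += max(float(ipi_gen.rvs()), 0.0)
        t += max(float(ifi_gen.rvs()), 0.0)               # 4) jump to the next frame

trace = list(generate(1000))   # e.g. feed to a simulator or dump to a PCAP file later
```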
The described algorithm can be easily implemented in any programming language and therefore used in any simulation environment. Additionally, it can be used to create synthetic traffic traces by storing the generated packets in a separate PCAP file and utilizing them at a later time.

## 7 Validation Experiments

In this section, we test the traffic generated with the methodology described in the previous section over a realistic RAN scenario, to determine its ability to accurately mimic the behavior of the captured XR offloading data traffic. To do this, we first compare the average throughput obtained from the captured data with that of the corresponding generated synthetic data. Then, we study the behavior of the different synthetic data models in terms of application layer throughput and latency in the most relevant offloading scenarios using a real-time 5G RAN emulator. Finally, we thoroughly examine the impact of the type of traffic model used on resource allocation by comparing synthetic data from different models with actual XR traffic.

The first step is to compare the mean throughput of the synthetic data generated using the modeled Johnson's \(S_{U}\) distributions with that of the captured data. The mean throughput results of the captured and synthetic data, for both the Max Packet and Mean Packet cases (IP packet sizes), are shown in Table III. The differences between the synthetic and captured data throughput are also included as percentage differences. We can observe that the throughput differences are low in all cases, with a peak of 1.29% for the Stream 2, 72 Hz case. All other cases present differences below 1%, for both IP packet sizes, with an average error of 0.435% and 0.438% for the Max Packet and Mean Packet cases, respectively.

The next evaluation step is to compare the behavior of both synthetic and captured data on a realistic advanced 5G RAN deployment. Towards this end, we used the open-source real-time 5G RAN emulator FikoRE [17, 27]. FikoRE has been specifically designed for application layer researchers and developers to test their solutions on a realistic RAN setup. It supports the simulation of multiple background user equipment (UE) while handling high actual IP traffic throughput (above 1 Gbps).

Fig. 7: Histograms (in blue) and CDFs (yellow) from the captured data for Stream 1 for the parameters: inter-packet interval times (left), inter-frame interval times (center) and frame sizes (right). On top, the Johnson's \(S_{U}\) fitted distribution's PDF (red) and CDF (green).

For our validation experiments, FikoRE runs as a simulator since we are not injecting actual IP traffic, but rather the traces from the captured or synthetic data. We tested the two scenarios described in Section 2, with the following setup:

* **Full Offloading**: On the downlink side, we chose to evaluate the 72 Hz rendered frames stream since it represents the current rendering offloading possibilities of commercial XR devices such as the Meta Quest 2. The Meta Quest 2 is capable of performing offloaded rendering, via a WLAN network, to a laptop in charge of rendering the immersive scene. The recommended setup is 72 Hz, with a rendering resolution of 1832 x 1920 per eye, which is slightly smaller than our captured data for the rendered frames stream. The uplink corresponds to the sensor stream (Stream 1) with a stereo resolution of 1920 x 720.
* **Egocentric Human Body Segmentation**: Successful deployment of this scenario was achieved in previous works [19]. While our deployment uses smaller resolutions, we evaluated the scenario in which both Streams 1 and 3 use a resolution of 1920 x 720.

Both offloading scenarios were evaluated in three different network configurations:

* **Multiple background UEs and a single immersive UE with proportional fair (PF) (Configuration A)**: In this scenario we simulated multiple UEs, each transmitting 5 Mbps of traffic in each direction. The throughput is not continuous, but is synthetically generated using the video streaming models from [28], applicable to streaming applications such as Netflix. The emulator is set up with a single carrier of 100 MHz bandwidth in the 26.5 GHz millimeter wave (mm-wave) frequency band. Resource allocation takes place based on the PF metric [29], using 1:1 (downlink:uplink) time division duplexing (TDD). We tested this network with a single immersive UE and 0, 20, 40, 60, 80 and 100 background UEs with 5 Mbps traffic in each direction. The network starts saturating around 80 simultaneous UEs.
* **Multiple immersive UEs with PF (Configuration B)**: In this scenario we have multiple immersive UEs, all using the same synthetic data. The throughput per UE is much higher than in Configuration A, so we increased the total bandwidth to 200 MHz in order to be able to simulate more UEs before reaching network saturation.
* **Multiple immersive UEs with maximum throughput (MT) (Configuration C)**: This setup is identical to Configuration B, only changing the resource allocation metric used from PF to MT [29].

All three configurations have in common the simulation parameters included in Table IV. Each individual simulation run has a duration of 500 seconds and is repeated for each combination of configuration, number of UEs, offloading scenario (A and B), and type of data (synthetic with both packet types and captured data). In all cases, there is a "principal" immersive UE closer to the emulated gNB than the other simulated UEs, from which we obtained the measurements used in this analysis. The goal is to study and compare the behavior of each type of IP traffic data at the application level, so we evaluated the application layer throughput and latency.

| Simulation Parameters | |
| --- | --- |
| TDD Configuration | 1(UL):1(DL) |
| Modulation | 256-QAM |
| Frequency Band | 26.5 GHz |
| UE MIMO Layers | 2 |
| Allocation Type | 0 |
| Allocation Configuration | 1 |
| Scenario | Rural Macrocell |

TABLE IV: Common simulation parameters used in all the experiment runs.

The throughput is measured as the total mean throughput transmitted by all UEs. The latency is measured only for the principal immersive UE. All the stochastic models, including the initial positions of the non-principal UEs, have the same random seed across the experiments. The principal UE is placed 100 m away from the gNB to ensure it has priority regardless of the metric used for allocation, while the rest are placed randomly, at a longer distance.

**Stream 1 – Uplink stereo camera**

| | Captured (Mbps) | Max Packet model (Mbps) | Error (%) | Mean Packet model (Mbps) | Error (%) |
| --- | --- | --- | --- | --- | --- |
| Low | 16.51 | 16.49 | 0.12 | 16.49 | 0.12 |
| Med. | 41.11 | 41.15 | 0.09 | 40.96 | 0.36 |
| High | 110.55 | 110.50 | 0.05 | 110.27 | 0.25 |

**Stream 2 – Downlink rendered frames**

| | Captured (Mbps) | Max Packet model (Mbps) | Error (%) | Mean Packet model (Mbps) | Error (%) |
| --- | --- | --- | --- | --- | --- |
| 72 Hz | 119.79 | 121.36 | 1.29 | 120.83 | 0.86 |
| 90 Hz | 117.76 | 117.74 | 0.02 | 118.60 | 0.71 |

**Stream 3 – Downlink segmentation mask**

| | Captured (Mbps) | Max Packet model (Mbps) | Error (%) | Mean Packet model (Mbps) | Error (%) |
| --- | --- | --- | --- | --- | --- |
| Low | 2.37 | 2.35 | 0.89 | 2.36 | 0.51 |
| Med. | 3.95 | 3.92 | 0.66 | 3.94 | 0.35 |
| High | 11.40 | 11.44 | 0.38 | 11.44 | 0.32 |

TABLE III: Mean throughput of the generated synthetic data in comparison with the captured traffic's throughput.

Fig. 8: Mean downlink throughput measured for Scenarios A and B with Configuration C for the captured and synthetic data.

The application layer mean throughput results obtained for the downlink transmission of Scenario A with Configuration C are depicted in Fig. 8. It is evident that the difference between the real and the modeled data is very low over the whole range of UEs. We observe that from 8 UEs onward the network starts saturating and the throughput does not increase linearly. This is because UEs with worse channel quality get fewer allocation grants. The measured latency behaves similarly, showing small differences. Furthermore, similar results were obtained for all other configurations and scenarios.

Overall, the throughput and latency differences between the captured and synthetic data obtained from the FikoRE simulations are gathered in Table V. These differences are expressed by the relative mean error across emulation runs with different numbers of UEs. We observe that they are very low, below 2%, in all cases. Besides, the differences between the Max Packet and Mean Packet cases of the IP packet sizes are negligible, with a mean difference of less than 0.04%. These results validate the goodness of fit of the proposed models for application-level simulations.

As a further step, we assess how well the synthetic traffic data generated with our model behave in the lower layers of the stack compared to the captured data. In particular, we study the resource allocation differences when using the captured or synthetic data as input for the simulator. Besides, we highlight the necessity of an accurate model which also includes the inter-packet intervals, contrary to the models proposed in [16].
In this context, we generated synthetic data using a simple Normal model based on the statistical metrics from the captured data included in Table II. However, instead of generating multiple IP packets within an RTP frame, we generated all the bits within the RTP frame at the same timestamp. By doing so, we highlight not only the necessity of an accurate model in terms of RTP frame size and inter-frame interval, but also the relevance of the inter-packet interval models. We refer to this simpler model as the "Norm" model.

| | A: Throughput error DL | A: Throughput error UL | A: Latency error DL | A: Latency error UL | B: Throughput error DL | B: Throughput error UL | B: Latency error DL | B: Latency error UL |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Max Packet size (%) | 0.85 | 0.06 | 0.07 | 0.27 | 0.28 | 0.06 | 1.48 | 0.23 |
| Mean Packet size (%) | 0.59 | 0.13 | 0.58 | 0.38 | 0.35 | 0.13 | 1.55 | 0.51 |

TABLE V: Emulated application-level throughput and latency comparison between captured and synthetic traffic for Scenario A (full offloading) and Scenario B (deep learning offloading). The synthetic experiments are repeated using the maximum and mean packet sizes.

For this validation step, we also used FikoRE, which is capable of logging every single allocation step, that is, how the resources are allocated at each subframe. We use this information to compare the differences in terms of the allocated throughput for each resource block (RB) within the allocation grid. More specifically, we measure the number of bits allocated to each RB and each UE. The number of RBs along the time and frequency axes depends on the bandwidth and the selected numerology. The allocation error is estimated by comparing the bits allocated to each RB and UE when using the synthetic data from the different models and when using the actual XR traffic. We can build, for each UE, the allocation matrices illustrated in Fig. 9. These matrices express the resource allocation differences, or allocation errors, between a selected model and the actual XR traffic in bits per second, so the metric does not depend on the total duration of the simulation run. To estimate the allocation error of the entire grid as a percentage of the total allocated throughput, we use the formula

\[e(\%)=100\,\frac{\sum_{i=1}^{K}|t_{c}(i)-t_{m}(i)|}{\sum_{i=1}^{K}t_{c}(i)}, \tag{4}\]

where \(t_{m}(i)\) and \(t_{c}(i)\) denote the allocated throughput of the model being evaluated and of the captured data, respectively, for the \(i\)th RB (\(1\leq i\leq K\)) along the total simulation time, with \(K\) the total number of RBs.

To really understand how the different sources of traffic data are being allocated, we decided to simulate a single UE, the principal one, in each run. By doing this, we avoid the effects of the selected configuration (such as the allocation metric, UE channel quality, etc.) that directly affect the resource allocation procedure and could lead to inaccurate conclusions. Using the same configuration parameters described in Table IV, we tested multiple combinations of total bandwidth and numerology \(\mu\), which directly affect how the resource allocation grid is built, for a single immersive UE. Specifically, we tested bandwidths of 40 MHz with \(\mu=1\), 100 MHz with \(\mu=2\), 200 MHz with \(\mu=2\), and 200, 400, and 800 MHz with \(\mu=3\). Each simulation run had a duration of 500 seconds.
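For reference, once the per-RB allocated throughput of a captured run and a modeled run are available as arrays (the names below are hypothetical), the grid-level error of Eq. (4) reduces to a few NumPy operations:

```python
import numpy as np

def allocation_error(t_captured, t_model):
    """Eq. (4): allocation error (%) between two per-RB allocated-throughput grids."""
    t_c = np.asarray(t_captured, dtype=float).ravel()   # K resource blocks
    t_m = np.asarray(t_model, dtype=float).ravel()
    return 100.0 * np.abs(t_c - t_m).sum() / t_c.sum()
```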
The simulations were repeated for each configuration, scenario (A and B), and source of data (captured, Johnson's \(S_{U}\) with Max Packet size, and Norm). The synthetic data generated using the Mean Packet size presented no evident differences with respect to the Max Packet size option. Observing the measured allocation errors depicted in Fig. 10, we can extract several conclusions. First, we notice that the allocation error is considerably higher for the simpler Norm model compared to the proposed Johnson's \(S_{U}\) model. Besides, the error difference increases rapidly in favor of the Johnson's \(S_{U}\) model as we configure the emulator with more total bandwidth. Increasing the numerology also negatively impacts the performance of the Norm model. For low bandwidths, the error difference is small, as an RTP frame does not fit in a single subframe and has to be transmitted along several subframes. Therefore, the entire resource allocation grid gets saturated and the allocation differences, being estimated in comparison with the total allocated throughput in each RB, become hard to measure. On the contrary, for higher bandwidths, not all the RBs are allocated for each RTP frame and the differences become more noticeable. Our intuition is that the difference that we observe for higher bandwidths could also be observed if we could discard the saturated subframes. In addition, the allocation error of the proposed Johnson's \(S_{U}\) models remains almost constant across the tested configurations, which is clearly not the case for the Norm model. Thus, we get a strong hint of the importance of obtaining accurate models which include the inter-packet intervals, especially for high numerologies and bandwidths, in designing successful resource allocation techniques.

Fig. 9: Example of allocation error matrices for both transmission directions (UL and DL) between the captured and synthetic data. The error is estimated for the entire grid. This case corresponds to Scenario A and Configuration B.

Fig. 10: Measured allocation errors between the captured and synthetic data using our emulation tool for each transmission direction (UL and DL) and validation scenario (A and B) for different numerology and bandwidth configurations. All simulations were done for a single UE.

## 8 Conclusions

This work provided realistic traffic traces and associated models for XR offloading scenarios to complement and improve upon the models proposed in previous works, such as [16]. We proposed two XR offloading scenarios that are at the cutting edge of the current state of the art. The first scenario represents a full offloading solution in which the XR HMD captures and transmits sensor data to a nearby server or MEC facility for processing and rendering ultra-high-definition immersive frames, which are then transmitted back to the device. The second scenario focuses on offloading heavy ML algorithms, such as the real-time egocentric human body segmentation algorithm, which allows users to see themselves within the virtual scene.

The traffic data were captured using a recently introduced offloading architecture, described in [20], with additional functionality presented in this work. To avoid any uncontrollable overhead that we did not aim to model, the data have been captured on the sender side, using a localhost network. The IP traffic was captured for multiple resolutions for both the uplink and downlink streams and both offloading scenarios.
The collected data were cleaned and post-processed, and we conducted a thorough analysis to determine the most appropriate modeling approach. We modeled the three main components of video traffic, that is, the frame size, inter-frame interval, and inter-packet interval, using continuous unimodal distributions. While many video or XR traffic models, such as [16], do not include inter-packet interval information, we consider it a crucial feature to include in XR traffic models, especially for resource allocation techniques design and optimization, as demonstrated in our validation experiments. We fitted multiple continuous distributions to the data for all resolutions and found that the Johnson's \(S_{U}\) distribution provided the best fit, as determined by using the KS test. The Johnson's \(S_{U}\) distribution was fitted for all target parameters, scenarios, and resolutions. With these models, we generated synthetic data and used them in validation experiments with an open-source 5G RAN emulator [17]. These experiments compared the performance of the captured and synthetic data at both the application and resource allocation layers. At the application layer, we found that our models can generate realistic XR traffic data for the proposed scenarios. In the resource allocation layer, we demonstrated the importance of including inter-packet interval time for designing advanced resource allocation techniques specifically optimized for XR offloading. In conclusion, the data and models presented in this work can be effectively used for the design, testing, improvement, and expansion of wireless network solutions in both academia and industry. They offer a comprehensive approach to studying extended reality (XR) offloading scenarios and provide insight into the importance of considering inter-packet interval times for resource allocation techniques. Overall, we believe that this work provides a useful contribution to the field of wireless networks and XR technology. ## Appendix ### XR IP Traffic Models Parameters The Johnson's \(S_{U}\) distribution fitted parameters for the different types of streams and resolutions are summarized in Table VI. There fitted Johnson's \(S_{U}\) distributions model the frame sizes, inter-frame intervals and inter-packet intervals of the captured RTP traffic. The given parameters can directly be used to generate realistic synthetic traffic.
2302.10459
Development of a Mathematical Model for Harbor-Maneuvers to Realize Modeling Automation
A simulation environment of harbor maneuvers is critical for developing automatic berthing. Dynamic models are widely used to estimate harbor maneuvers. However, human decision-making and data analysis are necessary to derive, select, and identify the model because each actuator configuration needs an inherent mathematical expression. We proposed a new dynamic model for arbitrary configurations to overcome that issue. The new model is a hybrid model that combines the simplicity of the derivation of the Taylor expansion and the high degree of freedom of the MMG low-speed maneuvering model. We also developed a method to select mathematical expressions for the proposed model using system identification. Because the proposed model can easily derive mathematical expressions, we can generate multiple models simultaneously and choose the best one. This method can reduce the workload of model identification and selection. Furthermore, the proposed method will enable the automatic generation of dynamic models because it can reduce human decision-making and data analysis for the model generation due to its less dependency on the knowledge of ship hydrodynamics and captive model test. The proposed method was validated with free-running model tests and showed equivalent or better estimation performance than the conventional model generation method.
Yoshiki Miyauchi, Youhei Akimoto, Naoya Umeda, Atsuo Maki
2023-02-21T05:56:19Z
http://arxiv.org/abs/2302.10459v1
# Development of a Mathematical Model for Harbor-Maneuvers to Realize Modeling Automation ###### Abstract A simulation environment of harbor maneuvers is critical for developing automatic berthing. Dynamic models are widely used to estimate harbor maneuvers. However, human decision-making and data analysis are necessary to derive, select, and identify the model because each actuator configuration needs an inherent mathematical expression. We proposed a new dynamic model for arbitrary configurations to overcome that issue. The new model is a hybrid model that combines the simplicity of the derivation of the Taylor expansion and the high degree of freedom of the MMG low-speed maneuvering model. We also developed a method to select mathematical expressions for the proposed model using system identification. Because the proposed model can easily derive mathematical expressions, we can generate multiple models simultaneously and choose the best one. This method can reduce the workload of model identification and selection. Furthermore, the proposed method will enable the automatic generation of dynamic models because it can reduce human decision-making and data analysis for the model generation due to its less dependency on the knowledge of ship hydrodynamics and captive model test. The proposed method was validated with free-running model tests and showed equivalent or better estimation performance than the conventional model generation method. ## 1 Introduction Research and development of Maritime Autonomous Surface Ship (MASS) are active, including automatic berthing. The development of controllers for MASS--including parameter tuning, verification, and certification by class society--, ultimately requires verification in a real ship. Still, a significant part of development in a simulation environment would be helpful from a cost and safety. A dynamic model of a ship's maneuver (i.e., maneuvering model or system-based mathematical model) is widely used to estimate ship maneuvering in numerical simulations from the standpoint of computation time. The harbor maneuvers of a ship is an inclusive maneuver that includes: leaving and entering the port, approach maneuvers to the berth, berthing, and also unherthing; hence it contains various types of motions, i.e., forward, astern, acceleration and deceleration, turn, pivot-turn, stop, crabbing, position-keeping. A dynamic model for harbor maneuvers is generally more complex than that for ocean navigation because the flow field and usage of actuators are much more complex and because the ship is navigating close to obstacles, the high estimation accuracy is required. Dynamic models for harbor maneuvers can be divided into two categories how they address the complexity; either a complex model that switches and combines sub-model with various condition, or a sim pler one that uses unified mathematical expression and parameters for all state for practicality. The former method is represented by the low-speed maneuvering MMG model [e.g., 1, 2], and the latter is represented by Abkowitz model with variable propeller rotation [3] and Fossen's model for dynamic positioning [4]. ### Problem Definition Generating a dynamic model of a particular ship generally consists of the following four tasks. Therefore, in this paper, the term "model generation" refers to the entire process that includes the following four tasks, and, a "user" is a person who generates a dynamic model and utilizes the dynamic model in a simulation environment. 
**Derivation of the model's mathematical expression.** The user derives a new dynamic model if the existing dynamic model is not sufficient. The derivation here refers to the derivation of the mathematical expressions of the dynamic model. This step is unnecessary if the user thinks an existing dynamic model is sufficient.

**Model selection.** The user selects the dynamic model to be used. The selection is based on the findings of previous studies or results of maneuvering basin tests conducted by the user.

**Parameter identification.** The user obtains model parameters for the selected model from maneuvering basin tests, databases, or empirical formulae.

**Validate dynamic model.** The user performs a maneuvering simulation by reflecting the obtained model parameters in the dynamic model. The dynamic model is validated by comparing the simulation results with known maneuvers: a maneuvering trial of a ship or a free-running model ship.

### Difficulties of Existing Modeling Methods

Let us assume that the user's goal is to build a maneuvering simulation environment that can simulate harbor maneuvers. If the hull form of the ship is fixed and accuracy of the maneuvering simulation is required, model parameters are often identified by captive model tests [5, 6, 7]. The standard method of the MMG model [7] also uses captive model tests to identify model parameters. Moreover, even for harbor maneuvers, several studies [8, 9, 10, 11, 12, 13, 14, 15] on berthing control have used captive model tests to identify model parameters. However, existing dynamic models and the captive-model-test-based scheme seem problematic because the user's workload to generate the model is large. In particular, the following are some examples:

**Special testing facility.** The model parameters are obtained from hydrodynamic forces measured in a towing tank or a maneuvering basin; hence, the model cannot be generated without using such a facility. Recently, Computational Fluid Dynamics (CFD) has been used as an alternative to captive model tests [16, 17, 18]. However, powerful computational resources and CFD expertise are required.

**Cost of parameter identification.** The time and cost required for the experiments become non-negligible due to the increased number of test conditions caused by the complexity of the model of harbor maneuvers.

**Selection of the dynamic model.** So far, several dynamic models have been proposed that are applicable to harbor maneuvers (see Section 1.3), but none of them can be called the de-facto standard yet. Therefore, the user must select an appropriate model (and its mathematical expression) for their vessel of interest. Selection can be done by comparing various models with a known maneuver or comparing estimated hydrodynamic forces with captive model tests.

**Diversity of Actuator Configurations.** A ship's actuator system includes a wide variety of configurations, e.g., a single-propeller and single-rudder ship, a twin-propeller and twin-rudder ship, a ship with multiple azimuth thrusters, and so on. The mathematical expression of the dynamic model is unique to each actuator configuration; thus, the user needs to prepare the model and implement a numerical code for each actuator configuration.

**Knowledge of Ship hydrodynamics and model test.** The user must have appropriate knowledge of captive model tests, hydrodynamics on ship maneuvering, and the dynamic model itself to identify model parameters from captive model tests.
However, for the users who aim to build automatic berthing systems, those knowledge is not necessarily their area of expertise. In such cases, a different person who is an expert in that area should be asked for support. ### Related Works Among the difficulties stated in Section 1.2, "special testing facility" and "cost of parameter identification" can address by the System Identification (SI) technique. On the SI for ship maneuvering, SI identifies model parameters from a set of ship trajectories. SI for ship maneuvering was first introduced by Astrom and Kollstrom [19], and Abkowitz [3] in the late 1970s and early 1980s, and then numerous studies have been done for both from full-scale[3, 19, 20, 21] ship and model ship [22, 23, 24, 25]. However, research on SI have been conducted mainly on dynamic models for IMO standard maneuvers [26], ocean voyage, and autopilot application. Only a few studies have been conducted on SI for dynamic models on harbor maneuvers. The authors previously studied [1] the feasibility of SI for low-speed maneuvering MMG model for single-propeller, single rudder ship. The MMG model used in the previous study [1] can represent characteristics of harbor maneuvers, e.g., large drift angle, thrust reduction on propeller reversal, and rudder characteristics dependency on propeller wake. Model parameters were identified from trajectories for the scaled free-running model. SI of low-speed maneuvering MMG model for single-propeller, single rudder ship was also done by Sawada et al. [27]. In [27], they used full-scale ship trial data as train data and succeeded in developing a berthing controller with a simulation environment using the identified model. Using Fossen's model [4], SI of an urban ferry with azimuth thrusters was done by [28]. All of the above previous research [1, 27, 28] used dynamic models which categorized as the Hydrodynamic model [29]. The Hydrodynamic model is a model that was derived and designed to identify the model parameters based on hydrodynamic assumption, observation, and measurement. The other kind of model is the Response model [29], which considers only the relationship between the ship's control input and response, i.e., the motion of the ship. The most popular Response model would be Nomoto's KT model [30]. Meanwhile, application of neural networks (NN) as a Response model has been actively studied in the last two decades [31, 32, 33]. Models using NN are highly capable of modeling the nonlinear dynamics; however, to the best of our knowledge, Wakita et al. [34] is the only case of SI using NN for harbor maneuvers. ### Objective and Contribution of the study We recognize that to overcome the remaining difficulties--selection of a dynamic model, diversity of actuator configurations, and knowledge of ship hydrodynamics--a new Response-model-type dynamic model is necessary. The hydrodynamic model has a favorable feature that the physical meaning of the formulae is understandable; therefore, flow field and forces can be observed from the model. However, it needs to derive model formulae with due consideration of hydrodynamics for each configuration. On the other hand, as a simulation environment for R&D of automatic berthing, how the ship moves, i.e., the relationship between the control input and response, is important. Whether the dynamic model can represent the flow field and hydrodynamic forces is desirable but not mandatory. 
Hence, the Response model, especially a model using NN, is an option to overcome those issues; however, the behavior in the extrapolation region of the training data is questionable because the model is not understandable and needs treatment to impose constraints so that it remains physically reasonable (e.g., [34]).

Therefore, we aim to propose a new Response model whose formulae are easily derivable for arbitrary actuator configurations, which has enough degrees of freedom to express harbor maneuvers, and which does not have the downsides of NN models. In addition, we aim to propose a method to select formulae of the proposed model which does not depend on captive model tests and knowledge of ship hydrodynamics.

Our previous work [35] proposed a 4-quadrant Abkowitz model for single-propeller, single-rudder ships. This previous model was based on Taylor expansion, so the polynomials are easily derived. In addition, the previous model has four sets of model parameters to model the change of characteristics caused by propeller reversal and astern conditions. In this study, we expand the previous model [35] to arbitrary actuators and develop a method for model generation using SI. By utilizing SI, we will be able to be less dependent on captive model tests and ship hydrodynamics in model generation. Furthermore, the proposed method enables the automatic generation of dynamic models because human intervention in model derivation, selection, and identification can be reduced. Our previous literature [36] described the initial results of the investigations done in this paper. This paper presents the results more extensively, with more details, and with several revisions. The major contributions of this study are as follows:

1. We proposed a new dynamic model, from which the model formula is easily derived according to simple rules and which is complex enough to handle the complex phenomena of harbor maneuvers. The new model is a hybrid model that combines the simplicity of the derivation of the Abkowitz model, based on Taylor expansion, and the complexity of the MMG low-speed maneuvering model.
2. We proposed a method for selecting a model that does not require captive model tests and knowledge of ship hydrodynamics and modeling. Because the proposed model can easily derive multiple formulae, we can generate multiple models using system identification and select the model formulae that best fit the ship's trajectory. Therefore, we can reduce the effort of parameter identification and model selection for arbitrary actuator configurations.

## 2 Abkowitz-MMG hybrid model

### Overview of the proposed model

This section describes the dynamic model proposed in this study (hereafter referred to as the "proposed model"). The proposed model is a hybrid model that combines the simplicity of the Abkowitz model with the complexity of the MMG low-speed model.

**Requirements of the model.** First, we describe the requirements of the proposed model and the ideas of our approach to those requirements. The requirements of the proposed model were derived based on the issues raised in the introduction. The requirements are as follows.

**Requirement 1.** The model complexity can be increased easily. This allows for handling various levels of complexity of the flow field of harbor maneuvers.

**Requirement 2.** The user can easily derive the model's expression even if the configuration of the actuators of the subject ship changes.

**Requirement 3.** The model must be accurate enough to evaluate the controller by maneuvering simulation, i.e., the same level of accuracy as existing methods.

**Requirement 4.** The model can represent significant characteristic changes of fluid forces at berthing and unberthing (e.g., flow separation, propeller reversal).

**Requirement 5.** Model selection and parameter identification of the proposed model do not require in-depth knowledge of dynamic models or detailed flow observation.
**Requirement 3.** The model must be accurate enough to evaluate the controller by maneuvering simulation, i.e., the same level of accuracy as existing methods. **Requirement 4.** The model can represent significant characteristic changes of fluid forces at berthing and unherthing (e.g., flow separation, propeller reversal). **Requirement 5.** Model selection and parameter identification of the proposed model does not require in-depth knowledge of dynamic models or detailed flow observation. **Requirement 6**.: The model does not require captive model tests through its generation process. **Requirement 7**.: The model has robustness to input state and control input over the operation range of harbor maneuvers. In other words, the divergence of acceleration should not occur even for unknown data within the range of typical harbor maneuvers. Basic ideas on the model design.We designed the proposed model to meet the above requirements with the following ideas. **Idea 1**.: Following Abkowitz's polynomial model [37] (hereafter, Abkowitz model), the model is derived by Taylor expansion of the hydrodynamic forces. The complexity of the model can be easily increased by increasing the highest degree of the polynomial obtained by Taylor expansion or the number of terms employed (Requirement 1). If the actuator configuration changes (Requirement 2), we only need to add the actuator features (e.g., propeller revolution, rudder angle, azimuth thruster angle) to the Taylor expansion variables. **Idea 2**.: Following the concept of low-speed maneuvering MMG model, the model parameters are set to different values for each representative condition (e.g., propeller forward/reverse rotation). In addition, the same mathematical expression is maintained for all operating conditions, thereby preserving the simplicity (Requirements 1, 2 and 5) while ensuring that the accuracy (Requirement 3) and the complexity (Requirement 4) could be achieved. **Idea 3**.: Utilize the System Identification (SI) and use ship's trajectory (i.e., the time history of motion and actuator usage) as a training dataset to identify the model parameters. Suppose the user can easily derive several different mathematical expressions of the proposed model (Requirement 1), the user can select a dynamic model by choosing the one that best fits the motion data for validation. This procedure allows the user to select the appropriate model without requiring observation of the flow field or knowledge of the model (Requirement 5). It also achieves captive model test free. **Idea 4**.: The stability of the proposed model is not ensured because the model's expressions and parameter exploration domain are not based on hydrodynamics. Therefore, we intended to maintain the stability of the proposed model by the objective function and the dataset to be used in the optimization process. Details of the objective function and dataset are described in Section 3.1 and Section 3.3, respectively. ### Derivation of Proposed Model In this section, we derive the mathematical expression of the proposed model based on the basic idea described in the previous section. First, the subject ship is a scaled model ship of a coastal vessel equipped with a single-propeller, VecTwin type twin-rudder, and bow thruster. The ship fixed coordinate system \(\mathrm{O}-xy\) and space fixed coordinate system \(\mathrm{O}_{0}-x_{0}y_{0}\) are defined as Fig. 
1, and the equations of motion for the three degrees of freedom used in the standard MMG model [7] are:

\[(m+m_{x})\dot{u}-(m+m_{y})v_{m}r-x_{G}mr^{2} =X \tag{1}\]
\[(m+m_{y})\dot{v}_{m}+(m+m_{x})ur+x_{G}m\dot{r} =Y\]
\[(I_{zz}+J_{zz}+x_{G}^{2}m)\dot{r}+x_{G}m(\dot{v}_{m}+ur) =N\enspace.\]

Fig. 1: Coordinate systems.

Here, the notations for physical parameters are: \(m\) is the ship's mass, \(I_{zz}\) is the moment of inertia at the center of gravity (CG), \(m_{x}\) and \(m_{y}\) are added masses, \(J_{zz}\) is the added inertia, \(x_{G}\) is the longitudinal distance to CG from midship, and \(X,\ Y,\ N\) are the forces and moment acting on the body except for the added mass. The notations for kinematic variables are: \(\psi\) is the ship heading, \(u\) and \(v_{\rm m}\) are the longitudinal and lateral speeds of motion in the \(O-xy\) system, \(r\) is the yaw angular velocity, \(\beta\) is the drift angle, and \((\dot{\cdot})\) represents the time derivative. The notations for actuator features are: \(n_{\rm P}\) and \(n_{\rm BT}\) are the revolution numbers of the propeller and bow thruster, and \(\delta_{\rm s},\ \delta_{\rm p}\) are the rudder angles of the starboard and port side rudders. The notations for wind are: \(U_{\rm T}\) and \(\gamma_{\rm T}\) are the true wind speed and direction, and \(U_{\rm A}\) and \(\gamma_{\rm A}\) are the apparent wind speed and direction, respectively.

Next, the right-hand side of Eq. (1) is expressed as a polynomial by Taylor expansion. The classical Abkowitz model assumes the perturbation motion of a ship from a certain forward speed \(u=U_{0},\ v=0,\ r=0\). The hydrodynamic forces or their dimensionless forms are then expanded in terms of the state variables \(\mathbf{s}_{t}\), the control vectors \(\mathbf{a}_{t}\), and their time derivatives \(\dot{\mathbf{s}}_{t},\ \dot{\mathbf{a}}_{t}\). To apply the Abkowitz model to harbor maneuvers, we use Taylor expansion with the ideas below:

**Idea 5.** Assume that the hydrodynamic forces and moments are functions of \(\mathbf{s}_{t}\), \(\mathbf{a}_{t}\), and the added mass components on the left-hand side of Eq. (1). The user can include other added mass components and \(\dot{\mathbf{a}}_{t}\) in the dynamic model if necessary; however, these are generally neglected in the MMG model.

**Idea 6.** No scale conversion is performed, since we assumed that the user estimates the dynamic model directly from the motion of a full-scale ship. This means that the hydrodynamic forces are expanded by Taylor expansion without non-dimensionalization. This also avoids the division by zero caused by the non-dimensionalization of the hydrodynamic forces at \(U=0\). (Footnote 1: \(X^{\prime}=X/(0.5\rho LdU^{2}),\ Y^{\prime}=Y/(0.5\rho LdU^{2}),\ N^{\prime}=N/(0.5\rho L^{2}dU^{2})\))

**Idea 7.** Assuming slow-speed motion, the hydrodynamic forces are expanded around the origin (\(\mathbf{s}_{t}=0,\ \mathbf{a}_{t}=0\)) by Taylor expansion.

**Idea 8.** The hydrodynamic forces are decomposed into the above-water (wind) and below-water (underwater) contributions to account for the wind force, which is relatively important in harbor maneuvers. This decomposition is appropriate because the relative wind velocity governs the wind force, while the ship's state and control input govern the underwater contribution. While the underwater contribution is Taylor expanded, the wind force is modeled using an existing wind force model.

Accordingly, let us start the derivation of the dynamic model from the decomposition of the hydrodynamic forces on the right-hand side of Eq. (1), following Idea 8, into wind pressure forces \(X_{\rm A},\ Y_{\rm A},\ N_{\rm A}\) and other hydrodynamic forces.
Next, let us focus on the hydrodynamic forces acting below the water surface. Since the subject ship has a bow thruster, the following is assumed.

* The bow thruster forces \(Y_{\rm BT}\), \(N_{\rm BT}\) are separated from the hydrodynamic forces of the hull+rudder+propeller \(X_{\rm HPR},\ Y_{\rm HPR},\ N_{\rm HPR}\), because the bow thruster is located at the bow, away from the rudder and propeller, so we assume that the influence of the flow interaction between the bow thruster and the rudder/propeller is small. If this were a stern thruster, the coupling with the propeller and rudder would need to be considered.
* The bow thruster is assumed not to generate force in the X direction.

From these assumptions, we obtain the following:

\[X =X_{\rm HPR}+X_{\rm A} \tag{2}\]
\[Y =Y_{\rm HPR}+Y_{\rm BT}+Y_{\rm A}\] (3)
\[N =N_{\rm HPR}+N_{\rm BT}+N_{\rm A}\ . \tag{4}\]

The wind force \(X_{\rm A},\ Y_{\rm A},\ N_{\rm A}\) is modeled by an existing polynomial-type model (see Section 2.3) proposed by [38]. The coefficients of the wind force model were optimized simultaneously with the coefficients of \(X_{\rm HPR},\ Y_{\rm HPR},\ N_{\rm HPR}\) by SI. As a first step of this research, the bow thruster was neglected, although the subject ship is equipped with one; therefore, always \(Y_{\rm BT}=N_{\rm BT}=0\).

Next, we select the variable vector \(\mathbf{x}=(\mathbf{s}_{t},\ \mathbf{a}_{t})\) for the Taylor expansion of \(X_{\rm HPR},\ Y_{\rm HPR},\ N_{\rm HPR}\). We selected the following variables for the state variable vector \(\mathbf{s}_{t}\) and the control input vector \(\mathbf{a}_{t}\), where the subscript \(t\) denotes the value at time \(t\):

\[\mathbf{s}_{t} \equiv(u,\ v_{\mathrm{m}},\ r)\in\mathbb{R}^{3} \tag{5}\]
\[\mathbf{a}_{t} \equiv(n_{\mathrm{P}},\ \sin(\delta_{\mathrm{s}}),\ \cos(\delta_{\mathrm{s}}),\ \sin(\delta_{\mathrm{p}}),\ \cos(\delta_{\mathrm{p}}))\in\mathbb{R}^{5}\enspace. \tag{6}\]

Here, we use trigonometric functions of the rudder angles \(\delta_{\mathrm{s}},\ \delta_{\mathrm{p}}\) as the port and starboard rudder features. Most Abkowitz models use the rudder angle \(\delta\) itself, but we must consider that large rudder angles are used in harbor maneuvers. If the user wants to derive the dynamic model for other ship configurations, appropriate features are added to \(\mathbf{a}_{t}\). For example, for a ship equipped with a stern thruster, the revolution number of the stern thruster can be added to \(\mathbf{a}_{t}\). Likewise, the revolution number of an azimuth thruster and the \(\sin\) and \(\cos\) functions of its operating angle can be used for an azimuth thruster.

#### 2.2.1 Derivation of Mathematical expressions by Taylor expansion

Taylor expansion is now performed for \(X_{\mathrm{HPR}},\ Y_{\mathrm{HPR}}\), and \(N_{\mathrm{HPR}}\). Hereafter we only show the manipulation for \(X_{\mathrm{HPR}}\), but \(Y_{\mathrm{HPR}}\) and \(N_{\mathrm{HPR}}\) are derived similarly.
\(X_{\mathrm{HPR}}\) is expanded for \(\mathbf{x}\) as follows: \[X_{\mathrm{HPR}}(x_{1},\ \ldots,\ x_{k}) \tag{7}\] \[\quad=\left.X_{\mathrm{HPR}}(x_{1},\ \ldots,\ x_{k})\right|_{\mathbf{x}=0}\] \[\quad+\frac{1}{1!}\left(\sum_{i=1}^{k}x_{i}\left.\frac{\partial}{ \partial x_{i}}\right|_{\mathbf{x}=0}\right)X_{\mathrm{HPR}}(x_{1},\ \ldots,\ x_{k})\] \[\quad+\left.\frac{1}{2!}\left(\sum_{i=1}^{k}x_{i}\frac{\partial}{ \partial x_{i}}\right|_{\mathbf{x}=0}\right)^{2}X_{\mathrm{HPR}}(x_{1},\ \ldots,\ x_{k})\] \[\quad+\left.\frac{1}{3!}\left(\sum_{i=1}^{k}x_{i}\frac{\partial}{ \partial x_{i}}\right|_{\mathbf{x}=0}\right)^{3}X_{\mathrm{HPR}}(x_{1},\ \ldots,\ x_{k})\] \[\quad+\cdots\enspace,\] where \[\mathbf{x} =(x_{1},\ \ldots,\ x_{k}) \tag{8}\] \[=\left(u,\ v_{\mathrm{m}},\ r,\ n_{\mathrm{P}},\ \sin(\delta_{ \mathrm{s}}),\ \cos(\delta_{\mathrm{s}}),\right.\] \[\quad\left.\sin(\delta_{\mathrm{p}}),\ \cos(\delta_{\mathrm{p}}) \right)\in\mathbb{R}^{8}\enspace.\] Ignoring the fluid memory effect, we can assume that \(X_{\mathrm{HPR}}(x_{1},\ \ldots,\ x_{k})\big{|}_{\mathbf{x}=0}=0\). Then the first term on the right-hand side of the Eq. (7) can be ignored. Next, expanding Eq. (7), we obtain, \[X_{\mathrm{HPR}}(\mathbf{s}_{t},\mathbf{a}_{t}) \tag{9}\] \[=\left.\frac{\partial X_{\mathrm{HPR}}}{\partial u}\right|_{\mathbf{ x}=0}u+\left.\frac{\partial X_{\mathrm{HPR}}}{\partial v_{\mathrm{m}}} \right|_{\mathbf{x}=0}v_{\mathrm{m}}+\cdots\] \[+\frac{1}{3!}\left(\left.\frac{\partial^{3}X_{\mathrm{HPR}}}{ \partial u^{3}}\right|_{\mathbf{x}=0}u^{3}+\ldots+\left.\frac{\partial^{3}X_{ \mathrm{HPR}}}{\partial\cos^{3}\delta_{\mathrm{p}}}\right|_{\mathbf{x}=0}\cos^{3} \delta_{\mathrm{p}}\right)\] \[+\cdots\enspace.\] Partial derivatives of Eq. (9) are called hydrodynamic coefficients on Abkowitz model. By using common expressions of hydrodynamic coefficients, we can formulate Eq. (9) as follows, \[X_{\mathrm{HPR}}(\mathbf{s}_{t},\mathbf{a}_{t}) \tag{10}\] \[\quad=X_{u}u+\ldots+X_{\cos(\delta_{\mathrm{p}})}\cos(\delta_{ \mathrm{p}})\] \[\quad+X_{uu}u^{2}+\ldots+X_{\cos(\delta_{\mathrm{p}})}\cos(\delta _{\mathrm{p}})\cos^{2}(\delta_{\mathrm{p}})\] \[\quad+X_{uuuu}u^{3}+\ldots+X_{\cos(\delta_{\mathrm{p}})}\cos( \delta_{\mathrm{p}})\cos(\delta_{\mathrm{p}})\cos^{3}(\delta_{\mathrm{p}})\] \[\quad+\cdots\enspace,\] where \[X_{x_{i}} \equiv\left.\frac{\partial X_{\mathrm{HPR}}}{\partial x_{i}} \right|_{\mathbf{x}=0} \tag{11}\] \[X_{x_{i}x_{j}} \equiv\frac{1}{2!}\left.\frac{\partial^{2}X_{\mathrm{HPR}}}{ \partial x_{i}\partial x_{j}}\right|_{\mathbf{x}=0}\] \[X_{x_{i}x_{j}x_{k}} \equiv\frac{1}{3!}\left.\frac{\partial^{3}X_{\mathrm{HPR}}}{ \partial x_{i}\partial x_{j}\partial x_{k}}\right|_{\mathbf{x}=0}\enspace.\] #### 2.2.2 Selection of the term of polynomial equation For polynomial-type dynamic models, the question is which terms in Eq. (10) should be included in the polynomial. For existing dynamic models, the choice of formulae has been determined by comparison with the results of captive model tests. For example, for of MMG, see[39, 40], and for Abkowitz model see [6]. However, in this study, we assume a captive model test-free dynamic model (Requirement 6). Therefore, the proposed model's order and term selection methods are as follows. **Idea 9**.: All terms are used unless they are obviously unnecessary or need modification. **Idea 10**.: The highest order of the polynomials is defined by the capability of the optimization method and the user's computational resources. 
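In practice, Ideas 9 and 10 amount to enumerating every monomial of the eight expansion variables up to the chosen maximum order and letting the optimization decide their weights. A minimal sketch of that enumeration is given below; it produces the raw candidate library of Eq. (10) before the modifications described below are applied, and the variable strings are purely illustrative.

```python
from itertools import combinations_with_replacement

# Raw candidate monomials of the 8 expansion variables, i.e. the term
# library of Eq. (10) before any terms are removed or modified.
variables = ["u", "v_m", "r", "n_P",
             "sin(d_s)", "cos(d_s)", "sin(d_p)", "cos(d_p)"]

def candidate_terms(max_order=3):
    terms = []
    for order in range(1, max_order + 1):
        for combo in combinations_with_replacement(variables, order):
            terms.append("*".join(combo))
    return terms

print(len(candidate_terms(2)))   # 44 raw terms of order 1-2
print(len(candidate_terms(3)))   # 164 raw terms of order 1-3
```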
The user derives and compares several model formulae with different maximum orders. In this study, second- and third-order models were used, for the following reasons. For the captive model test-based approach, the available duration of the test facility bounds the number of identifiable parameters. For example, the original Abkowitz model ([37]) used 65 parameters, and studies by the authors using the MMG low-speed maneuvering model ([1]) used 52 parameters to model the forces acting below the surface of the water. The authors believe that about 100 is a practical upper limit for the number of model parameters in that setting. To ensure accuracy with fewer terms, the terms employed should be examined based on theoretical considerations and observations of the flow field. On the other hand, SI can increase the number of parameters as far as the performance of the optimization method allows. For example, CMA-ES used in this study (described in detail in Section 3.5) can optimize up to several hundred dimensions. The proposed model is intended not to require detailed observation of the flow field; thus, terms are removed or modified only when the need is obvious. Based on the principles discussed above, we modify Eq. (10). The proposed model is modified with the following three manipulations.

**Modification A: Selection based on the symmetry of the ship.** Terms are selected based on the fact that the ship is symmetric about the longitudinal center line of the hull. The original Abkowitz model [37] was derived for a single-propeller, single-rudder ship; \(X\) was assumed to be an even function of \(v_{\mathrm{m}},\ r,\ \delta\), hence the odd powers of \(v_{\mathrm{m}},\ r,\ \delta\) were deleted from the polynomial for \(X\). Similarly, because \(Y\) and \(N\) were assumed to be odd functions of \(v_{\mathrm{m}},\ r,\ \delta\) due to symmetry, their even powers were deleted. The proposed model also uses this symmetry assumption. We assume that \(X\) is an even function of \(\mathbf{x}_{\mathrm{asym}}=\{v_{\mathrm{m}},\ r,\ \sin(\delta_{\mathrm{s}}),\ \sin(\delta_{\mathrm{p}})\}\), and remove the odd-power terms \(\sum_{i=1,3,\ldots}\{v_{\mathrm{m}}+r+\sin(\delta_{\mathrm{s}})+\sin(\delta_{\mathrm{p}})\}^{i}\). Also, from the polynomials of \(Y\) and \(N\), the 0th-power and even-power terms \(\sum_{i=0,2,\ldots}\{v_{\mathrm{m}}+r+\sin(\delta_{\mathrm{s}})+\sin(\delta_{\mathrm{p}})\}^{i}\) were removed. The asymmetric motion vector \(\mathbf{x}_{\mathrm{asym}}\) collects the motions outside the symmetry plane of the hull. Strictly speaking, however, \(Y\) and \(N\) are not purely odd functions of \(\mathbf{x}_{\mathrm{asym}}\). For example, for a single-propeller ship, the turning forces to port and to starboard differ due to the rotational flow of the propeller. In addition, a significant lateral force and yaw moment occur during propeller reversal of a single-fixed-pitch-propeller ship [41, 42, 43]. Such asymmetry can be modeled by retaining the terms of 0th power in \(\mathbf{x}_{\mathrm{asym}}\) (e.g., \(Y_{u},Y_{uu},\cdots\)) [37]. However, we decided not to add such terms, prioritizing the simplicity of the model derivation.

**Modification B: Delete terms containing \(\sin^{2}\) and \(\cos^{2}\).** Because \(\sin^{2}+\cos^{2}=1\), such terms duplicate other terms.
For example, \[\begin{split} X_{u}u&+X_{u\sin^{2}\delta_{\mathrm{ s}}}u\sin^{2}\delta_{\mathrm{s}}+X_{u\cos^{2}\delta_{\mathrm{s}}}u\cos^{2} \delta_{\mathrm{s}}\\ &=(X_{u}+X_{u\sin^{2}\delta_{\mathrm{s}}}+X_{u\cos^{2}\delta_{ \mathrm{s}}})u\end{split}. \tag{12}\] All the terms in parentheses in Eq. (12) are constants; hence those are unnecessary and must be deleted. **Modification C: Replace squared variables with modulus function.** Replace the squared variable, both on the 2nd order term and 3rd order term, with modulus function. For the \(X\) polynomial, replace \(x_{k}^{2}\) for \(x_{k}\notin\mathbf{x}_{\text{asym}}\) to \(x_{k}|x_{k}|\) and similarly in the \(Y,\ N\) polynomial, \(x_{k}\in\mathbf{x}_{\text{asym}}\), let \(x_{k}^{2}\) be \(x_{k}|x_{k}|\). However, this is not done for odd power terms of non-coupling terms such as \(uuu\) and \(v_{\text{m}}v_{\text{m}}v_{\text{m}}\). The results of the above operations are shown below. Here let us express the obtained polynomial for 3rd order model as the inner product of hydrodynamic coefficient vector \(\mathbf{X}^{(i)}\in\mathbb{R}^{m},\ \mathbf{Y}^{(i)}\in\mathbb{R}^{n}\), or \(\mathbf{N}^{(i)}\in\mathbb{R}^{n}\) and even powers vector \(\mathbf{z}_{\text{even}}\in\mathbb{R}^{m}\) of \(\mathbf{x}_{\text{asym}}\) or odd powers vector \(\mathbf{z}_{\text{odd}}\in\mathbb{R}^{n}\), as follows: \[X_{\text{HPR}} =\mathbf{X}^{(i)}\cdot\mathbf{z}_{\text{even}}\] \[Y_{\text{HPR}} =\mathbf{Y}^{(i)}\cdot\mathbf{z}_{\text{odd}} \tag{13}\] \[N_{\text{HPR}} =\mathbf{N}^{(i)}\cdot\mathbf{z}_{\text{odd}}\ \,\] where \[\mathbf{X}^{(i=1\cdots m)}=\left(X_{u}^{(i)},\ X_{\cos(\delta_{s})}^{ (i)},\ X_{\cos(\delta_{p})}^{(i)},\ X_{n_{\text{P}}}^{(i)},\right.\\ \left.X_{u|u|}^{(i)},\ X_{u\cos(\delta_{s})}^{(i)},\ X_{u\cos( \delta_{p})}^{(i)},\ X_{un_{\text{P}}}^{(i)},\ X_{un_{\text{P}}}^{(i)},\right.\\ \left.X_{v_{\text{m}}r}^{(i)},\ X_{v_{\text{m}}\sin(\delta_{s})}^{ (i)},\ X_{v_{\text{m}}\sin(\delta_{p})}^{(i)},\ X_{rr}^{(i)},\ X_{r\sin(\delta_ {s})}^{(i)},\right.\\ \left.X_{r\sin(\delta_{p})}^{(i)},\ X_{\sin(\delta_{s})\sin(\delta_ {p})}^{(i)},\ X_{\cos(\delta_{s})\cos(\delta_{p})}^{(i)},\right.\\ \left.X_{v_{\text{m}}\delta_{p}}^{(i)},\ X_{\cos(\delta_{s})np}^{(i)},\ X_{n_{ \text{P}}|n_{\text{P}}|}^{(i)},\right.\\ \left.X_{uuu}^{(i)},\ X_{uv_{\text{m}}v_{\text{m}}}^{(i)},\ X_{uv_{ \text{m}}r}^{(i)},\ X_{urr}^{(i)},\ X_{uv_{\text{m}}\sin(\delta_{s})}^{(i)},\right.\\ \left.X_{ur\sin(\delta_{s})}^{(i)},\ X_{u|u|\cos(\delta_{s})}^{(i)},\ X_{v_{ \text{m}}v_{\text{m}}\cos(\delta_{s})}^{(i)},\right.\\ \left.X_{v_{\text{m}}r\cos(\delta_{s})}^{(i)},\ X_{v_{\text{m}}\sin(\delta_ {s})\cos(\delta_{s})}^{(i)},\ X_{rr\cos(\delta_{s})}^{(i)},\right.\\ \left.X_{u\text{m}\sin(\delta_{p})}^{(i)},\ X_{ur\sin(\delta_{p})}^{(i)},\ X_{u\sin( \delta_{s})\sin(\delta_{p})}^{(i)},\right.\\ \left.X_{v_{\text{m}}\cos(\delta_{s})\sin(\delta_{p})}^{(i)},\ X_{r\cos( \delta_{s})\sin(\delta_{p})}^{(i)},\right.\\ \left.X_{r\sin(\delta_{s})\cos(\delta_{s})}^{(i)},\ X_{\sin(\delta_ {s})\cos(\delta_{s})\sin(\delta_{p})}^{(i)},\right.\\ \left.X_{r\sin(\delta_{s})\cos(\delta_{s})}^{(i)},\ X_{\sin(\delta_ {s})\cos(\delta_{s})\sin(\delta_{p})}^{(i)},\right.\\ \left.X_{u|u|\text{np}}^{(i)},\ X_{u\cos(\delta_{s})np}^{(i)},\ X_{u\cos( \delta_{p})np}^{(i)},\right.\\ \left.X_{v_{\text{m}}\sin(\delta_{p})}^{(i)},\ X_{v_{\text{m}}\sin(\delta_ {s})np}^{(i)},\ X_{v_{\text{m}}\sin(\delta_{p})np}^{(i)},\right.\\ \left.X_{rrn_{\text{P}}}^{(i)},\ X_{r\sin(\delta_{s})np}^{(i)},\ X_{r\sin( 
\delta_{s})n_{\text{P}}}^{(i)},\right.\\ \left.X_{\sin(\delta_{s})\sin(\delta_{p})}^{(i)},\ X_{\cos(\delta_ {s})\cos(\delta_{p})np}^{(i)},\right.\\ \left.X_{\cos(\delta_{s})np}^{(i)}|n_{\text{P}}|,\ X_{\cos(\delta_ {s})np}^{(i)}|n_{\text{P}}|,\ X_{n_{\text{P}}np}^{(i)}\right) \tag{14}\] \[\mathbf{z}_{\text{even}}=\left(u,\ \cos(\delta_{s}),\ \cos(\delta_{\text{p}}),\ n_{\text{P}},\ u|u|,\ u\cos(\delta_{\text{s}}),\\ u\cos(\delta_{\text{p}}),\ un_{\text{P}},\ v_{\text{m}}v_{\text{m}}v_{\text{m}},\ v_{\text{m}}\sin(\delta_{s}),\\ v_{\text{m}}\sin(\delta_{\text{p}}),\ rr,\ r\sin(\delta_{s}),\ r\sin(\delta_{p}),\\ \sin(\delta_{s})\sin(\delta_{\text{p}}),\ \cos(\delta_{\text{s}})\cos(\delta_{\text{p}}),\ \cos(\delta_{\text{s}})n_{\text{P}},\\ \cos(\delta_{\text{p}})n_{\text{P}},\ n_{\text{P}}|n_{\text{P}}|,\\ uuu,\ uv_{\text{m}}v_{\text{m}},\ uv_{\text{m}}r,\ urr,\ uv_{\text{m}}\sin(\delta_{s}),\\ ur\sin(\delta_{s}),\ u|u|\cos(\delta_{s}),\ v_{\text{m}}v_{\text{m}}\cos( \delta_{s}),\\ v_{\text{m}}r\cos(\delta_{s}),\ v_{\text{m}}\sin(\delta_{s})\cos(\delta_{s}),\ rr\cos(\delta_{s}),\\ r\sin(\delta_{s})\cos(\delta_{s}),\ uv_{\text{m}}\sin(\delta_{\text{p}}),\ ur\sin(\delta_{\text{p}}),\\ u\sin(\delta_{\text{s}})\sin(\delta_{\text{p}}),\ v_{\text{m}}\cos(\delta_{s}) \sin(\delta_{\text{p}}),\\ r\cos(\delta_{\text{s}})\sin(\delta_{\text{p}}),\ \sin(\delta_{\text{s}})\cos(\delta_{\text{s}})\sin(\delta_{\text{p}}),\\ u|u|\cos(\delta_{\text{p}}),\ u\cos(\delta_{\text{s}})\cos(\delta_{\text{p}}),\ v_{\text{m}}v_{\text{m}}\cos( \delta_{\text{p}}),\\ v_{\text{m}}r\cos(\delta_{\text{p}}),\ v_{\text{m}}\sin(\delta_{\text{s}}) \cos(\delta_{\text{p}}),\\ v_{\text{m}}\sin(\delta_{\text{s}})\cos(\delta_{\text{p}}),\ v_{\text{m}}v_{\text{m}} \cos(\delta_{\text{p}}),\\ v_{\text{m}}\sin(\delta_{\text{s}})\cos(\delta_{\text{p}}),\ r\sin(\delta_{\text{s}}) \cos(\delta_{\text{p}}),\\ v_{\text{m}}\sin(\delta_{\text{p}})\cos(\delta_{\text{p}}),\ r\sin(\delta_{ \text{s}})\cos(\delta_{\text{p}}),\\ r\sin(\delta_{\text{p}})\cos(\delta_{\text{p}}),\ \sin(\delta_{\text{s}})\sin(\delta_{\text{p}})\cos(\delta_{\text{p}}),\\ u|u|n_{\text{P}},\ u\cos(\delta_{\text{s}})n_{\text{P}},\ u\cos(\delta_{\text{p}})n_{\text{P}},\ u\text{m}_{\text{P}}|n_{\text{P}}|,\\ v_{\text{m}}m_{\text{P}}n_{\text{P}},\ v_{\text{m}}rn_{\text{P}},\ v_{\text{m}}\sin(\delta_{\text{s}})n \[Y^{(i)}_{uuv_{\rm m}},\ Y^{(i)}_{v_{\rm m}v_{\rm m}},\ Y^{(i)}_{uuv },\ Y^{(i)}_{v_{\rm m}|v_{\rm m}|r},\ Y^{(i)}_{v_{\rm m}r|r|},\] \[Y^{(i)}_{rrr},\ Y^{(i)}_{uu\sin(\delta_{\rm s})},\ Y^{(i)}_{v_{ \rm m}|v_{\rm m}|\sin(\delta_{\rm s})},\ Y^{(i)}_{v_{\rm m}r\sin(\delta_{\rm s})},\] \[Y^{(i)}_{r|r|\sin(\delta_{\rm s})},\ Y^{(i)}_{uv_{\rm m}\cos( \delta_{\rm s})},\ Y^{(i)}_{ur\cos(\delta_{\rm s})},\] \[Y^{(i)}_{u\sin(\delta_{\rm s})}\cos(\delta_{\rm s})^{\cdot},\ Y^{ (i)}_{v_{\rm m}\cos(\delta_{\rm s})},\ Y^{(i)}_{r\cos(\delta_{\rm s})\cos(\delta _{\rm s})},\] \[Y^{(i)}_{\sin(\delta_{\rm s})}\cos(\delta_{\rm s})\cos(\delta_{ \rm s}),\ Y^{(i)}_{u\sin(\delta_{\rm p})},\ Y^{(i)}_{u\cos(\delta_{\rm s}) \sin(\delta_{\rm p})},\] \[Y^{(i)}_{v_{\rm m}|v_{\rm m}|\sin(\delta_{\rm p})},\ Y^{(i)}_{v_ {\rm m}r\sin(\delta_{\rm p})},\ Y^{(i)}_{v_{\rm m}\sin(\delta_{\rm s})\sin( \delta_{\rm p})},\] \[Y^{(i)}_{r|r|\sin(\delta_{\rm p})},\ Y^{(i)}_{r\sin(\delta_{\rm s })\sin(\delta_{\rm p})},\ Y^{(i)}_{\cos(\delta_{\rm s})\cos(\delta_{\rm s}) \sin(\delta_{\rm p})},\] \[Y^{(i)}_{uv_{\rm m}\cos(\delta_{\rm p})},\ Y^{(i)}_{ur\cos(\delta _{\rm p})},\ Y^{(i)}_{u\sin(\delta_{\rm 
s})\cos(\delta_{\rm p})},\]
\[Y^{(i)}_{u\sin(\delta_{\rm p})\cos(\delta_{\rm p})},\ Y^{(i)}_{v_{\rm m}\cos(\delta_{\rm s})\cos(\delta_{\rm p})},\]
\[Y^{(i)}_{r\cos(\delta_{\rm s})\cos(\delta_{\rm p})},\ Y^{(i)}_{\sin(\delta_{\rm s})\cos(\delta_{\rm s})},\]
\[Y^{(i)}_{r\cos(\delta_{\rm s})\sin(\delta_{\rm p})\cos(\delta_{\rm p})},\ Y^{(i)}_{uv_{\rm m}n_{\rm P}},\ Y^{(i)}_{urn_{\rm P}},\]
\[Y^{(i)}_{u\sin(\delta_{\rm s})n_{\rm P}},\ Y^{(i)}_{u\sin(\delta_{\rm p})n_{\rm P}},\ Y^{(i)}_{v_{\rm m}\cos(\delta_{\rm s})n_{\rm P}},\]
\[Y^{(i)}_{v_{\rm m}\cos(\delta_{\rm p})n_{\rm P}},\ Y^{(i)}_{v_{\rm m}n_{\rm P}},\ Y^{(i)}_{r\cos(\delta_{\rm s})n_{\rm P}},\]
\[Y^{(i)}_{r\cos(\delta_{\rm p})n_{\rm P}},\ Y^{(i)}_{rn_{\rm P}n_{\rm P}},\ Y^{(i)}_{\sin(\delta_{\rm s})\cos(\delta_{\rm s})n_{\rm P}},\]
\[Y^{(i)}_{\sin(\delta_{\rm s})\cos(\delta_{\rm p})n_{\rm P}},\ Y^{(i)}_{\sin(\delta_{\rm s})n_{\rm P}n_{\rm P}},\ Y^{(i)}_{\cos(\delta_{\rm s})\sin(\delta_{\rm p})n_{\rm P}},\]
\[Y^{(i)}_{\sin(\delta_{\rm p})\cos(\delta_{\rm p})n_{\rm P}},\ Y^{(i)}_{\sin(\delta_{\rm p})n_{\rm P}n_{\rm P}}\big{)} \tag{16}\]

and

\[\mathbf{z}_{\rm odd}=\left(v_{\rm m},\ r,\ \sin(\delta_{\rm s}),\ \sin(\delta_{\rm p}),\ \ldots\right)\enspace, \tag{17}\]

where the components of \(\mathbf{z}_{\rm odd}\) are the monomials corresponding one-to-one to the subscripts of \(\mathbf{Y}^{(i)}\) (and \(\mathbf{N}^{(i)}\)) in Eq. (16).

The Submodels switch the coefficient vector \(\mathbf{X}^{(i)}\) according to the operating condition of the ship. Submodel-3, shown in Algorithm 2, uses the stall angle as the branching condition only in forward conditions, while Submodel-4, shown in Algorithm 3, uses it for both forward and astern conditions. As shown above, the proposed model can be adapted to various flow fields by adding Submodels, because it includes all terms that may be relevant to the characteristics of the flow and the actuators. For example, for a single-propeller, single-rudder ship, Submodel-2 can be used if the user assumes that the characteristics of propeller reversal are not crucial for their usage; if that difference is desired, Submodel-4 can be used.

### Modeling on wind disturbance

In this study, the wind force is modeled using Fujiwara's regression formula [38], which has fewer model parameters than Isherwood's regression [45] because trigonometric functions represent the wind direction. Fujiwara's wind force model is shown by Eq.
(18),

\[\begin{split} X_{A}&=(1/2)\rho_{A}U_{A}^{2}A_{T}\cdot C_{X}\\ Y_{A}&=(1/2)\rho_{A}U_{A}^{2}A_{L}\cdot C_{Y}\\ N_{A}&=(1/2)\rho_{A}U_{A}^{2}A_{L}L_{OA}\cdot C_{N}\enspace,\end{split} \tag{18}\]

where

\[\begin{split} C_{X}=& X_{A0}+X_{A1}\cos(2\pi-\gamma_{A})+X_{A3}\cos 3(2\pi-\gamma_{A})\\ &+X_{A5}\cos 5(2\pi-\gamma_{A})\\ C_{Y}=& Y_{A1}\sin(2\pi-\gamma_{A})+Y_{A3}\sin 3(2\pi-\gamma_{A})\\ &+Y_{A5}\sin 5(2\pi-\gamma_{A})\\ C_{N}=& N_{A1}\sin(2\pi-\gamma_{A})+N_{A2}\sin 2(2\pi-\gamma_{A})\\ &+N_{A3}\sin 3(2\pi-\gamma_{A})\enspace.\end{split} \tag{19}\]

\(X_{Ai},\ Y_{Ai},\ N_{Ai}\) are the model coefficients of the wind force. Fujiwara's formula estimates \(X_{Ai},\ Y_{Ai},\ N_{Ai}\) by regression formulae with the ship's particulars as explanatory variables. In this study, however, they were obtained by the optimization.

```
if u >= 0 then
    if |delta_s| < alpha_stall,s and |delta_p| < alpha_stall,p then
        X^(i) = X^(1)
    else
        X^(i) = X^(2)
    endif
else
    X^(i) = X^(3)
endif
```
**Algorithm 2**: \(\mathbf{X}^{(i)}\) of Submodel-3

```
if u >= 0 then
    if |delta_s| < alpha_stall,s and |delta_p| < alpha_stall,p then
        X^(i) = X^(1)
    else
        X^(i) = X^(2)
    endif
else
    if |delta_s| < alpha_stall,s and |delta_p| < alpha_stall,p then
        X^(i) = X^(3)
    else
        X^(i) = X^(4)
    endif
endif
```
**Algorithm 3**: \(\mathbf{X}^{(i)}\) of Submodel-4

## 3 Optimization procedure and its condition

In this chapter, we describe the method to optimize the parameters of the proposed model using system identification (SI). SI of ship maneuvering is defined as the problem of finding a parameter vector of the dynamic model \(\mathbf{\theta}_{\text{opt}}\) that minimizes the difference between the state variable history of the input dataset \(\mathcal{D}\) and the state variable history estimated by the maneuvering simulation using the obtained model. Here, \(\mathcal{D}\) is the measured result of model tests or full-scale ship trials and includes the data of the state variables \(\mathbf{s}_{t}\), the control input \(\mathbf{a}_{t}\), and the disturbances \(\mathbf{\omega}_{t}\). The dataset \(\mathcal{D}\) consisted of a training dataset \(\mathcal{D}^{\rm(train)}\) for the optimization procedure, a validation dataset \(\mathcal{D}^{\rm(validation)}\) for selecting the optimal parameter and hyperparameters, and a test dataset \(\mathcal{D}^{\rm(test)}\) to test the generalization performance on unknown data. From now on, bracketed superscripts indicate values for the associated dataset, and the superscript (input) indicates one of (train), (validation), or (test). The chapter is organized as follows: the objective function, which is the metric for the optimization, is described in Section 3.1; the domain of the parameter exploration in Section 3.2; the details of the dataset in Section 3.3; the hyperparameters in Section 3.4; the optimization algorithm CMA-ES in Section 3.5; and the model generated by a conventional parameter identification method, used as a comparison, in Section 3.6.

### Objective function

The optimization procedure is defined as the exploration for the parameter vector \(\mathbf{\theta}\) of the dynamic model that minimizes the objective function \(\mathcal{F}\) on the training data set \(\mathcal{D}^{\rm(train)}\) using an optimization algorithm, CMA-ES.
Objective function \(\mathcal{F}\) is defined as follows: \[\mathbf{\theta}^{\rm(train)}_{\rm opt}=\underset{\mathbf{\theta}\in\mathbf{\Theta}}{\rm argmin }\ \ \mathcal{F}(\mathbf{\theta};\ \mathcal{D}^{\rm(train)}) \tag{20}\] where \[\begin{split}\mathcal{F}(\mathbf{\theta};\ \mathcal{D}^{\rm(train)})& =\mathcal{L}^{\rm(train)}(\mathbf{\theta};\ \mathcal{D}^{\rm(train)})\\ &+\alpha P_{\rm div}(\mathbf{\theta})+\lambda\|\mathbf{\theta}\|_{1}\\ \mathcal{L}^{\rm(train)}(\mathbf{\theta};\ \mathcal{D}^{\rm(train)}) \\ &=\sum_{i=1}^{N}\int_{t=0}^{t_{\rm f}}\|\hat{\mathbf{s}}_{t}^{\rm( train,i)}-\hat{\mathbf{s}}_{t}^{\rm(sim,i)}(\mathbf{\theta})\|^{2}\mathrm{d}t\enspace.\end{split} \tag{21}\] The first term of Eq. (21) is the error norm \(\mathcal{L}\) of the maneuvering simulation using \(\mathbf{\theta}\) for the data set \(\mathcal{D}^{\rm(train)}\). The second term is the deviation penalty from the a priori acceleration range for \(\mathbf{\theta}\), and \(\alpha\) is its weighting coefficient. The third term is the regularization term, and \(\lambda\) is the L1 regularization penalty. Detail of the first term is described in Section 3.1.1, and the second term is described in Section 3.1.2. In Eq. (22), superscript \(i=1\dots N\) denotes the \(i\)-th contiguous subsequences (CS), and \(N\) denotes the total number of CS. Details of CS are stated in the next subsection. Also, \(\hat{s}_{t,j}^{\rm(\cdot,\cdot)}\) is the standardized state variables of the \(i\)-th CS at time \(t\), and their \(j\)-th component is defined by the mean \(\mu_{j}^{\rm(train,i)}\) and standard deviation \(\sigma_{j}^{\rm(train,i)}\) as in the following equation: \[\hat{s}_{t,j}^{\rm(train,i)}=\left(s_{t,j}^{\rm(train,i)}-\mu_{j} ^{\rm(train,i)}\right)/\sigma_{j}^{\rm(train,i)} \tag{23}\] \[\hat{s}_{t,j}^{\rm(sim,i)}=\left(s_{t,j}^{\rm(sim,i)}-\mu_{j}^{ \rm(train,i)}\right)/\sigma_{j}^{\rm(train,i)}\enspace. \tag{24}\] #### 3.1.1 Maneuvering Simulation and Error Norm \(\mathcal{L}\) The maneuvering simulation is conducted to evaluate the performance of \(\mathbf{\theta}\), by estimating the state variable \(\mathbf{s}_{t}^{\rm(sim)}\) as an initial value problem using the dynamic model described in Section 2 and \(\mathbf{\theta}\). The control input \(\mathbf{a}_{t}\) and the disturbance \(\mathbf{\omega}_{t}\) are given. In this study, \(\mathbf{s}_{t_{0}}^{\rm(sim)}\), \(\mathbf{a}_{t}^{\rm(sim)}\), \(\mathbf{\omega}_{t}^{\rm(sim)}\) were defined by the input dataset \(\mathcal{D}^{\rm(input)}\) as Eq. (25): \[\begin{split}\mathbf{s}_{t_{0}}^{\rm(sim,i)}&=\mathbf{s}_{t _{0}}^{\rm(input,i)}\\ \mathbf{a}_{t}^{\rm(sim)}&=\mathbf{a}_{t}^{\rm(input,i)}\\ \mathbf{\omega}_{t}^{\rm(sim,i)}&=\mathbf{\omega}_{t}^{\rm( input,i)}\enspace.\end{split} \tag{25}\] Here, \(\mathcal{D}^{\rm(input)}\) is one of \(\mathcal{D}^{\rm(train)}\), \(\mathcal{D}^{\rm(validation)}\), \(\mathcal{D}^{\rm(test)}\). We can get the time derivative of \(\mathbf{s}_{t}\): \(\hat{\mathbf{s}}_{t}\equiv(\dot{u},\ \dot{v}_{m},\ \dot{r})\) by solving Eq. (1). The first-order Euler method was used for time development. The time interval \(\Delta t=0.1\) s was used. In the simulation, we divide \(\mathcal{D}^{\rm(input)}\) into contiguous subsequences: \(\mathcal{D}^{\rm(input,i)}\) to avoid accumulation of errors. We divided \(\mathcal{D}^{\rm(input)}\) into CS because the velocity was used in the error norm (Eq. (22)) instead of accelerations. Accelerations can be directly obtained by solving Eq. 
(1); however, acceleration measurement is more difficult than speed and angular velocity measurement. On the other hand, the estimation error of accelerations will be accumulated in the velocity components. The length of the CS is set to \(t_{\text{f}}=100\) s. In addition, because we search for coefficients from a wide range, certain combinations of model coefficients can result in unusually large acceleration during maneuvering simulations. Unrealistically large acceleration will cause numerical overflows in \(\mathbf{s}_{t}\) or \(\mathcal{L}\). We try to prevent overflow by substituting the extra large value with a constant value corresponding to the time-step number when the absolute values of velocity and acceleration exceed the limit \(\mathbf{S}_{\text{limit}}\), same as [1]: \[S_{t,j=1\ldots 6}(t)=\begin{cases}\text{sgn}\big{(}S_{t,j}\big{)}(2-t/t_{ \text{f}})S_{\text{limit},j}&:|S_{t,j}|>S_{\text{limit}}\\ S_{t,j}&:\text{else}\end{cases} \tag{26}\] where \[\mathbf{S}_{t} \equiv(\mathbf{s}_{t},\dot{\mathbf{s}}_{t}) \tag{27}\] \[\mathbf{S}_{\text{limit}} =(a,\ a,\ 2a/L_{\text{pp}},\ a,\ a,\ 2a/L_{\text{pp}})\] (28) \[a =1.0\times 10^{80}\] \[\text{sgn}(x) =\begin{cases}1&:x>0\\ -1&:x<0\\ 0&:x=0\end{cases}. \tag{29}\] #### 3.1.2 Deviation Penalty \(P_{\text{div}}\) Because the proposed model is a simple polynomial equation that does not include hydrodynamic constraints, the acceleration may be overestimated for unknown data, which may cause generalization performance degradation. To prevent overestimation, we imposed a penalty on the objective function when the estimated acceleration exceeded the a priori limit for a specific state and control inputs, which was assumed to give maximum acceleration. This method [46] was expected to eliminate model parameters that return unrealistic accelerations. Below is the detailed calculation method of the deviation penalty, \(P_{\text{div}}\). With given initial state variable \(\mathbf{s}_{0}\), steady control input \(\mathbf{a}_{t}=\mathbf{a}_{0}\), and steady disturbance \(\mathbf{\omega}_{t}=\mathbf{\omega}_{0}\) a penalty is applied if the estimated acceleration \(\dot{\mathbf{s}}_{t}^{(\text{sim})}\) exceed the predefined range \([\dot{\mathbf{s}}_{\text{min}},\dot{\mathbf{s}}_{\text{max}}]\) as follows: \[P_{\text{div}}=\sum_{n=1}^{N_{\text{div}}}P_{n} \tag{30}\] where \[P_{n} =\begin{cases}1-\frac{t_{\text{over}}-\Delta t}{t_{\text{f}}}&: t_{\text{over}}<t_{\text{f}}^{(\text{div})}\\ 0&:\text{else}\end{cases} \tag{31}\] \[t_{\text{over}} =\inf\left\{t\geq 0:\dot{\mathbf{s}}_{t}^{(\text{sim})}\not\in[ \dot{\mathbf{s}}_{\text{min}},\dot{\mathbf{s}}_{\text{max}}]\right\}. \tag{32}\] Here \(t_{\text{f}}^{(\text{div})}\) is duration of the maneuvering simulation and \(t_{\text{f}}=1\) s. Since \(\mathbf{s}_{0}\) and \(\mathbf{a}_{0}\) are expected combinations to generate the maximum or minimum acceleration, we set \(s_{0}=(u_{0},\ v_{0},\ r_{0})\) by the maximum and minimum value of \(\mathcal{D}\) with a certain margin, and set \(a_{0}=(n_{\text{P}0},\ \delta_{s0},\ \delta_{p0})\) by the value which expected to generate maximum and minimum control forces. Accordingly, \(\mathbf{s}_{0}\) and \(\mathbf{a}_{0}\) are show as in Eq. (33): \[u_{0} \in\{-0.2,0.0,0.5\} \tag{33}\] \[v_{0} \in\{-0.2,0.0,0.2\}\] \[r_{0} \in\{-0.1,0.0,0.1\}\] \[n_{\text{P}0} \in\{0.0,12.5\}\] \[\delta_{s0} \in\{-35,0,45,60,90\}\] \[\delta_{p0} \in\{35,0,-45,-60,-90\}\ \.\] Total number of combination \(N_{\text{div}}\) is 1350. 
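The total of 1350 comes directly from the Cartesian product of the grids in Eq. (33), i.e., \(3\times 3\times 3\times 2\times 5\times 5\). Below is a minimal sketch of that enumeration together with the penalty term of Eq. (31); the maneuvering simulation that determines \(t_{\text{over}}\) is omitted, and the helper names are illustrative.

```python
from itertools import product

# Grids of Eq. (33); 3*3*3 initial states x 2*5*5 control inputs = 1350.
u0  = [-0.2, 0.0, 0.5]
v0  = [-0.2, 0.0, 0.2]
r0  = [-0.1, 0.0, 0.1]
np0 = [0.0, 12.5]
ds0 = [-35, 0, 45, 60, 90]
dp0 = [35, 0, -45, -60, -90]

combinations = list(product(u0, v0, r0, np0, ds0, dp0))
print(len(combinations))   # 1350 = N_div

def penalty_term(t_over, dt=0.1, t_f_div=1.0):
    """P_n of Eq. (31): t_over is the first time the simulated acceleration
    leaves [s_dot_min, s_dot_max]; no penalty if it stays inside for the whole
    1-second check.  A sketch only; the simulation itself is not shown."""
    if t_over < t_f_div:
        return 1.0 - (t_over - dt) / t_f_div
    return 0.0
```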
In addition, no wind disturbance is assumed \(\mathbf{\omega}_{0}=(0,\ 0)\). The range of \([\dot{\mathbf{s}}_{\text{min}},\dot{\mathbf{s}}_{\text{max}}]\) was defined using a priori information. Here, we used 1-second moving average of the numerical differentiation of \(\mathbf{s}_{t}^{(\mathcal{D})}\) as the acceleration \(\dot{\mathbf{s}}_{t}^{(\mathcal{D})}\). This study's model experiment did not measure acceleration due to its low signal-noise ratio. The 99.7th percentile of the of modulus of the acceleration \(\mathbf{Q}_{99.7}(|\dot{\mathbf{s}}_{t}^{(\mathcal{D})}|)\) was used to set the range: \[[\dot{\mathbf{s}}_{\text{min}},\dot{\mathbf{s}}_{\text{max}}]=[-\mathbf{Q}_{99.7}(|\dot{ \mathbf{s}}_{t}^{(\mathcal{D})}|),\ \mathbf{Q}_{99.7}(|\dot{\mathbf{s}}_{t}^{(\mathcal{D})}|)]. \tag{34}\] We can simplify the range by employing the modulus function in Eq. (34). The reason for using the 99.7th percentile is to remove the effect of noise amplified by numerical differentiation. If the subject ship's acceleration can be measured appropriately, measured acceleration is more suitable. ### Exploration domain In this section, we show the exploration domain \(\mathbf{\Theta}\) of parameter \(\mathbf{\theta}\). The parameter vector \(\mathbf{\theta}\) is consisted of added masses and inertia, hydrodynamic coefficients, and wind force coefficients, as follows: \[\begin{split}\mathbf{\theta}=&\big{(}m_{x},\ m_{y},\ I_{ zz}+J_{zz},\\ &\mathbf{X}^{(i)},\ \mathbf{Y}^{(i)},\ \mathbf{N}^{(i)},\\ & X_{A0},\ X_{A1},\ X_{A3},\ X_{A5},\\ & Y_{A1},\ Y_{A3},\ Y_{A5},\ N_{A1},\ N_{A2},\ X_{A3}\ \big{)}\enspace.\end{split} \tag{35}\] Domain \(\mathbf{\Theta}\) must be defined without a captive model test of the subject ship a priori. In our previous study [1], boundary of the domain \(\mathbf{\Theta}\) is set to be approximately 10 times greater value of a priori solutions, i.e., captive model test results and empirical value. Because of this vast exploration domain, we believed we could obtain optimal coefficients even if the hull form or dynamic model's formulae differed from a priori solutions. Hence, we set the domain boundary by multiplying the coefficient values of the classic Abkowitz model for single-propeller, single-rudder ship [3]: \[\theta_{j}\in\Theta_{j}=\big{[}-1.0,1.0\big{]}\enspace. \tag{36}\] Here, \(\Theta_{j}\) is a domain for the \(j\)-th component of the parameter vector. Moreover, several exceptions of domain boundary setting were made, like the previous study, as follows: * Maximum value of \(X_{u},\ X_{uu},\ Y_{v_{\rm m}},\ N_{r}\) are set to 0 because the signs of diagonal resistance components are self-explanatory. * Minimum value of \(X_{n_{\rm P}}\) was set to 0 because the sign of the first-order thrust coefficient is self-explanatory. * We multiplied 10 to Eq. (36) according to the order of \(r\) contained in \(\theta_{j}\), because \(r\) is relatively smaller than the other components of \(\mathbf{x}\) due to its radian/s dimension. * We multiplied 0.1 to Eq. (36) according to the order of \(n_{\rm P}\) contained in \(\theta_{j}\), because the subject ship is a model ship, \(n_{\rm P}\) is relatively larger than the other components of \(\mathbf{x}\). * The added mass, added mass moment, and wind force coefficients were determined from the empirical value \(\theta_{\rm EFD,}j\). 
For \(m_{x},\ m_{y},\ I_{zz}+J_{zz},\)\(\Theta_{j}=\big{[}-10\theta_{\rm EFD,}j,\ 10\theta_{\rm EFD,}j\big{]}\) and for wind coefficient, \(\Theta_{j}=\big{[}-10\theta_{\rm EFD,}j,\ 10\theta_{\rm EFD,}j\big{]}\). There are Pros and Cons to searching dimensional values. In dimensional form, domain \(\mathbf{\Theta}\) depends on the ship's size; hence,\(\mathbf{\Theta}\) needs to be determined by repeated trials every time the subject ship changes. In contrast, \(\mathbf{\Theta}\) for non-dimensional coefficients can set its domain boundaries independent of the size and scale of the ship; however, non-dimensionalization causes a problem when ship speed is zero. Further study on domain design is needed. ### Dataset This section describes the datasets used in the study. Maneuvering time histories of a 3-meter free-running model ship were used as the datasets. The measurement system of the model ship is almost the same as in the previous study [1], with the following differences. Ship speed \(u\) and \(v_{\rm m}\) were calculated from Speed Over Ground (SOG), Course Over Ground (COG) measured by the GNSS receiver, and ship heading measured by fiber optical gyro (FOG). In addition, compared to previous studies, the current FOG has better heading accuracy; the yaw drift was about \(1^{\circ}\)/hour. Moreover, unlike in previous studies, \(r\) was not filtered due to the improved performance of FOG. Random maneuvers are the maneuver used in the dataset, same as the previous study [1]. The reason for using random maneuvers was used is to ensure the stability of the dynamic model (Requirement 7), as described in Section 2. The random maneuvers are intended to obtain various different values of state variables and control inputs that can occur as the motion of the subject ship as possible. In this study, as in the previous studies, the model ship operator gives random \(\mathbf{a}_{t}\) so that the state variables and control inputs are distributed. For details on the random maneuvers, please refer to the previous study [1]. The experiment was conducted in an experiment pond (_Inukai_ Pond) at Osaka University. The control input \(\mathbf{a}_{t}\) of random maneuvers was given within a set upper and lower limit range, as shown in Table 2. The range of \(\delta_{\rm s},~{}\delta_{\rm p}\) is the maximum mechanical steering range of the VecTwin rudder. With the maximum propeller speed \(n_{\rm p\,max}\), the ship reaches \(u\approx 0.6\) m/s in a forwarding equilibrium state. The dimensionless speed corresponding to \(u=0.6\) is \(\mathrm{Fr}=u/\sqrt{gL_{\rm pp}}=0.11\), equivalent to 6.7 knots for a medium-sized ship (assuming \(L_{\rm pp}=100\) m). Therefore, \(n_{\rm p\,max}\) is a typical range of propeller speed for ship handling in a harbor. Since the subject ship is equipped with VecTwin rudder system, the ship can stop and astern without reversing the propeller; hence, \(n_{\rm p\,min}=0\) was used. The entire dataset \(\mathcal{D}\) was divided into \(\mathcal{D}^{\rm(train)}\), \(\mathcal{D}^{\rm(validation)}\) and \(\mathcal{D}^{\rm(test)}\), and here we compare the distribution among the datasets. First, the number of samples for each dataset is shown in time in table 1. The measurement frequency of the datasets is 0.1 s, equal to \(\Delta t\) of the maneuvering simulation. The entire dataset \(\mathcal{D}\) is divided to be the length of each dataset \(T^{\rm(train)}\), \(T^{\rm(validation)}\), \(T^{\rm(test)}\) is approximately \(6:1:1\). 
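As an illustration of how such a dataset can be prepared, the sketch below splits a recorded time series at roughly 6:1:1 and cuts each part into the contiguous subsequences (CS) of \(t_{\text{f}}=100\) s used by the error norm of Eq. (22). This is only a schematic example; the actual division of the measured runs may differ, and the names are illustrative.

```python
import numpy as np

def split_and_segment(data, dt=0.1, t_f=100.0, ratios=(6, 1, 1)):
    """Split a recorded maneuver time series (one row per 0.1 s sample) into
    train/validation/test at roughly 6:1:1, then cut each part into
    contiguous subsequences of t_f = 100 s."""
    n = len(data)
    total = sum(ratios)
    i_train = n * ratios[0] // total
    i_valid = i_train + n * ratios[1] // total
    parts = {"train": data[:i_train],
             "validation": data[i_train:i_valid],
             "test": data[i_valid:]}

    cs_len = int(t_f / dt)                    # 1000 samples per CS
    segmented = {name: [part[k:k + cs_len]
                        for k in range(0, len(part) - cs_len + 1, cs_len)]
                 for name, part in parts.items()}
    return segmented
```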
The distribution of \(\mathbf{s}_{t}\) and control inputs \(n_{\rm P},~{}\delta_{\rm s},\delta_{\rm p}\) for each dataset are shown in Fig. 2. The distributions of the datasets generally agree well with each other. The data set \(\mathcal{D}\) is expected to cover all possible maneuvers that may occur during harbor maneuvers, including unknown maneuvers, to obtain a robust dynamic model. There is no clear definition of the range of state variables for harbor maneuvers; however, the state variables in \(\mathcal{D}\) are comparable to the empirical range of speeds used in the harbor when converted to real ship scale. Specifically, \(u\) is \(u\in[-0.1,~{}0.4]\), equivalent to 1.2 knots astern and 4.5 knots forward of \(L_{\rm pp}=100\). Both \(v_{\rm m}\) and \(r\) are approximately symmetric, i.e., positively and negatively symmetrically distributed, and \(v_{\rm m}\in[-0.1,~{}0.1]\), which is the range of slow maneuver corresponding to \(\mathrm{Fr}_{v_{\rm m}}=\pm 0.018\) and 1.2 knots at \(L_{\rm pp}=100\). Further, by observing the distribution of \(u\) and \(v_{\rm m}\), we see that the drift angle \(\beta\) is distributed over the entire circumference. In addition, random maneuvers are favorable because correlations between state variables and control inputs are weak. The multi-colinearity is a known problem when using Zig-zag maneuvers and turnings as training data due to the strong correlation between \(v_{\rm m}\) and \(r\) in those motions [3, 47]. On the other hand, in the random maneuvers, the correlation between all state variables and control inputs is weak, not only between \(v_{\rm m}\) and \(r\) as shown in Fig. 2. In this study, random maneuvers were employed as the data set, but the validity of the proposed model needs to be demonstrated for motions other than random maneuvers. Here, other than random maneuvers, Crash-Astern (CA) test data set \(\mathcal{D}^{\rm(CA)}\) was employed. A CA test was performed by making a steady propeller revolution at \(n_{\rm P}=7.3\) rps or \(n_{\rm P}=9.8\) rps from a stopped state, accelerating straight ahead at \(\delta_{\rm sp}=0^{\circ}\) for 120 s (\(n_{\rm P}=7.3\)) or 100 s (\(n_{\rm P}=9.8\)), and steer \(\delta_{\rm s}=105^{\circ},~{}\delta_{\rm p}=-105^{\circ}\) to decelerate, stop, and astern. In total, \(\mathcal{D}^{\rm(CA)}\) contains four CA tests, and each test is about 400 seconds. Hence the total length of the \(\mathcal{D}^{\rm(CA)}\) is \(T^{\rm(CA)}=1647.7\) s as shown in Table 1. The normalized histogram of \(\mathcal{D}^{\rm(CA)}\) and \(\mathcal{D}^{\rm(train)}\) shown in Fig. 3. As mentioned earlier, CA test only used a specific \(n_{\rm P},~{}\delta_{\rm s},~{}\delta_{\rm p}\) as a command signal, which results in a stronger \(n_{\rm P},~{}\delta_{\rm s},~{}\delta_{\rm p}\) bias than the random maneuvers. In addition, since \(n_{\rm P},~{}\delta_{\rm s},~{}\delta_{\rm p}\) were constant, the speed developed faster, and the distribution of \(u\) was wider than that of random maneuvering. On the other hand, \(v_{\rm m}\) and \(r\) were more biased around 0 than in random maneuvers. ### Numerical Conditions of Dynamic Models and hyperparameters The objective function of Eq. (21) requires the user to select the hyperparameters \(\alpha\) and \(\lambda\). 
The \(\alpha\) and \(\lambda\) \begin{table} \begin{tabular}{c c c c c} \hline \hline Name & \(T^{\rm(train)}\) & \(T^{\rm(validation)}\) & \(T^{\rm(test)}\) & \(T^{\rm(CA)}\) \\ \hline Value (s) & 7407.4 & 1201.2 & 1201.2 & 1647.7 \\ \hline \hline \end{tabular} \end{table} Table 1: Duration of datasets. Figure 2: Distribution of state variables and control inputs of data sets. Diagonal and upper triangle figures are univariate and bivariate kernel density estimations of state variables and control inputs, respectively. Lower triangle figures are bivariable scatter plots of state variables and control inputs. In this figure, data sets were re-sampled with one sample per five seconds for plotting. depend on the dynamic model's complexity, i.e., order of the model's polynomials and number of Submodels. In this study, order of polynomials and the number of Submodels were calculated for five combinations as shown in Table 3. Hereafter, a dynamic model with Submodel-2 and 2nd-order polynomials will be referred to as a "SM-2, 2nd-order model." In addition, hyperparameters \(\alpha\) and \(\lambda\) are used for the combinations shown inTables 4 and 5. Hereafter, an optimization performed with a specific dynamic model, \(\alpha\), and \(\lambda\) is referred to as a "computational case." A total of 28 computational cases were executed. \begin{table} \begin{tabular}{c c c c c} \hline Case No. & Order & SM & \(\alpha\) & \(\lambda\) \\ \hline 1 & & & 0 & 0 \\ 2 & 2nd & 2 & 0 & \(1e2\) \\ 3 & & & \(1e2\) & 0 \\ 4 & & & \(1e2\) & \(1e2\) \\ \hline 5 & & & 0 & 0 \\ 6 & 2nd & 3 & \(0\) & \(1e2\) \\ 7 & & & \(1e2\) & 0 \\ 8 & & & \(1e2\) & \(1e2\) \\ \hline 9 & & & 0 & 0 \\ 10 & 2nd & 4 & 0 & \(1e2\) \\ 11 & & & \(1e2\) & \(1e2\) \\ \hline \end{tabular} \end{table} Table 4: Settings of computational cases with 2nd-order models. \begin{table} \begin{tabular}{c c} \hline Submodels & Max. order & Unknown variables \\ \hline 2 & & 133 \\ 3 & 2nd order & 193 \\ 4 & & 253 \\ \hline 2 & & 413 \\ 3 & 3rd order & 613 \\ \hline \end{tabular} \end{table} Table 3: Maximum order of polynomials and number of Submodels of the dynamic models. Figure 3: Histograms of \(x_{k}\) of \(\mathcal{D}^{\rm(train)}\) and \(\mathcal{D}^{\rm(CA)}\). Histograms are normalized to make the total area of the bins to be 1 for each data set. ### CMA-ES and its conditions This study used the optimization method Covariance Matrix Adaption-Evolution Strategy (CMA-ES) [48]. CMA-ES is effective for complex optimization problems where the problems are non-separable and multi-modal [49, 50]. In addition, CMA-ES has strong search capability for optimization problems up to several hundred dimensions[51]. We expect CMA-ES to be effective for the Black-Box optimization problem we are tackling in this study. In addition, CMA-ES has a practical advantage in requiring fewer hyperparameters to be determined by the user. CMA-ES uses the normal distribution to generate candidate solutions and updates the statistics (mean and covariance matrices) of the normal distribution using candidates with a high degree of adaption to the objective function among the candidates. This process of generation, evaluation, and update is iterated, which is interpreted as minimizing the expectation value of the evaluation function [52, 53]. In this study, the CMA-ES with box-constraints[54] and the restart strategy [55] was used, as in the previous study [1, 35]. The initial and maximum population sizes of candidate solutions were set to 64 and 256, respectively. 
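For readers who wish to reproduce the workflow with an off-the-shelf implementation, the ask-tell interface of the `pycma` package realizes the generation-evaluation-update cycle described above. The sketch below is an illustration under assumptions, not the authors' Fortran implementation: `objective`, `lower`, and `upper` are placeholders for the objective of Eq. (21) and the box constraints of Section 3.2, and the restart strategy that grows the population from 64 to 256 is omitted for brevity.

```python
import cma  # pycma package

# `lower`, `upper`: numpy arrays describing the exploration domain Theta,
# `objective(theta)`: runs the maneuvering simulations and returns F of Eq. (21).
x0 = 0.5 * (lower + upper)                     # start at the domain center
sigma0 = 0.3 * float((upper - lower).mean())   # initial step size

es = cma.CMAEvolutionStrategy(x0, sigma0,
                              {"bounds": [list(lower), list(upper)],
                               "popsize": 64})
while not es.stop():
    candidates = es.ask()                              # sample candidate theta
    es.tell(candidates, [objective(c) for c in candidates])
best_theta = es.result.xbest
```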
This study's optimization using CMA-ES was terminated at \(5\times 10^{5}\) iterations. ### MMG-EFD model The proposed model needs to be verified whether it can be used as a maneuvering estimation module of a harbor maneuvering simulator. The authors confirmed the validity of the proposed method by confirming that the proposed model has the same or better estimation performance as the model identified by conventional methods. That is, the low-speed maneuvering MMG model and captive model tests. Hereafter, the model for comparison will be referred to as the MMG-EFD (Experimental Fluid Dynamics) model. Details of the model and parameters used in MMG-EFD model are as follows. The MMG model estimates the hydrodynamic forces generated by each module, such as the hull, propeller, rudder, and wind force. The sub-modules used in MMG-EFD model are as follows: Yoshimura's model [56] for the hull; Fujiwara's model [38] for the wind pressure; and Kang's model [57] for the propeller and rudder forces. Model parameters were obtained by a captive model test of the subject ship, empirical formulae, or substituted by other ship's parameters. Specifically, for the hull model, the linear coefficient and the astern drag coefficient were derived from captive model tests, while the other values were obtained by empirical formulae [56]. For the propeller and rudder models, propeller thrust coefficients \(K_{T0},~{}K_{T1},K_{T2}\) were obtained from the propeller open test of the subject ship's propeller, and the other coefficients were substituted by VLCC's values given by Kang [57]. Wind coefficients were derived by Fujiwara's regression formulae using the subject ship's geometric parameters. Although the MMG-EFD model used surrogate parameters for more than half of its parameters, the authors believe that the MMG-EFD model has sufficient accuracy for practical use because the authors used the MMG-EFD model to train an automatic berthing controller and successfully demonstrated a \begin{table} \begin{tabular}{c c c c c} \hline \hline Case No. & Order & SM & \(\alpha\) & \(\lambda\) \\ \hline 13 & & & & 0 & 0 \\ 14 & & & & 0 & \(1e2\) \\ 15 & & & & 0 & \(1e4\) \\ 16 & & & & \(1e2\) & 0 \\ 17 & & & & \(1e2\) & \(1e2\) \\ 18 & & & & \(1e2\) & \(1e4\) \\ 19 & & & & \(1e4\) & 0 \\ 20 & & & & \(1e4\) & \(1e2\) \\ \hline 21 & & & & 0 & 0 \\ 22 & & & & 0 & \(1e2\) \\ 23 & & & & 0 & \(1e4\) \\ 24 & & & & \(1e2\) & \(0\) \\ 25 & & & & \(1e2\) & \(1e2\) \\ 26 & & & & \(1e2\) & \(1e4\) \\ 27 & & & & \(1e4\) & \(0\) \\ 28 & & & & \(1e4\) & \(1e2\) \\ \hline \hline \end{tabular} \end{table} Table 5: Settings of computational cases with 3rd order models. scaled-model experiment of automatic berthing [34]. ## 4 Results In this chapter, we examine the performance of the optimal parameter \(\mathbf{\theta}_{\text{opt}}\) obtained from the optimization. Firstly, we select the optimal parameter for each computational case by observing the learning curve of train loss \(\mathcal{L}(\mathbf{\theta};\ \mathcal{D}^{\text{(train)}})\) and validation loss \(\mathcal{L}(\mathbf{\theta};\ \mathcal{D}^{\text{(validation)}})\). Secondly, since CMA-ES is a stochastic search method, we check the dependency on the random seed used in CMA-ES. Thirdly, we select the optimal value of hyperparameters by comparing the \(\mathcal{L}(\mathbf{\theta}_{\text{opt}};\ \mathcal{D}^{\text{(validation)}})\) with different hyperparameter values. 
Finally, the generalization performance is checked using random maneuvering test set \(\mathcal{D}^{\text{(test)}}\) and crash-astern test set \(\mathcal{D}^{\text{(CA)}}\). We also check how the performance varies with the order of model formulae and the Submodel. Furthermore, the estimation error in Train, Validation, and Test are discussed. The error norm \(\mathcal{L}\) defined by Eq. (22) is used as a performance metric. This is because the objective function \(\mathcal{F}\) shown in Eq. (21) is penalized by two terms other than the error norm. ### Learning Curve of Optimization The optimization yields the optimal parameter \(\mathbf{\theta}_{\text{opt}}^{\text{(train)}}\) in \(\mathcal{D}^{\text{(train)}}\) defined by Eq. (20), but overfitting may occur. Thus, the error norm \(\mathcal{L}^{\text{(validation)}}\equiv\mathcal{L}(\mathbf{\theta};\ \mathcal{D}^{\text{(validation)}})\) on \(\mathcal{D}^{\text{(validation)}}\) shown below, \[\begin{split}\mathcal{L}(\mathbf{\theta};\ \mathcal{D}^{\text{(validation)}})\\ =\sum_{i=1}^{N}\int_{t=0}^{t_{\text{f}}}\|\hat{\mathbf{s}_{t}}^{ \text{(validation,}i)}-\hat{\mathbf{s}_{t}}^{\text{(sim,}i)}(\mathbf{\theta})\|^{2} \mathrm{d}t\end{split} \tag{37}\] and \(\mathcal{L}^{\text{(train)}}\) are observed during iterative process of optimization. Fig. 4 shows the learning curve of two representative computational cases. Note that in Fig. 4, the mean of the candidate solutions \(\mathbf{\overline{\theta}}\) of each iteration of the CMA-ES were used for error norm \(\mathcal{L}^{\text{(train)}},\ \mathcal{L}^{\text{(validation)}}\) by Eqs. (22) and (37). Since the total time of datasets is different, the error norms were compared in error-per-time from \((\mathcal{L}/T)\) using the data set time \(T^{\text{(train)}},\ T^{\text{(validation)}}\). Note that \(\mathcal{L}^{\text{(train)}}\) and \(\mathcal{L}^{\text{(validation)}}\) have several peaks at the same iterations in Fig. 4. This is due to the restart strategy of CMA-ES. The learning curve of the SM-2, 3rd order model is shown in Fig. 4a. The validation loss \(\mathcal{L}^{\text{(validation)}}\), indicated by the blue line, showed the same downward trend as training loss \(\mathcal{L}^{\text{(train)}}\). The validation loss stagnated at the value with slight degradation from its minimum indicated by the blue circle. Therefore, we concluded that overfitting did not occur in this computational case. On the other hand, in the SM-3, 3rd order model, shown in Fig. 4b, \(\mathcal{L}^{\text{(validation)}}\) worsened in the iteration around \(\mathbf{\overline{\theta}}_{\text{opt}}^{\text{(train)}}\) indicated by the black circle; thus, we concluded that overfitting has occurred. To avoid overfitting, the optimal parameter on \(\mathcal{D}^{\text{(validation)}}\) was selected as the optimal parameter for each computational case, and the dynamic model with the optimal parameter \(\mathbf{\theta}_{\text{opt}}^{\text{(validation)}}\) was defined as the optimal model. More precisely, the optimal parameter \(\mathbf{\theta}_{\text{opt}}^{\text{(validation)}}\) was selected from the set \(\Theta^{\text{(train)}}\), which is the set of \(\mathbf{\overline{\theta}}\) per 200 iterations: \[\mathbf{\theta}_{\text{opt}}^{\text{(validation)}}=\underset{\mathbf{\overline{ \theta}}\in\Theta^{\text{(train)}}}{\text{argmin}}\ \mathcal{L}(\mathbf{\overline{\theta}};\ \mathcal{D}^{\text{(validation)}})\enspace. 
\tag{38}\] ### Random seed trial Since CMA-ES is a stochastic search method, the optimal parameter may depend on the random seed. Before testing the performance of the optimal model, we check the effect of the random seed. Here, we performed five independent trials with different random number seeds for representative cases. Random seed trials with regularization penalty \(\lambda\) and deviation penalty \(\alpha\) were only conducted for 3rd-order models. Trials on the cases with \(\alpha=0\) or \(\lambda=0\) were conducted both on 2nd-order and 3rd-order models. From now on, for simplicity of notation, the error norm is expressed in the following expression: \[\begin{split}\mathcal{L}_{\text{opt}}^{\prime\text{(train)}}& =\mathcal{L}(\mathbf{\theta}_{\text{opt}}^{\text{(train)}};\mathcal{D}^{ \text{(train)}})/T^{\text{(train)}}\\ \mathcal{L}_{\text{opt}}^{\prime\text{(validation)}}& =\mathcal{L}(\mathbf{\theta}_{\text{opt}}^{\text{(validation)}}; \mathcal{D}^{\text{(validation)}})/T^{\text{(validation)}}\enspace.\end{split} \tag{39}\] The effect of random seed on the error norm \(\mathcal{L}\) is shown in Fig. 5. In all cases except for the SM-3, 3rd order model, \(\alpha=0,\ \lambda=1e2\) shown in yellow, the error norm is smaller than that of the MMG-EFD model for both training and validation dataset regardless of the random seed, indicating that the dynamic model obtained by the proposed method has the same or better estimation performance than the existing methods. Next, we discuss the differences in performance for each computational case on the training and validation datasets. First, we focus on \(\mathcal{L}^{\prime(\text{train})}_{\text{opt}}\). In \(\mathcal{L}^{\prime(\text{train})}_{\text{opt}}\), the third-order model has a smaller error norm \(\mathcal{L}^{\prime(\text{train})}_{\text{opt}}\) than the second-order model, which means that it fits the training data better. The effect of the regularization penalty \(\lambda\) and the deviation penalty \(\alpha\) differed depending on the dynamic model. In the SM-2, 3rd order model, \(\alpha,\ \lambda\) tended to suppress the variation of \(\mathcal{L}^{\prime(\text{train})}_{\text{opt}}\). On the other hand, in the SM-3, 3rd order model, the variation of \(\mathcal{L}^{\prime(\text{train})}_{\text{opt}}\) was larger. Moreover, on \(\lambda=1e2\) case, \(\mathcal{L}^{\prime(\text{train})}_{\text{opt}}\) was significantly worsen. Therefore, the effect of \(\alpha\) and \(\lambda\) on optimization is different for each dynamic model. A parameter study of the hyperparameters is required for each model when using the proposed method. Next, we focus on \(\mathcal{L}^{\prime(\text{validation})}_{\text{opt}}\). Although \(\mathcal{L}^{\prime(\text{validation})}_{\text{opt}}\) is several times worse than that of \(\mathcal{L}^{\prime(\text{train})}_{\text{opt}}\), \(\mathcal{L}^{\prime(\text{validation})}_{\text{opt}}\) is still smaller than that of the MMG-EFD model except for one case. In other words, the proposed method has the same or better estimation accuracy than the conventional method even when the data is not used for training, as long as the appropriate \(\alpha\) and \(\lambda\) were selected, regardless of the random seed. The performance rank of the dynamic models on \(\mathcal{D}^{(\text{validation})}\) is the same as for training: SM-2, 3rd order model is the best, followed by SM-3, 3rd order model. 
The second-order model is inferior to the third-order model, and the performance difference between the second-order models is insignificant compared to the variation due to random seeding. Moreover, as in training, the trend of the regularization penalty \(\lambda\) and the deviation penalty \(\alpha\) differed among the dynamic models. In the SM-2, 3rd order model, the regularization penalty \(\lambda\) and the deviation penalty \(\alpha\) suppressed the variation of \(\mathcal{L}_{\text{opt}}^{\prime(\text{validation})}\). In SM-3, 3rd order model, \(\alpha=0,\ \lambda=1e2\) is an exception where \(\mathcal{L}_{\text{opt}}^{\prime(\text{validation})}\) is worse than the other cases. This may be because proper learning is not achieved as described in the previous paragraph, resulting in worse performance for \(\mathcal{L}_{\text{opt}}^{\prime(\text{validation})}\). ### Select the Best hyperparameter In the previous section, we discussed the effect of random seeding on performance estimation; we found that \(\alpha\) and \(\lambda\) show different trends depending on the dynamic model. Therefore, we next select the hyperparameters: \(\alpha\) and \(\lambda\), based on the performance on \(\mathcal{D}^{(\text{validation})}\). The error norm \(\mathcal{L}_{\text{opt}}^{\prime(\text{train})}\) and \(\mathcal{L}_{\text{opt}}^{\prime(\text{validation})}\) of the optimal model for all computational cases defined in Tables 4 and 5 are shown in Fig. 6. In Fig. 6a, dot plots of \(\mathcal{L}_{\text{opt}}^{\prime(\text{train})},\ \mathcal{L}_{\text{opt}}^{ \prime(\text{validation})}\) are color-coded by \(\alpha\) values, and in Fig. 6b dot plots are color-coded by \(\lambda\) values. For \(\mathcal{D}^{(\text{train})}\), the fitness was higher when the value of \(\alpha\) and \(\lambda\) were close to zero. For \(\mathcal{D}^{(\text{validation})}\), the same trend was seen for \(\lambda\), but there was no clear trend due to the deviation penalty \(\alpha\). Since the deviation penalty and regularization are intended to prevent overestimation of acceleration and overfitting, both caused by the complexity of the proposed model, the appropriate value is expected to depend on the characteristics of the dynamic model and dataset. However, we could not observe any clear performance degradation due to the penalties. Therefore, when using the proposed method, the user is recommended to perform parameter studies on the validation dataset for \(\alpha\) and \(\lambda\). For example, we introduced penalties to expect a regularization effect for models with high searching dimensions, but \(\lambda=1e4\) has worse \(\mathcal{L}_{\text{opt}}^{\prime(\text{train})},\ \mathcal{L}_{\text{opt}}^{ \prime(\text{validation})}\) are both worse than in the other cases, suggesting that the regularization penalty was too large. In the next section, we selected the combination of \(\alpha,\ \lambda\) that minimize \(\mathcal{L}^{(\text{validation})}\) for each dynamic model, and compared the performance of each dynamic model with the test data. ### Evaluation by Test Data The generalization performance of the optimal models is checked on the test data. 
As with training and validation, the following simplified notations are used for test and crash-astern:

\[\begin{split}\mathcal{L}_{\text{opt}}^{\prime(\text{test})}&=\mathcal{L}(\mathbf{\theta}_{\text{opt}}^{(\text{validation})};\ \mathcal{D}^{(\text{test})})/T^{(\text{test})}\\ \mathcal{L}_{\text{opt}}^{\prime(\text{CA})}&=\mathcal{L}(\mathbf{\theta}_{\text{opt}}^{(\text{validation})};\ \mathcal{D}^{(\text{CA})})/T^{(\text{CA})}\end{split} \tag{40}\]

#### 4.4.1 Random Maneuvers

The test data results for the optimal model of each dynamic model are shown in Fig. 7a. The figure shows that all the optimal models outperform the MMG-EFD model on \(\mathcal{D}^{(\text{test})}\). The plots generally lie on the straight dashed line in the figure. In other words, the optimal parameter \(\mathbf{\theta}_{\text{opt}}^{(\text{validation})}\) chosen on \(\mathcal{D}^{(\text{validation})}\) achieves the same accuracy on unknown test data. The MMG-EFD model also lies on the same line, indicating that, like the MMG-EFD model, the proposed method maintains its estimation accuracy regardless of the data set as long as the maneuvers are random.

Next, we show the results of the maneuvering simulation using the best and the worst optimal models to verify the performance of the proposed model in detail.

Fig. 5: Results of the random seed trial. Dot plots represent individual random seed trials. Labels on the vertical axis without \(\alpha\) and \(\lambda\) values correspond to \(\alpha=0,\ \lambda=0\). The vertical purple dashed line represents the MMG-EFD model results.

Based on Fig. 6(a), the SM-2, 2nd order model with \(\alpha=0,\ \lambda=0\) is the worst, and the SM-2, 3rd order model with \(\alpha=0,\ \lambda=0\) is the best. The maneuvering simulation results of the two models are shown in Fig. 8. The simulation results of the MMG-EFD model and the time series of \(\mathcal{D}^{\rm(test)}\) are also compared. The simulation results from both optimal models agreed with the experimental results. In addition, the accelerations \(\mathbf{\dot{s}}_{t}=(\dot{u},\ \dot{v}_{\rm m},\ \dot{r})\) of the optimal models and of the MMG-EFD model are of the same order. In other words, the proposed model with optimal parameters reproduces the same level of acceleration as the MMG-EFD model, a model constructed from hydrodynamic forces measured in model tests, even though the acceleration is not included in the optimization of the proposed model.

#### 4.4.2 Crash-Astern

Here we validate the proposed model for maneuvers other than random maneuvers using the crash-astern (CA) test data set \(\mathcal{D}^{\rm(CA)}\). The error norm for each dynamic model is shown in Fig. 6(b). The correlation coefficient between \(\mathcal{L}^{\prime(\text{validation})}_{\rm opt}\) and \(\mathcal{L}^{\prime(\text{CA})}_{\rm opt}\) is \(r=0.76\), i.e., they are positively correlated. In other words, by improving the performance on random maneuvers, the performance could be improved even for data with different types of maneuvers. As shown in Fig. 3, the distributions of \(\mathcal{D}^{\rm(train)}\) and \(\mathcal{D}^{\rm(CA)}\) are different. Consequently, the proposed method can improve the estimation performance for various motions and generate a stable dynamic model. Compared to \(\mathcal{D}^{\rm(test)}\), \(\mathcal{L}\) increased by a factor of 2 to 3, which means the estimation accuracy worsened, but the MMG-EFD model also showed a similar degree of deterioration.
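For completeness, the crash-astern maneuver described in Section 3.3 can be replayed through any identified model by feeding it the corresponding command sequence. The sketch below builds such a command time series; the schedule follows the description of the CA tests (steady \(n_{\rm P}\) and zero rudder angles during acceleration, then \(\delta_{\rm s}=105^{\circ},\ \delta_{\rm p}=-105^{\circ}\)), but the helper name and the exact timing details are illustrative.

```python
import numpy as np

def crash_astern_commands(n_p=7.3, t_accel=120.0, t_total=400.0, dt=0.1):
    """Command time series (n_P, delta_s, delta_p) of one crash-astern test:
    steady propeller revolutions with zero rudder angles for t_accel seconds,
    then delta_s = 105 deg and delta_p = -105 deg while the propeller keeps
    turning (the VecTwin rudders produce the reverse thrust)."""
    t = np.arange(0.0, t_total, dt)
    n_P = np.full_like(t, n_p)
    delta_s = np.where(t < t_accel, 0.0, np.deg2rad(105.0))
    delta_p = np.where(t < t_accel, 0.0, np.deg2rad(-105.0))
    return t, n_P, delta_s, delta_p
```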
The time series of the maneuvering simulation with the best model are compared with the physical experiment results. The results of the maneuvering simulation using the best and worst models for \(\mathcal{D}^{\rm(CA)}\) are shown in Figs. 9 and 10. In Figs. 9 and 10, one of the four CA tests included in \(\mathcal{D}^{\rm(CA)}\) is shown. This particular CA test has the smallest \(\mathcal{L}^{\prime}_{\rm opt}\) with the simulation using the best model. The trajectory \((x_{0},\ y_{0},\ \psi)\) shown in Fig. 9 agrees with the experimental trajectory well, except for \(\psi\) immediately after the start of the stopping maneuver in both optimal models and during astern in the worst model. In addition, the time series of \(\mathbf{s}_{t}^{(\text{sim})}\) and \(\hat{\mathbf{s}}_{t}^{(\text{sim})}\) are shown in Fig. 10. In both the best and the worst model's results, the estimation performance of the time series showed two types of trends on contiguous subsequences (CS): CS with good agreement and CS with a large discrepancy. In particular, the deviations were considerable for the CS at \(100<t<200\) s, immediately after the start of the crash-astern steering. Moreover, the slopes of \(v_{\text{m}},\ r\) had signs opposite to the experiment at \(100<t<120\) s. The velocity slope, i.e., the acceleration, is not resolved correctly, meaning that the transient characteristics of the hydrodynamic forces are not represented in that period. In this period, \(u=0.5\) m/s at \(t=0\) of the subsequence, which is a velocity extrapolated from the training data as shown in Fig. 3. This extrapolation beyond the training data is considered to have degraded the estimation performance.

Fig. 6: Relation between hyperparameters and estimation performance of the optimal model. The vertical purple dashed line represents MMG-EFD model results.

#### 4.4.3 Analysis on Error

This section analyzes the distribution of errors in the maneuvering simulations for each dataset, to summarize the estimation performance on the training, validation, and test-CA datasets. The model used is the best model, the SM-2, 3rd order model with \(\alpha=\lambda=0\). Since the error function of Eqs. (22) and (37) is a sum of errors, the squared error vector \(\mathbf{\varepsilon}_{t}\equiv\left(\varepsilon_{t,u},\ \varepsilon_{t,v_{\text{m}}},\ \varepsilon_{t,r}\right)\in\mathbb{R}^{3}\) at time \(t\) is analyzed. Here, the \(j\)-th component of \(\mathbf{\varepsilon}_{t}\) is defined as follows, \[\varepsilon_{t,j}=\left\{\hat{s}_{t,j}^{(\text{input},i)}-\hat{s}_{t,j}^{(\text{sim},i)}(\mathbf{\theta}_{\text{opt}};\ \mathcal{D}^{(\text{input})})\right\}^{2}. \tag{41}\] Here, \(\hat{s}_{t,j}^{(\cdot,i)}\) is the \(j\)-th component of the standardized state variables at time \(t\) of the \(i\)-th CS. The standardized state variables are \(\hat{\mathbf{s}}_{t}^{(\cdot,i)}=\left(\hat{u}^{(\cdot,i)}(t),\ \hat{v}_{\text{m}}^{(\cdot,i)}(t),\ \hat{r}^{(\cdot,i)}(t)\right)\). The histogram of \(\mathbf{\varepsilon}_{t}\) is shown in panel (a) of the corresponding figure and the cumulative frequency histogram of \(\mathbf{\varepsilon}_{t}\) in panel (b). The figure shows that test-CA contains larger instantaneous errors than training and validation. In test-CA, \(v_{\text{m}}\) and \(r\) have longer tails of the distribution than \(u\), which means that \(v_{\text{m}}\) and \(r\) contain larger instantaneous errors.
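The per-time-step error analysis of Eq. (41) could be sketched as follows (a simplified illustration with synthetic standardized states; not the authors' code):

```python
# Illustrative sketch of the error analysis in Eq. (41): squared errors of the
# standardized state variables (u, v_m, r) per time step. Array names and
# shapes are assumptions, not the authors' implementation.
import numpy as np

def squared_errors(s_input, s_sim):
    """Element-wise squared error between standardized measured and simulated
    states; both arrays have shape (n_steps, 3) for (u, v_m, r)."""
    return (s_input - s_sim) ** 2

rng = np.random.default_rng(0)
s_input = rng.normal(size=(1000, 3))                  # standardized measured states
s_sim = s_input + 0.1 * rng.normal(size=(1000, 3))    # simulated states

eps = squared_errors(s_input, s_sim)
# fraction of time steps whose squared error exceeds a threshold, per component
for j, name in enumerate(["u", "v_m", "r"]):
    frac = np.mean(eps[:, j] > 15.0)
    print(f"{name}: {100 * frac:.1f}% of steps with eps > 15")
```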
Figure 9: Estimated trajectories of maneuvering simulation using two optimal models and MMG-EFD model for \(\mathcal{D}^{(\text{CA})}\). Ship-like pentagons represent ship positions and headings at CS's beginning, middle, and end. The black line represents the input dataset, i.e., the measured trajectory of the free-running model test. Optimal model is SM-2, 2nd order, \(\alpha=0,\ \lambda=0\).

In addition, the cumulative frequencies of \(\varepsilon_{t,v_{\rm m}},\ \varepsilon_{t,r}\) increased gradually for \(\varepsilon_{t,j}>15\). From this result, we can say that no single large instantaneous error dominates the sum of errors; rather, the instantaneous errors in about 5 to 10% of the overall time steps worsen it. Therefore, measures to improve the accuracy of \(v_{\rm m}\) and \(r\) in these worst 5 to 10% of time steps are needed.

## 5 Discussion

Modeling Automation
Before concluding the study, we explain how the proposed model and method enable modeling automation. First, because of the simple rules for model derivation, the derivation can be automated. In contrast, existing hydrodynamic models must derive the expression based on hydrodynamics for each actuator configuration. Second, the only necessary input for the proposed method is the dataset. Generation of the dataset itself cannot be automated with the proposed method; however, a priori hydrodynamic information other than the dataset is not necessary. Since CMA-ES with the restart strategy is a quasi-parameter-free optimization [55], the user is not required to tune the parameters of the optimization itself. Third, the selection of the model formulae can also be automated because multiple model formulae can be derived automatically and selected by the evaluation based on maneuvering simulation. For these three reasons, the proposed method can essentially remove the human work required to generate dynamic models. By employing the proposed method, we can develop an algorithm that automatically generates a dynamic model suitable for harbor maneuvers once the user feeds it a dataset.

Limitations and Future Work
The main drawback of the proposed method is the computation time required for optimization. The SM-2, 2nd order model, the model with the smallest number of parameters, requires three days of computation time, and the SM-3, 3rd order model, the model with the largest number of parameters, requires ten days. The computation conditions were 16 parallel Intel Xeon Platinum 8260 computers, and the language and compiler were Fortran 90 and Intel Fortran. CMA-ES is easy to parallelize and may be faster depending on machine power, but it is computationally expensive in general because of the need for parallel-capable machines. A remaining issue is the dependence on the dataset size. In this study, the dataset size was kept constant. A total of approximately 10,000 seconds of random maneuvers was used. If \(L_{\text{pp}}=100\) m, this corresponds to about 16 hours. As the size of the target vessel increases, the cost of data acquisition also increases. Therefore, in addition to the dependence on the amount of data, reducing the amount of data required and efficient methods of data acquisition are also future work. In addition, introducing a non-dimensionalization that does not use the velocity \(U\), such as [58], may ease the setting of the domain boundaries of the coefficients.

## 6 Conclusion

In this study, we proposed the Abkowitz-MMG hybrid model and its parameter identification method for harbor maneuvers, aiming to realize automated modeling.
The proposed model can be derived according to simple rules using Taylor expansion and has a high degree of freedom to express the complex motions and ship handling of harbor maneuvers. The proposed model can be derived more easily than existing hydrodynamic models for arbitrary ship configurations. In addition, the physical meaning of the model's formulae and their terms is much more understandable than that of models using neural networks. We also proposed a method to identify the model parameters and select the model's formulae using system identification (SI). Even though the proposed model requires identifying several hundred model parameters because of its simple rules for derivation, SI using CMA-ES enables identifying the model parameters within a reasonable computational time. Moreover, since multiple models can be easily derived, we identified the parameters of several models and selected the best model from them. Thus, the proposed method does not depend on captive model tests or knowledge of ship hydrodynamics to select appropriate mathematical expressions of the model, thereby reducing the amount of labor required for model selection. In addition, the simple rule-based model derivation and easy model selection methods can relax the skill and facility requirements for users to perform model generation. This study used trajectories of a free-running model of a single-propeller, VecTwin-rudder-equipped ship as the dataset. As a result, we confirmed that even a dynamic model derived from simple rules could estimate unknown harbor maneuvers as accurately as, or more accurately than, a dynamic model generated by existing methods. In summary, we showed that the proposed model and method could reduce human intervention in model generation while having adequate estimation performance. Consequently, they can enable automated modeling, which automates the modeling process itself. Although the design and development of the automation algorithm remain as future tasks, the basic concept of modeling automation and the core of the automation algorithm, i.e., the dynamic model and the model generation method, were proposed in this study.

## Acknowledgements

This study was conducted as a part of the Nippon Foundation's "demonstration tests of The Nippon Foundation MEGURI2040 Fully Autonomous Ship Program." This study was also supported by a Grant-in-Aid for Scientific Research from the Japan Society for the Promotion of Science (JSPS KAKENHI Grant #22H01701). This study was conducted under a joint research project between Japan Hamworthy Co., Ltd. and Osaka University. The authors would like to express their gratitude to Prof. Masaaki Sano at Hiroshima University for providing captive model test results. The authors also would like to thank the students of the Ship Intelligentization Subarea, Osaka University, for supporting the model experiment: Dimas M. Rachman, Nozomi Amano, Hiroaki Koike, and Yuta Fueki.
2303.09221
Physics-based model of solar wind stream interaction regions: Interfacing between Multi-VP and 1D MHD for operational forecasting at L1
Our current capability of space weather prediction in the Earth's radiation belts is limited to only an hour in advance using the real-time solar wind monitoring at the Lagrangian L1 point. To mitigate the impacts of space weather on telecommunication satellites, several frameworks were proposed to advance the lead time of the prediction. We develop a prototype pipeline called "Helio1D" to forecast ambient solar wind conditions (speed, density, temperature, tangential magnetic field) at L1 with a lead time of 4 days. This pipeline predicts Corotating Interaction Regions (CIRs) and high-speed streams that can increase high-energy fluxes in the radiation belts. The Helio1D pipeline connects the Multi-VP model, which provides real-time solar wind emergence at 0.14 AU, and a 1D MHD model of solar wind propagation. We benchmark the Helio1D pipeline for solar wind speed against observations for the intervals in 2004 - 2013 and 2017 - 2018. We developed a framework based on the Fast Dynamic Time Warping technique that allows us to continuously compare time-series outputs containing CIRs to observations to measure the pipeline's performance. In particular, we use this framework to calibrate and improve the pipeline's performance for operational forecasting. To provide timing and magnitude uncertainties, we model several solar wind conditions in parallel, for a total of 21 profiles corresponding to the various virtual targets including the Earth. This pipeline can be used to feed real-time, daily solar wind forecasting that aims to predict the dynamics of the inner magnetosphere and the radiation belts.
R. Kieokaew, R. F. Pinto, E. Samara, C. Tao, M. Indurain, B. Lavraud, A. Brunet, V. Génot, A. Rouillard, N. André, S. Bourdarie, C. Katsavrias, F. Darrouzet, B. Grison, I. Daglis
2023-03-16T10:55:02Z
http://arxiv.org/abs/2303.09221v1
Physics-based model of solar wind stream interaction regions: Interfacing between Multi-VP and 1D MHD for operational forecasting at L1 ###### Abstract Our current capability of space weather prediction in the Earth's radiation belts is limited to only an hour in advance using the real-time solar wind monitoring at the Lagrangian L1 point. To mitigate the impacts of space weather on telecommunication satellites, several frameworks were proposed to advance the lead time of the prediction. We develop a prototype pipeline called "Helio1D" to forecast ambient solar wind conditions (speed, density, temperature, tangential magnetic field) at L1 with a lead time of 4 days. This pipeline predicts Corotating Interaction Regions (CIRs) and high-speed streams that can increase high-energy fluxes in the radiation belts. The Helio1D pipeline connects the Multi-VP model, which provides real-time solar wind emergence at \(0.14\) AU, and a 1D MHD model of solar wind propagation. We benchmark the Helio1D pipeline for solar wind speed against observations for the intervals in 2004 - 2013 and 2017 - 2018. We developed a framework based on the Fast Dynamic Time Warping technique that allows us to continuously compare time-series outputs containing CIRs to observations to measure the pipeline's performance. In particular, we use this framework to calibrate and improve the pipeline's performance for operational forecasting. To provide timing and magnitude uncertainties, we model several solar wind conditions in parallel, for a total of 21 profiles corresponding to the various virtual targets including the Earth. This pipeline can be used to feed real-time, daily solar wind forecasting that aims to predict the dynamics of the inner magnetosphere and the radiation belts. Space weather solar wind corotating interaction region ## 1 Introduction The Sun and the solar wind are the main drivers for space weather at Earth and in the heliosphere. Nowadays, forecasting solar wind conditions becomes a crucial task as the human society is increasingly dependent on space technologies, that are under the influence of space weather. The major drivers that can have profound effects are the Coronal Mass Ejections (CMEs), caused by solar eruptions, and Corotating Interaction Regions (CIRs), formed when a fast solar wind or high-speed stream (HSS) from the coronal holes takes over a slower solar wind. Although CMEs cause more severe events, their effects often disappear in a day or two while CIR/HSS events have significant effects that last several days after their arrivals [e.g. Burns et al., 2012]. During solar minima when CMEs are absent, CIRs cause low- to intermediate-strength geomagnetic storms and occasionally lead to strong geomagnetic storms [e.g. Richardson et al., 2001, Alves et al., 2006, Tsurutani et al., 2006, Zhang et al., 2008, Kilpua et al., 2017, Chi et al., 2018]. These CIR-driven storms also have impacts down to the ionosphere and thermosphere, leading to equatorial ionization anomaly [e.g. Oyedokun et al., 2022] and total electron content variability [e.g. Chakraborty et al., 2020]. Furthermore, CIRs impact the dynamics of CMEs [e.g. Liu et al., 2019], broaden solar energetic particle spectrum [e.g. Zhao et al., 2016, Wijsen et al., 2019], and modulate galactic cosmic rays [e.g. Iucci et al., 1979, Rouillard and Lockwood, 2007]. Cranmer et al. [2017], Kilpua et al. [2017], Vrsnak et al. [2017], Richardson [2018] and Yermolaev et al. 
[2018] comprehensively reviewed our current physical understanding and the geoeffectiveness of CIRs. Communication satellites in the geostationary orbit and in medium and low Earth orbits are in the vicinity of the Earth's radiation belts - the regions encircling near-Earth space and containing significant fluxes of high-energy electrons and ions. CIRs can drive radiation belt electron variability [e.g. Lam et al., 2009, Hudson et al., 2021], leading to electron flux enhancements [e.g. Blake et al., 1997, Li et al., 1997] and electron acceleration, especially in the presence of a strong southward Interplanetary Magnetic Field (IMF Bz) component [Blake et al., 1997], which in certain cases reaches relativistic levels [Paulikas and Blake, 1979, Pinto et al., 2018, Hudson et al., 2021, Pandya et al., 2019]. The high-speed streams that follow CIRs have been demonstrated to be more effective (compared to CMEs) in producing multi-MeV electron enhancements up to more than 7 MeV [Horne et al., 2018, Katsavrias et al., 2019]. This is due to the fact that they produce intervals of combined southward IMF and solar wind velocity over 500 km/s, which lead to an enhanced magnetic reconnection rate at the dayside magnetopause [Borovsky and Denton, 2006, Miyoshi et al., 2013]. Since the beginning of the space technology era, there have been a number of reports of spacecraft anomalies and even failures, for example, at the geostationary orbit due to the elevated fluxes of several-MeV electrons that persisted for several days following CIRs [see a review by Lanzerotti, 2007]. To better mitigate impacts on those satellites, we need to develop a system for continuously predicting the solar wind and radiation belt conditions. With the real-time monitoring of the solar wind at the Lagrangian L1 point, e.g. by the Advanced Composition Explorer (ACE), the continuous, real-time prediction of the radiation belt conditions via modeling is limited to one hour in advance or less, as the solar wind takes about 40 - 60 minutes to arrive at the magnetosphere from the ACE spacecraft. Using the Kp index, a 3-hour index indicative of global auroral activity, the SPACECAST model [Horne et al., 2013], for instance, was used to provide a forecast up to 3 hours in advance based on the real-time ACE data. To further improve our current capability, we need continuous solar wind prediction with a significant lead time. Numerous efforts have been made in modeling real-time solar wind conditions at Earth. The majority of models consist of two main parts: (1) modeling of the young solar wind emergence and (2) solar wind modeling and/or propagation from the near Sun to Earth. The former part is initiated by a reconstruction of the coronal magnetic fields based on synoptic or synchronic maps from ground observations; this is achieved by employing a simplified analytical model with a magnetostatic potential field source surface (PFSS) extrapolation. The PFSS extrapolation combined with the Wang-Sheeley-Arge (WSA) model is a widely used predictor of the solar wind in the corona/low heliosphere [Arge and Pizzo, 2000, Arge et al., 2004]. In combination with the global 3-D magnetohydrodynamics (MHD) model "ENLIL" -- the physics-based solar wind model [Odstrcil et al., 1996, Odstrcil and Pizzo, 1999, Odstrcil et al., 2004] -- the WSA-ENLIL has been used to provide daily forecasts of solar-wind streams. The WSA-ENLIL tool has also been extended to include CMEs using an ad-hoc hydrodynamic cone model or the spheromak model [Taktakishvili et al., 2009, 2011, 2018].
The Space Weather Prediction Centre of the National Oceanic and Atmospheric Administration (SWPC/NOAA) has put the WSA-ENLIL into operation to provide daily solar wind predictions with a lead time of 1 - 4 days [Pizzo et al., 2011]. Alternative to the WSA model is the MHD Around a Sphere (MAS) that is a global, time-dependent model based on the resistive MHD equations [Linker et al., 1999, Riley et al., 2012]. The WSA-ENLIL and the MAS-ENLIL tools have been extensively benchmarked by Owens et al. [2005], Lee et al. [2009], MacNeice [2009a,b], Jian et al. [2011a,b, 2015]. For the solar-wind propagation part, several less-numerically intensive options are also available, though they are not fully put into operation at the time of this writing. A few models include Heliospheric Upwind eXtrapolation (HUX) [Riley and Lionello, 2011], ballistic propagation [Dosa et al., 2018], and Tunable HUX [Reiss et al., 2020]. MacNeice et al. [2018] and Reiss et al. [2019, 2020] compiled several combinations of the existing models and established frameworks for cross-validation. Recently, more efforts have been made to combine and benchmark European-based models to develop operational solar-wind forecasting services. The Multi-VP is a coronal model developed by Pinto and Rouillard [2017] that computes multiple solutions of 1D solar wind flux tubes in sub-domains of interest. This approach allows a much faster calculation compared to global models with better accuracy at regions of interest, e.g. at the sub-Earth point, with a lead time of 1 - 3 days [Rouillard et al., 2020]. The Multi-VP model has been coupled with EUHFORIA [European Heliospheric Forecasting Information Asset; Pomoell and Poedts, 2018] and validated for HSS modeling in comparison with WSA-EUHFORIA (Samara et al., 2021). In addition, the overall performance of solar wind modeling with EUHFORIA has been assessed by Hinterreiter et al. (2019) and Samara et al. (2022). Most recently, the Heliocast -- a global MHD model comprising both coronal and heliospheric parts -- has been developed by Reville et al. (under revision in JSWSC at the time of this writing) to provide a daily solar wind forecast with a lead time up to 5 days for the French organization for applied research in space weather10. Moreover, an empirical solar wind forecast (ESWF; Vrsnak et al., 2007; Milosic et al., 2022) model for CIR/HSS prediction with a lead time of 4 days has been developed using an empirical relation found between the areas of coronal holes as observed in Extreme Ultraviolet data and the solar wind speed at 1 AU (Robbins et al., 2006; Vrsnak et al., 2007). This model has been put into operation by the European Space Agency. Footnote 10: OFRAME, see [http://www.meteo-espace.fr/](http://www.meteo-espace.fr/). In this work, we consider the coupling of the Multi-VP model and a 1D MHD model (Tao et al., 2005) for the first time to develop an automated pipeline named "Helio1D" for solar wind CIR forecasting. The 1D MHD model was originally developed for modeling solar wind conditions at Jupiter using the in situ observations at Earth as inner boundary; it was later extended to provide operational service for solar wind conditions at other solar-system planets (Andre et al., 2018), see HelioPropa11. Here, we model multiple solar wind solutions at several targets around the sub-Earth point in parallel. In essence, we develop a pipeline for daily solar wind forecasting at Earth 4 days ahead with a systematic characterization of timing and amplitude uncertainties. 
Using 13 years of data covering parts of solar cycles 23 and 24, from 2004 to 2013 and from 2017 to 2018, we benchmark the pipeline results as well as calibrate it to develop a prototype for operational daily solar wind forecasting. The pipeline is integrated into the operational forecasting service prototype on radiation belt environmental indicators for the safety of space assets as a part of the European Union Horizon 2020 SafeSpace12 project. Footnote 11: [http://heliopropa.irap.omp.eu/](http://heliopropa.irap.omp.eu/) Footnote 12: [https://www.safespace-h2020.eu/](https://www.safespace-h2020.eu/) The evaluation or validation of modeling results is crucial to understand the efficiency of the forecast. Several works considered standard metrics such as the root mean square error, mean square error, and Pearson correlation coefficient to compare modeling results to the observations. With such metrics, a skill score is also computed to assess whether the model performs better than a baseline model. For CIR/HSS and CME modeling, some works also considered event-based verification that allows the characterization of true positives (predicted and observed), false positives (predicted but not observed), and false negatives (not predicted but observed) to construct contingency tables (Woodcock, 1976) and compute the probability of detection (e.g. Reiss et al., 2016). Most recently, an alternative approach using the Dynamic Time Warping (DTW) algorithm, a technique commonly employed in time-series analysis, was applied to characterize the performance of solar wind modeling (Samara et al., 2022). We consider these metrics and this approach for the verification of the pipeline's results. The paper is organized as follows. Firstly, we introduce the Multi-VP and 1D MHD models and then describe the interfacing between them for the Helio1D pipeline. Secondly, we describe the metrics and techniques for quantifying the performance of Helio1D. Thirdly, we provide results from the pipeline with assessments of the results using the classic metrics and the DTW algorithm. Fourthly, we discuss the pipeline calibration with a newly developed approach based on the DTW algorithm. Finally, we discuss operational forecasting and provide conclusions and prospects.

## 2 Automated CIR forecasting pipeline

We develop a pipeline to connect the solar wind forecasting from near-Sun to Earth. Here, we briefly introduce the models for the solar wind formation (Multi-VP) and solar wind propagation (1D MHD). We then describe the interfacing of the two models for the automated Helio1D pipeline.

### Multi-VP

MULTI-VP (Pinto and Rouillard, 2017) is a coronal model that covers the heliocentric distances over which all solar wind streams are formed and accelerated: between 1 and about 30 solar radii (\(R_{\odot}\)). The main advantages are that it is data-driven and that it takes into account the thermodynamics of the wind flows across the highly stratified low solar atmosphere, and hence it produces physically correct and realistic estimations of the state of the solar wind across its domain. MULTI-VP determines a full set of physical quantities consisting of solar-wind speed (\(\mathbf{V}\)), density (\(n\)), temperature (\(T\)), magnetic field (\(\mathbf{B}\)), and secondary quantities without requiring empirical scaling laws. In addition, MULTI-VP computes a set of individual wind streams from the surface to the high corona that can then be put together to build full three-dimensional solar wind solutions from multiple 1D solar-wind flux tubes.
This brings an enormous advantage with respect to more traditional 3D MHD models in terms of required computing time, and also by not being subject to the strong diffusion of the gradients in the transverse directions. In practice, it also lets us compute solar wind solutions restricted to certain regions of interest rather than in the full solar atmosphere. For the 1D MHD model, MULTI-VP is required to produce time-series of the solar wind properties at the sub-Earth point at 0.14 AU (at 30 \(R_{\odot}\)). Internally, this output is defined via a data object that lists which magnetic field lines (and hence individual wind streams) are sampled, as well as their coordinates and geometry, and properties of the input magnetogram. Each time a forecast is performed, we select a number of positions around the sub-Earth point (i.e. at 0.14 AU on the Sun-Earth line) stretching up to 15 degrees in azimuth and 15 degrees in latitude. The spread in latitude is aimed at covering positional errors propagated from the magnetic field reconstruction. The spread in longitude is aimed at covering the azimuthal range that corresponds to an elemental time-series of three days of solar rotation with respect to Earth, which translates to one day behind and two days ahead of the modeling at the sub-Earth point. The next forecast proceeds in the same way, producing another elemental time-series that partially overlaps the previous one in time. This procedure is repeated to continuously build a long-term time-series that can be used as an input to the 1D MHD model.

#### 2.1.1 Multi-VP data

In order to characterize the pipeline performance with regard to various phases of the solar cycle, we need sufficiently long datasets. The longest historical magnetogram series available is that from the Wilcox Solar Observatory (WSO). Here, we obtain mainly two long datasets from Multi-VP processed using the WSO magnetograms. The first set extends from Carrington Rotation (CR) 2024 to CR 2139, corresponding to data from December 2004 to August 2013. This 8-year-and-7-month-long interval covers the declining phase (2004 - 2008) and the solar minimum (2009 - 2010) of solar cycle 23 as well as the ascending phase (2011 - 2013) of solar cycle 24. The second set extends from CR 2192 to CR 2210 and corresponds to data from June 2017 to November 2018. These two datasets are fed into the 1D MHD model.

### 1D MHD model

From \(0.14\) AU (30 \(R_{\odot}\)) onwards, the solar wind is propagated through the heliosphere using a 1D MHD model (Tao et al., 2005) that takes the solar wind time-series as time-varying boundary conditions; the solar wind parameters -- mainly the radial plasma velocity and tangential velocity -- vary as a function of time (see more below). The 1D MHD model propagates the solar wind using an ideal MHD fluid approximation to the target position while taking into account the interaction between fast and slow streams in the radial direction. The code solves the ideal MHD equations under the influence of solar gravity in a one-dimensional spherically symmetric coordinate system. The 1D MHD equations are solved using the Coordinated Astronomical Numerical Software (CANS)13, an early version of the CANS+ code (Matsumoto et al., 2019). The 1D MHD code was developed by Tao et al. (2005) to propagate the solar wind observations at Earth to upstream of Jupiter, and was further extended by the French plasma physics data center14 to cover other solar-system planets or target spacecraft positions (Andre et al., 2018).
The code is robust and widely used for solar wind propagation and CIR modeling in the heliosphere (e.g. Palmerio et al., 2021, Nilsson et al., 2022). Footnote 13: developed by T. Yokoyama at the University of Tokyo and collaborators, see documentation at [http://www.astro.phys.s.chiba-u.ac.jp/netlab/pub/index.html](http://www.astro.phys.s.chiba-u.ac.jp/netlab/pub/index.html). Footnote 14: Centre de Données de Physique des Plasmas, France. See [http://heliopropa.irap.omp.eu/](http://heliopropa.irap.omp.eu/). The Multi-VP output at 30 \(R_{\odot}\) was used as boundary conditions for the initiation of the 1D MHD code, which then propagated these results up to the L1 point at 1 AU. The coordinate system in the 1D MHD code is equivalent to the Radial-Tangential-Normal (RTN) system, where the \(X\)-axis (\(R\)) is pointing radially outward from the Sun in the equatorial plane to the Earth, the \(Z\)-axis (\(N\)) is the solar rotation axis, and the \(Y\)-axis (\(T\)) completes the orthonormal system. The outer boundary is set to 1.4 AU, where the derivatives of all physical parameters diminish. The grid spacing and the time step are chosen to be \((1.4-0.14)/400\) AU and \(150\) s, respectively. To adapt to the 1D simulation, we keep the tangential magnetic field \(B_{y}\) component at physical values, while \(B_{x}\) is fixed to 0.001 nT to meet the solenoidal criterion and \(B_{z}\) is set to zero (see Tao et al. (2005) for discussion). We retrieve Multi-VP data at a 30-minute cadence and resample them to a 1-hour cadence; these hourly-averaged data are then linearly interpolated to meet the CFL (Courant-Friedrichs-Lewy) condition for every time step.

### Helio1D - Interfacing Multi-VP and 1D MHD

To automate the interfacing between the two models, several steps are taken. Multi-VP provides solar wind time series with a full set of physical parameters including plasma number density, velocity, temperature, and magnetic field in RTN coordinates. We first format the time series to fit the input format of the 1D MHD model. Here, we take the input at a 1-hour time resolution and output the propagated time series at L1 with the same 1-hour resolution. To successfully run the 1D MHD code, two criteria are required to be met: (1) the data length must cover at least a solar rotation, i.e., 27.25 days, in order to produce a consistent CIR formation, and (2) the parameter values must be physical (e.g. non-negative plasma number density). In certain conditions, the Multi-VP model may provide unphysical solar wind values due to the poor quality of the magnetograms. We apply a set of criteria to remove unphysical values (see Appendix) and fill gaps using linear interpolation between two available data points. These aspects are further described in Section 6. We set up the Helio1D pipeline with two modes for running (a) long-term historical data consisting of several Carrington rotations and (b) short-term, 1-month-long data for daily, operational forecasting, as shown in Fig 1. Mode (a) allows us to quantify the performance of the pipeline as well as identify calibrations that may improve the pipeline performance for operational forecasting. We first benchmark the pipeline using long-term Multi-VP data in Section 4.

## 3 Measuring the performance of Helio1D

In this section, we first introduce the metrics and methodology for characterizing the pipeline performance. We then give the results and discuss the calibration of the pipeline.
### Standard metrics

We choose three standard metrics - root mean square error (RMSE), mean absolute error (MAE), and Pearson correlation coefficient (\(r\)) - to measure the performance of the pipeline. RMSE is a classic point-by-point metric that compares the modeled value and observed value for a given point. It is defined as \[RMSE=\sqrt{\frac{1}{T}\Sigma_{t=1}^{T}\left[V_{m}(t)-V_{o}(t)\right]^{2}} \tag{1}\] where \(V_{m}(t)\) and \(V_{o}(t)\) are the modeled and observed values, respectively, at a given time \(t\) while \(T\) is the number of time elements. Mean absolute error (MAE) is an alternative metric that measures a similar property. It is defined as \[MAE=\frac{1}{T}\Sigma_{t=1}^{T}|V_{m}(t)-V_{o}(t)| \tag{2}\] where the symbols have the same meanings as defined for the RMSE. The units of the RMSE and MAE are the same as the unit of the parameter values \(V_{m}(t)\) and \(V_{o}(t)\). Smaller values of RMSE and MAE indicate better model prediction when compared to observations. Zero values of both metrics indicate a perfect model prediction. We also introduce the Pearson correlation coefficient (Pcc) to roughly quantify the similarities between the two time series. In principle, the Pearson correlation measures the linear relationship between two datasets. Using the observed and modeled time series (\(V_{o}(t)\) and \(V_{m}(t)\), respectively), the formula for \(Pcc\) is \[Pcc=\frac{\Sigma_{t=1}^{T}\left(V_{m}(t)-\left<V_{m}(t)\right>\right)\left(V_{o}(t)-\left<V_{o}(t)\right>\right)}{\sqrt{\Sigma_{t=1}^{T}\left(V_{m}(t)-\left<V_{m}(t)\right>\right)^{2}}\sqrt{\Sigma_{t=1}^{T}\left(V_{o}(t)-\left<V_{o}(t)\right>\right)^{2}}}. \tag{3}\] The \(Pcc\)-value ranges from \(-1.0\) to \(1.0\), with a value of \(1.0\) being a perfect positive linear correlation between the two time series and \(-1.0\) being a perfect negative linear correlation between the two.

Figure 1: Diagram of the Helio1D pipeline.

Additionally, to evaluate the model performance in a more concrete way using the above metrics, we introduce the skill of a forecast as \[Skill=1-\frac{MSE_{pred}}{MSE_{ref}} \tag{4}\] where \(MSE_{pred}\) is the mean squared error of the prediction compared to the observation and \(MSE_{ref}\) is the mean squared error of a reference baseline model compared to the observation. Here, we will use the average of the observation as the reference baseline model (i.e., the climatological mean in climate studies). The observations of solar wind data at L1 are taken from the High Resolution OMNIweb database (King and Papitashvili, 2005). The average of the OMNI data of the corresponding interval is taken as the reference baseline model. A skill score equal to unity means a perfect forecast, while a zero skill means that the model performs the same as the baseline model. On the other hand, a negative skill means that the model performs worse than the baseline model. The aforementioned metrics will be used throughout this report for the quantification of the performance of the model.

### Dynamic Time Warping Technique

Although the classic metrics -- RMSE, MAE, skill score, and \(Pcc\) -- provide a quantification of the performance of the model, they do not provide in-depth information such as time lags, i.e., the difference in arrival times between the modeled and observed stream interfaces, which is crucial for model evaluation. For this reason, we also consider an alternative technique called Dynamic Time Warping (DTW).
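For reference, the point-by-point metrics of Eqs. (1)-(4) defined above could be computed as in the following sketch (Python, with illustrative array names; the baseline is the climatological mean of the observations):

```python
# Illustrative sketch of the metrics of Eqs. (1)-(4); `model` and `obs` are
# assumed to be hourly solar wind speed arrays of equal length (km/s).
import numpy as np

def rmse(model, obs):
    return np.sqrt(np.mean((model - obs) ** 2))

def mae(model, obs):
    return np.mean(np.abs(model - obs))

def pcc(model, obs):
    return np.corrcoef(model, obs)[0, 1]

def skill(model, obs):
    """Skill score against the climatological-mean baseline <obs>."""
    mse_pred = np.mean((model - obs) ** 2)
    mse_ref = np.mean((np.mean(obs) - obs) ** 2)
    return 1.0 - mse_pred / mse_ref

rng = np.random.default_rng(1)
obs = 400.0 + 100.0 * rng.random(240)        # hypothetical observed speeds (km/s)
model = obs + rng.normal(0.0, 80.0, 240)     # hypothetical modeled speeds
print(rmse(model, obs), mae(model, obs), pcc(model, obs), skill(model, obs))
```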
The DTW technique compares two sequences by finding an optimal alignment by which one sequence may be stretched or compressed (hence, "warped") in the temporal domain to match the other under certain constraints (e.g., Muller, 2007). The technique was first introduced by Bellman and Kalaba (1959) in adaptive control processes and later found several applications notably in speech and pattern recognition (e.g., Sakoe and Chiba, 1978; Myers et al., 1980). More recently, DTW has been applied in space weather in particular for comparing modeled geomagnetic indices and solar wind data to observations (e.g., Laperre et al., 2020; Owens and Nichols, 2021; Bunting and Morgan, 2022; Nilsson et al., 2022; Samara et al., 2022). We first introduce its formulation and then describe the methodology for our application. The DTW technique finds the warping path between two sequences, e.g., \(X=\{x_{1},x_{2},...,x_{|X|}\}\) and \(Y=\{y_{1},y_{2},...,y_{|Y|}\}\), through the DTW cost matrix by choosing the path that minimizes the total cumulative cost compared to all other paths. The DTW cost matrix is defined as: \[D(i,j)=\delta(i,j)+min\{D(i-1,j-1),D(i-1,j),D(i,j-1)\} \tag{5}\] where \(i,j\) are the indices of \(X\) and \(Y\), respectively, and \(\delta(i,j)=|x_{i}-y_{j}|\) is the Euclidean distance between the element \(x_{i}\) of series \(X\) and the element \(y_{j}\) of series \(Y\). Here, the optimal warping path (i.e., alignment) can be found via back-tracing in the DTW cost matrix; this process is carried out by choosing the previous points with the minimum cumulative distance (e.g., Berndt and Clifford, 1994; Keogh and Pazzani, 2001; Gorecki and Luczak, 2013). A main disadvantage of DTW is that an element \(x_{i}\) may be mapped to several elements of \(Y\) -- this problem is called "pathological mapping" or "singularities" (e.g., Sakoe and Chiba, 1978). To alleviate this issue, certain constraints have been added such as the so-called windowing, slope weighting, and step patterns (see Keogh and Pazzani, 2001; and references therein). Various variations of DTW technique have also been developed to optimize the alignments between the sequence elements/points (e.g. Keogh and Pazzani, 2001; Chu et al., 2002; Keogh and Ratamanhatana, 2005; Salvador and Chan, 2007; Efrat et al., 2007; Furtuna, 2008; Zhu et al., 2012; Yadav and Alam, 2018, and references therein). Choosing an appropriate DTW technique is crucial as different algorithms can lead to different results. Here, we choose the FastDTW algorithm developed by Salvador and Chan (2007) due to its multi-level approach. First, the time series are sampled down to a very coarse resolution. A warp path is found in this coarse resolution and projected onto a higher resolution. The warp path is then refined and projected again to a higher resolution. The process of projecting and refining is repeated until a warp path is found for the original resolution time series. Most importantly, since the technique initially finds a warp path in the coarse resolution, it is particularly appropriate to capture large-scale features in the time series, notably "CIRs" with large speed gradients. To compute a metric similar to the skill score using the (classic) DTW technique, Samara et al. 
(2022) defined the Sequence Similarity Factor (SSF) to quantify the similarity between the modeled and observed time series as: \[SSF=\frac{DTW_{score}(O,M)}{DTW_{score}(O,\langle O\rangle)}, \tag{6}\] where \(DTW_{score}(O,M)\) is the DTW cost calculated between the observed (\(O\)) and modeled (\(M\)) time series, and \(DTW_{score}(O,\langle O\rangle)\) is the DTW cost between the observed series and the baseline. Here, the baseline is given as the average of the observed time series (\(\langle O\rangle\)). The SSF is zero when the model forecast is perfect; it equals unity when the forecast performs exactly the same as the baseline; and it is higher than unity when the forecast is worse than the baseline. The DTW cost (equation 5) calculated between two sequences is the sum of the distances (i.e., the Euclidean distances) between all the optimal alignment pairs. Therefore, the existence of singularities, where a data point \(x_{i}\) has several possible optimal alignments to \(y_{j}\), increases the DTW cost. The baseline is an average of the observed time series -- it is a constant that does not vary with time for a considered interval. The DTW mapping between a model with time variation and a constant sequence (i.e., a straight line) would have several singularities. As a result, we cannot directly compare the DTW cost between two sequences with time variation to the DTW cost between a time-varying sequence and the (constant) baseline. For this reason, we propose two new metrics as follows. First, a normalized cost or normalized distance can be defined as the ratio of the DTW cost to the number of all alignment pairs. This quantity has the unit of the quantity under consideration (similar to RMSE and MAE). Its value corresponds to the average distance of all the mapped alignments, which encodes both temporal and spatial differences between the two sequences; we call this quantity the "DTW normalized distance". The normalized SSF is then defined as: \[SSF_{normalized}=\frac{DTW_{score}(O,M)/DTW_{length}(O,M)}{DTW_{score}(O,\langle O\rangle)/DTW_{length}(O,\langle O\rangle)}, \tag{7}\] where \(DTW_{length}\) represents the number of all the alignments between two sequences. When the \(SSF_{normalized}\) is smaller than unity, the forecast model performs better than the baseline. When it is greater than unity, the forecast model performs worse than the baseline. We consider these two newly defined metrics besides the classic metrics in Section 4.1.

## 4 Results

### Performance of Helio1D using the long-term data

Table 1 shows the RMSE, MAE, FastDTW normalized distance, Pcc, skill score, and normalized SSF (\(SSF_{normalized}\)) from the comparison between the Helio1D results and the in situ observations at L1 for each 6-month interval. The OMNI data were obtained at a 5-min resolution and resampled to a 1-hour resolution to match the cadence of the Helio1D data. We find that the intervals in 2009 and the first half of 2010 have relatively low RMSE, MAE, and FastDTW distance, with the interval of July - Dec 2009 having the lowest values for all three metrics.
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline Interval & RMSE & MAE & FastDTW & Pcc & Skill score & \(SSF_{normalized}\) \\ & (km/s) & (km/s) & distance & & & \\ & & & (km/s) & & & \\ \hline Jan – June 2005 & 175 & 145 & 89 & 0.05 & -0.79 & **0.77** \\ July – Dec 2005 & 179 & 145 & 105 & -0.07 & -1.22 & 1.03 \\ Jan – June 2006 & 124 & 98 & 62 & 0.17 & -1.00 & **0.84** \\ July – Dec 2006 & 178 & 151 & 81 & -0.25 & -1.36 & **0.80** \\ Jan – June 2007 & 155 & 125 & 69 & 0.03 & -1.11 & **0.77** \\ July – Dec 2007 & 112 & 94 & 71 & 0.43 & -0.36 & **0.94** \\ Jan – June 2008 & 133 & 108 & 72 & -0.02 & -2.24 & 1.16 \\ July – Dec 2008 & 124 & 105 & 60 & -0.03 & -2.75 & 1.22 \\ Jan – June 2009 & 91 & 70 & 55 & 0.10 & -0.78 & 1.08 \\ July – Dec 2009 & 75 & 59 & 42 & 0.12 & -1.15 & 1.01 \\ Jan – June 2010 & 103 & 80 & 67 & 0.32 & -0.27 & **0.87** \\ July – Dec 2010 & 142 & 122 & 108 & -0.07 & -1.17 & 1.30 \\ Jan – June 2011 & 159 & 128 & 97 & 0.07 & -0.38 & **0.83** \\ July – Dec 2011 & 156 & 129 & 90 & 0.07 & -0.32 & **0.79** \\ Jan – June 2012 & 167 & 134 & 101 & 0.08 & -0.30 & **0.80** \\ July – Dec 2012 & 140 & 110 & 79 & 0.16 & -0.19 & **0.84** \\ Jan – June 2013 & 184 & 150 & 95 & 0.22 & -0.10 & **0.66** \\ \hline July – Dec 2017 & 145 & 117 & 79 & 0.03 & -1.02 & **0.95** \\ Jan – June 2018 & 123 & 97 & 58 & 0.01 & -0.82 & **0.79** \\ July – Nov 2018 & 102 & 80 & 59 & 0.17 & -0.74 & **0.90** \\ \hline \hline \end{tabular} \end{table} Table 1: List of the Helio1D long-series intervals and the point-by-point metrics calculated using the Helio1D series in comparison with OMNI data for the same intervals.

Overall, the FastDTW distance has a lower value than RMSE and MAE for each interval. The Pcc values are mostly close to zero, and reach negative values for certain intervals, indicating that there is no linear correlation between Helio1D and OMNI data (i.e., the two data are qualitatively dissimilar). The interval of July - Dec 2007 has the best Pcc value of 0.43, showing a somewhat linear correlation between the Helio1D model and the OMNI data; this interval is shown in Figure 3. The skill score is negative for all intervals, indicating that the Helio1D performs worse than the baseline model despite the relatively low errors, or even positive Pcc, for some intervals. In contrast, the normalized SSF of most intervals has a value of less than unity (as highlighted in bold), indicating that the Helio1D indeed performs better than the baseline model. We discuss these different measures in Section 7. To investigate the Helio1D performance in various phases of the solar cycle, we plot these metrics against the average number of sunspots in Figure 2. Figure 2 shows the performance of the Helio1D characterized by the RMSE, MAE, and FastDTW distance for the intervals shown in Table 1 (left y-axis, black) compared to the average sunspot numbers of the corresponding intervals (right y-axis, red). The sunspot number data were obtained as the monthly mean total sunspot number from the Royal Observatory of Belgium15. The data in 2005 - 2013 correspond to solar cycle 23 and the data in 2017 - 2018 correspond to solar cycle 24. Here, we have more complete data for solar cycle 23 than for solar cycle 24.
Based on the average sunspot numbers, we infer that our data for solar cycle 23 comprise the declining phase from 2005 to 2007, the solar minimum from 2008 to mid-2009, the ascending phase from mid-2009 to 2011, and the solar maximum from 2012 to 2013. The rest of our data, from mid-2017 to late 2018, correspond to the declining phase of solar cycle 24.

Figure 2: The average RMSE (dot), MAE (triangle), and FastDTW normalized distance (square) versus the average sunspot numbers (cross) for each 6-month interval. The values of the three metrics are shown on the left Y-axis (black) and the average sunspot numbers are shown on the right Y-axis (red). The data for 2005 - 2013 (solar cycle 23) and 2017 - 2018 (solar cycle 24) are separated by a vertical grey dashed line. The individual metric values for each solar cycle are connected by dotted lines.

We find that all the metric values (shown in black) vary with the average sunspot numbers (shown in red). During the declining phase of solar cycle 23, all the metric values proportionally decrease with the average sunspot numbers. During the ascending phase of solar cycle 23, all the metric values roughly increase with the average sunspot numbers. The most striking features occur during the solar minimum and the solar maximum. The metric values reach their lowest values during the solar minimum and/or the late declining phase and early ascending phase. Here we find the minimum RMSE, MAE, and FastDTW distance six months after the lowest average sunspot number, in Jan - June 2009. The highest errors are found during the solar maximum, which is not surprising as the solar magnetic fields often undergo changes and are thus less predictable. During the declining phase of solar cycle 24, we find that the errors proportionally decrease following the average sunspot numbers. These results demonstrate that the performance of the Helio1D varies with the solar cycle -- it performs best during the solar minimum and worst during the solar maximum. This finding is consistent with the performance of solar wind modeling by other existing models [e.g. Hinterreiter et al., 2019]. We further discuss these results in Section 7. To understand how the Helio1D pipeline qualitatively performs, we next visualize an example of the model output against the observations at L1. Figure 3 shows the Helio1D output in magenta and the OMNI data in black. In Fig 3a, the bulk flow velocity displays a recurring pattern of stream interaction regions (i.e., CIRs) characterized by the transition from slow to fast wind, clearly noticeable between August and November 2007. The stream interfaces also collocate with the polarity change in \(B_{y}\), shown in Fig 3b, and the enhancement of density and temperature in Fig 3c and Fig 3d, respectively. Compared to the observations, we find that the modeled solar wind speed agrees qualitatively well and that the model correctly produces consistent CIR formation.

Figure 3: Comparison of CIR modeling from the 1D MHD code (magenta) to the observations (black) from the OMNI database between May 2007 and April 2008 — (a) radial bulk flow velocity (\(V_{x}\)); (b) tangential magnetic field component (\(B_{y}\)); (c) ion number density (\(n\)); and (d) ion temperature (\(T\)).

Nevertheless, the number density and temperature in Fig 3c and Fig 3d at the stream interfaces are generally higher than the observed values.
These higher peaks are produced as a consequence of over-compression at the stream interfaces due to the ideal MHD plasma assumption in the 1D MHD code, which limits dissipation. Despite some over-compression and temporal uncertainties, we conclude that the interfacing between Multi-VP and 1D MHD works reasonably well. We further perform qualitative assessments of the pipeline performance and discuss calibration in Section 5. From the long-term results (not shown), we find that the Helio1D pipeline correctly produces the expected large-scale modulation of CIRs, especially in the bulk flow velocity, demonstrating that the pipeline correctly reproduces long-term solar wind fluctuations. On a shorter timescale, there are some mismatches between the amplitudes of the fluctuations of the Helio1D results and the observations. We quantify these differences in detail in Section 4.2.

### Performance of Helio1D with the FastDTW technique

Since the classic metrics only indicate crude information on the model performance when compared to real data, we consider applying the FastDTW algorithm. Fig 4 shows an example of the FastDTW mapping between the Helio1D and OMNI time series for the stream interfaces between 27 December 2005 and 14 February 2006. The OMNI data (black) show two clear stream interfaces and two corresponding high-speed streams. It can be seen that the Helio1D correctly produces the second stream interface along with the adjacent high-speed stream period compared to the observation, despite some time delay and differences in the finer-scale structures. Fig 4a shows the FastDTW alignments (cyan) that map large-scale features consisting of the stream interfaces and high-speed stream. The normalized FastDTW distance is about \(53\) km/s for this interval. As there can be several alignments for a data point (e.g., at local peaks), the number of alignment pairs is equal to or larger than the length of the data. We indicate the ratio of the number of alignment pairs (called the "path length") to the length of the data in Fig 4a.

Figure 4: Example of the FastDTW mapping and performance metrics. (a) The FastDTW mapping (cyan) between the predicted CIR structures with Helio1D (purple) and the OMNI (black) data. (b) Helio1D, OMNI, and the baseline model (red) of the same interval. Relevant numbers indicative of the model performance are noted.

Fig 4b shows the Helio1D, OMNI, and baseline model (red) of the same interval. We note the RMSE of \(119.4\) km/s and MAE of \(79.4\) km/s, measured between the Helio1D and OMNI data. The skill score, obtained from equation 4, is \(-1.23\), indicating that the Helio1D performs worse than the baseline model. Nevertheless, the normalized SSF (equation 7) of \(0.8\), obtained from the comparison of the FastDTW mapping between OMNI and Helio1D to the FastDTW mapping between OMNI and the baseline model, indicates that the Helio1D model performs better than the baseline model. This example shows that the Helio1D model prediction is still useful despite the negative skill score. We propose that the FastDTW algorithm can be used for mapping large-scale features, in particular of the stream interfaces and high-speed streams. Here, the alignments can be used for extracting detailed information (i.e., the normalized FastDTW distance and normalized SSF) indicative of the model performance. We further exploit the alignments for model calibration in Section 5. As illustrated in Fig 4a, the FastDTW alignments optimally match large-scale features in the time series.
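As an illustration of how such an alignment and the normalized metrics of Section 3.2 could be obtained, the sketch below uses the open-source `fastdtw` Python package (an assumption made for illustration; it is not necessarily the implementation used here) with synthetic hourly speed series:

```python
# Illustrative sketch of the FastDTW alignment, the normalized DTW distance,
# and the normalized SSF of Eq. (7). Array contents are synthetic.
import numpy as np
from fastdtw import fastdtw

def normalized_dtw(x, y):
    """DTW cost divided by the number of alignment pairs (path length)."""
    cost, path = fastdtw(list(x), list(y), dist=lambda a, b: abs(a - b))
    return cost / len(path), path

def ssf_normalized(obs, model):
    """Eq. (7): normalized DTW distance of (obs, model) over that of
    (obs, baseline), with the baseline being the mean of the observations."""
    d_model, _ = normalized_dtw(obs, model)
    baseline = np.full_like(obs, np.mean(obs))
    d_base, _ = normalized_dtw(obs, baseline)
    return d_model / d_base

rng = np.random.default_rng(2)
obs = 400.0 + 150.0 * np.sin(np.linspace(0, 6 * np.pi, 300)) + rng.normal(0, 20, 300)
model = np.roll(obs, 12) + rng.normal(0, 30, 300)   # shifted, noisier copy
print("SSF_normalized =", ssf_normalized(obs, model))
```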
For a given FastDTW map path, we may define the time difference as \(\Delta t=t_{OMNI}-t_{1D}\) and the velocity difference as \(\Delta V=V_{OMNI}-V_{1D}\), where the subscript OMNI denotes the observations and the subscript 1D denotes the Helio1D results. Here, the statistical information on \(\Delta t\) and \(\Delta V\) from the FastDTW mapping is useful for investigation of the model performance in terms of timing and magnitude differences. For each mapping path, \(\Delta t>0\), i.e., \(t_{OMNI}>t_{1D}\), means that the Helio1D model result is ahead of the observation, while \(\Delta t<0\) means that the Helio1D model result is behind (i.e., lagged or delayed) with respect to the observation. Similarly, \(\Delta V>0\), i.e., \(V_{OMNI}>V_{1D}\), means that the Helio1D model result underestimates the observed speed, while \(\Delta V<0\), i.e., \(V_{1D}>V_{OMNI}\), means that the Helio1D model result overestimates the observed speed. We can measure the model performance by constructing histograms of the distributions of \(\Delta t\) and \(\Delta V\) as follows. Fig 5 shows histograms of the \(\Delta t\) and \(\Delta V\) of all the data obtained from FastDTW alignments on the velocity time series from 2005 to 2013. Using the Gaussian distribution fit on the histogram of \(\Delta t\) in Fig 5a, the median of the \(\Delta t\) is about \(2.7\) hours; the full width at half maximum (FWHM), i.e., \(2.355\sigma\), where \(\sigma\) is the standard deviation, is about \(31.8\) hours. In Fig 5b, we apply the Cauchy-Lorentzian distribution to the \(\Delta V\) distribution to extract its statistical properties, as it best fits the shape of the distribution. We find that the \(\Delta V\) has a median of \(0.53\) km/s and an FWHM of \(34.1\) km/s. This means that there is almost no systematic bias in the modeling of the velocity amplitude by the Helio1D, with the majority of the modeling having \(\Delta V\) within \(\pm 34.1\) km/s. In brief, considering several years of statistics covering parts of solar cycle 23, the Helio1D model results show minimal time and velocity differences compared to the observations. Using the FastDTW alignment information, we also explore the model performance for different types of the solar wind. In Fig 5b, for instance, we find that the \(\Delta V\) distribution has a slight skew or asymmetry towards negative values. This indicates that the Helio1D model likely overestimates the solar wind speed. To understand the model performance in detail, we consider histograms of \(\Delta t\) and \(\Delta V\) for the individual 6-month intervals in Table 1. In particular, we categorize the solar wind into three types: (1) slow wind with \(V<400\) km/s, (2) moderate wind with \(400\leq V\leq 500\) km/s, and (3) fast wind with \(V>500\) km/s. This categorization is based on the observed solar wind. Fig 6 shows an example of the distributions of \(\Delta t\) (a) and \(\Delta V\) (b) with individual distribution fit functions for the slow (yellow), moderate (green), and fast (blue) solar winds, in addition to all solar wind (black). Table 1 in the Appendix summarizes the statistical information for all the 6-month intervals. We now discuss our fitting results for the histograms of \(\Delta t\) and \(\Delta V\) obtained from the FastDTW algorithm in Fig 6 for the data in Jan - June 2006. For the slow and moderate winds, the median time differences are within a few hours, with \(\Delta t_{slow}=-1.5\) hours and \(\Delta t_{moderate}=3.8\) hours.
For the fast wind, in contrast, we obtain a rather large time delay, with \(\Delta t_{fast}=17.5\) hours. Overall, the FWHM of the time delay is about 30 hours for all solar wind speed ranges. For the velocity difference, we find that the slow and moderate winds have small medians of \(\Delta V\), with \(\Delta V_{slow}=6.2\) km/s and \(\Delta V_{moderate}=3.5\) km/s. For the fast wind, however, the median of \(\Delta V\) is larger, with \(\Delta V_{fast}=74\) km/s. This example shows that the Helio1D indeed performs differently for various solar wind speed ranges. For this interval, for instance, the Helio1D model underestimates the speed of the fast wind. To investigate whether this particular behavior persists, we next consider the median \(\Delta t\) and \(\Delta V\) of the individual 6-month intervals throughout solar cycle 23. Fig 7 shows bar plots of the median \(\Delta t\) in the left panels and \(\Delta V\) in the right panels, separated for all wind types (a, b), fast wind (c, d), and slow wind (e, f). The bar plots are shown against the average sunspot numbers of the corresponding intervals to indicate the phases of the solar cycle. We note that the moderate wind does not show a particular trend and is thus excluded from this plot. Overall, we find that the Helio1D has a positive \(\Delta t\) (Fig 7a), indicating that the Helio1D timing is ahead of the observations. The largest positive \(\Delta t\) is found during the ascending phase and the solar maximum (Jan - June 2012 in Fig 7a). This positive \(\Delta t\) is larger for the fast wind (40 hours) compared to the slow wind (25 hours), as seen in Figs 7c and 7e. For the velocity difference for all wind types, we find that the Helio1D provides a minimal velocity difference within 10 km/s except during the ascending phase and solar maximum, as seen in Fig 7b. When considering the different wind types, we find that the median \(\Delta V\) of the fast wind (Fig 7d) is markedly different from that of the slow wind (Fig 7f). In particular, during the solar minimum and early ascending phase, there is a large positive \(\Delta V\), indicating that the Helio1D underestimates the speed of the fast wind (\(V_{1D}<V_{OMNI}\)) by 50 - 150 km/s. In contrast, during the same intervals, there is mostly negative \(\Delta V\) for the slow wind. In other words, the Helio1D overestimates the speed of the slow wind (\(V_{1D}>V_{OMNI}\)). This overestimation can go up to 60 km/s, as seen during the ascending phase. In brief, we find that the timing of the Helio1D results (i.e., stream interfaces) is often ahead of the observations (consistent with Fig. 3). This positive \(\Delta t\) is even larger for the fast wind. In terms of the velocity, we find that the Helio1D model underestimates the fast wind speed while overestimating the slow wind speed. This effect is clear during the solar minimum and the early ascending phase.

## 5 Pipeline calibration

As mentioned in Section 4.1, Helio1D usually overestimates the number density and temperature of the plasma at stream interaction regions as a consequence of over-compression at the stream interfaces. To alleviate this issue, we consider a post-calibration of the data to lower the extreme peaks in the Helio1D outputs. As shown in Section 4.2, FastDTW provides the alignments between the modeled and observed solar wind speeds. Since FastDTW maps similar structures within the two time series, we create a calibration plot for the mapped values between the Helio1D and OMNI data.
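The same alignment path yields both the \(\Delta t\)/\(\Delta V\) statistics discussed in Section 4.2 and the mapped value pairs used for the calibration plots described below; a minimal sketch (again assuming the `fastdtw` package and synthetic hourly series) is:

```python
# Illustrative sketch: extracting time/velocity differences from the FastDTW
# path and fitting their distributions (Gaussian for dt, Cauchy-Lorentzian for
# dV), plus the mapped (Helio1D, OMNI) pairs used for calibration. Array names
# and values are assumptions, not the authors' implementation.
import numpy as np
from fastdtw import fastdtw
from scipy.stats import norm, cauchy

def dtw_pairs(obs, model):
    """Return aligned index pairs (i_obs, i_model) from FastDTW."""
    _, path = fastdtw(list(obs), list(model), dist=lambda a, b: abs(a - b))
    return np.array(path)

rng = np.random.default_rng(3)
obs = 400.0 + 150.0 * np.sin(np.linspace(0, 6 * np.pi, 300)) + rng.normal(0, 20, 300)
model = np.roll(obs, 12) + rng.normal(0, 30, 300)

pairs = dtw_pairs(obs, model)
dt = pairs[:, 0] - pairs[:, 1]                 # hours, since the cadence is hourly
dv = obs[pairs[:, 0]] - model[pairs[:, 1]]     # km/s
mu_t, sigma_t = norm.fit(dt)                   # Gaussian fit: FWHM ~ 2.355 * sigma
loc_v, gamma_v = cauchy.fit(dv)                # Cauchy fit: FWHM ~ 2 * gamma
mapped = np.column_stack([model[pairs[:, 1]], obs[pairs[:, 0]]])  # calibration pairs
print(f"dt median {mu_t:.1f} h, FWHM {2.355 * sigma_t:.1f} h; "
      f"dV median {loc_v:.1f} km/s, FWHM {2 * gamma_v:.1f} km/s")
```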
Fig 8 shows a kernel density estimation (KDE) plot of the mapped solar wind speeds from Helio1D (horizontal axis) and the OMNI data (vertical axis), which shows their smoothed 2D probability density. The brighter color in the plot corresponds to a higher probability density. Here, the velocity data in Fig 8a have a high probability density that mostly follows the line of slope 1. This demonstrates that there is overall no bias from the Helio1D modeling (similar to Fig 6b), and that the magnitude of the solar wind speed from Helio1D mostly matches the OMNI data.

Figure 5: Histograms of (a) time delay and (b) velocity difference obtained from FastDTW alignments between the Helio1D and OMNI data from 2005 to 2018. The Gaussian (a) and the Cauchy-Lorentzian (b) distribution fits (red solid lines) are applied to obtain the median (red vertical dashed line) and FWHM of the time delay and the velocity difference, respectively.

The FastDTW alignment information allows us to map not only velocity but also plasma density and temperature values between the modeled and observed time series. In Figs 8b and 8c, we compare the plasma number densities and temperatures between Helio1D and OMNI using the mapped path obtained from the FastDTW application on the solar wind speed. It is clearly visible that the number density and the temperature from Helio1D are higher than the observed values. To lessen these overestimations, we consider using linear regression. Here, we apply a linear fit to the data, shown as the red solid line in Fig 8b. Similarly, we apply a linear fit function to the plasma temperature in Fig 8c. The linear fit functions of both number density and temperature are found to have a slope of 0.653. Using these linear functions, we find that the magnitudes of the peaks of the number density and temperature better match those of the observations (not shown). We thus conclude that our calibration of the plasma number density and temperature with the linear function significantly improves the Helio1D modeling. We apply these calibrations for the number density and temperature in the Helio1D pipeline for operational forecasting described in Section 6.

## 6 Operational forecasting

We now shift our focus to the development of a stable "prototype" service for solar wind forecasting with Helio1D. To continuously predict solar wind conditions, several technical aspects mentioned in Section 2.3 must be addressed. First and foremost, characterization of model uncertainties is critical to assess the modeling results in order to plan for mitigation of plausible impacts of extreme events. We provide model uncertainties using ensemble modeling in Section 6.1. Secondly, the 1D MHD model requires that the length of the input time series covers at least one solar rotation in order to produce consistent CIR formation. The Multi-VP model is set up to provide a daily solar wind emergence comprising 72 hours of solar wind conditions, covering from the present day (\(D\)) to the next two days (\(D+2\)). Therefore, the solar wind emergence time series from Multi-VP must be concatenated to produce a continuous time series with a minimum length of 27.25 days. Lastly, there can be data gaps and/or unphysical values arising either from the inputs for Multi-VP (i.e., the magnetogram), due to the lack of data or the presence of invalid data points, or from numerical errors. The implementation of automatic procedures to tackle these issues is detailed in Section 6.2.
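A minimal sketch of this calibration step is given below; the array names and the use of scikit-learn's `LinearRegression` are our assumptions, while the fitted slope of about 0.65 is the value reported in the text.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_calibration(x_1d, y_omni):
    """Least-squares line y_omni ~ a * x_1d + b over the DTW-mapped value pairs."""
    reg = LinearRegression().fit(np.asarray(x_1d).reshape(-1, 1), y_omni)
    return reg.coef_[0], reg.intercept_

def apply_calibration(x_1d, a, b):
    return a * np.asarray(x_1d) + b

# n_map, n_obs: Helio1D and OMNI number densities sampled along the DTW path.
# a_n, b_n = fit_calibration(n_map, n_obs)          # slope ~0.65 reported in the text
# density_calibrated = apply_calibration(helio1d_density, a_n, b_n)
```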
Figure 6: Histograms of (a) time delay and (b) velocity difference obtained from FastDTW alignments between the Helio1D and OMNI data from Jan to June 2006. Blue, red, and green bars represent slow, intermediate, and fast winds, respectively. (a) The Gaussian distribution fits are applied to obtain the median (\(\mu\)) and FWHM of the time delay. (b) The Cauchy-Lorentzian distribution fits are applied to obtain \(\mu\) and FWHM of the velocity difference. The median and FWHM for all solar wind speeds are also given. See the appendix for \(\mu\) and FWHM of all 6-month intervals.

### Helio1D ensemble modeling with 21 virtual targets

We perform ensemble forecasting with the Multi-VP and 1D MHD models by considering a range of heliospheric latitudinal and longitudinal uncertainties. For example, a solar wind structure that is ahead or behind in time compared to the observation can be accounted for by considering the time series at nearby heliospheric longitudes. Also, the magnitude of the solar wind speed profiles differs at different heliospheric latitudes as a function of the warping of the Heliospheric Current Sheet and nearby velocity gradients. Using daily magnetograms from WSO, we set up the Multi-VP model to automatically generate daily one-dimensional solar wind profiles that cover from \(D\) to \(D+2\) at the sub-Earth point and the surrounding virtual targets. The surrounding virtual targets are set to spread up to \(15^{\circ}\) from the sub-Earth point in latitude and longitude; they are set \(5^{\circ}\) apart from each other in latitude and longitude; these chosen points constitute the star-grid pattern with a total of 21 points including the sub-Earth point (see Appendix). The spatial cuts through these 21 points on the 2D map of solar wind emergence from Multi-VP along the ecliptic are then translated to temporal solar wind profiles. These 21 time-series inputs are automatically fed into the 1D MHD model to propagate them in parallel from 0.14 AU to 1 AU. The 1D MHD model is rather computationally inexpensive; the model running with 21 virtual targets remains fast compared to 3D MHD modeling in general.

Fig 9 shows results from the Helio1D pipeline with the 21 virtual points between March and May, 2018. The Helio1D data at the sub-Earth point are shown in blue, and the observation data are shown in black. The green shade highlights the spread of values between the minimum and maximum among the 21 virtual targets at each hour. Panels (a) - (d) show the solar wind speed, tangential magnetic field, plasma number density, and plasma temperature, respectively.

Figure 7: Bar plots of the median time difference (\(\Delta t\); left panels) and velocity difference (\(\Delta V\); right panels) from the FastDTW technique applied to the 6-month intervals in 2005 - 2013, over-plotted with the average sunspot numbers (red cross, right Y-axis) indicative of the solar cycle. The bar plots are shown for (a, b) all wind types, (c, d) slow wind, and (e, f) fast wind.

We note that the post-calibration with the linear functions for number density and temperature (found in Section 5) has been applied. Here, the ensemble spread in green shade provides an error bar. In this example, we find that the stream interfaces modeled for the sub-Earth point appear to lag behind the observations by about two days. Nevertheless, the timing uncertainties from the values at the virtual points indeed cover the timing of the arrivals of the observed stream interfaces.
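The green-shaded spread in Fig 9 can be reproduced, in a minimal sketch, by taking the hourly minimum and maximum over the 21 member time series; the array names below are illustrative assumptions.

```python
import numpy as np

def ensemble_envelope(member_runs):
    """member_runs: array of shape (n_members, n_hours), e.g. (21, n_hours) of speeds."""
    lower = member_runs.min(axis=0)   # lower edge of the shaded spread
    upper = member_runs.max(axis=0)   # upper edge of the shaded spread
    return lower, upper

# lower, upper = ensemble_envelope(speed_runs)              # speed_runs is hypothetical
# coverage = np.mean((v_omni >= lower) & (v_omni <= upper)) # fraction of OMNI points
#                                                           # falling inside the spread
```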
Overall, we find that most of the observed data points fall within the error spread for both magnitude and timing, although there are some underestimations of the speed of the high-speed streams (consistent with our findings in Section 4.2). We conclude that the Helio1D modeling using the 21 virtual targets improves the modeling compared to using only one virtual point targeted at Earth.

### Solar wind data concatenation & Data gaps

The main technical problems concern the automated interfacing between the Multi-VP and 1D MHD models. Unlike the model benchmarking and case-by-case analysis, we rely on automatic modeling for both the Multi-VP and 1D MHD models. Here, the Helio1D pipeline is scheduled to run in-house every day. First, Multi-VP is set up to produce a daily forecast with a data length of 3 days, covering from day \(D\) to \(D+2\). We first concatenate these Multi-VP data for each virtual target by averaging the data from day \(D-30\) to the present day \(D\) to make the time series input sufficiently long. After the concatenation process, this 1-month solar wind profile at 0.14 AU is subsequently propagated by the 1D MHD model to provide the solar wind profile at 1 AU. Since the solar wind takes time to propagate to the Earth, we gain an extra lead time of \(2-7\) days depending on the solar wind speed. To provide a total lead time of 4 days from Helio1D, we limit the extra time gained from the 1D MHD propagation modeling to 2 days. This setup automatically interfaces Multi-VP and the 1D MHD model to provide a daily solar wind nowcast and forecast from day \(D\) to \(D+4\) (see branch (b) in Fig 1). In the absence of magnetogram data, the magnetogram of the previous solar rotation is programmed to be fetched. This procedure is implemented based on the assumption that the coronal structures remain unchanged compared to the previous solar rotation (i.e., when consistent, recurring CIRs are expected).

Furthermore, one of the models, or both, may terminate earlier than expected and give incomplete outputs. This is because, in reality, there can be data gaps arising from the lack of daily data (e.g., magnetogram data), or unphysical values in the data. The latter can come from numerical errors in one of the codes or from poor quality of the raw inputs. To prevent the Helio1D pipeline from terminating early, we perform an automatic search for data gaps and unphysical values (after fetching the previous magnetogram when the present-day magnetogram is unavailable, as mentioned above). The data gaps are then filled using linear interpolation between two available data points. The unphysical values, identified using the criteria given in the Appendix, are subsequently removed. The resulting data gaps are then filled using linear interpolation. To remove unphysical features that may arise from these procedures, we perform a running average on the data using a window of \(\pm 3\) hours centered around each data point. Finally, when either the input or the daily output is shorter than expected, we fill the data with the latest available values to complete the expected output length. For practical purposes, we provide daily solar wind time series extending from day \(D-3\) to \(D+4\) for a given day \(D\).

Figure 8: Calibration plots obtained from the FastDTW alignments between Helio1D (horizontal axis) and OMNI (vertical axis) time series for (a) solar wind speed, (b) number density, and (c) temperature.
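A hedged sketch of the automatic cleaning steps described above is given below for a single hourly series stored as a pandas Series; the validity thresholds are placeholders, since the exact criteria are listed in the paper's Appendix.

```python
import pandas as pd

def clean_series(v: pd.Series, vmin=200.0, vmax=2000.0, expected_len=None):
    """Remove unphysical values, fill gaps, smooth, and pad a too-short series."""
    v = v.mask((v < vmin) | (v > vmax))                          # unphysical values -> NaN
    v = v.interpolate(method="linear", limit_direction="both")   # fill gaps linearly
    v = v.rolling(window=7, center=True, min_periods=1).mean()   # +/-3 h running average
    if expected_len is not None and len(v) < expected_len:
        pad = pd.Series([v.iloc[-1]] * (expected_len - len(v)))
        v = pd.concat([v, pad], ignore_index=True)               # repeat latest value at the tail
    return v
```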
Example plots of the daily data from Helio1D are publicly available; they can be accessed from [http://swift.irap.omp.eu/](http://swift.irap.omp.eu/) and, in the near future, through the ESA Space Safety Program via www.esa.int/Space_Safety/Space_weather.

Figure 9: Helio1D ensemble time series outputs using the 21 virtual spacecraft compared to the OMNI data for March to April, 2018. The ensemble Helio1D outputs are shaded in green and the output at the sub-Earth point is plotted in blue. The observation data are represented by a black line.

## 7 Discussion

We developed the Helio1D pipeline for solar wind modeling by interfacing Multi-VP for the coronal part and 1D MHD for the interplanetary part. The interfacing has been done for the first time as an effort to connect two previously developed models for operational forecasting purposes. Using the long-term Multi-VP data produced with the WSO magnetograms in 2005 - 2013 and 2017 - 2018, we evaluated the pipeline performance in various phases of the solar cycles 23 and 24. In addition to classic metrics such as RMSE, MAE, and skill score, we devised a new metric based on the FastDTW algorithm. Using the long-term data and these metrics, we discuss our main findings as follows.

1. The performance of the Helio1D pipeline is dependent on the phases of the solar cycle. Using the RMSE, MAE, and FastDTW distance, measured for each 6-month interval, we find that their values positively correlate with the average sunspot numbers of the previous 6-month interval. The RMSE, for example, reaches a minimum value of 80 km/s during the solar minimum, and reaches a maximum value of 180 km/s during the solar maximum. Overall, the Helio1D pipeline provides minimal magnitude errors during the late declining phase and the solar minimum.
2. The Helio1D pipeline produces consistent CIR formation. For each CIR, the solar wind speed profile at the stream interface qualitatively agrees with the observations (Fig 3a). Thus, the model sufficiently includes the large-scale physics of the CIRs. However, due to the limited dimensionality and the ideal MHD assumption, the compression at stream interfaces (resulting in extreme peaks of number density and temperature) is rather strong compared to the observations. This effect was found to be common to 1D MHD models used for solar wind propagation [e.g., Zieger and Hansen, 2008]. To alleviate this effect, we perform a post-calibration to lower the peaks at stream interfaces using a linear function. This linear function was found using the FastDTW alignments of the solar wind profiles, which particularly mapped stream interfaces.
3. Using the FastDTW alignments, we also evaluated the timing and magnitude differences between the Helio1D and the observation data. For the data in the solar cycle 23, we find a minimal \(\Delta t\) of 2.7 hours and \(\Delta V\) of 0.5 km/s. However, when separating the data into shorter intervals of 6 months and categorizing the solar wind types based on speed, we find different results, summarized as follows. (a) For all wind types, we find that the Helio1D timing is often ahead of that of the observation by about 10 hours. This timing error is highest six months after the average sunspot number reached its maximum in late 2011. Meanwhile, the \(\Delta V\) for all wind types remains low throughout the solar cycle except for the ascending phase of the solar cycle in 2010. (b) The fast wind has higher \(\Delta t\) and \(\Delta V\) than those of the slow wind.
The Helio1D timing of the fast wind is often 10 - 20 hours earlier than that of the observation, and this timing difference goes up to 40 hours six months after the highest sunspot number. For \(\Delta V\), the Helio1D pipeline mostly underestimates the speed of the fast wind by about 50 - 100 km/s during the declining phase and solar minimum; this difference goes up to \(\sim 150\) km/s during the late ascending phase. (c) The timing of the slow wind agrees with that of the observations within \(\pm 10\) hours, except during the solar maximum. For \(\Delta V\), the Helio1D pipeline often overestimates the speed of the slow wind by about 20 - 40 km/s. The \(\Delta V\) is largest, with a value of \(-60\) km/s, in late 2010.

Compared to existing models that have been benchmarked using large data sets, the performance of the Helio1D pipeline is in agreement with other equivalent models. Owens and Riley (2017) utilized solar wind emergence at 0.14 AU from MAS based on Carrington maps (e.g., WSO) of the photospheric magnetic field. In particular, they used a large ensemble (N = 576) of solar wind time series from MAS, where the ensemble members are produced by sampling solar wind solutions within a range of latitudes about the sub-Earth point. The solar wind flows at 0.14 AU are then propagated using the HUX model [Riley and Lionello, 2011], which is based on the fluid momentum equation [e.g., Pizzo, 1978] and takes into account a residual solar wind acceleration [Schwenn, 1990]. Using a long interval of data from 1996 to 2016, they find that the median RMSE of the upwind ensemble propagation is 107 km/s. Reiss et al. (2020) performed forecasting of the ambient solar wind using the WSA model and the HUX tool, as well as the Tunable HUX [Reiss et al., 2020], to map solar wind flows from near the Sun to Earth for the period 2006 - 2015. Their RMSE ranges from 90 to 122 km/s, while their MAE ranges from 72 to 93 km/s. Their skill scores are negative for WSA/HUX (above -0.6) and slightly positive for WSA/THUX (below 0.06). Compared to their studies, our Helio1D pipeline performance, characterized by the average RMSE and MAE, is in agreement with their models despite somewhat larger values, by a few tens of km/s. Nevertheless, RMSE and MAE are rather crude metrics; they do not capture all qualities of the prediction and the measurement, e.g., as discussed by MacNeice et al. (2018) and several others.

Regarding the variation of the Helio1D performance with the phases of the solar cycle, we find that our results are generally consistent with other 1D MHD models. Zieger and Hansen (2008) performed an extensive validation of solar wind propagation using a 1D MHD model. They find that the variation of the coronal structure on the timescale of a solar rotation, characterized by the recurrence index of the solar wind speed, plays an important role in the prediction efficiency. This explains our Helio1D results: in the absence of variation, i.e., during the late declining phase, the model generally predicts CIR formation consistent with the observations. In contrast, in the presence of strong variations, i.e., when there are CME emergences during the ascending phase and the solar maximum, the model generally gives poor results compared to the observations. In terms of timing, we find that there is a bias in the solar wind stream arrival time such that the Helio1D solar wind streams usually arrive about 10 hours earlier than the observations. This bias is stronger for the fast wind compared to the slow wind.
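For completeness, the classic metrics referred to above can be sketched as follows; the choice of the reference forecast for the skill score (e.g., a persistence forecast) is an assumption here, not a prescription from the cited works.

```python
import numpy as np

def rmse(obs, mod):
    return np.sqrt(np.mean((np.asarray(obs) - np.asarray(mod)) ** 2))

def mae(obs, mod):
    return np.mean(np.abs(np.asarray(obs) - np.asarray(mod)))

def skill_score(obs, mod, ref):
    """1 - MSE(model) / MSE(reference); positive values mean the model beats the reference."""
    obs, mod, ref = map(np.asarray, (obs, mod, ref))
    return 1.0 - np.mean((obs - mod) ** 2) / np.mean((obs - ref) ** 2)
```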
This effect may come from the assumption of an ideal MHD plasma in the 1D MHD model, which ignores finer-scale physics and/or interactions that could slow down the fast wind or accelerate the slow wind in interplanetary space. Moreover, the limited dimensionality of the simulation does not take into account the propagation in other directions; this may result in the timing mismatch. In terms of magnitude, Helio1D underestimates the speed of the fast wind while overestimating the speed of the slow wind. The bias in the fast and slow wind magnitudes may come from the lack of physics of solar wind acceleration. In the HUX model, an empirical, ad-hoc solar wind acceleration is incorporated into the model [Schwenn, 1990]. Since the 1D MHD code was originally developed for solar wind propagation in the outer heliosphere [Tao et al., 2005], this physics was deemed less important and was not added to the model. We note that this aspect must be addressed in a future improvement of the 1D MHD model for the inner heliosphere; it is out of the scope of this work, which focuses on interfacing the mature models.

Our work devised a new exploitation of the FastDTW technique for a detailed assessment of the Helio1D performance. In particular, the FastDTW alignments encode optimal time delays and magnitude differences between the modeled stream interfaces and the observed structures. We highlight that FastDTW can be used to provide more informative measures than, for example, RMSE, MAE, and skill score. We also demonstrated that the FastDTW (normalized) distance correlates with the RMSE and MAE. Nevertheless, the FastDTW technique is not without caveats. The main constraint of DTW techniques in general is the pathological mapping, or singularities. Therefore, the \(\Delta t\) and \(\Delta V\) obtained from the DTW applications are not unique but instead depend on the constraints that have been imposed. In our case, we require that the alignments along the time axis be within 96 hours (corresponding to the time that a solar wind stream with an intermediate speed should take to travel from 0.14 to 1 AU). Furthermore, we chose the FastDTW technique in particular because it first maps the structures at coarse scales and then refines the alignment at smaller scales. We emphasize that this algorithmic feature is desirable for mapping large-scale solar wind structures such as CIRs or CMEs.

Finally, we highlight the work that needs to be done to develop an automated solar wind forecasting pipeline with Helio1D. In addition to the pipeline calibration to alleviate the pipeline caveats, there are other aspects that must be addressed. These comprise (1) providing model uncertainties and (2) dealing with data gaps and bad data. The Helio1D modeling of an ensemble of 21 solar wind solutions was shown to be a reasonable method to provide a worst-case scenario, especially for the timing of stream interface arrivals. We note that the ensemble members can be further exploited to obtain an optimized forecast; we leave this aspect for future work. Most importantly, to develop a reliable service, we implemented strategies to tackle data gaps and bad data points so that the pipeline can automatically run and provide a continuous nowcast and forecast. These strategies should be further tested in order to evaluate their impact on the accuracy, performance, and stability of the operational forecasting pipeline.
Such a task is of particular importance; it requires dedicated tests of this prototype pipeline in various situations before employing it for real-world applications.

## 8 Summary

We developed a prototype pipeline "Helio1D" that automatically interfaces the coronal model "Multi-VP" and the solar wind propagation model "1D MHD" to provide nowcasts and forecasts of the ambient solar wind, containing CIRs, at L1. The operational prototype pipeline provides daily solar wind modeling with a lead time of 4 days. In this work, we benchmarked and extensively tested the Helio1D pipeline using about 10 years of data spanning late 2004 - 2013 and 2017 - 2018. We evaluated the performance of the pipeline using classic metrics including RMSE, MAE, and skill score. In particular, we devised a new exploitation of the FastDTW technique to map large-scale solar wind structures of CIRs. We demonstrated that this technique can be used to characterize the detailed performance of the solar wind modeling with Helio1D. For instance, we characterized the statistical information on timing and magnitude differences for velocity profiles of stream interfaces. Using this new approach, we assessed the pipeline performance for various phases of the solar cycle 23 and investigated the modeling bias for fast and slow solar winds. We find that the Helio1D pipeline performs best during the declining phase and solar minimum. However, the Helio1D pipeline often underestimates the speed of the fast wind while overestimating the speed of the slow wind, especially between the late declining phase and the early ascending phase of the solar cycle 23. Furthermore, the solar wind structures modeled by Helio1D often arrive earlier than the observations, especially for the fast wind. These caveats plausibly arise from the simplistic assumptions in the 1D MHD model, comprising, for example, the limited dimensionality, the lack of dissipation, and the absence of solar wind acceleration physics. Nevertheless, the Helio1D pipeline models consistent CIR formation while remaining computationally light, which is desirable for operational forecasting.

To transition from case-by-case benchmarking to operational solar wind forecasting, we implemented (1) the pipeline calibration to alleviate the over-compression at stream interfaces due to the ideal MHD assumption, (2) the ensemble modeling of 21 solar wind solutions at virtual targets, including the sub-Earth point, to provide timing and magnitude uncertainties, and (3) the procedures to remove unphysical data points and automatically fill data gaps. The latter two procedures were shown to be crucial in providing a continuous, reliable service while providing forecasting uncertainties. Alternatively, the Helio1D pipeline can be adapted to use other, more reliable magnetogram sources with higher resolution, e.g., from the Air Force Data Assimilative Photospheric Flux Transport model [ADAPT; Hickmann et al., 2015, Arge et al., 2013, Worden and Harvey, 2000]. We emphasize that the implementation of procedures for making the pipeline fully operational, in addition to the pipeline benchmarking, is critical, and these procedures must be further tested in several situations before employing the pipeline for real applications.

## Acknowledgements

The SafeSpace project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 870437. The 1D MHD model of solar wind propagation initially developed by Tao et al.
[2005] and used for Helio1D is duplicated from the Heliopropa service provided by CDPP ([http://heliopropa.irap.omp.eu](http://heliopropa.irap.omp.eu)) and developed during the Planetary Space Weather Services (PSWS) Virtual Activity of the Europlanet H2020 Research Infrastructure funded by the European Union's Horizon 2020 research and innovation programme under grant agreement No 654208, and extended during the Sun Planet Interactions Digital Environment on Request (SPIDER) Virtual Activity of the Europlanet 2024 Research Infrastructure funded by the European Union's Horizon 2020 research and innovation programme under grant agreement No 871149. Work at IRAP is supported by the French National Centre for Scientific Research (CNRS), the Centre national d'études spatiales (CNES), and the University of Toulouse III (UPS).
2310.09319
Topological Data Analysis in smart manufacturing: State of the art and future directions
Topological Data Analysis (TDA) is a discipline that applies algebraic topology techniques to analyze complex, multi-dimensional data. Although it is a relatively new field, TDA has been widely and successfully applied across various domains, such as medicine, materials science, and biology. This survey provides an overview of the state of the art of TDA within a dynamic and promising application area: industrial manufacturing and production, particularly within the Industry 4.0 context. We have conducted a rigorous and reproducible literature search focusing on TDA applications in industrial production and manufacturing settings. The identified works are categorized based on their application areas within the manufacturing process and the types of input data. We highlight the principal advantages of TDA tools in this context, address the challenges encountered and the future potential of the field. Furthermore, we identify TDA methods that are currently underexploited in specific industrial areas and discuss how their application could be beneficial, with the aim of stimulating further research in this field. This work seeks to bridge the theoretical advancements in TDA with the practical needs of industrial production. Our goal is to serve as a guide for practitioners and researchers applying TDA in industrial production and manufacturing systems. We advocate for the untapped potential of TDA in this domain and encourage continued exploration and research.
Martin Uray, Barbara Giunti, Michael Kerber, Stefan Huber
2023-10-13T13:03:25Z
http://arxiv.org/abs/2310.09319v3
# Topological Data Analysis in smart manufacturing processes - A survey on the state of the art

###### Abstract

Topological Data Analysis (TDA) is a mathematical method using techniques from topology for the analysis of complex, multi-dimensional data, which has been widely and successfully applied in several fields such as medicine, materials science, biology, and others. This survey summarizes the state of the art of TDA in yet another application area: industrial manufacturing and production in the context of _Industry 4.0_. We perform a rigorous and reproducible literature search on applications of TDA in the setting of industrial production and manufacturing. The resulting works are clustered and analyzed based on their application area within the manufacturing process and their input data type. We highlight the key benefits of TDA and its tools in this area and describe its challenges, as well as its future potential. Finally, we discuss which TDA methods are underutilized in (specific areas of) industry and in the identified types of application, with the goal of prompting more research in this profitable area of application.

Industry 4.0, Smart Manufacturing, Smart Production, Topological Data Analysis

## I Introduction

Industry 4.0 is the fourth industrial revolution, characterized by the convergence of digital and physical technologies. This revolution is transforming manufacturing processes and enabling the development of smart production systems. Smart production systems are characterized by their ability to collect and analyze data in real time, make intelligent decisions, and adapt to changing conditions, like customized products or production on demand. Topology is a field of mathematics that deals with the study of the shape of objects. The recent, and highly successful, field of TDA lies at the intersection of data analysis, computer science, and topology, and uses the latter to analyze data. TDA has been shown to be effective in a wide range of applications, including anomaly detection, image processing, genome sequencing, and predictive maintenance [16]. However, its potential has not yet been realized in _Industry 4.0_. We argue that TDA is particularly well-suited for smart production systems because it can be used to extract insights from complex datasets generated by sensors and other devices and to support decisions based on them. Moreover, employing TDA can help identify patterns and relationships in data that would be difficult to detect using traditional methods. In this survey, we review the current state of the art on the application of TDA to manufacturing and production processes and identify future research directions in these areas.

### _Related Work_

The applications of TDA have become so numerous and diverse that no individual researcher can possibly keep track of all developments anymore. This poses the question of how the interface of TDA to application domains can be structured to become accessible to both theorists and practitioners. One recent initiative is DONUT [16], a search engine that allows for a simple lookup of applications of TDA. Another way to structure this knowledge is through scientific survey articles that summarize and compare different approaches of how TDA has been linked to applied setups. There is a substantial body of such surveys and textbooks. Their focus is usually on explaining the theory and presenting a few sample applications, demonstrating the diversity of areas that can profit from TDA [5, 54].
This approach is reasonable, since a comprehensive survey of all applications of TDA would result in a document of unmanageable size. Many works take a complementary approach, focusing on one domain and surveying all methods used in it. Concerning the application of TDA in industrial production and manufacturing processes, we are not aware of any other review works that bridge the gap between the theoretical results of TDA and the industrial applications. However, there are some literature reviews that bridge the gap towards other fields, or that survey literature for manufacturing processes but do not explicitly focus on TDA. The authors of [4] review Machine Learning (ML) and other Data Analysis methods for design-process-yield optimizations in electronic design automation and semiconductor manufacturing for integrated circuits. In their work, they briefly cover the application of the Mapper Algorithm to this specific manufacturing process. The challenges of Big Data Analytics for smart factories are discussed in [15]. In this work, the authors state that methods of TDA, such as the Mapper Algorithm for clustering data, are promising candidates for the analysis of Big Data in that domain. Still, they do not provide any empirical work on the application of TDA. Recently, the current state of the art on chatter detection in machining was reviewed [28]. This work shows the tremendous amount of research in that domain, while also highlighting the potential of TDA for application to this specific problem. Some of the results of this review are also included in this work (cf. section V-B). In [29], the current literature on data-driven approaches for application to metal forming and blanking technologies is reviewed. In this survey, methods of Uniform Manifold Approximation and Projection (UMAP) are explicitly mentioned. However, both _Industry 4.0_ and TDA include many more areas and methods. Big Data is also a field related to _Industry 4.0_. A survey published in 2017 aims at bridging the gap between theoretical results of geometrical and topological methods and this engineering discipline [45]. In the domain of additive manufacturing, 3D printing is a predominant technology. The work of [53] discusses the applications of Persistent Homology (PH) and the Mapper Algorithm in this discipline. Here, we suggest a different kind of survey, tailored towards domain experts in industrial applications. To the best of our knowledge, this is the first paper to address the following questions: What are the current applications of TDA in industrial manufacturing and production? Which applications are we missing and should, therefore, focus on?

### _Contribution & Outline_

This paper presents an overview of the current literature on the application of TDA in industrial production and manufacturing processes. We strongly believe that this survey will be of interest to both theorists in the field of TDA and practitioners of industrial production. Both disciplines are very active and the number of publications is growing, but the mutual awareness is still lacking. An aim of this survey is to help bridge this gap and foster the exchange of ideas and methods between these worlds. The contributions of this paper are:

1. It gives an overview of the current literature on the application of TDA in the sector of industrial production and manufacturing processes;
2. It shows the combinations of areas and methods that are being used;
3.
It highlights the combinations of areas of application and methods that are underutilized.

The remainder of this paper is structured as follows. Section II provides the necessary background information and main terminology on _Industry 4.0_. In section III, TDA and its main tools are introduced, namely PH, the Mapper Algorithm, and UMAP. The methodology of the survey is described in section IV and the results are presented in section V. The latter includes a detailed discussion for each application domain identified in this work. Section VI discusses the results and gives an outlook on future research directions. Finally, section VII concludes the paper.

## II Smart Manufacturing in Industry 4.0

### _Industry 4.0_

The term _Industry 4.0_ was coined in the early 2010s by the German government in order to foster and drive the so-called "fourth industrial revolution" (see [42] for details). This movement is driven by the need for higher flexibility in production, the operation of more adaptive machines, and a smarter, more autonomous way of operating machines, production lines, factories, and even whole supply chains. This enables production paradigms like lot-size-one production and mass customization, but also the optimization of production not only per machine but across the entire value chain, eventually enabling new business and operational models as well. This vision gives rise to various other terms along with _Industry 4.0_, like "smart factory" or "cognitive factory". However, _Industry 4.0_ is not only about the production of goods; its context is much wider. Indeed, [42] argue that this paradigm enables more innovative and revolutionary products per se by incorporating other technologies that are emerging simultaneously. These technologies range from gene sequencing to nanotechnologies and from renewables to quantum computing, and much more [42]. In order to operate such systems, Hermann et al. [19] identified four design principles of _Industry 4.0_:

* _Interconnection_: all components, like sensors, machines, and even humans, are connected with each other.
* _Information Transparency_: the information about all components is transparent within the system. This enables operators to make intelligent and well-informed decisions.
* _Technical Assistance_: the technological facilities assist humans in decision-making, support problem-solving, and help with or take over strenuous or unsafe tasks.
* _Decentralized Decisions_: decisions are not made by a central instance; rather, they are made "on the edge". Cyber-physical systems are able to make decisions on their own, based on the information they have. Exceptions to this rule, such as interferences or conflicting aims, are delegated to a higher instance.

A review by Erboz [12] identified the main components of _Industry 4.0_ systems: Big Data and Analytics, Autonomous Robots, Simulation, Horizontal and Vertical System Integration, Industrial Internet of Things (IIoT), Cloud, Cybersecurity, Additive Manufacturing, and Augmented Reality (AR). When talking about Cybersecurity in the context of Industry 4.0, we further refer to the term Operational Technology (OT) security [46]. OT security is the field of cyber-security for OT systems. The term Additive Manufacturing refers to the technology of _3D Printing_ in an industrial production context. Here, three-dimensional objects are created layer-wise by depositing material in a computer-controlled process [36].
### _Production and manufacturing_

In the literature, the terms Manufacturing and Production are used for the process of creating products. Depending on the domain, e.g., semiconductors, the term Fabrication may also be found [24]. Even though there may be a semantic difference between the terms manufacturing, production, and fabrication, in this work the terms Production and Manufacturing are used interchangeably as umbrella terms for all three. The manufacturing of a product involves a sequence of process steps, applied by industrial machines in a production line. At the beginning of a design process, the product requirements have to be identified, followed by a Conceptual Design and its Evaluation. Based on that, a prototype is created, enabling the creation of schematics for industrial reproduction, where "industrial" typically means in a repeatable, efficient, and effective way. These schematics, in combination with the requirements of the product, define the specification for the selection of material, processes, and production equipment. The production itself is then accompanied and concluded by inspection and quality assurance before the products are packed. Manufacturing or Production Engineering describes the branch of engineering working on the entire process of manufacturing. Among others, the planning and optimization of the production process are subjects of interest to this discipline [33]. Figure 1 illustrates the stages of the manufacturing engineering process sequentially. In general, smart manufacturing in Industry 4.0 poses new challenges to the domain compared to traditional manufacturing. Here, additional strategies and technologies are used to improve the manufacturing process, to fulfill the needs for integration into Industry 4.0. An overview of the technologies and architectures used for smart manufacturing systems is given in [8].

## III Topological Data Analysis

The field of TDA can be clustered into three main methods: the Mapper Algorithm [44], PH [11, 23], and UMAP [34]. Common to all of them is that the data at hand is first converted into a suitable geometric representation whose topological properties are analyzed. The key observation is that, when analyzing data, many parameters need to be handled (tuned, removed, weighted, ...). Since topology is the branch of mathematics that deals with shapes, and since data often contain shapes, we can use topology to process (some) parameters, namely the ones "linked to shapes". These three methods deal with parameters in different ways. The Mapper Algorithm assembles parameters (and their values) in different groups and then clusters the inputs accordingly. These groupings often find previously unknown relations in the dataset. PH overcomes the need to choose a threshold for the parameter(s) by analyzing the data for all possible choices: it keeps track of how the shapes in the data evolve along the varying thresholds. For this reason, it is particularly apt for automatized production (see for example [6]). UMAP, being a dimensionality reduction method, removes some of the parameters by projecting the data into a lower-dimensional ambient space, where they are more easily analyzed. The pipeline of each method is illustrated in fig. 2 and further discussed in the following.

Fig. 1: The manufacturing engineering process with its stages along product production, beginning from the product's definition to its final, mass-produced artifact. Feedback connections are omitted for the sake of readability.
Illustration adapted from [24].

Fig. 2: (a) Mapper pipeline. (b) Persistent homology pipeline. (c) UMAP pipeline.

### _Mapper Algorithm_

The Mapper Algorithm is the conceptually simplest approach, since the only topological property considered is connectivity. In essence, it is a topologically guided construction of a graph of clusters of an object set \(V\) in \(\mathbb{R}^{n}\): First, \(V\) is mapped by \(f\), the so-called lens function, to a low-dimensional space \(\mathbb{R}^{d}\), using, for instance, PCA or autoencoders. This step is crucial, as two elements that end up close by after this mapping (and can thus be grouped in the next step) may lie very far apart in the input space, where their relation would therefore go undetected. Then the image \(f(V)\) is covered by (overlapping) sets \(U_{1},\ldots,U_{k}\), the so-called covering. Each \(U_{i}\) is pulled back into \(\mathbb{R}^{n}\) as \(f^{-1}(U_{i})\) and clustered using a clustering method of choice (e.g., \(k\)-means for a fixed \(k\) if \(V\) is Euclidean, or kernel-\(k\)-means for a more general \(V\)). All clusters of all \(f^{-1}(U_{i})\) form the vertices of the _Mapper Graph_ \(G\), and if two such clusters intersect, we add an edge to \(G\). Note that the clustering happens in the original space of the point set, but it is guided by the filter function and the covering. The _Mapper Graph_ is used for explorative data analysis: usually, one looks for _flares_ in the graph, that is, subpopulations of objects that are connected across several scales (intervals) and distinguished from the remaining objects on these scales. One then analyzes these subpopulations (potentially with traditional data analysis methods) to find a reason for their distinctiveness.

To give an idea of how the method works in practice, we now describe a practical example taken from industrial manufacturing (cf. section V). The goal of [40] is to improve the state-of-the-art demand forecasting methods. The problem is the following: manufacturers have to forecast demand for their products and their (hierarchical) components. The frequency of demand of each component is a time series, which can be labeled with the predicting models that work best for it. These time series can be assembled into a _Mapper Graph_ partitioned into clusters according to the best prediction model. Besides improving our understanding of prediction models, this method has the additional considerable advantage of allowing us to efficiently choose a predicting model for a new component by simply computing the cluster it belongs to. In practice, the major obstacle to the use of the Mapper Algorithm is the splitting of \(f(V)\) into \(U_{1},\ldots,U_{k}\): the interpretability of the outcome entirely depends on this choice. While a few standard choices are known, it usually requires the prior knowledge of a domain expert to get meaningful insights out of the Mapper pipeline. Nevertheless, the Mapper Algorithm should be considered a powerful, versatile, interactive tool that can reveal hidden connectivity in datasets (a minimal code sketch is given below).

### _Persistent Homology (PH)_

_Homology_ is a fundamental concept from algebraic topology, allowing us to identify shapes that cannot be continuously deformed into each other. An extensive treatment is beyond our scope; informally, homology reveals the number of \(k\)-dimensional holes of a shape, for every integer \(k\).
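Before detailing the persistent homology pipeline further, the following hedged sketch illustrates the Mapper construction described in the previous subsection. It uses the KeplerMapper (`kmapper`) Python package; the synthetic dataset, the PCA lens, and the DBSCAN clusterer are illustrative assumptions rather than choices taken from any surveyed work.

```python
import numpy as np
import kmapper as km
from sklearn.decomposition import PCA
from sklearn.cluster import DBSCAN

X = np.random.rand(500, 8)                          # placeholder object set V in R^8

mapper = km.KeplerMapper(verbose=0)
lens = mapper.fit_transform(X, projection=PCA(n_components=2))    # lens f: V -> R^2
graph = mapper.map(
    lens,
    X,
    cover=km.Cover(n_cubes=10, perc_overlap=0.3),   # the covering U_1, ..., U_k
    clusterer=DBSCAN(eps=0.5, min_samples=5),       # clustering of each f^-1(U_i)
)
# graph["nodes"] maps cluster ids to member indices; graph["links"] encodes which
# clusters overlap and are therefore joined by an edge of the Mapper graph.
```

With this illustration of the Mapper Algorithm in place, we return to homology and its persistent variant.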
Crucially, given a continuous map between two shapes, for instance, an inclusion from \(X\) into \(Y\), there is a well-defined map between these holes. In the pipeline of PH, we build a sequence, called a filtration, of increasingly larger shapes \(X_{r}\) for every scale parameter \(r\geq 0\) and observe how holes of various dimensions appear and disappear when we consider the growth of \(X_{r}\) as a continuous process. The resulting evolution of topological features can be represented as a _barcode_ (aka _persistence diagram_), a collection of intervals (bars) that represent the lifetime of a hole within the filtration. The length of a bar is called the persistence of the corresponding topological feature.

For a practical example, we describe the results of [32]. In manufacturing, a well-known task is to grip objects. Besides form- or force-closure grasps, we can consider energy-bounded cages, see fig. 3. A force field \(f\) acts on an object \(O\), pushing it into a gripper. Therefore, for the object to escape the gripper against \(f\), a certain energy is necessary. Assuming a bound on the escape energy, energy-bounded cages are formed. The goal of the paper is to identify these. To this end, the authors consider the configuration space of \(O\), sample the free space, approximate it through alpha complexes, and construct a super-level set filtration according to the energy potential for each simplex. The energy-bounded cages in the configuration space then appear as persistent homology classes, i.e., points in the persistence diagram, where the birth time corresponds to the escape potential, the death time corresponds to the deepest potential in the cage, and the persistence corresponds to the escape energy, as the difference of the former two potentials.

One advantage of this framework is that there is often a natural choice for picking a filtration, so the pipeline allows for easier automation. There is also a rich theory for how to compare two datasets by comparing their barcodes, and how to integrate PH into ML methods, e.g., kernel-based methods or neural nets. The well-founded theory and the interpretability of the obtained features have contributed to the success of PH in practice. Moreover, there are many efficient algorithms to compute filtrations and barcodes, and to subsequently compare them.

Fig. 3: Picture from [32, Fig. 8]. On the left, an energy-bounded cage is shown: Given the force field \(f\), for a given pose of the object (blue) a certain energy is necessary to escape the gripper (black). On the right we have points in the persistence diagram, where persistence corresponds to the escape energy necessary.

It should be mentioned, however, that despite all these advances, PH does not easily scale to very big datasets, unlike the conceptually simpler Mapper Algorithm, or the next described UMAP, which works specifically for large datasets.

### _Uniform Manifold Approximation and Projection (UMAP)_

Often data live in a high-dimensional ambient space. One can, for example, think about a collection of objects, each with 3 spatial dimensions, plus their cost, material, hierarchical position in production, etc. However, in a specific analysis, much of this information is not needed, and may even hinder understanding. Therefore, one may want to first reduce the dimension of the data as a preprocessing step before the actual analysis. UMAP has proved to be very good at this task.
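As a minimal usage sketch of such a reduction (using the `umap-learn` package; the synthetic dataset and parameter values are assumptions, not recommendations from the surveyed works):

```python
import numpy as np
import umap  # umap-learn package

# Placeholder data: 1000 objects, each described by 20 numeric attributes
# (dimensions, cost, material encodings, process parameters, ...).
X = np.random.rand(1000, 20)

reducer = umap.UMAP(n_neighbors=15, min_dist=0.1, n_components=2, random_state=42)
embedding = reducer.fit_transform(X)        # shape (1000, 2)
# `embedding` can now be clustered, visualized, or fed to a downstream ML model.
```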
UMAP works by first creating a weighted graph from the input points and then projecting it onto a lower-dimensional space, obtaining another, simpler graph that preserves the information that was deemed important. This latter choice is made via the selection of an appropriate projection. The construction of the graph is not trivial, as it has to maintain the information about the local distances of points. It is based on \(k\)-nearest neighbors and fuzzy structures, which are a way to weight the membership to a set (instead of either being in or out, an element can belong to a set _fuzzily_). We will not enter into further details here, as the construction is highly abstract, but we refer to the original (well-explained) work [34]. For example, consider the setting of [43]: the additive manufacturing of electromagnetic devices. There, fabrication anomalies (geometric information) may result in unpredictable performance issues. Thus, all the information that is neither geometric nor about electromagnetic performance can be disregarded, which is precisely what UMAP does in this case. What remains is then fed to an ML pipeline, whose output is the desired relation between geometric defects and performance. It is important to remark that the output of UMAP, contrary to the outputs of the Mapper Algorithm and PH, is not immediately interpretable and needs to be further analyzed (for example, using ML). Nevertheless, it has great applications, both outside and inside industry.

## IV Survey Method

One of the main objectives of this work is to ensure reproducibility of the survey results. Therefore, we decided to perform the review as an exhaustive literature review [39], where each step is documented and can be reproduced. The problem defined for this work is the review of methods from TDA applied to industrial production and manufacturing processes. The pipeline of the survey method is the following: definition of appropriate keywords for the search, identification of the digital libraries where to search, and filtering of the obtained works. We now explain these steps in detail.

### _Keywords and queries_

The search queries used are defined by means of two categories, namely _Method_ and _Domain_. The keywords of the category _Method_ describe the tools of TDA we are interested in finding applications for. On the other hand, the keywords of the _Domain_ category describe applications and tasks in the industrial manufacturing process. Figure 4 illustrates these categories including the identified keywords. The intersection of both categories is the search space for this literature review. To formulate a meaningful search query, the keywords of each category are connected using a boolean "OR" operator. The resulting search strings of both categories are then connected using a boolean "AND" operator. This resulting single search query is used to collect literature from the digital libraries (a small sketch of this query construction is given below).
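The following snippet sketches this query construction; the keyword lists are truncated placeholders and do not reproduce the full sets of 13 and 4 keywords shown in Fig. 4.

```python
# Hedged sketch of assembling the boolean search query from the two keyword
# categories; the keyword lists below are illustrative placeholders.
domain_keywords = ["Industry 4.0", "Smart Manufacturing", "Additive Manufacturing"]
method_keywords = ["Topological Data Analysis", "Persistent Homology", "Mapper", "UMAP"]

def or_group(keywords):
    return "(" + " OR ".join(f'"{k}"' for k in keywords) + ")"

full_query = or_group(domain_keywords) + " AND " + or_group(method_keywords)

# For interfaces with limited query length (e.g., ScienceDirect), AND is
# distributed over OR, splitting the query into one sub-query per domain keyword;
# the union of their result sets equals that of the full query.
split_queries = [f'"{d}" AND {or_group(method_keywords)}' for d in domain_keywords]
```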
### _Digital libraries_

The following five digital libraries are used in our survey for data collection:

* IEEE with IEEE Xplore Digital Library
* Springer with SpringerLink
* Elsevier with ScienceDirect
* ACM with ACM Digital Library
* The American Society of Mechanical Engineers (ASME) with the ASME Digital Collection

These five digital libraries were selected because they are the most prominent digital libraries for scientific publications in the fields of computer science and engineering, especially for the applications of TDA and ML, as well as in the field of industrial engineering. The only exception to this assumption is the ASME Digital Collection. It was selected because an earlier, preliminary semi-exhaustive search - using a method similar to the one used in [48] - on a restricted set of keywords on _Google Scholar_ showed additional relevant results beyond what was found in the first four digital collections.

Fig. 4: Venn diagram of the set of 13 keywords for the category _Domain_ (left segment) and the set of 4 keywords for the category _Method_ (right). The intersection of both sets indicates the scope for the literature search. Note: the asterisk '*' indicates a wildcard, so that variants of the keyword can be considered (e.g., Technologies and Technology).

_IEEE Xplore_, _SpringerLink_, and the _ACM Digital Library_ provide a search interface that allows the usage of the generated search string. The interface of _ScienceDirect_ is restricted in the sense that only a limited number of boolean expressions and search strings can be combined in a search. Due to the distributivity of the boolean operator "AND" over the boolean operator "OR", the search string can be split into multiple search strings whose results are combined with the boolean operator "OR". This reduced the number of keywords per search string, yielding the same search results. Lastly, the _ASME Digital Collection_ does not offer an advanced search and only supports one boolean operator per query. For this particular case, all \(13\cdot 4=52\) combinations of keywords are used independently. The moderately low number of resulting publications made this approach feasible; see details later. From all these digital libraries, all results are collected and stored using the Zotero reference manager1. For each publication, its metadata is automatically extracted through its Digital Object Identifier (DOI). Footnote 1: [https://www.zotero.org/](https://www.zotero.org/)

### _Filtering results_

For the filtering, we did not restrict the search results to a specific time period. The reason for this is that the field of TDA is still a young field of research, and the application of TDA in the field of industrial manufacturing processes is even younger. The search was performed at the end of June 2023, and all works published until then are considered. The filtering procedure was done step-wise, removing items according to the following criteria:

1. Duplicates
2. Relevant types of publication
3. Language of the references
4. Availability of a full text
5. Context of the work

The first step of the filtering procedure was to remove duplicates, where duplicates are identified by the DOI and the title of the publication. When duplicates were found based on the title, a manual check was performed to verify the duplication. Each duplicate was removed, keeping only one instance of the publication2.
Footnote 2: Note that a conference paper with a subsequent, extended journal version is considered to be an independent publication (since each has a unique DOI). Their relationship is indicated in the results section (Section V).

In order to deliver a meaningful review of high quality, only publications with a sufficient degree of quality were considered (i.e., peer-reviewed publications). Based on this requirement, only publications from conference proceedings and journals were taken into account for this review. We would like to add that we are quite aware that not all published conference proceedings and journals may be peer-reviewed, but we assume that the majority of them are, and hardly any information on the review process is actually provided by the digital libraries. Other references, like preprints, presentations, books, or reports, are excluded and not further considered in our analysis. To ensure the correct extraction of information from the publications and to be commonly reproducible, only publications with an available English full text were considered. The availability is highly dependent on the access and subscriptions of our institutions to those digital libraries. Therefore, all publications with no available full text were screened manually, so that no relevant publications are missed. In case of a missing but relevant full text, alternative sources, like preprint servers or the authors' webpages, were searched to obtain the full text. The execution of this procedure was not necessary, since the majority of the full texts of the publications were available in the digital libraries. The semi-automatically filtered references are then analyzed for their context. For this, all publications were screened manually. Here, the keywords of the categories _Method_ and _Domain_ were searched for in the publications. They had to be present in a section relevant to the contribution. In particular, it was not sufficient to mention any of the methods or the domains only in the related work or in the outlook. After this filtering procedure, only the remaining publications are further considered for this survey. In total, \(4683\) results were screened, resulting in \(27\) publications that were considered to be relevant for our stated research questions. These publications were then screened in detail. Based on this screening, the publications were categorized manually into different categories, which are presented in the following.

## V Results

Besides being used for strict data analysis tasks, TDA methods are also used for validating the analysis performed by other methods. For example, in [14] and [20], PH is used for evaluation, and in [9, 31, 47, 52, 63], and [64], UMAP is used for the same purpose. Since this survey addresses the direct applications of TDA, these works are not included in the following discussion. Works from fields of production different from manufacturing - like petroleum production [6, 30, 37] - were also found during the literature search. Since these works are not related to a manufacturing process, we did not include them in the survey. Furthermore, other works mention the use of TDA methods on manufacturing processes, but do not provide any empirical work. Examples of this are the potential application of TDA methods to hybrid twins for smart manufacturing [7] and 3D printing [53]. In total, \(27\) works were identified as relevant for this survey.
Each of the identified works was assigned to one of three clusters (A-C), based on its application within the production process. The identified clusters are:

* A: Quality Control on Product Level
* B: Quality Control on Process Level
* C: Manufacturing Engineering

The resulting works are listed in Table I. In this table, each work is assigned to one of the three clusters (A-C) and arranged by the TDA method used. Figure 5 shows an overview of these works in relation to their assigned clusters. Furthermore, this illustration also indicates the TDA method used for each application area. From the publication dates of the listed works, it can be seen that the interest in TDA methods for manufacturing processes has increased over the last years. The first publications employing methods from TDA appeared in 2016. A major gain in interest can be observed from 2022 onwards: the count of publications from 2022 on is higher than the number of publications in the years before 2022. No publication from 2020 was found for this survey. Figure 6 illustrates the absolute count of identified publications per year. The illustration in Fig. 6 seems to indicate that the interest in TDA methods for manufacturing processes is decreasing towards 2023. Here, one needs to keep in mind that the data for 2023 is incomplete, since this survey only includes publications that had been added to the digital libraries by June 30th, 2023. A more detailed overview of the results is given in Table II. This table shows each individual work, the associated cluster, the used TDA method, and the kind of input data used to solve the task. The type of input data is extracted from the referenced works. The most common data type is time series data, followed by point clouds and scalar fields. Additionally, we also found one work employing a TDA method on textual log files and one on a labeled graph. In the remainder of this section, the three identified application clusters are discussed. For each area of application, a short description is presented, followed by a brief summary of the associated works. For further details on these works, we refer to the original, referenced publications.

### _Quality Control on Product Level_

Quality control is considered in two different ways in the works we found: on a product level or on a process level. In the first class, the quality of the production is assessed on the basis of the produced goods. In the second class, the quality is assessed from observations of the production process. In this section, the results for the first class are considered. Using TDA methods at the product level, the quality of the produced goods can be analyzed in a very efficient way. In general, methods from TDA are naturally suited for the analysis of structures, surfaces, and shapes. Furthermore, the methods are robust against noise and efficient in terms of computational complexity. The works identified in this part perform "classical" TDA tasks that are well covered in the literature (cf. [5, 54]). Nevertheless, we included only works explicitly mentioning the application of TDA on the product level in a production process. A natural application of TDA on the product level is the analysis of discrepancies in a product's topology. This is done in [2], where the authors describe the classification of topological discrepancies in Additive Manufacturing (AM).
The products to classify are embedded as a mesh in \(\mathbb{R}^{3}\). In this work, they mainly use pure Homology3.

Footnote 3: For the sake of simplicity, we count this work as an application of PH; mathematically, PH can be considered a generalization of homology.

Fig. 5: This illustration shows the relation of the works to the identified clusters. Furthermore, the used TDA method is indicated for each cluster. The number in parentheses indicates the number of associated publications.

Fig. 6: The number of relevant publications per year.

Another natural use case is the analysis of a surface's texture. An early work is [57], where PH is employed for this differentiation. This initial work did not address a specific task, whilst the follow-up work [56] does. In the latter, the authors discuss how the surface texture is a key factor for the quality of a product. Their method is applied to surface profiles [57] and subsequently to the more specific case of microscope images [56]. For the segmentation of shapes, [55] propose a new method employing PH and graph convolutional networks. Their PH-based graph convolutional network outperforms state-of-the-art methods for fine-grained 3D shape segmentation on point cloud data. A more specialized use case is presented in [22]. The application of their work is quality control in wafer production. Using the Mapper Algorithm, the task is to cluster the defect patterns. The input features are extracted from wafer map images by a vision transformer. Another case is the production of electrical motors. In [51], the authors use PH for the detection of eccentricity in electric motors. Here, the data is given as a time series of the process parameters of the electric motors. In their work, they are able to predict the fault levels with reasonable accuracy while maintaining a low computational complexity, due to the use of a simple regression model. Similarly, the work [3] analyzes the root cause of part-to-part variations in the production process of mechanical components. Applied to point cloud data acquired from optical scans, they extend their ML method with UMAP. The work in [41] does not classify separate anomalies in the products; the requirement of their use case is the detection of defects on produced wafer maps, so that defective wafers can be discarded. Within a subsystem of their Deep Learning pipeline, they use UMAP for dimension reduction. The same task is performed in a very recent work [27], where defect patterns in wafer maps are detected during production. In this work, the authors propose the use of PH for the generation of features for a neural network. For further processing by the neural network, the resulting persistence diagrams are transformed into Persistence Images. For the additive manufacturing of RF devices, [43] propose the use of UMAP with convolutional neural networks on microscope images. By mapping geometric variances to electromagnetic performance metrics, they identify defect mechanisms and their performance impact, which contributes to faster and cheaper quality control, since no in-line electromagnetic simulations are necessary.

### _Quality Control on Process Level_

After outlining the literature on quality control on the product level (Section V-A), we now discuss the results on the process level.
Given process data, the goal is to assess the quality of the production process without observing the produced goods; instead, the process variables are observed (cf. Fig. 7). Examples of such data are machine states, sensor data, or other data acquired from the production process. For this task, \(7\) works were found during the review. Whilst this number suggests diverse applications, only two main applications were actually identified. Given observations of key process parameters, the goal of the work in [17] and [18] is to predict the productivity of a manufacturing process. According to the authors, their works are the first to employ methods from TDA in a manufacturing application. In [17] and [18], the authors propose the use of the Mapper Algorithm to identify intrinsic clusters in a benchmark processing dataset. Using the Mapper Algorithm's output network, key process variables or features are selected that impact the final product quality. Their work showed that this model achieves the same level of prediction accuracy as with all process variables, while being more cost-effective. The second application within the cluster of quality control on the process level is the detection of chatter. Chatter detection in machining has gained attention over the last years, as can be seen from a survey on that particular application domain [28]. Detection of chatter is important, since chatter can damage the workpiece and the machine tools. In particular, chatter detection using methods from TDA has gained traction, notably in the work of Firas A. Khasawneh and co-authors, who contributed \(5\) works to this application. The first work [25] is a proof of concept, where the authors show that PH can be used for chatter detection. In the follow-up work [26], the authors propose a method for chatter detection based on PH and supervised learning. The authors state that their method is able to detect chatter with a high accuracy. In [60], the authors propose a supervised method for chatter detection based on topological feature vectors obtained using PH. The very same work is described in more detail in [59]. In their follow-up work, they additionally propose a method for transfer learning [58]. Here, they showed and evaluated that transfer learning can be used to improve chatter detection performance when training on different datasets. This work also employs dynamic time warping to align the time series. Naturally, all process tasks depend on temporal, consecutive steps. It is therefore natural that all works in this cluster apply their methods to time-series data.

### _Manufacturing Engineering_

This section covers the applications of TDA in the field of manufacturing engineering. With manufacturing engineering, we refer to the engineering discipline that designs, analyzes, and improves manufacturing processes and systems. The tasks in this area are not centered on the products per se, but rather on the overall processes and systems that are used to manufacture these products. Tasks of manufacturing engineering include the optimization of the material flow, the production process, and the production system, as well as the selection of components and the design of production lines. A very common task for a manufacturer is the temporal planning of production. The demand for a product can vary depending on several factors.
Such factors can be the season, the location, the weather, or other events, like promotions or holidays. Not meeting the demand can lead to a loss of customers, while overproduction leads to a monetary loss, e.g., caused by the storage or disposal of the products. Depending on the use case, it may be beneficial to group similar products so that they share forecasting models. In [40], the authors propose an approach to demand forecasting with different predictor models. In order to generate forecasts for a new product, a predictor model needs to be selected. For this selection process, a \(k\)-nearest neighbor algorithm based on a _Mapper Graph_ is proposed. By utilizing the topological properties of the historic time-series data, the authors claim that the selection of the predictor model is more accurate and much faster compared to other methods. The very recent work [35] addresses the problem of relying on expert knowledge and the experience of machine operators. Changeovers of machinery require re-parameterization of the machines. However, these parameter changes are often not based on numerical evidence, but rather done by hand, based on the experience of the machine operator. This poses the drawback that the re-parameterization is only reproducible up to a certain degree, and an operator needs to be trained for a long time in order to gain the required experience on the particular machine type. The authors of [35] propose a synergetic use of existing ML tools to extract a reduced manifold from existing geometric designs given as signed distance functions. Using interpolation techniques, the reduced manifolds are then used to generate new geometric designs by inferring missing information using clustering techniques. Their work heavily relies on PH and Persistence Images. See [1] for further details on Persistence Images.

Fig. 7: The applications on quality control on the process level take the key process parameters, as time series data, as a basis for the quality control of the machinery processes. Process parameters are illustrated here on a schematic injection molding machine. This figure is complete regarding the input data types (time series), as shown in Table II.

Material flow optimization is the task of optimizing the schedule and the flow of material through a production system. This task involves transporting raw material from the depot to the production, transporting semi-finished products between the production lines, and transferring finished products from the production back to the depot. Given the complexity of production systems, the optimization of the material flow is a challenging task, since all involved components have different capacities, changeover times, and other constraints. With these constraints in mind, the optimization of the material flow must be covered from a business perspective as well as from a technical perspective. The task of material flow optimization is a multi-objective optimization problem, where the objectives are to minimize the costs and the time of the material flow. A visual benchmark is proposed in [10]. For this benchmark task, the material flow from the depot to the production line is to be optimized as a multi-vehicle routing problem, with the data embedded as a point cloud in a multidimensional space. The evaluation is performed using PH. An optimization task of another kind is presented in [32].
Given independently moving objects, like grippers or robots, within a production environment, these must be secured against collisions with other objects, the surrounding environment, and, most importantly, human operators. Securing can be accomplished by the use of physical cages within which the objects operate. However, these cages can be very complex to build, and they are inflexible and expensive. A more cost-effective solution is to use virtual cages, where the objects are restricted by a virtual boundary. The task of this work is the synthesis of planar energy-bounded cages that have an optimal configuration for a given object. By identifying gripper and force-direction configurations and applying PH, an optimal configuration is found. For this purpose, the objects and grippers are modeled as point clouds. Making sure that a product leaving the production line is of a certain quality is a major task in manufacturing engineering. Releasing faulty products can lead to a loss of reputation and, in the worst case, to a loss of human life. To ensure that the quality requirements are met, system-level tests are performed on each product. For the generation of rules to classify the failed parts, [21] proposed a method that employs UMAP for dimension reduction. From their work it remains somewhat unclear how they actually solve the problem, but it appears that they embed the information about passed and failed tests in a multidimensional space and then use UMAP to reduce the dimensionality. This representation is then used to generate rules for the classification of failed parts. Operational Technology (OT) systems are typically Cyber-Physical Production Systems, i.e., systems in which the physical part is controlled by a computer system through sensors and actuators. Establishing and extending such systems is a challenging task, as these systems tend to become highly complex and heterogeneous. To find repeating patterns in these systems, [50] propose a method to reuse components that are already established in the system, based on labeled graphs. This guarantees higher reliability, lower cost, and lower effort for maintenance. For their method, they use UMAP for dimension reduction. A case study in the herbal medicine manufacturing industry is presented in [61]. In this work, the authors attempt to analyze the degradation of the evaporation process, since this process is a major cost factor in the production. For the analysis, they employ UMAP for dimension reduction on time-series data. For the task of preventive maintenance, [62] propose a method for the analysis of machine maintenance data. Such datasets are often heterogeneous and multidimensional logs. These data need to be analyzed to find patterns that indicate a failure. In their work, the authors introduce a visual analytics approach for the diagnosis of such heterogeneous and multidimensional machine maintenance data (textual log data), where UMAP is used for dimension reduction as one step in the processing pipeline. Another approach to the analysis of machine maintenance data is presented in [49]. In this work, the authors mitigate the problem of machinery degradation in the fine blanking industry by observing the wear of the machinery using acoustic emissions. The approach is based on UMAP and hierarchical clustering on time-series data. The authors claim that the data visualized in two dimensions reflects the temporal dependence of the data, while allowing the wear of the machinery to be identified.
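To make this kind of pipeline more concrete, the following is a minimal sketch of UMAP-based dimension reduction followed by hierarchical clustering on windowed time-series features, in the spirit of works such as [49]. The windowing, the choice of features, and all parameter values are illustrative assumptions, not the setup of any cited work; it only assumes the `umap-learn` and `scikit-learn` packages.

```python
import numpy as np
import umap  # from the umap-learn package
from sklearn.cluster import AgglomerativeClustering

def window_features(signal, win=256, hop=128):
    """Split a 1D sensor signal (e.g. acoustic emission) into windows and
    compute a few simple per-window features."""
    starts = range(0, len(signal) - win + 1, hop)
    feats = [
        [signal[s:s + win].mean(),
         signal[s:s + win].std(),
         np.abs(np.diff(signal[s:s + win])).mean()]
        for s in starts
    ]
    return np.asarray(feats)

rng = np.random.default_rng(0)
signal = rng.normal(size=20_000)          # stand-in for a real sensor recording
X = window_features(signal)

# 2D embedding intended to preserve the local structure of the window features
embedding = umap.UMAP(n_neighbors=15, min_dist=0.1, n_components=2).fit_transform(X)

# hierarchical clustering on the embedded windows, e.g. to separate wear states
labels = AgglomerativeClustering(n_clusters=3).fit_predict(embedding)
print(embedding.shape, np.bincount(labels))
```

In a real setting, the synthetic signal would be replaced by recorded process data, and the cluster labels would be inspected against known machine states.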
In Industry 4.0, the security of OT systems is a major concern. OT has far different requirements than Information Technology (IT) systems, which is also reflected in the challenges in the field of security [13, 46]. On the topic of OT security, only one work was found during the survey: in [38], the authors propose using UMAP for downstream tasks in security applications for steel production processes, where the data is given as time series.

## VI Discussion & Future Research Directions

The interest in applying TDA methods to industrial production and manufacturing processes has been growing over the last few years. After the first application in 20164, the number of publications has been growing steadily, with a peak in 2022. A further and continuing interest in the application of TDA in this domain can be expected.

Footnote 4: The date of the first publication found by our survey work.

The number of publications per identified application area varies. Within the context of this survey, one popular application is the one most natural from a topological perspective: the analysis of shapes, surfaces, and features of the manufactured products themselves. Certainly, there are many more published works on similar topics, but the stated works are the only ones applying such tasks to the setting of production and manufacturing. A further popular application is the analysis of process data in the field of Manufacturing Engineering. The TDA method used the most is PH, with \(14\) works employing it in one way or another. UMAP is used in \(9\) works. Some of the works highlight UMAP's favourable properties in terms of topological preservation, but mostly they do not discuss or empirically compare this method to other dimension reduction methods. The method used the least is the Mapper Algorithm, with only \(4\) works employing it. However, it is an extremely successful method in other application areas, for example medicine [16]. Thus, we think that the full potential of the Mapper Algorithm in _Industry 4.0_ has not yet been capitalized on. From the perspective of data types, the data type with the most applications of TDA in this context is time series data. Here, a very popular task is the detection of chatter in machining processes. This application has received a lot of attention in the past years, where the application of TDA is only one of many, but very competitive, approaches. Still, the fact that more classical TDA data, like point clouds, are not as popular as time series data in this context is a bit surprising. Looking at the TDA methods across data types, it can be observed that time series data are addressed using all methods under observation. The same holds for scalar fields, although here only one work uses the Mapper Algorithm, whereas the other methods are employed more often. An interesting observation is that the works employing the Mapper Algorithm apply it only to time series data (3) and to wafer maps (1). This is a bit of a surprise, since the Mapper Algorithm is mostly applied to clustering problems, a natural task for point clouds. Not a single work employing the Mapper Algorithm on point clouds was found in the context of this survey. It is not surprising that all works in cluster (B) are on time series data only, since processes, being fundamentally series of actions, are naturally described as a series of events in time.
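As a concrete illustration of this time-series-centric pipeline - and of how a time series is itself turned into a point cloud before PH is applied - the following minimal sketch performs a delay embedding and computes persistent homology, in the spirit of the chatter-detection works discussed above. The synthetic signals, the embedding parameters, and the use of the `ripser` package are illustrative assumptions, not the setup of any specific cited work.

```python
import numpy as np
from ripser import ripser  # persistent homology of point clouds (ripser.py)

def delay_embed(signal, dim=3, delay=5):
    """Takens-style sliding-window embedding: turn a 1D time series into a
    point cloud in R^dim whose topology reflects the signal's dynamics."""
    n = len(signal) - (dim - 1) * delay
    return np.stack([signal[i:i + n] for i in range(0, dim * delay, delay)], axis=1)

t = np.linspace(0, 20 * np.pi, 2000)
stable = np.sin(t)                              # periodic, "chatter-free" stand-in
chatter = np.sin(t) + 0.6 * np.sin(2.37 * t)    # quasi-periodic, "chatter-like" stand-in

for name, sig in [("stable", stable), ("chatter", chatter)]:
    cloud = delay_embed(sig)
    # subsample for speed; H1 persistence summarizes the loop structure of the dynamics
    dgm_h1 = ripser(cloud[::5], maxdim=1)["dgms"][1]
    lifetimes = dgm_h1[:, 1] - dgm_h1[:, 0]
    print(name, "max H1 lifetime:", lifetimes.max() if len(lifetimes) else 0.0)
```

In practice, such lifetimes (or vectorizations of the diagrams) would feed a supervised classifier rather than be inspected directly.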
However, many other data in _Industry 4.0_ can be modelled as point clouds and analyzed with the Mapper Algorithm. Therefore, we encourage practitioners dealing with this type of data to consider the Mapper Algorithm for their analyses. For the future, we would like to see TDA being applied to more applications in the context of industrial production, because we see a lot of potential where this could be beneficial. As can be seen within this work, the number of publications is still low, but it is growing. So far, this discussion has already highlighted some areas with potential for future research, as well as some underexploited combinations of input types and methods. However, there are still other areas where we see potential for future research. First, we see a huge potential for TDA applied to behavioral data. Runtime measurement data from machinery offers rich opportunities for analysis, not only for the detection of chatter [28], but also for other tasks, such as the detection of anomalies in the production process, predictive maintenance, or the detection of security incidents. A major topic in OT is cyber-security and the protection of cyber-physical systems, OT security in short [46]. The design of intrusion detection and prevention systems is one of many measures to protect OT environments. There are various approaches to such systems, and a data-driven approach to anomaly detection is one of them, which again can be phrased as a time-series classification task. The application of TDA methods appears to be a natural fit for this task, yet only one work has been found to use TDA, namely by employing UMAP for dimensionality reduction. Here we expect a lot more to come in the future, including the application of PH for anomaly detection. Lastly, we would like to see, with a few years of distance, a recurring survey on the same topic as this work. This would allow observing the development of the application of TDA in this domain and give insights into how the field evolves. Given the nature of the method used in this work, the survey is highly reproducible and open for other researchers to extend.

## VII Conclusion

This work presents an overview of the current literature on TDA in industrial production and manufacturing processes. We show the interconnection of the methods from TDA and the domain of industrial production and manufacturing processes. This work contributes to the current literature by providing a comprehensive overview of the state of the art on the application of TDA in industrial production and manufacturing processes, showing the application areas and the TDA methods being applied, and highlighting the underutilized combinations of application areas and methods of TDA. By employing a transparent and strict method for the search and identification of literature, we ensure the reproducibility of this work. This method exposed \(27\) relevant works, which are analyzed in more detail. These works were clustered manually and assigned to one of three categories, namely _Quality Control on Product Level_, _Quality Control on Process Level_, and _Manufacturing Engineering_. All works are briefly discussed, stating the data format used and the TDA method employed for their specific use-case. With this work, we show that TDA is a particularly well-suited method for the analysis of complex data sets from sensors and other devices in the domain of industrial production and manufacturing processes.
Furthermore, we show that the application of TDA in this domain is still in its infancy and that there is a lot of potential for future research.
2301.07475
Curvilinear object segmentation in medical images based on ODoS filter and deep learning network
Automatic segmentation of curvilinear objects in medical images plays an important role in the diagnosis and evaluation of human diseases, yet it remains challenging in complex segmentation tasks due to different issues such as various image appearances, low contrast between curvilinear objects and their surrounding backgrounds, thin and uneven curvilinear structures, and improper background illumination conditions. To overcome these challenges, we present a unique curvilinear structure segmentation framework based on an oriented derivative of stick (ODoS) filter and a deep learning network for curvilinear object segmentation in medical images. Currently, a large number of deep learning models emphasize developing deep architectures and ignore capturing the structural features of curvilinear objects, which may lead to unsatisfactory results. Consequently, a new approach that incorporates an ODoS filter as part of a deep learning network is presented to improve the spatial attention of curvilinear objects. Specifically, the input image is transferred into a four-channel image constructed by the ODoS filter. In this representation, the original image is considered the principal part to describe various image appearances and complex background illumination conditions, a multi-step strategy is used to enhance the contrast between curvilinear objects and their surrounding backgrounds, and a vector field is applied to discriminate thin and uneven curvilinear structures. Subsequently, a deep learning framework is employed to extract various structural features for curvilinear object segmentation in medical images. The performance of the computational model is validated in experiments conducted on the publicly available DRIVE, STARE and CHASEDB1 datasets. The experimental results indicate that the presented model yields surprising results compared with those of some state-of-the-art methods.
Yuanyuan Peng, Lin Pan, Pengpeng Luan, Hongbin Tu, Xiong Li
2023-01-18T12:41:12Z
http://arxiv.org/abs/2301.07475v3
# Curvilinear object segmentation in medical images based on ODoS filter and deep learning network

###### Abstract

Automatic segmentation of curvilinear objects in medical images plays an important role in the diagnosis and evaluation of human diseases, yet it remains challenging in complex segmentation tasks due to different issues like various image appearances, low contrast between curvilinear objects and their surrounding backgrounds, thin and uneven curvilinear structures, and improper background illumination. To overcome these challenges, we present a unique curvilinear structure segmentation framework based on an oriented derivative of stick (ODoS) filter and a deep learning network for curvilinear object segmentation in medical images. Currently, a large number of deep learning models emphasize developing deep architectures and ignore capturing the structural features of curvilinear objects, which may lead to unsatisfactory results. In consequence, a new approach that incorporates the ODoS filter as part of a deep learning network is presented to improve the spatial attention of curvilinear objects. In this approach, the original image is considered the principal part to describe various image appearances and complex background illumination, a multi-step strategy is used to enhance the contrast between curvilinear objects and their surrounding backgrounds, and a vector field is applied to discriminate thin and uneven curvilinear structures. Subsequently, a deep learning framework is employed to extract various structural features for curvilinear object segmentation in medical images. The performance of the computational model was validated in experiments with the publicly available DRIVE, STARE and CHASEDB1 datasets. Experimental results indicate that the presented model yields surprising results compared with some state-of-the-art methods.

## 1 Introduction

Curvilinear object segmentation is often encountered in medical image processing and analysis. It is crucial in applications like early detection of COVID-19, retinal fundus disease screening, and the diagnosis and treatment of lung diseases [1-5]. Compared with generic and unified structure segmentation in medical images, curvilinear object segmentation faces many unique challenges: 1) low contrast between curvilinear objects and their surrounding backgrounds; 2) thin and uneven curvilinear structures; 3) various complex image representations. In consequence, many methods have been presented to cope with these challenges for curvilinear object segmentation. Numerous algorithms have been proposed for curvilinear object segmentation in medical images, which can be divided into three different categories. The first category generally designed distinctive handmade features to highlight curvilinear structure representation, such as Hessian matrix-based filters [6,7], tensor-based filters [8,9], stick-based filters [10,11], dynamic evolution models [12,13], active contour models [14,15], and level sets [16]. The second strategy mainly used deep learning methods to isolate and detect thin curved lines in medical images under different frameworks like Unet networks [17,18], channel and spatial attention networks [19,20], teacher-student networks [21,22], the transformer method and its variants [23,24], DTUNet [25], and CS2Net [26].
The third category typically combined deep learning networks and handmade features to preserve the inherent curvilinear features and improve the segmentation accuracy of curvilinear structures, such as D-GaussianNet [27], local intensity order transformation (LIOT) [28], and the combination of a CNN-based method and a geometry method [29]. Although plausible results obtained with different algorithms have been published, segmentation of weak curvilinear objects still remains a formidable task in medical images. Based on the observation that curvilinear objects have unique shape and structure characteristics, a direct strategy is to detect the thin and uneven curvilinear structures by extracting engineered features in medical images. Following this strategy, Li et al. proposed a hardware-oriented method to accurately acquire curvilinear lines based on the Hessian matrix [6]. Using a different strategy, Xiao et al. presented a derivative of stick filter consisting of three parallel and gapped sticks to highlight curvilinear structure representation [30], but this distinctive approach used only the magnitude information, ignoring orientation information. To cope with this problem, Peng et al. designed a unique strategy that takes advantage of both orientation and magnitude information to distinguish between curvilinear objects and other structures [31]. Recently, Liu et al. applied geodesic models to alleviate the shortcuts combination problem for curvilinear structure extraction [32]. However, traditional handmade feature extraction algorithms only extract a few features and cannot accurately detect curvilinear structures. In order to make up for the defects of traditional methods, a large number of deep learning models have been proposed to segment curvilinear objects and have achieved significant improvements over previous methods in medical images. For example, Ma et al. presented a cascaded deep neural network based on an anatomy-aware dual branch to achieve curvilinear object segmentation in medical images [33]. Similarly, Roy et al. presented a multi-view deep learning network to detect bright thin curved lines [17]. Later, Li et al. designed a novel IterNet based on structural redundancy to highlight curvilinear structure representation for the diagnosis of retinal vascular diseases [34]. Yet, their methods based on the Unet model generally neglect the semantic information and fail to accurately acquire the global features in medical images. Using a different approach, Li et al. presented a global transformer and dual local attention network to capture global and local characterizations and achieve satisfactory results in curvilinear object segmentation [23]. Recently, Mou et al. introduced a self-attention mechanism in the Unet model to learn rich hierarchical representations of curvilinear structures [26]. However, deep learning models mainly emphasize finding powerful architectures and ignore capturing the inherent curvilinear features in medical images. Recently, many distinctive approaches based on deep learning networks and handmade features have been presented to improve the generalizability and the segmentation performance in medical images. For instance, Alvarado-Carrillo et al. designed a new technique that combines distorted Gaussian matched filters with adaptive parameters as part of a deep convolutional architecture to detect complicated curvilinear shapes [27]. Using a different approach, Shi et al.
introduced a novel local intensity order transformation to accurately achieve feature description for curvilinear object segmentation in medical images [28]. Although the above methods can achieve very good results in the segmentation of curvilinear structures in medical images, detection of weak curvilinear objects still remains an enormous challenge. In this study, a valuable and reliable scheme based on an ODoS filter and a deep learning model is presented for curvilinear structure segmentation in medical images. Inspired by previously published works [27; 28], a novel strategy that incorporates the ODoS filter as part of a deep learning network is designed to distinguish between curvilinear structures and undesirable tissues. Subsequently, a deep learning framework is employed to extract various structural features for curvilinear object segmentation in medical images. The main contributions of our work are as follows:

A novel strategy that incorporates the ODoS filter as part of a deep learning network is presented to overcome different complex issues in curvilinear object segmentation, like various image appearances, low contrast between curvilinear objects and their surrounding backgrounds, thin and uneven curvilinear structures, and complex background illumination.

The IterNet network is applied to find obscured details of the curvilinear objects from the improved ODoS filter, rather than the raw input image. The novel strategy has a good performance in grasping the spatial information of curvilinear objects in medical images.

The merits of deep learning networks and handmade features are tightly integrated to generate an efficient image processing application framework for curvilinear object segmentation in medical images.

The rest of this paper is organized as follows. We describe the materials and methods in detail in Section 2, followed by extensive experimental results to verify the validity of our scheme in Section 3. In Section 4, the advantages and disadvantages of different state-of-the-art methods are compared to further illustrate that our scheme can effectively segment curvilinear objects in medical images. Finally, we conclude and give some perspectives in Section 5.

## 2 Materials and Methods

### Datasets and evaluation protocol

In order to illustrate the validity of the presented method, we used three popular datasets, i.e., DRIVE [35], STARE [36], and CHASEDB1 [37], in our experiments. The DRIVE dataset contains 40 565\(\times\)584 color retinal images, which are divided into 20 training images and 20 test images. The STARE dataset includes 20 700\(\times\)605 retinal images, which are split into 10 training images and 10 test images. The CHASEDB1 dataset consists of 28 999\(\times\)960 images, which are separated into 20 training images and 8 test images. Following the typical evaluation protocol for curvilinear object segmentation, the true positives (TP), false positives (FP), false negatives (FN), and true negatives (TN) are computed to assess the presented method for curvilinear object segmentation. Then, the classical accuracy (ACC), sensitivity (SE), specificity (SP), and F1-score (F1) are used to evaluate the segmentation performance.
Mathematically,

\[SE=\frac{TP}{TP+FN} \tag{1}\]
\[SP=\frac{TN}{TN+FP} \tag{2}\]
\[PR=\frac{TP}{TP+FP} \tag{3}\]
\[ACC=\frac{TP+TN}{TP+FP+TN+FN} \tag{4}\]
\[F1=2\cdot\frac{PR\cdot SE}{PR+SE} \tag{5}\]

Here, ACC indicates the proportion of correctly classified positive and negative cases among all samples, and F1 represents the similarity between the ground truth and the segmentation results.

### Overview of the proposed scheme

In this study, we present a novel approach based on an ODoS filter and a deep learning network for curvilinear object segmentation in medical images. Firstly, a new approach that incorporates the improved ODoS filter as part of a deep learning network is presented to focus the spatial attention on curvilinear objects in medical images. Subsequently, a deep learning framework is employed to extract various structural features for curvilinear object segmentation in medical images.

### The improved ODoS filter

In this study, a novel strategy is presented to meet many challenges in curvilinear object segmentation, such as various image appearances, low contrast between curvilinear objects and their surrounding backgrounds, thin and uneven curvilinear structures, and complex background illumination. As shown in Fig. 1, the original image is considered the principal part to describe various image appearances and complex background illumination, the multi-step strategy is used to enhance the contrast between curvilinear objects and their surrounding backgrounds, and the vector field is applied to detect weak, thin and uneven curvilinear structures. Different medical images have their own unique structures, appearances and backgrounds. To overcome this problem, the original images are used to describe the various image appearances and complex background illumination. The low contrast between curvilinear structures and their surrounding tissues is another challenge for curvilinear object segmentation. To solve this problem, the multi-step strategy is presented to enhance curvilinear objects and suppress other structures. The idea stems from our previously published works [30, 31]: we use three sticks (the left (Ls), middle (Ms), and right (Rs)) to describe the structural difference between curvilinear structures and their surrounding tissues, as shown in Fig. 2, where \(\theta\) represents the orientation and \(S\) is the inter-stick spacing.

Fig. 1: Multi-feature fusion.

Fig. 2: The filtering kernel. (The picture originates from Xiao et al. [30] with permission.)

Using \(u_{L}\), \(u_{M}\), and \(u_{R}\) to represent the average intensity along the three sticks, two linear enhancement templates are defined as follows:

\[\lambda_{\perp,\max}^{S,\theta}\left(x\right)=\max(u_{M}-u_{L},u_{M}-u_{R}) \tag{6}\]
\[\lambda_{\perp,\min}^{S,\theta}\left(x\right)=\min(u_{M}-u_{L},u_{M}-u_{R}) \tag{7}\]

Unfortunately, undesired blob shapes are always enhanced simultaneously in medical images. As a result, an intensity standard deviation is applied to alleviate this problem:

\[\lambda_{\parallel}^{S,\theta}\left(x\right)=\sqrt{E\left(I_{j}^{2}\right)-\left(E\left(I_{j}\right)\right)^{2}} \tag{8}\]

Here, \(E\) and \(I_{j}\) denote the expected value operator and the intensity of the jth pixel along the middle stick.
Therefore, two curvilinear structure measure functions can be defined as

\[\ell_{\max}^{S,\theta}\left(x\right)=\lambda_{\perp,\max}^{S,\theta}\left(x\right)-\kappa\cdot\lambda_{\parallel}^{S,\theta}\left(x\right) \tag{9}\]
\[\ell_{\min}^{S,\theta}\left(x\right)=\lambda_{\perp,\min}^{S,\theta}\left(x\right)-\kappa\cdot\lambda_{\parallel}^{S,\theta}\left(x\right) \tag{10}\]

Here, \(\kappa\) is equal to 0.7 [30]. During the rotation of the filter kernel, the curvilinear structure measure function values change accordingly. Thus, the corresponding multiple-direction integration expressions can be represented as

\[F_{\max}^{S}\left(x\right)=\max(\max_{1\leq i\leq 2\cdot\left(L-1\right)}\left(\ell_{\max}^{S,\theta_{i}}\right),0) \tag{11}\]
\[F_{\min}^{S}\left(x\right)=\max(\max_{1\leq i\leq 2\cdot\left(L-1\right)}\left(\ell_{\min}^{S,\theta_{i}}\right),0) \tag{12}\]

Inspired by the application of Hessian matrix decomposition in tubular structure segmentation, a max-min cascaded strategy can be used to suppress step-edge structures. A mathematical expression is as follows:

\[F^{S}\left(x\right)=F_{\max}^{S}\left(x\right)\,o\,F_{\min}^{S}\left(x\right) \tag{13}\]

Here, \(o\) denotes the cascading operator, which means that the \(F_{\min}\) filtering is applied after the \(F_{\max}\) filtering. However, the filtering kernel cannot be directly applied to complex curvilinear structures with different thicknesses. As a result, the multi-step strategy is used to enhance the contrast between curvilinear objects and their surrounding backgrounds. As shown in Fig. 3, if the step is too small, part of the curvilinear structures cannot be detected, while if the step is too large, many undesired shapes cannot be suppressed. Therefore, a multi-step strategy is applied to discriminate curvilinear structures from their surrounding tissues in medical images. Nevertheless, the greatest challenge is the difficulty of detecting weak, thin and uneven curvilinear structures. Inspired by our previously published methods [10; 30; 31], the vector field is applied to remedy this drawback. The vector representation can be written as

\[\theta_{\text{max}}^{i}=\begin{cases}\arg\max(\ell_{\text{max}}^{S,\theta_{i}}),&\text{if }\ell_{\text{max}}^{S,\theta_{i}}>0\\ NaN,&\text{else}\end{cases} \tag{14}\]

Equivalently,

\[\vec{V}_{\text{max}}\left(\theta_{\text{max}}^{i}\right)=\begin{cases}\left(\cos\theta_{\text{max}}^{i},\sin\theta_{\text{max}}^{i}\right),&\text{if }\ell_{\text{max}}^{S,\theta_{i}}>0\\ NaN,&\text{else}\end{cases} \tag{15}\]

Unfortunately, it is difficult to represent vectors with characteristic symbols in two-dimensional images. Based on the fact that the vector \(\vec{V}_{\text{max}}\) has 2(L-1) different directions, we use 2(L-1) symbols to indicate the different vectors. This approach can not only represent the different vectors compactly, but also saves space. To investigate the effect of the presented approach, an original image is shown in Fig. 4(a) and the \(\vec{V}_{\text{max}}\) vector field is drawn with green arrows in Fig. 4(b). Fig. 4(c) shows a zoomed rectangular region of Fig. 4(b). For comparison, the transform domain is given in Fig. 4(d). As observed, the transform domain can represent the direction of curvilinear structures better than the vector field. Finally, the distinctive strategy that incorporates the improved ODoS filter as part of a deep learning network is presented to highlight the inspection of shape, width, tortuosity, and other characteristics for curvilinear object segmentation.
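To make Eqs. (6)-(12) more concrete, the following is a minimal single-scale NumPy sketch of a stick-filter response in this spirit. It is an illustrative approximation only, not the authors' implementation: the kernel construction, the way the left/right sticks are obtained by shifting the middle-stick response, and all parameter values are assumptions.

```python
import numpy as np
from scipy import ndimage

def stick_kernel(half_len, angle):
    """Averaging kernel along a stick of orientation `angle` (radians)."""
    size = 2 * half_len + 1
    k = np.zeros((size, size))
    for t in np.linspace(-half_len, half_len, 4 * half_len + 1):
        r = int(round(half_len + t * np.sin(angle)))
        c = int(round(half_len + t * np.cos(angle)))
        k[r, c] = 1.0
    return k / k.sum()

def stick_response(img, half_len=4, spacing=2, kappa=0.7, n_angles=8):
    """Single-scale response in the spirit of Eqs. (6)-(12)."""
    img = img.astype(float)
    f_max = np.zeros_like(img)   # plays the role of F_max, Eq. (11)
    f_min = np.zeros_like(img)   # plays the role of F_min, Eq. (12)
    for angle in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        k = stick_kernel(half_len, angle)
        u_m = ndimage.convolve(img, k, mode="nearest")           # middle stick
        # left/right sticks: middle-stick average shifted perpendicular to the stick
        off = (spacing * np.cos(angle), -spacing * np.sin(angle))
        u_l = ndimage.shift(u_m, off, order=1, mode="nearest")
        u_r = ndimage.shift(u_m, (-off[0], -off[1]), order=1, mode="nearest")
        lam_max = np.maximum(u_m - u_l, u_m - u_r)                # Eq. (6)
        lam_min = np.minimum(u_m - u_l, u_m - u_r)                # Eq. (7)
        var = ndimage.convolve(img ** 2, k, mode="nearest") - u_m ** 2
        lam_par = np.sqrt(np.clip(var, 0.0, None))                # Eq. (8)
        f_max = np.maximum(f_max, lam_max - kappa * lam_par)      # Eqs. (9), (11)
        f_min = np.maximum(f_min, lam_min - kappa * lam_par)      # Eqs. (10), (12)
    return f_max, f_min

# toy image: a bright diagonal line on a dark background
img = np.zeros((64, 64))
rr = np.arange(10, 54)
img[rr, rr] = 1.0
f_max, f_min = stick_response(img)
print(f_max.max(), f_min.max())
```

Under these assumptions, the cascade of Eq. (13) would amount to applying the same procedure again on `f_max`, and the orientation of the best-responding stick at each pixel would give the vector field of Eqs. (14)-(15).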
### Deep learning framework

Inspired by previously published works [27, 28, 34], an improved deep learning framework that incorporates the ODoS filter as part of the IterNet is presented. The purpose of this strategy is to alleviate the dependence of traditional deep learning techniques on large datasets by adding robustness through a hybrid design that integrates both prior knowledge about the curvilinear objects and deep learning models. IterNet, which consists of multiple iterations of a mini-Unet, uses skip-connections and weight-sharing to facilitate training [34]. Unlike the traditional IterNet model, the improved ODoS filter constitutes the first convolutional layer of the presented framework, followed by two conventional layers. In other words, the improved ODoS filter, which considers structural features, is placed at the beginning of the presented deep learning framework so that the spatial attention focuses on curvilinear structures in medical images. Besides that, we keep the following layers the same as in the original IterNet [28, 34].

Figure 4: Vector field transform.

Figure 5: The improved IterNet model.

## 3 Experimental results

The presented model was implemented in the PyTorch library with an NVIDIA GeForce RTX 3060. The overall optimizer is adaptive moment estimation (Adam), the initial learning rate is set to 0.0001, and the maximum number of epochs is equal to 100. To facilitate the observation and objective evaluation of the presented model, we apply our scheme to three widely used datasets: DRIVE [35], STARE [36] and CHASEDB1 [37]. As a comparison, four state-of-the-art methods introduced by Shi et al. [28], Sha et al. [38], Cao et al. [39] and Chen et al. [40] were implemented and applied to the same datasets. Both visual inspection and quantitative evaluation illustrate that our scheme has a good performance in curvilinear object segmentation in medical images.

### Visual inspection

To demonstrate the effectiveness of the proposed framework, we apply the model trained on the three datasets. As illustrated in Fig. 6, the original images and the corresponding ground truth are given in the first and second rows. The curvilinear objects segmented with Transformer-Unet [28], Swin-Unet [38], TransUnet [39], LIOT [40] and the presented framework are drawn in the subsequent rows. As labeled with blue shapes, the compared methods cannot detect weak, thin and uneven curvilinear structures completely. The main reason is that these compared methods focus on developing deep architectures and ignore capturing the structural features of curvilinear objects. On the contrary, the presented model can describe the curvilinear structure features well by introducing an ODoS filter to separate the curvilinear structures from the background more effectively.

Figure 6: Experimental results with different methods, validated on the DRIVE, STARE, and CHASEDB1 datasets.

### Quantitative evaluation

The presented method was validated on the three publicly available datasets. Table 1 reports the evaluation results. Compared with the four state-of-the-art methods, the presented method has better mean F1-score and accuracy values in most cases. Such good results are attributed to the excellent capturing of curvilinear structure features by the presented model.
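For concreteness, the metrics of Eqs. (1)-(5) can be computed from a binary prediction and its ground truth as in the following sketch; this is a generic illustration, not the authors' evaluation code, and the random masks stand in for real segmentations.

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Compute SE, SP, PR, ACC and F1 (Eqs. (1)-(5)) from binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)
    tn = np.sum(~pred & ~gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    pr = tp / (tp + fp)
    acc = (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 * pr * se / (pr + se)
    return {"SE": se, "SP": sp, "PR": pr, "ACC": acc, "F1": f1}

rng = np.random.default_rng(0)
gt = rng.random((584, 565)) < 0.1           # stand-in vessel mask
pred = gt ^ (rng.random(gt.shape) < 0.02)   # noisy stand-in prediction
print(segmentation_metrics(pred, gt))
```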
Both visual inspection and quantitative evaluation exhibited that the presented deep learning framework can outperform these state-of-the-art methods [28, 38, 39, 40].

Table 1: **Quantitative evaluation with different methods.**

| Dataset | Year | Methods | F1 score | ACC | SE | SP |
| --- | --- | --- | --- | --- | --- | --- |
| DRIVE | 2021 | Unet Transformer | 0.519 | 0.946 | 0.527 | 0.956 |
| DRIVE | 2021 | Swin-Unet | 0.671 | 0.925 | 0.070 | 0.942 |
| DRIVE | 2021 | TransUnet | 0.099 | 0.126 | 0.053 | 0.954 |
| DRIVE | 2022 | LIOT | 0.814 | 0.953 | 0.811 | 0.974 |
| DRIVE | | Ours | **0.826** | **0.969** | **0.828** | **0.983** |
| STARE | 2021 | Unet Transformer | 0.546 | 0.922 | 0.563 | 0.942 |
| STARE | 2021 | Swin-Unet | 0.623 | 0.933 | 0.567 | 0.984 |
| STARE | 2021 | TransUnet | 0.576 | 0.865 | 0.534 | 0.965 |
| STARE | 2022 | LIOT | 0.810 | 0.960 | 0.816 | 0.977 |
| STARE | | Ours | **0.819** | **0.973** | **0.823** | **0.987** |
| CHASEDB1 | 2021 | Unet Transformer | 0.158 | 0.870 | 0.174 | 0.925 |
| CHASEDB1 | 2021 | Swin-Unet | 0.626 | 0.954 | 0.605 | 0.978 |
| CHASEDB1 | 2021 | TransUnet | 0.129 | 0.889 | 0.113 | 0.951 |
| CHASEDB1 | 2022 | LIOT | 0.783 | 0.969 | 0.793 | 0.980 |
| CHASEDB1 | | Ours | **0.784** | **0.970** | **0.807** | **0.982** |

## 4 Discussion

In this paper, a unique method based on an ODoS filter and a deep learning network is presented for curvilinear object segmentation in medical images. The presented method has many specific characteristics and advantages. First, a unique approach that incorporates the improved ODoS filter as part of a deep learning network is presented to focus the spatial attention on curvilinear objects in medical images, whereas many deep learning methods pay most attention to developing deep architectures and do little to capture the structural features of curvilinear objects, which may lead to unsatisfactory results. Second, the IterNet network is applied to find obscured details of the curvilinear objects from the improved ODoS filter, rather than the raw input image. This novel strategy has a good performance in grasping the spatial information of curvilinear objects in medical images. Third, the merits of deep learning networks and handmade features are tightly integrated to generate an efficient image processing application framework for curvilinear object segmentation in medical images. Last, the presented deep learning framework is expected to preserve the completeness of weak curvilinear object detection while maximally eliminating unrelated interferences. Detection of weak curvilinear objects in medical images is important in clinical diagnosis. The proposed method was validated in a comparative way using three publicly available datasets. Both visual inspection and quantitative evaluation indicate that the presented approach can outperform these state-of-the-art methods [28, 38-40] in terms of weak object detection. Compared with the manually defined ground truth, the presented method has better mean F1-score and accuracy values in most cases. The main reason is that the compared methods focus on developing deep architectures and ignore capturing the structural features of curvilinear objects. On the contrary, the combination of structuring elements and deep learning methods produces better segmentation results in curvilinear object segmentation. Compared with these state-of-the-art methods [28, 38-40], the presented method appears more efficient for weak curvilinear object detection.
This is ascribed to a well-designed combination of the improved ODoS filter and the deep learning model. Nevertheless, the presented method has some drawbacks. The primary limitation of the presented method compared with these state-of-the-art methods is a longer computation time. The second limitation is that this method is mainly designed to detect curvilinear objects and cannot be used to detect other shapes. Although the presented method has some disadvantages compared with these state-of-the-art methods, the completeness of weak curvilinear object detection has been improved with our scheme.

## 5 Conclusion

In this paper, we present a distinctive approach based on an ODoS filter and a deep learning network to focus the spatial attention for curvilinear object segmentation in medical images. Motivated by the observation that curvilinear lines and their adjacent interfering tissues often have a large magnitude and direction difference, a new method that incorporates the ODoS filter as part of a deep learning network is presented to emphasize the spatial attention on curvilinear objects in medical images. Another contribution of our scheme is to alleviate the dependence of traditional deep learning techniques on large datasets by adding robustness through a hybrid design that integrates both prior knowledge about the curvilinear objects and deep learning models. The performance of the computational model was validated in experiments with the publicly available DRIVE, STARE and CHASEDB1 datasets. Both visual inspection and quantitative evaluation exhibited that the presented deep learning framework can outperform these state-of-the-art methods in curvilinear object segmentation. In the future, more sophisticated image transformations are worthy of investigation for curvilinear segmentation. In addition, designing more effective deep learning models [41, 42] will remain our focus.

## Funding

This research was supported by the Jiangxi Provincial Natural Science Foundation (nos. 20212BAB202007, 20202BAB212004, 20204BCJL23035, 20192ACB21004, 20181BAB202017), the Hunan Provincial Natural Science Foundation (no. 2021JJ30165), the Hunan Special Funds for the Construction of Innovative Province (Huxiang High-level Talent Gathering Project-Innovative talents) (no. 2019RS1072), the Educational Science Research Project of China Institute of Communications Education (no. JJYB20-33), the Scientific and Technological Research Project of the Education Department of Jiangxi Province (nos. GJJ190356, GJJ210645) and the Science and Technology Project of Changsha City (no. kq2001014).

## Disclosures

The authors declare no conflicts of interest.

## Data availability

Data underlying the results presented in this paper are available in Ref. [35], Ref. [36] and Ref. [37].
2305.15586
Manifold Diffusion Fields
We present Manifold Diffusion Fields (MDF), an approach that unlocks learning of diffusion models of data in general non-Euclidean geometries. Leveraging insights from spectral geometry analysis, we define an intrinsic coordinate system on the manifold via the eigen-functions of the Laplace-Beltrami Operator. MDF represents functions using an explicit parametrization formed by a set of multiple input-output pairs. Our approach allows to sample continuous functions on manifolds and is invariant with respect to rigid and isometric transformations of the manifold. In addition, we show that MDF generalizes to the case where the training set contains functions on different manifolds. Empirical results on multiple datasets and manifolds including challenging scientific problems like weather prediction or molecular conformation show that MDF can capture distributions of such functions with better diversity and fidelity than previous approaches.
Ahmed A. Elhag, Yuyang Wang, Joshua M. Susskind, Miguel Angel Bautista
2023-05-24T21:42:45Z
http://arxiv.org/abs/2305.15586v2
# Manifold Diffusion Fields ###### Abstract We present Manifold Diffusion Fields (MDF), an approach to learn generative models of continuous functions defined over Riemannian manifolds. Leveraging insights from spectral geometry analysis, we define an intrinsic coordinate system on the manifold via the eigen-functions of the Laplace-Beltrami Operator. MDF represents functions using an explicit parametrization formed by a set of multiple input-output pairs. Our approach allows to sample continuous functions on manifolds and is invariant with respect to rigid and isometric transformations of the manifold. Empirical results on several datasets and manifolds show that MDF can capture distributions of such functions with better diversity and fidelity than previous approaches. ## 1 Introduction Machine Learning's pivotal challenge lies in approximating probability distributions from finite observational datasets, with recent strides made in areas like text [8], images [44], and video [29]. The burgeoning interest in diffusion generative models [28; 44; 49] can be attributed to their stable optimization goals, surpassing previous approaches like generative adversarial nets (GANs) [21] or energy-based models (EBMs) [12]. Notably, diffusion generative models demonstrate superior performance over GANs in image-related tasks [10] and exhibit fewer training anomalies [35]. However, fully utilizing the potential of these models across scientific and engineering disciplines remains an open problem. While diffusion generative models excel in domains with Euclidean (_i.e._ flat) spaces like 2D images or 3D geometry and video, many scientific problems involve reasoning about continuous functions on curved spaces (_i.e._ Riemannian manifolds). Examples include climate observations on the sphere [26; 39] or solving PDEs on curved surfaces, which is a crucial problem in areas like quantum mechanics [5] and computational chemistry [20]. In this paper, we introduce Manifold Diffusion Fields (MDF), presenting a step forward in the problem of learning probabilistic models over functions (or fields) defined on Riemannian manifolds. We take the term _function_ and _field_ to have equivalent meaning throughout the paper. These fields \(f:\mathcal{M}\to\mathcal{Y}\) map points from a Riemannian manifold \(\mathcal{M}\) to corresponding values in signal space \(\mathcal{Y}\). MDF is trained on collections of multiple fields and learns a generative model that can sample different fields over a Riemannian manifold. As an illustrative example, let us take the case in which \(\mathcal{M}\) corresponds to a 2-manifold immersed in 3D ambient space and functions \(f\) map points on the manifold to \(\mathbb{R}\). In Fig. 5 we show real samples of such functions for different manifolds, as well as samples generated by MDF. We identify two limitations to be addressed in order to model distributions of functions over manifolds via diffusion generative models. The first one is that manifolds are not naturally equipped with a canonical coordinate system to express points. The second issue is that functions are continuous and thus infinite dimensional, as opposed to points in \(\mathbb{R}^{d}\) like images. As a result, in this paper we make the following contributions: * We borrow insights from spectral geometry analysis to define a coordinate system for points in manifolds using the eigen-functions of the Laplace-Beltrami Operator. 
* We formulate an end-to-end generative model for functions defined on manifolds, allowing sampling different fields over a manifold. * We empirically demonstrate that our model captures distributions over functions on manifolds better than recent approaches like [14; 55], yielding diverse and high fidelity samples, while being robust to rigid and isometric manifold transformations. Experiments on climate modeling datasets [26] and PDE problems show the practicality of MDF in scientific domains. ## 2 Related Work Our approach extends recent efforts in generative models for continuous functions in euclidean space [11; 13; 14; 55], shown Fig. 1(a), to functions defined over manifolds, see Fig. 1(b). The term Implicit Neural Representation (INR) is used in these works to denote a parameterization of a single function (_e.g._ a single image in 2D) using a neural network that maps the function's inputs (_i.e._ pixel coordinates) to its outputs (_i.e._RGB values). Different approaches have been proposed to learn distributions over fields in euclidean space, GASP [14] leverages a GAN whose generator produces field data whereas a point cloud discriminator operates on discretized data and aims to differentiate real and generated functions. Two-stage approaches [11; 13] adopt a latent field parameterization [45] where functions are parameterized via a hyper-network [23] and a generative model is learnt on the latent or INR representations. In addition, MDF also relates to recent work focusing on fitting a function (_e.g._learning an INR) on a manifold using an intrinsic coordinate system [22; 36], and generalizes it to the problem of learning a probabilistic model over multiple functions defined on a manifold. Finally, recent approaches have used Laplacian embeddings for supervised learning problems on graphs [16; 24; 42; 48]. The learning problem we tackle with MDF can be interpreted as lifting the Riemannian generative modeling problem [7; 9; 19; 47] to function spaces. Fig. 1(b)(c) show the training setting for the two problems, which are related but not directly comparable. MDF learns a generative model over functions defined on manifolds, _e.g._ a probability density over functions \(f:\mathcal{M}\to\mathcal{Y}\) that map points in the manifold \(\mathcal{M}\) to a signal space \(\mathcal{Y}\). In contrast, the goal in Riemannian generative modeling is to learn a probability density from an observed set of points living in a Riemannian manifold \(\mathcal{M}\). For example, in the case of the bunny, shown in Fig. 1(c), a Riemannian generative model learns a distribution of points \(\mathbf{x}\in\mathcal{M}\) on the manifold. MDF is also related to work on Neural Processes [15; 18; 33], which also learn distributions over functions. As opposed to the formulation of Neural Processes which optimizes an ELBO [34] we formulate MDF as a denoising diffusion process in function space, which results in a robust training objective and a powerful inference process. Moreover, our work relates to formulations of Gaussian Processes (GP) on Riemannian manifolds [6; 30]. These approaches are GP formulations Figure 1: (a) Generative models of fields in Euclidean space [11; 13; 14; 55] learn a distribution \(p_{\theta}\) over **functions whose domain** is \(\mathbb{R}^{n}\). We show an example where each function is the result of evaluating a Gaussian mixture with 3 random components in 2D. 
(b) MDF learns a distribution \(p_{\theta}\) from a collection of **fields whose domain is a general Riemannian manifold**, \(f\sim q(f)|f:\mathcal{M}\to\mathcal{Y}\). Similarly, as an illustrative example, each function is the result of evaluating a Gaussian mixture with 3 random components on \(\mathcal{M}\) (_i.e._ the Stanford bunny). (c) Riemannian generative models [7; 9; 19; 47] learn a parametric distribution \(p_{\theta}\) from empirical observations \(\mathbf{x}\sim q(\mathbf{x})|\mathbf{x}\in\mathcal{M}\) of **points \(\mathbf{x}\) on a Riemannian manifold**\(\mathcal{M}\), denoted by black dots on the manifold. of Riemannian generative modeling (see Fig. 1), in the sense that they learn conditional distributions of points on the manifold, as opposed to distributions over functions on the manifold like MDF. ## 3 Preliminaries ### Denoising Diffusion Probabilistic Models Denoising Diffusion Probabilistic Models [28] (DDPMs) belong to the broad family of latent variable models. We refer the reader to [17] for an in-depth review. In short, to learn a parametric data distribution \(p_{\theta}(\mathbf{x}_{0})\) from an empirical distribution of finite samples \(q(\mathbf{x}_{0})\), DDPMs reverse a diffusion Markov Chain that generates latents \(\mathbf{x}_{1:T}\) by gradually adding Gaussian noise to the data \(\mathbf{x}_{0}\sim q(\mathbf{x}_{0})\) for \(T\) time-steps, with marginals given in closed form by \(q(\mathbf{x}_{t}|\mathbf{x}_{0}):=\mathcal{N}\left(\mathbf{x}_{t};\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0},(1-\bar{\alpha}_{t})\mathbf{I}\right)\). Here, \(\bar{\alpha}_{t}\) is the cumulative product of fixed variances with a handcrafted scheduling up to time-step \(t\). [28] introduce an efficient training recipe in which: i) The forward process admits sampling in closed form. ii) Reversing the diffusion process is equivalent to learning a sequence of denoising (or score) networks \(\epsilon_{\theta}\), with tied weights. Reparameterizing the forward process as \(\mathbf{x}_{t}=\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0}+\sqrt{1-\bar{\alpha}_{t}}\epsilon\) results in the "simple" DDPM loss: \(\mathbb{E}_{t\sim[0,T],\mathbf{x}_{0}\sim q(\mathbf{x}_{0}),\epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{I})}\left[\|\epsilon-\epsilon_{\theta}(\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0}+\sqrt{1-\bar{\alpha}_{t}}\epsilon,t)\|^{2}\right]\), which makes learning of the data distribution \(p_{\theta}(\mathbf{x}_{0})\) both efficient and scalable. At inference time, we compute \(\mathbf{x}_{0}\sim p_{\theta}(\mathbf{x}_{0})\) via ancestral sampling [28]. Concretely, we start by sampling \(\mathbf{x}_{T}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) and iteratively apply the score network \(\epsilon_{\theta}\) to denoise \(\mathbf{x}_{T}\), thus reversing the diffusion Markov Chain to obtain \(\mathbf{x}_{0}\). Sampling \(\mathbf{x}_{t-1}\sim p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\) is equivalent to computing the update: \(\mathbf{x}_{t-1}=\frac{1}{\sqrt{\alpha_{t}}}\left(\mathbf{x}_{t}-\frac{1-\alpha_{t}}{\sqrt{1-\bar{\alpha}_{t}}}\epsilon_{\theta}(\mathbf{x}_{t},t)\right)+\sigma_{t}\mathbf{z}\), where at each inference step a stochastic component \(\mathbf{z}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\), scaled by \(\sigma_{t}\), is injected, resembling sampling via Langevin dynamics [53]. In practice, DDPMs have obtained amazing results for signals living in a Euclidean grid [29; 44]. However, the extension to functions defined on curved manifolds remains an open problem.
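To make the above recipe concrete, here is a minimal, self-contained sketch of the closed-form forward noising and the "simple" DDPM loss; the linear variance schedule, the tiny MLP standing in for \(\epsilon_{\theta}\), and the toy 2-D data are illustrative assumptions, not the components used by MDF.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 2e-2, T)            # hand-crafted variance schedule (assumed)
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)         # cumulative product \bar{alpha}_t

# Placeholder for the score network eps_theta(x_t, t); not the MDF architecture.
eps_model = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2)
)

def ddpm_loss(x0):
    """Simple DDPM loss: || eps - eps_theta(sqrt(abar_t) x0 + sqrt(1-abar_t) eps, t) ||^2."""
    t = torch.randint(0, T, (x0.shape[0],))
    eps = torch.randn_like(x0)
    a = alpha_bar[t].unsqueeze(-1)
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * eps  # closed-form forward sample
    t_feat = t.float().unsqueeze(-1) / T          # crude time conditioning
    eps_hat = eps_model(torch.cat([x_t, t_feat], dim=-1))
    return ((eps - eps_hat) ** 2).mean()

loss = ddpm_loss(torch.randn(16, 2))              # toy 2-D "data"
loss.backward()
```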
### Riemannian Manifolds Previous work on Riemannian generative models [7; 9; 19; 47] develops machinery to learn distributions from a training set of points living on Riemannian manifolds. Riemannian manifolds are connected and compact manifolds \(\mathcal{M}\) equipped with a smooth metric \(g:\mathcal{M}\times\mathcal{M}\to\mathbb{R}_{+}\) (_e.g._ a function to measure distances between any two points in \(\mathcal{M}\)). A core tool in Riemannian manifolds is the tangent space, which defines the tangent hyper-plane at a point \(\mathbf{x}\in\mathcal{M}\) and is denoted by \(T_{\mathbf{x}}\mathcal{M}\). This tangent space \(T_{\mathbf{x}}\mathcal{M}\) is used to define inner products \(\langle\mathbf{u},\mathbf{v}\rangle_{g},\mathbf{u},\mathbf{v}\in T_{\mathbf{x}}\mathcal{M}\), which in turn defines \(g\). The tangent bundle \(T\mathcal{M}\) is defined as the collection of tangent spaces for all points, \(T_{\mathbf{x}}\mathcal{M}\ \forall\mathbf{x}\in\mathcal{M}\). In practice we cannot assume that for general geometries (_e.g._ general 3D meshes) one can efficiently compute \(g\). While it is possible to define an analytical form for the Riemannian metric \(g\) on simple parametric manifolds (_e.g._ hyper-spheres, hyperbolic spaces, tori), general geometries (_e.g._ the Stanford bunny) are inherently discrete and irregular, which can make it expensive to even approximate \(g\). To mitigate these issues, MDF is formulated from the ground up without relying on access to an analytical form for \(g\) or the tangent bundle \(T\mathcal{M}\) and allows for learning a distribution of functions defined on general geometries. ### Laplace-Beltrami Operator The Laplace-Beltrami Operator (LBO), denoted by \(\Delta_{\mathcal{M}}\), is one of the cornerstones of differential geometry and can be intuitively understood as a generalization of the Laplace operator to functions defined on Riemannian manifolds \(\mathcal{M}\). One of the basic uses of the Laplace-Beltrami operator is to define a functional basis on the manifold by solving the general eigenvalue problem associated with \(\Delta_{\mathcal{M}}\), which is a foundational technique in spectral geometry analysis [38]. The eigen-decomposition of \(\Delta_{\mathcal{M}}\) consists of the non-trivial solutions to the equation \(\Delta_{\mathcal{M}}\varphi_{i}=\lambda_{i}\varphi_{i}\). The eigen-functions \(\varphi_{i}:\mathcal{M}\to\mathbb{R}\) represent an orthonormal functional basis for the space of square integrable functions [38; 43]. Thus, one can express a square integrable function \(f:\mathcal{M}\to\mathcal{Y}\), with \(f\in L^{2}\), as a linear combination of the functional basis, as follows: \(f=\sum_{i=1}^{\infty}\langle f,\varphi_{i}\rangle\varphi_{i}\). In practice, the infinite sum is truncated to the \(k\) eigen-functions with the lowest eigen-values, where the ordering of the eigen-values \(\lambda_{1}<\lambda_{2}<\cdots<\lambda_{k}\) makes this truncation act as a low-pass filter of the basis. Moreover, [38] shows that the eigen-functions of \(\Delta_{\mathcal{M}}\) can be interpreted as a Fourier-like function basis [52] on the manifold, _i.e._ an intrinsic coordinate system for the manifold. In particular, if \(\mathcal{M}=S^{2}\) this functional basis is equivalent to spherical harmonics, and in Euclidean space it becomes a Fourier basis, which is typically used in implicit representations [54]. MDF uses the eigen-functions of the LBO \(\Delta_{\mathcal{M}}\) to define a Fourier-like positional embedding (PE) for points on \(\mathcal{M}\) (see Fig. 2).
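As a rough illustration of how such an intrinsic Fourier-like coordinate system can be computed for a discretized manifold, the sketch below builds the normalized graph Laplacian of a triangle mesh and extracts its lowest-frequency eigenvectors with SciPy. The function name and the use of a plain adjacency-based Laplacian (rather than, say, a cotangent Laplacian) are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def lbo_eigenfunctions(faces, n, k=32):
    """Return phi(x) for every vertex: the k lowest-frequency eigenvectors of the
    normalized graph Laplacian of a mesh with n vertices and faces (m x 3 indices)."""
    e = np.concatenate([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
    A = sp.coo_matrix((np.ones(len(e)), (e[:, 0], e[:, 1])), shape=(n, n))
    A = ((A + A.T) > 0).astype(float)                      # undirected adjacency
    d = np.asarray(A.sum(axis=1)).ravel()
    D = sp.diags(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = sp.eye(n) - D @ A @ D                              # normalized graph Laplacian
    vals, vecs = eigsh(L, k=k + 1, sigma=-1e-3, which="LM")  # smallest eigenpairs
    order = np.argsort(vals)
    return np.sqrt(n) * vecs[:, order[1:k + 1]]            # drop the constant mode

# Toy usage on a single triangle (3 vertices, 1 face):
phi = lbo_eigenfunctions(np.array([[0, 1, 2]]), n=3, k=1)
```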
## 4 Method MDF is a diffusion generative model that captures distributions over fields defined on a Riemannian manifold \(\mathcal{M}\). We are given observations in the form of an empirical distribution \(f_{0}\sim q(f_{0})\) over fields where a field \(f_{0}:\mathcal{M}\rightarrow\mathcal{Y}\) maps points from a manifold \(\mathcal{M}\) to a signal space \(\mathcal{Y}\). As a result, latent variables \(f_{1:T}\) are also fields on manifolds that can be continuously evaluated. To tackle the problem of learning a diffusion generative model over fields we employ a similar recipe to [55], generalizing from fields defined in ambient Euclidean space to functions on Riemannian manifolds. The key difference of MDF with respect to [55] is that instead of representing point coordinates in ambient space we use the first \(k\) eigen-functions \(\varphi_{i=1:k}\) of \(\Delta_{M}\) to define a Fourier-like representation on \(\mathcal{M}\). In practice, for general geometries (_e.g_. general 3D meshes with \(n\) vertices) we compute eigenvectors of the normalized graph Laplacian, which converges to the LBO \(\Delta_{\mathcal{M}}\) as \(n\rightarrow\infty\)[2; 3; 4]. The eigen-decomposition of the normalized graph Laplacian can be computed efficiently using sparse eigen-problem solvers [25] and only needs to be computed once during training. We use the term \(\varphi(\mathbf{x})=\sqrt{n}[\varphi_{1}(\mathbf{x}),\varphi_{2}(\mathbf{x}),\ldots,\varphi _{k}(\mathbf{x})]\in\mathbb{R}^{k}\) to denote the normalized eigen-function representation of a point \(\mathbf{x}\in\mathcal{M}\). In Fig. 2 we show a visual comparison of standard Fourier PE on Euclidean space and the eigen-functions of the LBO on a manifold. We adopt an explicit field parametrization [55], where a field is characterized by a set of coordinate-signal pairs \(\{(\varphi(\mathbf{x}_{c}),\mathbf{y}_{(c,0)})\}\), \(\mathbf{x}_{c}\in\mathcal{M}\), \(\mathbf{y}_{(c,0)}\in\mathcal{Y}\), which is denoted as _context set_. We row-wise stack the context set and refer to the resulting matrix via \(\mathbf{C}_{0}~{}=~{}[\varphi(\mathbf{X}_{c}),~{}\mathbf{Y}_{(c,0)}]\). Here, \(\varphi(\mathbf{X}_{c})\) denotes the eigen-function representation of the coordinate portion and \(\mathbf{Y}_{(c,0)}\) denotes the signal portion of the context set at time \(t=0\). We define the forward process for the context set by diffusing the signal and keeping the eigen-functions fixed: \[\mathbf{C}_{t}=[\varphi(\mathbf{X}_{c}),\mathbf{Y}_{(c,t)}=\sqrt{\bar{\alpha }_{t}}\mathbf{Y}_{(c,0)}+\sqrt{1-\bar{\alpha}_{t}}\epsilon_{c}], \tag{1}\] where \(\epsilon_{c}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) is a noise vector of the appropriate size. We now turn to the task of formulating a score network for fields. Following [55], the score network needs to take as input the context set (_i.e_. the field parametrization), and needs to accept being evaluated continuously in \(\mathcal{M}\). We do this by employing a _query set_\(\{\mathbf{x}_{q},\mathbf{y}_{(q,0)}\}\). Equivalently to the context set, we row-wise stack query pairs and denote the resulting matrix as \(\mathbf{Q}_{0}~{}=~{}[\varphi(\mathbf{X}_{q}),~{}\mathbf{Y}_{(q,0)}]\). Note that the forward diffusion process is equivalently defined for both context and query sets: \[\mathbf{Q}_{t}=[\varphi(\mathbf{X}_{q}),\mathbf{Y}_{(q,t)}=\sqrt{\bar{\alpha} _{t}}\mathbf{Y}_{(q,0)}+\sqrt{1-\bar{\alpha}_{t}}\epsilon_{q}], \tag{2}\] Figure 2: **Left:** Fourier PE of a point \(\mathbf{x}\) in 2D euclidean space. 
Generative models of functions in ambient space [11; 13; 14; 55] use this representation to encode a function’s input. **Right:** MDF uses the eigen-functions \(\varphi_{i}\) of the Laplace-Beltrami Operator (LBO) \(\Delta_{\mathcal{M}}\) evaluated at a point \(\mathbf{x}\in\mathcal{M}\). where \(\epsilon_{q}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) is a noise vector of the appropriate size. The underlying field is solely defined by the context set, and the query set contains the function evaluations to be de-noised. The resulting _score field_ model is formulated as follows: \(\hat{\epsilon}_{q}=\epsilon_{\theta}(\mathbf{C}_{t},t,\mathbf{Q}_{t})\). Using the explicit field characterization and the score field network, we obtain the training and inference procedures in Alg. 1 and Alg. 2, respectively, which are accompanied by illustrative examples of sampling a field encoding a Gaussian mixture model over the manifold (_i.e._ the bunny). For training, we uniformly sample context and query sets from \(f_{0}\sim\mathrm{Uniform}(q(f_{0}))\) and only corrupt their signal using the forward process in Eq. 1 and Eq. 2. We train the score field network \(\epsilon_{\theta}\) to denoise the signal portion of the query set, given the context set. During sampling, to generate a field \(f_{0}\sim p_{\theta}(f_{0})\) we first define a query set \(\mathbf{Q}_{T}=[\varphi(\mathbf{X}_{q}),\mathbf{Y}_{(q,T)}\sim\mathcal{N}(\mathbf{0},\mathbf{I})]\) of random values to be de-noised. Similar to [55], we set the context set to be a random subset of the query set. We use the context set to denoise the query set and follow ancestral sampling as in the vanilla DDPM [28]. Note that during inference the eigen-function representation \(\varphi(\mathbf{x})\) of the context and query sets does not change, only their corresponding signal values. ``` 1:\(\Delta_{\mathcal{M}}\varphi_{i}=\varphi_{i}\lambda_{i}\) // LBO eigen-decomposition 2:repeat 3:\((\mathbf{C}_{0},\mathbf{Q}_{0})\sim\mathrm{Uniform}(q(f_{0}))\) 4:\(t\sim\mathrm{Uniform}(\{1,\ldots,T\})\) 5:\(\epsilon_{c}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\), \(\epsilon_{q}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) 6:\(\mathbf{C}_{t}=[\varphi(\mathbf{X}_{c}),\sqrt{\bar{\alpha}_{t}}\mathbf{Y}_{(c,0)}+\sqrt{1-\bar{\alpha}_{t}}\epsilon_{c}]\) 7:\(\mathbf{Q}_{t}=[\varphi(\mathbf{X}_{q}),\sqrt{\bar{\alpha}_{t}}\mathbf{Y}_{(q,0)}+\sqrt{1-\bar{\alpha}_{t}}\epsilon_{q}]\) 8: Take gradient descent step on \(\nabla_{\theta}\left\|\epsilon_{q}-\epsilon_{\theta}(\mathbf{C}_{t},t,\mathbf{ Q}_{t})\right\|^{2}\) 9:until converged ``` **Algorithm 1** Training ## 5 Experiments We validate the practicality of MDF via different experiments where we evaluate unconditional and conditional performance, as well as robustness to manifold transformations. As opposed to generative models over images, we cannot rely on FID [27]-type metrics for evaluation since functions are defined on curved geometries. We propose to borrow metrics from generative modeling of point cloud data [1], namely Coverage (COV) and Minimum Matching Distance (MMD). We compute COV and MMD metrics based on the \(l_{2}\) distance in signal space for corresponding vertices in the manifolds. Figure 4: **Left:** MDF sampling algorithm. **Right**: Visual depiction of the sampling process for a field on the bunny manifold. Figure 3: **Left:** MDF training algorithm. **Right**: Visual depiction of a training iteration for a field on the bunny manifold \(\mathcal{M}\). See Sect. 4 for definitions.
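For intuition only, the following is a minimal sketch of one training iteration in the spirit of Algorithm 1, assuming `phi` holds the eigen-function coordinates \(\varphi(\mathbf{x})\) of all mesh vertices, `field` is one training field sampled on those vertices, and `score_net` is a stand-in for the score field network \(\epsilon_{\theta}(\mathbf{C}_{t},t,\mathbf{Q}_{t})\); it omits batching and the actual MDF architecture.

```python
import torch

def mdf_training_step(phi, field, score_net, alpha_bar, n_ctx=256, n_qry=256):
    """One step of Algorithm 1: noise only the signal portion of context/query sets."""
    n = phi.shape[0]
    ctx, qry = torch.randperm(n)[:n_ctx], torch.randperm(n)[:n_qry]
    t = torch.randint(0, len(alpha_bar), (1,))
    a = alpha_bar[t]
    eps_c, eps_q = torch.randn(n_ctx, 1), torch.randn(n_qry, 1)
    # Eigen-function coordinates stay fixed; only the signal is diffused (Eq. 1 and 2).
    C_t = torch.cat([phi[ctx], a.sqrt() * field[ctx] + (1 - a).sqrt() * eps_c], dim=-1)
    Q_t = torch.cat([phi[qry], a.sqrt() * field[qry] + (1 - a).sqrt() * eps_q], dim=-1)
    eps_hat = score_net(C_t, t, Q_t)              # predict the noise on the query set
    return ((eps_q - eps_hat) ** 2).mean()

# Toy usage with random "eigen-functions", one random field and a trivial stand-in network:
phi, field = torch.randn(1000, 32), torch.randn(1000, 1)
alpha_bar = torch.cumprod(1 - torch.linspace(1e-4, 2e-2, 1000), dim=0)
loss = mdf_training_step(phi, field, lambda C, t, Q: torch.zeros(len(Q), 1), alpha_bar)
```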
### Unconditional inference We evaluate MDF on 3 different manifolds: a sine wave, the Stanford bunny and a human body mesh. These manifolds have an increasing mean curvature \(|K|\), which serves as a measure for how distant they are from being globally Euclidean. On each manifold we define 3 field datasets: a Gaussian Mixture (GMM) with 3 components (where in each field the 3 components are randomly placed on the manifold), MNIST [37] and CelebA-HQ [32] images. We use an off-the-shelf texture mapping approach [50] to map images to manifolds. We compare MDF with Diffusion Probabilistic Fields (DPF) [55] a generative model for fields in ambient space, where points in the manifold are parametrized by the Fourier PE of its coordinates in 3D space. To provide a fair comparison we equate all the hyper-parameters in both MDF and DPF [55]. Tab. 1-2-3 show results for the different approaches and tasks. We observe that MDF tends to outperform DPF [55], both in terms of covering the empirical distribution, resulting in higher COV, but also in the fidelity of the generated fields, obtaining a lower MMD. In particular, MDF outperforms DPF [55] across the board for manifolds of large mean curvature \(|K|\). We attribute this behaviour to our choice of using intrinsic functional basis (_e.g_. eigen-functions of the LBO) to represent a coordinate system for points in the manifold. Fig. 5 shows a side to side comparison of real and generated functions on different manifolds obtained from MDF. We also compare MDF with GASP [14], a generative model for continuous functions using an adversarial formulation. We compare MDF and GASP performance on the CelebA-HQ dataset [32] mapped on the bunny manifold. Additionally, we report results on the ERA5 climate dataset [14], which is composed of functions defined on the sphere \(f:S^{2}\rightarrow\mathbb{R}^{1}\) (see Fig. 5). For the ERA5 dataset we use spherical harmonics to compute \(\varphi\), which are equivalent to the eigen-functions of the LBO on the sphere [38]. To compare with GASP we use their pre-trained models available at [https://github.com/EmilienDupont/neural-function-distributions](https://github.com/EmilienDupont/neural-function-distributions) to generate samples. In the case of CelebA-HQ, we use GASP to generate 2D images and map them to the bunny manifold using [50]. Experimental results in Tab. 5 show that MDF outperforms GASP in both ERA5[26] and CelebA-HQ datasets, obtaining both higher coverage but also higher fidelity in generated functions. This can be observed in Fig. 6 where the samples generated by MDF are visually crisper than those generated by GASP. Furthermore, we ablate the performance of MDF as the number of eigen-functions used to compute the coordinate representation \(\varphi\) increases (_e.g_. the eigen-decomposition of the LBO). For this task we use the bunny and the GMM dataset. Results in Fig. 7 show that performance initially increases with the number of eigen-functions up to a point where high frequency eigen-functions of the LBO are not needed to faithfully encode the distribution of functions. Figure 5: MDF learns a distribution over a collection of fields \(f:\mathcal{M}\rightarrow\mathbb{R}^{d}\), where each field is defined on a general Riemannian manifold \(\mathcal{M}\). We show real samples and MDF’s generations on different datasets of fields defined on different manifolds. **Top:** MNIST digits on the sine wave manifold. **Middle:** ERA5 climate dataset [26] on the 2D sphere. 
**Bottom:** GMM dataset on the bunny manifold. ### Robustness of MDF We evaluate MDF's robustness to rigid and isometric transformations of the training manifold \(\mathcal{M}\). We use the _cat_ category geometries from [51] and build a dataset of different fields on the manifold by generating 2 gaussians around the area of the tail and the right paw of the cat. Note that every field is different since the gaussians are centered at different points in the tail and right paw, see Fig. 9(a). During training, the model only has access to fields defined on a fixed manifold \(\mathcal{M}\) (see Fig. 9(a)). We then evaluate the model on either a rigid \(\mathcal{M}_{\text{rigid}}\) (shown in Fig. 9(b)) or isometric \(\mathcal{M}_{\text{iso}}\) (Fig. 9(c)) transformation of \(\mathcal{M}\). Qualitatively comparing the transfer results of MDF with DPF [55] in Fig. 9(d)-(e), we see a marked difference in fidelity and coverage of the distribution. In Fig. 9 we show how performance changes as the magnitude of a rigid transformation (_e.g._ a rotation about the \(z-\)axis) of \(\mathcal{M}\) increases. As expected, the performance of DPF [55] sharply declines as we move away from the training setting, denoted by a rotation of \(0\) radians. However, MDF obtains a stable performance across transformations, this is due to the eigen-function basis being intrinsic to \(\mathcal{M}\), and thus, invariant to rigid transformations. In addition, in Tab. 4 we show results under an isometric transformation of the manifold (_e.g._ changing the pose of the cat, see Fig. 9(c)). As in the rigid \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{GMM} & \multicolumn{2}{c}{MNIST} & \multicolumn{2}{c}{CelebA-HQ} \\ \hline & COV\(\uparrow\) & MMD \(\downarrow\) & COV\(\uparrow\) & MMD\(\downarrow\) & COV\(\uparrow\) & MMD \(\downarrow\) \\ \hline MDF & **0.444** & 0.01405 & **0.564** & **0.0954** & 0.354 & **0.11601** \\ \hline DPF [55] & 0.352 & **0.01339** & 0.552 & 0.09633 & **0.361** & 0.12288 \\ \hline \hline \end{tabular} \end{table} Table 1: COV and MMD metrics for different datasets on the _wave_ manifold (mean curvature \(|K|=0.004\)). \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & COV\(\uparrow\) & MMD \(\downarrow\) & COV\(\uparrow\) & MMD\(\downarrow\) & COV\(\uparrow\) & MMD \(\downarrow\) \\ \hline MDF & **0.575** & **0.00108** & **0.551** & **0.07205** & **0.346** & **0.11101** \\ \hline DPF [55] & 0.472 & 0.00120 & 0.454 & 0.11525 & 0.313 & 0.11530 \\ \hline \hline \end{tabular} \end{table} Table 2: Results on the _bunny_ manifold (mean curvature \(|K|=7.388\)). As the mean curvature increases the boost of MDF over DPF [55] becomes larger across all datasets. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{ERA5 [26]} & \multicolumn{2}{c}{CelebA-HQ on \(\mathcal{M}\)} \\ \hline & COV\(\uparrow\) & MMD \(\downarrow\) & COV\(\uparrow\) & MMD\(\downarrow\) \\ \hline MDF & **0.347** & **0.00498** & **0.346** & **0.11101** \\ \hline GASP [14] & 0.114 & 0.00964 & 0.309 & 0.38979 \\ \hline \hline \end{tabular} \end{table} Table 5: MDF outperforms GASP on ERA5[26] and CelebA-HQ both in terms of fidelity and distribution coverage. For GASP, we generate CelebA-HQ images and texture map them to the bunny manifold using [50]. 
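For reference, a minimal sketch of how coverage and minimum matching distance can be computed from the \(l_{2}\) distance in signal space for corresponding vertices is given below; the exact matching conventions behind the numbers reported in the tables may differ.

```python
import numpy as np

def coverage_and_mmd(gen, ref):
    """gen, ref: arrays of fields sampled on the same vertices (num_fields x num_vertices)."""
    # Pairwise l2 distances in signal space between generated and reference fields.
    d = np.linalg.norm(gen[:, None, :] - ref[None, :, :], axis=-1)
    cov = len(np.unique(d.argmin(axis=1))) / ref.shape[0]   # fraction of ref fields matched
    mmd = d.min(axis=0).mean()                               # mean distance to closest generation
    return cov, mmd

cov, mmd = coverage_and_mmd(np.random.randn(64, 500), np.random.randn(64, 500))
```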
\begin{table} \begin{tabular}{l c c c c c} \hline \hline & \multicolumn{2}{c}{GMM} & \multicolumn{2}{c}{MNIST} & \multicolumn{2}{c}{CelebA-HQ} \\ \hline & COV\(\uparrow\) & MMD \(\downarrow\) & COV\(\uparrow\) & MMD\(\downarrow\) & COV\(\uparrow\) & MMD \(\downarrow\) \\ \hline MDF & **0.551** & **0.00100** & **0.529** & **0.08895** & **0.346** & **0.11416** \\ DPF [55] & 0.479 & 0.00112 & 0.472 & 0.09537 & 0.318 & 0.14502 \\ \hline \hline \end{tabular} \end{table} Table 3: _Human_ manifold (mean curvature\(|K|=25.966\)). At high mean curvatures MDF consistently outperforms DPF [55]. \begin{table} \begin{tabular}{l c c c c c} \hline \hline & \multicolumn{2}{c}{GMM} & \multicolumn{2}{c}{MNIST} & \multicolumn{2}{c}{CelebA-HQ} \\ \hline & COV\(\uparrow\) & MMD \(\downarrow\) & COV\(\uparrow\) & MMD\(\downarrow\) & COV\(\uparrow\) & MMD \(\downarrow\) \\ \hline MDF & **0.55** & **0.00108** & **0.551** & **0.07205** & **0.346** & **0.11101** \\ \hline DPF [55] & 0.472 & 0.00120 & 0.454 & 0.11525 & 0.313 & 0.11530 \\ \hline \hline \end{tabular} \end{table} Table 4: Training MDF on a manifold \(\mathcal{M}\) and evaluating it on an isometric transformation \(\mathcal{M}_{\text{iso}}\) does not impact performance, while being on par with training directly on the transformed manifold. Figure 6: CelebA-HQ samples generated by MDF and GASP [14] on the bunny. Figure 7: Performance of MDF as a function of the number of eigen-functions of the LBO, measured by COV and MMD metrics. As expected, performance increases initially as more eigen-functions are used, followed by a plateau phase for more than \(k=32\) eigen-functions. setting, the performance of DPF [55] sharply declines under an isometric transformation while MDF keeps performance constant. In addition, transferring to an isometric transformation (\(\mathcal{M}\rightarrow\mathcal{M}_{\text{iso}}\)) performs comparably with directly training on the isometric transformation (\(\mathcal{M}_{\text{iso}}\rightarrow\mathcal{M}_{\text{iso}}\)) up to small differences due to random weight initialization. We also provide transfer results to different discretizations of \(\mathcal{M}\). To do so, we train MDF on a low resolution discretization of a manifold and evaluate transfer to a high resolution discretization. We use the GMM dataset and the bunny manifold at 2 different resolutions: 1394 and 5570 vertices, which we get by applying loop subdivision [40] to the lowest resolution mesh. Theoretically, the Laplacian eigenvectors \(\varphi\) are only unique up to sign, which can result in ambiguity when transferring a pre-trained model to a different discretization. Empirically we did not find this to be an issue in our experiments. We hypothesize that transferring MDF from low to high resolution discretizations is largely a function of the number of eigen-functions used to compute \(\varphi\). This is because eigen-functions of small eigenvalue represent low-frequency components of the manifold which are more stable across different discretizations. In Fig. 10 we report transfer performance as a function of the number of eigen-functions used to compute \(\varphi\). We observe an initial regime where more eigenfunctions aid in transferring (up to \(64\) eigen-functions) followed by a stage where high-frequency eigen-functions negatively impact transfer performance. ### Conditional inference on PDEs In this section we evaluate MDF on conditional inference tasks. 
In particular, we create a dataset of different simulations of the heat diffusion PDE on a manifold. As a result, every sample in our training distribution \(f_{0}\sim q(f_{0})\) is a temporal field \(f:\mathcal{M}\times\mathbb{R}\rightarrow\mathbb{R}\). We create a training set of \(10\)k samples where each sample is a rollout of the PDE for \(10\) steps given initial conditions. We generate initial conditions by uniformly sampling 3 gaussian heat sources of equivalent magnitude on the manifold and use FEM [46] to compute the rollout over time. For this experiment we use a version of the bunny mesh with \(602\) vertices as a manifold \(\mathcal{M}\) and set the diffusivity term of the PDE to \(D=0.78\). Figure 8: Robustness of MDF and DPF [55] with respect to rigid transformations of \(\mathcal{M}\). The distribution of fields learned by MDF is invariant with respect to rigid transformations, while DPF [55] collapses due to learning distributions in ambient space. Figure 9: (a) **Training set** composed of different fields \(f:\mathcal{M}\rightarrow\mathbb{R}\) where 2 gaussians are randomly placed in the tail and the right paw of the cat. Fields generated by **transferring** the MDF pre-trained on \(\mathcal{M}\) to (b) a rigid and (c) an isometric transformation of \(\mathcal{M}\). Fields generated by **transferring** the DPF [55] pre-trained on \(\mathcal{M}\) to (d) a rigid and (e) an isometric transformation of \(\mathcal{M}\). We then train MDF on this training set of temporal fields \(f:\mathcal{M}\times\mathbb{R}\rightarrow\mathbb{R}\), which in practice simply means concatenating a Fourier PE of the time step to the eigen-functions of the LBO. We tackle the forward problem where we are given the initial conditions of the PDE and the model is tasked to predict the forward dynamics on a test set of \(60\) held-out samples. To perform conditional inference with MDF we follow the recipe in [41], which has been successful in the image domain. We show the forward dynamics predicted by FEM [46] in Fig. 11(a) and by MDF in Fig. 11(b) for the same initial conditions in the held-out set. We see how MDF successfully captures temporal dynamics, generating a temporal field consistent with observed initial conditions. Evaluating the full test set, MDF obtains a mean squared error of \(\text{MSE}=4.77\times 10^{-3}\). In addition, MDF can directly be used for inverse problems [31]. Here we focus on inverting the full dynamics of the PDE, conditioned on sparse observations. Fig. 11(c) shows sparse observations of the FEM rollout, amounting to observing \(10\%\) of the field. Fig. 11(d) shows a probabilistic solution to the inverse PDE problem generated by MDF which is consistent with the FEM dynamics in Fig. 11(a). ## 6 Conclusions In this paper we introduced MDF, a diffusion probabilistic model that is capable of capturing distributions of functions defined on general Riemannian manifolds. We leveraged tools from spectral geometry analysis and used the eigen-functions of the manifold Laplace-Beltrami Operator to define an intrinsic coordinate system on which functions are defined. This allows us to design an efficient recipe for training a diffusion probabilistic model of functions whose domain is an arbitrary geometry. Our results show that we can capture distributions of functions on manifolds of increasing complexity, outperforming previous approaches, while also enabling the application of powerful generative priors to fundamental scientific problems like forward and inverse solutions to PDEs.
Figure 11: (a) Forward prediction of the heat diffusion PDE computed with FEM [46]. (b) Conditionally sampled field generated by MDF. (c) Sparse observations of the FEM solution for inverse prediction. (d) Conditionally sampled inverse solution generated by MDF. Figure 10: Transferring MDF from low to high resolution discretizations as a function of the number of eigen-functions. We observe that eigen-functions of small eigenvalue transfer better since they encode coarse (_i.e_. low-frequency) information of the manifold.
2301.08746
Characterization of a deep-depletion 4K x 4K CCD Detector System designed for ADFOSC
We present the characterization of the CCD system developed for the ADFOSC instrument on the 3.6m Devasthal Optical Telescope (DOT). We describe various experiments performed to tune the CCD controller parameters to obtain optimum performance in single and four-port readout modes. Different methodologies employed for characterizing the performance parameters of the CCD, including bias stability, noise, defects, linearity, and gain, are described here. The CCD has grade-0 characteristics at temperatures close to its nominal operating temperature of $-120^\circ$C. The overall system is linear with a regression coefficient of 0.9999, readout noise of 6 electrons, and a gain value close to unity. We demonstrate a method to calculate the dark signal using the gradient in the bias frames at lower temperatures. Using the optimized setting, we verify the performance of the CCD detector system on-sky using the ADFOSC instrument mounted on the 3.6m DOT. Some science targets were observed to evaluate the detector's performance in both imaging and spectroscopic modes.
Dimple, T. S. Kumar, A. Omar, K. Misra
2023-01-20T11:12:09Z
http://arxiv.org/abs/2301.08746v1
# Characterization of a deep-depletion 4K x 4K CCD Detector System designed for ADFOSC ###### Abstract We present the characterization of the CCD system developed for the ADFOSC instrument on the 3.6m Devasthal Optical Telescope (DOT). We describe various experiments performed to tune the CCD controller parameters to obtain optimum performance in single and four-port readout modes. Different methodologies employed for characterizing the performance parameters of the CCD, including bias stability, noise, defects, linearity, and gain, are described here. The CCD has grade-0 characteristics at temperatures close to its nominal operating temperature of \(-120^{\circ}\)C. The overall system is linear with a regression coefficient of 0.9999, readout noise of 6 electrons, and a gain value close to unity. We demonstrate a method to calculate the dark signal using the gradient in the bias frames at lower temperatures. Using the optimized setting, we verify the performance of the CCD detector system on-sky using the ADFOSC instrument mounted on the 3.6m DOT. Some science targets were observed to evaluate the detector's performance in both imaging and spectroscopic modes. Keywords: characterization, ADFOSC, CCD *Dimple, [email protected] ## 1 Introduction The 3.6m DOT was commissioned at the Devasthal observatory of Aryabhatta Research Institute of observational sciencES (ARIES), Nainital (India)[1]. The Devasthal Observatory is situated in the Himalayan regions of Uttarakhand at \(\sim 2450\) meters above the mean sea level with geographical coordinates of \(29^{\circ}.360\) N, \(79^{\circ}.690\) E. This location lies in the middle of the \(180^{\circ}\)-wide longitude-gap between the Canary Islands (\(\sim 20^{\circ}\) W) and Eastern Australia (\(\sim 160^{\circ}\) E), making it suitable for observations of time-critical astronomical events due to the availability of several moderate aperture telescopes. The DOT uses an \(f/9\) Ritchey-Chrétien (RC) system supported on an alt-azimuth mount[2, 3]. The aperture of this telescope is appropriate for medium-resolution spectroscopy and observations of faint sources. A low-dispersion spectrograph-cum-imager, ARIES Devasthal Faint Object Spectrograph (ADFOSC), has been developed in ARIES for spectroscopy and imaging of celestial objects[4]. The spectrograph uses a fixed focal reducer, which converts the incoming f/9 optical beam from the telescope into a faster \(\sim f/4.2\) beam. The spectrograph can be used in both spectroscopic and imaging modes by selecting the instrument's corresponding optical elements (e.g. filters, grism, slit, etc.) with the help of GUI-based instrument control software [5]. In either mode, a charge-coupled device (CCD) is required to detect and record the data. A CCD detector system/imager has been designed and assembled in ARIES in technical collaboration with the Herzberg Institute of Astrophysics (HIA), Canada. We performed a detailed characterization of the CCD system before commissioning it for scientific observations, both in the laboratory and on the sky. This included estimating parameters like bias level, readout noise, and thermal noise in the dark room. We then performed iterative experiments in the laboratory to optimize the overall system performance and verified the CCD for cosmetic defects. We demonstrate a method to calculate the dark signal of the CCD at different temperatures using the bias frames.
As the CCD is developed for the ADFOSC instrument, we also estimated the spectral dispersion on the detector using the lamp spectra. After optimization in the laboratory environment, we performed similar experiments over the night sky on the 3.6m DOT to verify the on-sky performance of the detector system. The paper discusses the different methodologies employed for characterizing the performance of the CCD system. The test setup used for performing different tests to optimize the system parameters is detailed in section 3. We also discuss various experiments performed to determine and optimize the CCD characteristics. The integration of the CCD system with the ADFOSC instrument and results of the on-sky tests are discussed in section 4. To evaluate the performance of the CCD system on science targets, we observed transient sources during the observing cycle 2020C2 of the 3.6m DOT. The results of the scientific observations are presented in section 5. Figure 1: The ADFOSC CCD camera setup in the laboratory. The camera consists of a CCD detector, a compressive gauge, and a dewar cooled using a heat-exchange cryogenic system. ## 2 The CCD detector system The CCD is a \(4096\times 4096\) format back-illuminated e2v 231-84 CCD sensor having square pixels of 15-micron size. It is a deep-depleted sensor capable of enhancing the sensitivity toward the longer wavelengths (\(\sim 700-1000\) nm) of the optical spectrum. The CCD has an imaging area of \(61.4\)\(\times\)\(61.4\)\(\mathrm{mm}^{2}\), providing a Field of View (FoV) of \(\sim 13.6\times 13.6\) arcmin\({}^{2}\) and a spectral dispersion in the range \(0.1-0.2\) nm/pixel. The CCD has four readout ports (0, 1, 2, 3). The \(\sim 16\) million pixels can be read from any of the four amplifiers or through all four amplifiers simultaneously. The four-port readout decreases the readout time by a factor of four. However, it requires additional processing to match the bias levels of the four quadrants. Since the readout noise would differ for the four different amplifiers, each quadrant's respective signal-to-noise ratio (SNR) would also be different. In the case of the ADFOSC instrument, we have implemented a single port readout via port-0, which provides the lowest readout noise. The readout frequency is fixed at \(\sim 160\) kHz, providing a readout time of \(\sim 104\) sec. A Bonn shutter is mounted at the camera entrance with an aperture size of \(100\) mm \(\times 100\) mm. The shutter employs servo motors for fast operation and offers uniform exposure at the detector plane. \begin{table} \begin{tabular}{l l} \hline **CCD Characteristic** & **Value** \\ \hline CCD & e2v 231-84 / Grade-0 \\ Pixel Size & \(15\)\(\mu\)m \\ Readout Frequency & \(160\) kHz \\ Operating Temperature & \(-120^{\circ}\)C \\ Bias level & \(1134\pm 2.62\) ADU \\ Gain & \(1.00\pm 0.04\) e\({}^{-}\)/ADU \\ Readout Noise & 6 e\({}^{-}\)/pixel \\ Full well capacity & \(408\) ke\({}^{-}\)/pixel \\ Saturation level & 65535 ADU \\ \hline \end{tabular} \end{table} Table 1: CCD Characteristics Using this shutter, a minimum exposure time of \(\sim 1\) ms is possible with an uncertainty of 300 \(\mu\)s. The detector system consists of the CCD detector, a clock-shaping fan-out circuit board, and a generic Astronomical Research Cameras (ARC) controller1 to generate the suitable clock and bias voltages for the detector.
The controller hardware has two video cards for reading four ports with 16-bit Analog to digital converter (ADC) units to interface and digitize the four channels of the CCD. Additionally, different bin settings are implemented to read the image in different binning patterns for photometry and spectroscopy. Footnote 1: [http://astro-cam.com](http://astro-cam.com) The CCD sensor is cooled to \(-120^{\circ}\)C to minimize the dark signal. The CCD dewar is evacuated for several hours using an oil-free turbo molecular vacuum pump before deep cooling. The dewar is equipped with a Pirani vacuum gauge for monitoring its vacuum pressure. Once the vacuum reaches below \(\sim 1\times 10^{-3}\) Torr, the closed-cycle (Joule-Thomson) cryogenic heat-exchange system supplied by Brooks Automation, USA, is switched on. The overall process of cryo-cooling the CCD from an ambient temperature of \(25^{\circ}\)C to \(-120^{\circ}\)C takes between 4 and 5 hours. This temperature is stabilized and held constant within \(0.01^{\circ}\)C using a Lakeshore 335 Proportional Integral Derivative (PID) temperature controller2. For this purpose, a small heater and a temperature sensor are mounted on the cold plate below the sensor. A charcoal-filled getter is used to absorb outgassing inside the dewar. The charcoal gets activated to absorb gases at cryogenic temperatures and helps attain a high vacuum. An ultimate vacuum of \(\sim 3\times 10^{-7}\) Torr is usually attained with this system at \(-120^{\circ}\)C. Footnote 2: [http://irtfweb.ifa.hawaii.edu/-s2/software/gpib-eth-ls335/335_Manual.pdf](http://irtfweb.ifa.hawaii.edu/-s2/software/gpib-eth-ls335/335_Manual.pdf) ## 3 Characterization of the CCD system Detailed characterization of the CCD includes the estimation of bias level, bias stability, readout noise (RN), gain, defects, linearity, and saturation level of the CCD. This section describes the laboratory-based test setup and the experiments performed to determine these parameters. We tested the CCD performance using all four ports, numbered zero to three. However, the read noise is found to be the lowest for port-0; hence single port readout mode using port-0 is currently being used for acquiring the scientific data. The paper focuses on characterizing parameters for port-0 of the CCD system. ### Test setup and Data Acquisition We set up the CCD system in an electrostatic discharge (ESD)-safe dark room of the ARIES optics laboratory for performing the experiments. A light-emitting diode (LED) operated at a constant regulated voltage was used as an illumination source for the experiment. We covered the CCD window with an Aluminium plate with a pinhole for the light to enter. We fixed the source on the pinhole to avoid any fluctuations in light intensity due to any change in the source's position. The sub-systems, namely, the temperature controller, cryogenic pump, pressure gauge, etc., were carefully grounded to a common point to avoid noise entering from ground loops. The entire system was again reconfigured when the ADFOSC was mounted on the 3.6m DOT for sky tests. We acquired the data using the Owl3 software provided along with the ARC controller. The software offers different dialog boxes to control the controller parameters, including gain, readout, binning, etc., and saves the acquired images in standard Flexible Image Transport System (FITS) format. Different modules of Python[6] like Astropy[7-9] and ccdproc[10] were used to process the FITS image files.
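As an illustration of this kind of processing, the sketch below median-combines a set of bias frames into a master bias with ccdproc and estimates the read noise from a pair of bias frames (anticipating equation 1 of the next subsection); the file names and the gain value are assumptions for the example.

```python
import numpy as np
from astropy.nddata import CCDData
from ccdproc import Combiner

bias_files = [f"bias_{i:02d}.fits" for i in range(50)]      # assumed file names
frames = [CCDData.read(f, unit="adu") for f in bias_files]

master_bias = Combiner(frames).median_combine()              # master bias frame
print("bias level [ADU]:", master_bias.data.mean(), "+/-", master_bias.data.std())

# Read noise from the difference of two bias frames (Janesick method, see equation 1).
gain = 1.0                                                   # e-/ADU (assumed)
diff = frames[0].data.astype(float) - frames[1].data.astype(float)
print("read noise [e-]:", gain * diff.std() / np.sqrt(2))
```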
### Bias level and readout noise A positive offset is generally provided to the CCD electronics to avoid negative counts in the output of the CCD. The mean offset value, or the bias value, is optimized in a way that it is large enough to avoid the non-linear regime of the CCD amplifiers but not too large to reduce the dynamic range of the CCD. We set the bias level slightly above a thousand counts to balance the above factors. Figure 2: Master bias of the CCD created using median combining 50 bias frames. We estimated the bias level of the CCD using the bias frames, which are the CCD images with a zero-second exposure. Several bias frames were acquired, and we used fifty of them to generate a master bias frame by taking their median value. We did this using the median-combine functionality of the ccdproc software module. Fig. 2 shows the median bias frame, and the corresponding histogram is shown in Fig. 3. Figure 3: Histogram of the master bias frame with a mean value of \(1134.01\pm 2.62\) counts. The width of the distribution represents the RN of the CCD, which is the number of electrons introduced by the readout electronics while reading out each pixel. We estimated the RN using the Janesick method (equation 1) [11]. We created a difference image using two bias images and estimated its standard deviation (\(\sigma_{Bias1-Bias2}\)). The explanation of the Gain value used in equation 1 is provided in section 3.5. \[RN=\frac{Gain\times\sigma_{Bias1-Bias2}}{\sqrt{2}}, \tag{1}\] We found the RN value to be 6.20 analog-to-digital units (ADU) or 6.20 \(e^{-}\) for a gain value of \(\sim\) 1. However, the achieved noise is more than twice the value in the e2v datasheet at 160 kHz. We observed some interference patterns in the image. These patterns are likely to be responsible for this increased noise. The probable cause could be ground loops outside and inside the dewar, the length of cables, and an imperfect shielding scheme. These have been controlled to a large extent by iteratively evaluating various schemes like shortening the cables and star-connecting the ground points of the auxiliary devices like the chiller, pressure gauge, temperature controller, etc. Also, grounding permutations were tried with the four-port video cables. Finally, the shielding of the video cables was grounded only at the controller end and left open at the CCD connector end, which resulted in a lower noise floor. There is still scope for improvement as the ground loops inside the dewar have not been evaluated. This evaluation will be attempted later since the CCD has already been commissioned for observations. We calculated the standard deviation of 50 bias frames to verify this RN value, as shown in Fig. 4. The mean of these standard deviation values is 6.38 ADU, which is consistent with the value calculated using equation 1. **Fig 4** A plot showing the variation of the standard deviation of 50 bias frames ### Linearity The CCD system should have a linear response to the incident light for scientific observations. However, several factors can introduce non-linearity in CCD performance. The controller clock periods and overlaps should be timed for complete charge transfer during the readout process. Moreover, there should be a delay between this transfer and the correlated double sampling instant to allow the transients to settle, avoiding any induced noise or glitch that could introduce non-linearity. We verified the signal waveform using a 1 GHz digital oscilloscope to ensure the above before connecting the interface board to the detector.
Other factors critical for linearity are bias voltages: voltage output drain (VOD) and voltage reset drain (VRD). The CCD manufacturer provided a range of values for the voltages, VOD from 25 to 31 volts and VRD from 16 to 19 volts. To check the behaviour of the CCD at different voltages, we initially set these voltages near minimum values and iteratively increased these voltages within this range. We rejected some of the voltage combinations that provided very low-bias levels. For other combinations, experiments were performed to check the linearity of the CCD. We used an LED source (section 3.1) to illuminate the CCD and acquired images with an incremental increase in exposure time. We obtained a pair of images for each exposure time to detect any variation in the source intensity. We noticed that the counts are identical for the pair of images. We estimated the non-linearity (the relative difference between our measurements and the best-fit linear curves) by fitting a linear function to the variation of mean counts with exposure time for each voltage combination. We used the linregress function from the stats library under Python for this purpose. Fig. 5 shows the non-linearity curves for various combinations of VOD and VRD in different count regions. For most voltage combinations, the non-linearity is negligible in the higher count regime. The non-linearity, however, shows up in the lower count regime and is significant for certain voltages. For a combination of VOD = 29 volts and VRD = 16.5 volts, the non-linearity is the lowest. For this voltage combination, the value of the regression coefficient (\(R^{2}\)) is \(0.9999\), which is almost equal to unity (see Fig. 6). We considered this voltage combination as the optimum value for the CCD system. ### Saturation level The maximum capacity of a CCD pixel to store the photo-electrons is its full well capacity, beyond which the pixels saturate. Since the available 16-bit ADC of the controller saturates at a value of 65535, the controller's gain setting helps to select the dynamic range. As we are interested in accurate photometry of faint objects, a gain of unity is selected, constraining the saturation point to 65535. The selection of system gain is discussed in section 3.5. Illumination of the CCD until its saturation point limited the counts to 65535, the saturation point of the 16-bit ADC. It is Figure 5: Non-linearity curves at different operating voltages in the lower count regime (left panel) and in the higher count region (right panel). Non-linearity is the minimum for a combination of VOD=29 volts and VRD=16.5 volts. **Fig 6** Linearity curve at VOD = 29 volts and VRD = 16.5 volts. A linear fit to the data gives the regression coefficient as 0.9999. The horizontal dashed line indicates the saturation level. demonstrated and shown in Fig. 6 where a bright source illuminates the detector, and the ADC saturates at 65535 counts. If the science cases demand the utilization of full well capacity, the user can select a gain setting close to 3 or higher electrons per ADU. ### The system gain The gain of a CCD system is defined in terms of ADU, which corresponds to the number of electrons assigned to one digital unit in the output image. The available gain values are 1, 2, 5, and 10 \(\mathrm{e^{-}/ADU}\) in the controller. The gain values can be selected using the software at runtime. The saturation level of the CCD should correspond to the saturation level of the ADC to utilize the full well capacity. 
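As an aside, a minimal sketch of estimating the gain from equation 2 via the photon transfer curve is shown below, assuming `pairs` is a list of bias-subtracted flat-field image pairs taken at increasing illumination; the simulated Poisson data simply illustrate that the fitted slope recovers \(1/G\).

```python
import numpy as np
from scipy.stats import linregress

def ptc_gain(pairs):
    """Fit variance vs. mean signal (equation 2); the slope of the PTC is 1/G."""
    means = [0.5 * (a.mean() + b.mean()) for a, b in pairs]
    # Differencing each pair removes the fixed flat-field pattern; var(a-b) = 2*var.
    variances = [np.var(a - b) / 2.0 for a, b in pairs]
    return 1.0 / linregress(means, variances).slope

rng = np.random.default_rng(0)
pairs = [(rng.poisson(s, (64, 64)).astype(float), rng.poisson(s, (64, 64)).astype(float))
         for s in (500, 2000, 8000, 20000)]
print("gain [e-/ADU]:", ptc_gain(pairs))   # ~1 for ideal Poisson counts at unit gain
```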
Since the full well capacity of the CCD is \(408\)\(\mathrm{k}\mathrm{e^{-}}\) (as mentioned in the result sheet of the supplied detector), a gain of 10 is suitable to match the saturation levels. However, for the detection of photon-limited faint objects, a gain of 1 is implemented using the controller parameters. The electronic gain of the CCD system is the product of gain values introduced by each stage of the readout electronics. The inherent gain of the CCD, defined by the output capacitor, is 7 \(\mu\) V/\(\mathrm{e^{-}}\). A series of Op-amp stages within the controller further amplify this gain. Initially, it is preamplified with a gain of 4 and passed through a gain selection stage, offering a range of gain values: 1, 2, 4.75, and 9.5. A bias adjustment stage after the integrator provides a gain of 2. Hence, an amplification of 56 \(\mu\) V/\(\mathrm{e^{-}}\) is obtained with these four stages. Since the 16-bit ADC operating at a reference voltage of 10 V provides a bin size of 152.588 \(\mu\) V/ADU, the integration of the Op-amp integrator is adjusted to provide an additional gain factor of 2.725 to achieve the desired system gain of 1 \(\mathrm{e^{-}/ADU}\). Since the integrator time can only be adjusted in increments of 40 ns, the closest possible value of 0.998 \(\mathrm{e^{-}/ADU}\) is set. We experimentally verified this gain setting using the Janesick method[11] given by equation 2 where (\(S\)) is the mean of the signal acquired by the CCD, and \(\sigma_{S}^{2}\) is the variance. \[\sigma_{S}^{2}=\frac{S}{G}+\sigma_{R}^{2}, \tag{2}\] We acquired a pair of images at each exposure and estimated the mean signal after bias subtraction and cosmic-ray removal from the image. Further, these images were normalized by subtracting Figure 7: Photon transfer curve (PTC) of the CCD obtained in the laboratory environment. The measured value of the gain is \(1.00\pm 0.04\) e\({}^{-}\)/ADU. one image from the other for compensating the flat-field effect. We used the resulting image to estimate the variance of the signal (\(\sigma_{S}^{2}\)). Fig. 7 shows the photon transfer curve (PTC) for the CCD. To estimate the gain, the PTC was fitted with a linear function using linregress. The estimated gain is 1.00 \(\pm\) 0.04 \(\mathrm{e}^{-}/\mathrm{ADU}\), which matches the electronic gain value of the system within the errorbar. ### Thermal noise In the CCD detectors, electron charge-density increases exponentially with an increase in temperature due to the thermal generation of electrons. A CCD must be cooled optimally to minimize the dark signal. To determine this optimum temperature, we calculated the dark signal at various temperatures ranging from \(-35^{\circ}\)C to \(-120^{\circ}\)C using the bias frames. We acquired several bias frames at different temperatures and generated master bias frames at each temperature. The left panel of Fig. 8 shows the master bias frame at \(-50^{\circ}\)C. A gradient in counts is visible in the master bias Figure 8: Masterbias at \(-50^{\circ}\)C and deviation of the counts of all pixels from the zeroth pixel counts. A clear gradient can be seen in the image as the last pixels get more time to generate the dark signal. due to the finite readout time of the CCD. The zeroth pixel gets the lowest time to generate dark electrons. The last pixel accumulates dark counts over full readout time, hence has the maximum number of thermally generated electrons. 
If the readout time and the gain are known, then by comparing the counts of the first and the last pixel, we can measure the number of electrons generated per pixel per second. Using this method, we estimated the dark signal at different temperatures. The right panel of Fig. 8 shows the deviation of counts in each pixel from the zeroth counts. The farther the pixel number is from the readout port, the larger the dark count and the larger the deviation from the zeroth counts. To determine the slope of this gradient, we fitted a polynomial to the counts vs. pixel number data using the polyfit function of Python. It is seen that a linear function provides the best fit, as shown in the left panel of Fig. 8. We used the slope to calculate the difference in counts between the first and the last pixel. We divided this difference by the total readout time to obtain dark counts generated per second. Since the bias frames were acquired in \(4\times 4\) binning, it was further scaled by a factor of 16 after subtracting the RN. Below \(-100^{\circ}\)C, the thermal noise becomes less than the RN; hence, we could estimate the dark signal values only down to \(-95^{\circ}\)C. The dark signal values at different temperatures are listed in Table 2. \begin{table} \begin{tabular}{|c l|c c|} \hline Temp. & Dark Current & Temp. & Dark Current \\ (\({}^{\circ}\)C) & (e-/pix/sec) & (\({}^{\circ}\)C) & (e-/pix/sec) \\ \hline -35.0 & \(39.89\pm 0.017\) & -60.0 & \(1.03\pm 0.002\) \\ -40.0 & \(21.30\pm 0.009\) & -65.0 & \(0.42\pm 0.002\) \\ -42.5 & \(15.36\pm 0.007\) & -70.0 & \(0.18\pm 0.002\) \\ -50.0 & \(5.37\pm 0.003\) & -80.0 & \(0.10\pm 0.001\) \\ -55.0 & \(2.89\pm 0.002\) & -90.0 & \(0.05\pm 0.001\) \\ -57.5 & \(1.87\pm 0.002\) & -95.0 & \(0.01\pm 0.002\) \\ \hline \end{tabular} \end{table} Table 2: Dark current at different temperatures as estimated using the gradient of bias frames. As shown in Fig. 9, the dark signal varies exponentially with temperature. Below \(-80^{\circ}\)C, the dark signal is negligible, suggesting that the CCD can be used below this temperature with minimal thermal noise. Figure 9: Variation of dark signal with temperature. The dark signal is negligible below \(-80^{\circ}\)C. ### CCD defects The CCD may have some pixels that might not respond to light optimally due to defects in the CCD structure. These can be point defects, hot defects, or dark/dead pixels. We are employing a grade-0 CCD detector (the CCD detector with minimum possible defects) as mentioned by the manufacturer. To check the CCD for point and column defects, we examined the response of all the pixels at different temperatures. Fig. 10 shows the deviation of counts from the mean bias counts for each pixel of the CCD operated at different temperatures (the method to obtain these plots is described in section 3.6). Figure 10: The response of CCD pixels at different temperatures is shown in the figure. The deviation of the counts from the zeroth counts is fitted with a polynomial (black line). The dark signal is calculated from the best fit. There are a few pixels which are behaving differently at higher temperatures. At lower temperatures, the CCD behaves as a grade-0 CCD. Some pixels are seen to behave differently at higher temperatures, exhibiting high counts. Though they appear to be hot pixels, the counts are found to decrease with decreasing temperature. Eventually, below \(-110^{\circ}\)C, the CCD acts as a nominal grade-0 CCD without any point and column defects.
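A minimal sketch of this gradient-based estimate is given below, assuming `master_bias` is a 4x4-binned master-bias array read through a single port, the pixels are ordered as read out, and the gain is close to 1 e\({}^{-}\)/ADU; the read-noise correction applied in the actual analysis is omitted here for brevity.

```python
import numpy as np

def dark_from_bias_gradient(master_bias, readout_time, gain=1.0, binning=16):
    """Estimate the dark signal (e-/pixel/s) from the count gradient of a master bias."""
    counts = master_bias.ravel().astype(float)        # counts in (approximate) readout order
    pix = np.arange(counts.size)
    slope, _ = np.polyfit(pix, counts, 1)              # a linear fit describes the gradient well
    first_to_last = slope * (counts.size - 1)          # ADU accumulated over the full readout
    dark_binned = gain * first_to_last / readout_time  # e- per binned pixel per second
    return dark_binned / binning                        # e- per (unbinned) pixel per second

# Toy usage with a synthetic gradient corresponding to ~0.5 ADU/s over a 104 s readout:
fake_bias = 1134 + 0.5 * 104 * np.linspace(0, 1, 1024 * 1024).reshape(1024, 1024)
print(dark_from_bias_gradient(fake_bias, readout_time=104.0))
```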
## 4 On-sky verification

After optimizing the performance of the CCD system in the laboratory, we verified its on-sky performance. The CCD was integrated with the ADFOSC instrument and mounted on the axial port of the 3.6m DOT. This section describes the estimation of the gain, linearity, bias level, and bias stability using on-sky observations with the instrument.

### Bias Stability

We calculated the bias level using the methodology described in section 3. The mean bias level of \(1133.85\pm 2.48\) matches the laboratory-estimated value of \(1134.01\pm 2.62\). Since fluctuations in the bias level can introduce errors in photometric estimates, we acquired and examined several bias frames to check the stability of the bias. Fig. 11 shows the variation of the mean bias level for 30 different nights spread across an observing cycle of three months. The mean bias level fluctuates within a fraction of a count, ensuring bias stability.

Figure 11: Variation within the mean counts of bias frames during different nights. The bias level is stable within one count. The red and black dotted lines show the mean bias level estimated in the lab and on-sky.

### Linearity and gain

We chose a standard field available at the zenith at the time of observations to validate the linearity and gain of the CCD. Multiple images of the Landolt standard field SA110[12] were acquired in the r-band with exposure times ranging from 5 sec to 100 sec. Before using the images for characterization purposes, we pre-processed them with the basic steps of bias subtraction, flat correction, and cosmic-ray removal using ccdproc. Fig. 12 shows the pre-processed CCD image of the field SA110, which contains both bright and faint stars (with magnitudes ranging from 10 to 16 mag in the V-band). We used the faint stars to check the linearity in the lower count region and the bright stars to estimate the saturation level of the CCD. Fig. 13 shows the CCD linearity, with \(R^{2}\) = \(0.9997\) and a non-linearity of 0.30%. The CCD system is seen to saturate at 65535 counts for a gain of 1 e\({}^{-}\)/ADU. The gain value estimated using the method described in section 3.5 is \(1.04\pm 0.13\) e\({}^{-}\)/ADU, which is close to the value estimated in the laboratory. The on-sky gain estimation is also affected by the sky variation, which results in a slightly higher error bar. The mean-variance plot is shown in Fig. 14.

Figure 12: CCD image of the Landolt standard field SA110. The standard stars are between 10 to 16 mag in V-band.

Figure 13: Variation of mean counts with exposure time from on-sky experiments. The black line represents the best linear fit with a regression coefficient of 0.9997.

Figure 14: Photon transfer curve (PTC) of the CCD as obtained from the sky experiments. The gain value is estimated as \(1.04\pm 0.13\) e\({}^{-}\)/ADU.

## 5 Performance of the CCD

We used the CCD for imaging and spectroscopic observations of various science targets after optimizing it in the lab and successfully verifying it on-sky. This section demonstrates the performance of the CCD system in both imaging and spectroscopic modes with observations of GRB and supernova sources. The on-sky performance of ADFOSC on different science targets is discussed in more detail in Omar et al. 2019.[4]
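Before turning to the science observations, the pre-processing and linearity check used above can be sketched as follows. This is a minimal illustration assuming the Astropy-affiliated ccdproc package; the function names, the sigma-clipping threshold, and the use of whole-frame mean counts (the actual linearity test relies on stellar photometry of the SA110 field) are assumptions for the sake of the example.

```python
import numpy as np
from astropy.nddata import CCDData
import ccdproc
from scipy.stats import linregress

def preprocess(filename, master_bias, master_flat):
    """Basic pre-processing applied to both the calibration and science frames:
    bias subtraction, flat correction, and cosmic-ray removal with ccdproc."""
    ccd = CCDData.read(filename, unit="adu")
    ccd = ccdproc.subtract_bias(ccd, master_bias)
    ccd = ccdproc.flat_correct(ccd, master_flat)
    ccd = ccdproc.cosmicray_lacosmic(ccd, sigclip=5.0)
    return ccd

def linearity(frames, exposure_times):
    """Regress mean counts against exposure time; an R^2 close to 1 indicates a
    linear detector response over the sampled count range."""
    mean_counts = [np.mean(f.data) for f in frames]
    fit = linregress(exposure_times, mean_counts)
    return fit.rvalue**2, fit

# Hypothetical usage on the SA110 exposures (5 s to 100 s):
# frames = [preprocess(f, mbias, mflat) for f in sa110_files]
# r2, fit = linearity(frames, [5, 10, 20, 40, 60, 80, 100])
```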
### Imaging

We observed the optical afterglow of GRB 210217A using the ADFOSC in imaging mode. These observations were performed on 18th February 2021 in the r-band, starting at 23:59:18 UT, \(\sim 1.7\) days after the burst. Owing to the faintness and rapid decay rate of GRB afterglows, a series of eight images, each with an exposure time of 300 seconds, was acquired. The images were stacked after pre-processing (as described in the previous section) to improve the signal-to-noise ratio. The optical afterglow is visible in the stacked image, as shown in Fig. 15. The photometric estimate of the afterglow is \(22.32\pm 0.16\) mag (AB).

Figure 15: Field of GRB 210217A afterglow imaged with the ADFOSC in r-band.

### Spectroscopy

The spectrograph provides three sets of gratings: 300 gr/mm, 420 gr/mm, and 600 gr/mm. We acquired lamp spectra using the Mercury Argon (HgAr) calibration lamp to estimate the spectral dispersion. We used the find_peaks routine of scipy[13] to extract the spectral peaks from the obtained spectra. We identified the column number corresponding to each peak and compared it with the wavelength-calibrated lamp spectrum atlas. We defined initial polynomial solutions using these matched wavelength pairs and calculated the best-fit polynomial coefficients to transform between column number and wavelength. The left panel of Fig. 16 shows the lamp spectrum obtained using the 300 gr/mm grating element in pixel scale. The right panel shows the calibrated spectrum in wavelength scale. A spectral dispersion of 0.20 nm/pixel is estimated for this grating. For the 420 gr/mm and 600 gr/mm grating elements, the estimated values of the spectral dispersion are 0.14 nm/pixel and 0.10 nm/pixel, respectively.

Figure 16: The lamp spectrum of the Mercury Argon lamp using the 770R grating. The left panel shows the spectrum in pixel scale. The vertical lines indicate the column numbers for the first and last emission lines identified in the spectrum. The right panel shows the spectrum in wavelength scale.

We acquired the spectrum of the supernova SN 2020jfo using a \(1.5^{{}^{\prime\prime}}\) slit and the 420 gr/mm grating with an exposure time of 900 sec on 13th January 2021 at 23:12:53 UT. Ailawadhi et al.[14] describe the spectral reduction technique. The absorption features in the spectrum obtained by ADFOSC were identified and matched with the spectra taken from other telescopes, as shown in Fig. 17. Spectral features at longer wavelengths are clearly visible, indicating the high sensitivity of the (deep-depleted) CCD detector in the long-wavelength regime.

Figure 17: Spectrum of SN 2020jfo obtained using ADFOSC at \(\sim 254\) days after the discovery of the supernova. We identified different absorption lines in the spectrum and compared these with the spectra taken from other instruments/telescopes.
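A minimal sketch of the wavelength-calibration step described above is given below, assuming a 1-D extracted lamp spectrum and an ordered list of reference HgAr wavelengths. The in-order peak/line matching and the prominence threshold are simplifying assumptions; the actual reduction matches detected peaks against a lamp spectrum atlas.

```python
import numpy as np
from scipy.signal import find_peaks

def wavelength_solution(lamp_spectrum, reference_wavelengths_nm, degree=3, prominence=50):
    """Fit a polynomial mapping CCD column number to wavelength from an arc-lamp
    spectrum: detect emission peaks, pair them with known line wavelengths, and
    fit the pixel-to-wavelength transformation."""
    peaks, _ = find_peaks(lamp_spectrum, prominence=prominence)
    if len(peaks) != len(reference_wavelengths_nm):
        raise ValueError("detected peaks do not match the reference line list")
    coeffs = np.polyfit(peaks, reference_wavelengths_nm, deg=degree)
    return np.poly1d(coeffs)

# Hypothetical usage:
# pix2wav = wavelength_solution(lamp_1d, hgar_lines_nm)
# wavelengths = pix2wav(np.arange(lamp_1d.size))
# print(np.median(np.diff(wavelengths)))   # dispersion, ~0.20 nm/pixel for 300 gr/mm
```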
## 6 Conclusions

We present the methodology employed to characterize the performance of a CCD system developed for integration with a low-dispersion spectrograph instrument, ADFOSC, on the 3.6m DOT. Various experiments were initially performed in the laboratory to characterize and optimize different critical parameters of the CCD system. We also verified the estimated parameters on the sky by mounting the instrument on the 3.6m DOT. We evaluated the bias level during the on-sky tests and examined its stability over several observing nights. We experimentally identified an optimum combination of the bias voltages VOD and VRD to operate the CCD with minimum non-linearity. The readout performance of the CCD is satisfactory; however, some interference patterns in the image contribute to the readout noise. Through different experiments, we tuned and verified the gain parameter corresponding to 1 \(\mathrm{e}^{-}/\mathrm{ADU}\) for detecting faint objects. We calculated the dark current at different temperatures using the bias frames and established an optimum operating temperature for the CCD. The CCD acts as a grade-0 detector with no hot pixels at the optimum temperature. The regression coefficient values and the gain parameter obtained on-sky are consistent with the values obtained in the laboratory. After verifying the satisfactory performance, we observed science targets in both imaging and spectroscopic modes: we successfully carried out imaging of the GRB 210217A[15] field and spectroscopy of the supernova SN 2020jfo using ADFOSC[14].

## Acknowledgments

We thank Greg Burley and Tim Hardy from the NRC Herzberg Astronomy and Astrophysics Research Centre, Canada, for their help in developing the CCD system. We thank the ARIES 3.6 m DOT engineering team and staff members for providing the necessary support during the development, verification, and commissioning work. We would also like to thank Dr. Raya Dastidar for helping us with the spectroscopic data reduction.
2308.04477
A Comparative Study of Code Generation using ChatGPT 3.5 across 10 Programming Languages
Large Language Models (LLMs) are advanced Artificial Intelligence (AI) systems that have undergone extensive training using large datasets in order to understand and produce language that closely resembles that of humans. These models have reached a level of proficiency where they are capable of successfully completing university exams across several disciplines and generating functional code to handle novel problems. This research investigates the coding proficiency of ChatGPT 3.5, a LLM released by OpenAI in November 2022, which has gained significant recognition for its impressive text generating and code creation capabilities. The skill of the model in creating code snippets is evaluated across 10 various programming languages and 4 different software domains. Based on the findings derived from this research, major unexpected behaviors and limitations of the model have been identified. This study aims to identify potential areas for development and examine the ramifications of automated code generation on the evolution of programming languages and on the tech industry.
Alessio Buscemi
2023-08-08T15:02:32Z
http://arxiv.org/abs/2308.04477v1
# A Comparative Study of Code Generation using ChatGPT 3.5 across 10 Programming Languages ###### Abstract Large Language Models (LLMs) are advanced Artificial Intelligence (AI) systems that have undergone extensive training using large datasets in order to understand and produce language that closely resembles that of humans. These models have reached a level of proficiency where they are capable of successfully completing university exams across several disciplines and generating functional code to handle novel problems. This research investigates the coding proficiency of ChatGPT 3.5, a LLM released by OpenAI in November 2022, which has gained significant recognition for its impressive text generating and code creation capabilities. The skill of the model in creating code snippets is evaluated across 10 various programming languages and 4 different software domains. Based on the findings derived from this research, major unexpected behaviors and limitations of the model have been identified. This study aims to identify potential areas for development and examine the ramifications of automated code generation on the evolution of programming languages and on the tech industry. ChatGPT, Large Language Models, Coding, Programming Languages ## I Introduction Natural Language Processing (NLP) is an interdisciplinary field of Artificial Intelligence (AI) that focuses on enabling computers to understand, interpret, and generate human language in a way that is both meaningful and contextually relevant [1]. Large Language Models (LLMs) are powerful NLP systems that have been trained on vast amounts of data to understand and generate human-like language [2, 3, 4]. They are massive neural networks with hundreds of millions to billions of parameters, which enable them to capture intricate patterns and dependencies in language data. These models undergo a pre-training phase where they are exposed to massive amounts of text data from the internet. The capabilities of LLMs are remarkable considering the seemingly straightforward nature of the training methodology. Auto-regressive transformers [5] are pretrained on an extensive corpus of self-supervised data, followed by alignment with human preferences via techniques such as Reinforcement Learning from Human Feedback (RLHF) [6]. One of the critical advantages of LLMs is their ability to perform transfer learning [7]. After pre-training, the model can be fine-tuned on specific tasks, such as language translation, sentiment analysis, question-answering, and more. This fine-tuning process adapts the model to perform well on targeted tasks with relatively smaller amounts of task-specific data. LLM exhibit contextual understanding, meaning they can comprehend the meaning of a word or phrase based on the surrounding context in a sentence or paragraph. This enables them to generate coherent and contextually appropriate responses. These models have a wide range of applications, including chatbots, virtual assistants, content generation, language translation, sentiment analysis etc. [8]. Recently, a number of LLMs have been progressively employed to produce and debug code, which opens the door to a number of new scenarios and prospects in software development [9]. Examples of LLMs architectures include Generative Pretrained Transformer (GPT) [10], BERT [4], LLaMa [11], BARD [12] PaLM [13] and LaMBDA [14], One of the most notable models is ChatGPT 3.5 by OpenAI, which is built based on the GPT-3.5 architecture. 
Since its release in November 2022, this model has garnered significant attention due to its remarkable ability to actively participate in discussions and deliver substantial responses comparable to those generated by humans [15, 16]. ChatGPT has undergone extensive evaluation on several challenges for humans, such as university admission exams across multiple faculties and bar exams, thereby demonstrating its ability to perform at a level comparable to humans [17]. In the context of coding related tasks, ChatGPT has demonstrated unprecedented capabilities on understanding, generating and debugging code [18]. This technology offers a promising prospect for facilitating communication and collaboration between human developers and machine intelligence through the provision of a conversational interface designed to aid with coding tasks. In this study, we conduct a comparative analysis of the performance of ChatGPT 3.5 across various programming languages, with respect to its time performance, the length and the executability of the generated code. This work aims to acquire insights on the strengths and limitations of ChatGPT in various programming languages. Specifically, the focus is on comprehending the fundamental characteristics that contribute to certain languages being more suitable for code generation than others. The main contributions of this work can be summarized as follows: 1. We challenge ChatGPT 3.5's code generation capabilities with respect of 10 programming languages, based on a pool of 40 coding tasks. 2. We present a comparative analysis of the performance of the model across the selected programming languages, to identify strengths and weakness in understanding the assigned tasks and producing the code. 3. We identify some critical limitations of the model and propose possible directions for further investigation on automated code development. The rest of the paper is organized as it follows. Section II introduces some background concepts which are necessary for the comprehension of this study. In Section III and Section IV, we present respectively the methodology adopted in this work and the results obtained with it. In Section V, we discuss future work and the implications of automated code generation for the software industry. Section VI concludes the paper. ## II Background This section aims at providing some background knowledge regarding generative AI and ChatGPT. ### _Generative AI_ Generative AI refers to a class of AI techniques that focus on generating new content. Unlike traditional AI models that perform classification or prediction tasks, generative models aim to generate new data that is similar to the training data they have been exposed to. Generative AI has witnessed significant advancements in recent years, thanks to breakthroughs in deep learning and neural network architectures [19]. One prominent type of generative AI is Generative Adversarial Networks (GANs) [20]. GANs consist of two components, a generator network and a discriminator network. The generator learns to generate synthetic data samples, such as images or text, while the discriminator network tries to distinguish between the real and generated data. Through adversarial training, the generator and discriminator improve iteratively, resulting in increasingly realistic and high-quality generated outputs. Another influential generative AI approach is the family of autoregressive models [21]. 
These models generate data by conditioning the generation of each element on the previously generated elements. They learn the statistical patterns and dependencies in the training data and use that knowledge to generate coherent and contextually relevant outputs. Generative AI has found applications in various domains, including NLP, computer vision, music, image and video generation [22]. In NLP, generative models have been used for text and code generation, translation, and dialogue systems. In this scope, LLMs have the ability to generate human-like text or other data based on the patterns and information they have learned during their training. As a matter of fact, ChatGPT was also employed to write or rephrase some of the paragraphs of Section I and Section II. Although generative AI has demonstrated remarkable capabilities, there are still existing obstacles that need to be addressed. The task of producing outputs that are both realistic and diverse poses a significant challenge, as models have a tendency to provide information that is plausible but lacks variation or deviates from reality. Current research is dedicated to enhancing the resilience, maintainability, and comprehensibility of generative models. This aims to empower users with greater precision in manipulating the generated outputs and comprehending the decision-making mechanisms employed by the model. ### _ChatGPT_ ChatGPT is a popular LLM from OpenAI, which is an extension of the GPT series. The original GPT model was introduced in 2018, followed by the more advanced GPT-2 in 2019, GPT-3 in 2020, GPT-3.5 in 2022 and GPT-4 in 2023. In November 2022, OpenAI released ChatGPT 3.5, a model built on GPT-3.5 which facilitates interactive and dynamic conversations with users. In March 2023, ChatGPT 4, based on GPT-4 was released, reporting superior capabilities compared to its predecessor in most domains. However, at the time of writing, ChatGPT 4 is available only upon paid subscription and, as a consequence, it is not being used by the general public. For this reason, we have chosen to direct our efforts towards ChatGPT 3.5, which currently holds the highest level of popularity in the field of LLM. ChatGPT's architecture is based on the Transformer Neural Network, which has become the standard for various natural language processing tasks [23]. Transformers leverage the concept of self-attention mechanisms to effectively model long-range dependencies and capture contextual information. ChatGPT consists of a stack of transformer encoder layers. Each layer contains two main components: a multi-head self-attention mechanism and a Feedforward Neural Network (FNN). The self-attention mechanism allows the model to weigh the importance of different words within a sentence based on their relevance to the context. It enables ChatGPT to capture the relationships between words and understand the overall meaning of the input text. In the multi-head self-attention mechanism, the model computes multiple attention distributions in parallel, allowing it to attend to different parts of the input sequence simultaneously. This helps the model capture diverse perspectives and dependencies within the text. The FNN in each layer incorporates non-linear transformations to further process the information obtained from the self-attention mechanism. This network is responsible for generating the final representations of the input text, which are then used for generating the output responses. 
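As a generic illustration of the scaled dot-product attention mechanism just described (not ChatGPT's actual implementation, whose weights, dimensions, and head count are not public), a single attention head can be written in a few lines of numpy; the toy dimensions and random projections below are purely illustrative.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head self-attention: every position attends to every other position,
    with weights given by a softmax over scaled query-key dot products."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                     # pairwise relevance scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax over the key axis
    return weights @ V                                  # context-mixed representations

# Toy example: 4 token embeddings of dimension 8 projected to queries, keys, values
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = scaled_dot_product_attention(x @ Wq, x @ Wk, x @ Wv)
print(out.shape)   # (4, 8)
```

A multi-head layer simply runs several such heads in parallel on different learned projections and concatenates their outputs before the feedforward network.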
The model learns to predict the next word in a given text sequence based on the preceding context. This pre-training phase enables ChatGPT to learn the statistical patterns and structures of human language. ChatGPT underwent training using extensive corpora of textual data, encompassing diverse sources such as literary works, scholarly publications, and online content. OpenAI employed a dataset known as the Common Crawl [24], a publicly accessible collection of billions of web pages, making it as one of the most extended text databases currently accessible. It is to be noted that the selection of the dataset can have an influence on the efficacy of the model, as it dictates the extent of linguistic diversity and the range of themes to which the model is exposed. In the domain of programming, ChatGPT has the capability to aid developers by creating code snippets or offering help on inquiries pertaining to programming. ChatGPT can be utilized for a variety of purposes, such as: * ChatGPT can generate code snippets based on the examples it has been trained on. It can help developers by suggesting potential code implementations or providing templates for specific programming tasks. * It can assist developers in understanding programming language syntax, usage, and APIs. It can provide explanations, offer insights into specific language features, and suggest appropriate API methods. * Developers can seek guidance from ChatGPT when encountering errors or bugs in their code. While it cannot replace traditional debugging practices, the model can provide suggestions or point out potential issues that developers can investigate further. * ChatGPT can offer explanations for programming concepts, algorithms, and design patterns. It can help developers understand the underlying principles of software development and guide them in applying those concepts effectively. * It can provide assistance in navigating programming language documentation and other technical resources. It can help developers locate relevant information, find examples, or clarify ambiguities in documentation. ## III Methodology In this section we describe in detail the methodology followed in this work to test the coding capabilities of ChatGPT with respect to different programming languages. ### _Selected Languages_ In order to evaluate the coding skills of ChatGPT, a collection of 10 programming languages was chosen. These programming languages are listed in Table I. The selected languages encompass a diverse range of programming paradigms (such as imperative, object-oriented, and functional), memory management strategies, performance characteristics, and domain-specific capabilities, making them relevant and utilized in contemporary programming practices. ### _Setup_ In this work, we query ChatGPT 3.5 via Openai's API available for Python [35]. Specifically, we employ Python 3.11.2 to send requests to GPT 3.5 and process its output. When communicating with ChatGPT, we define three parameters: 1. **Version of the model** - ChatGPT 3.5 is available in multiple versions. In this work, we use the Turbo version, which is described by OpenAI as the most capable GPT-3.5 model which was trained until September 2021. 2. **Role of the model** - it serves to set up the model behavior for conversation. In this study, we set the role of the model to software developer by passing the string _"You are a software developer"_. 3. **Query of the user** - What is the request from the user to the model. 
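To make these three parameters concrete, a minimal sketch of such a request using the pre-1.0 openai Python client (the interface current at the time of the study) is shown below; the API key placeholder and the example query string are illustrative and not taken from the paper.

```python
import openai

openai.api_key = "YOUR_API_KEY"   # placeholder

def ask_chatgpt(user_query, model="gpt-3.5-turbo", temperature=1.0):
    """One request to ChatGPT 3.5: the model version, the system role that sets
    the model's behavior, and the user's query."""
    response = openai.ChatCompletion.create(
        model=model,
        temperature=temperature,
        messages=[
            {"role": "system", "content": "You are a software developer"},
            {"role": "user", "content": user_query},
        ],
    )
    return response["choices"][0]["message"]["content"]

# Example (an illustrative query, not yet the exact template of Section III-D):
# print(ask_chatgpt("Write Python code to compute Fibonacci."))
```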
In Section III-D we describe the template used to query the model. The remaining settings for training ChatGPT are configured with their default values. In particular, we maintain the _temperature_ at its default value of 1. The temperature serves as a parameter that governs the degree of randomization. The range of values for the model behavior is from 0 to 2. As the temperature approaches 0, the model behavior becomes increasingly deterministic and repetitious. Conversely, as the temperature approaches 2, the model output becomes more random. By leaving the temperature to its default value 1, we want to test the model's typical behavior, without forcing it being more predictable or more creative. ### _Tasks_ We designed a set of 40 coding tasks, which were selected from diverse sources, including university websites that offer exercises for undergraduate students and platforms that provide coding challenges to prepare for technical interviews [36, 37]. The tasks are divided in four categories: 1. **Data Science (DS)** - ChatGPT is asked to generate code for commonly used algorithms in the field of Data Science, specifically focusing on data processing and classification tasks. 2. **Games** - ChatGPT is asked to write 2 versions - one simple and one complex - of well known games. 3. **Security** - ChatGPT is challenged on tasks which aims at either enhancing security or simulating adversarial behavior. 4. **Simple Algorithms (Algos)** - ChatGPT is challenged on producing algorithms involving strings and mathematical operations typically asked in technical interviews for junior positions. Table II illustrates the 40 tasks employed in this work. As evidenced in the table, the queries lack detailed instructions on the expected results. This deliberate decision was made to grant ChatGPT the freedom to interpret freely the tasks, thus assessing the level of determinism in ChatGPT's responses, as well as investigate whether its understanding and interpretation of a given task is influenced by the choice of the programming language. The 40 tasks and the commands used to query ChatGPT on each of them are illustrated in Table II. ### _Implementation_ Algorithm 1 provides an overview of the primary steps involved in our implementation of the experiment for a single task. The algorithm requires in input the task \(T\), the role \(R\) of ChatGPT (see Section III-B) and the programming language \(L\). Initially, the algorithm formulates the query \(Q\) which will be input into ChatGPT (step 1). As explained in Section III-C, ChatGPT manifests an underderministic behavior with respect to the interpretation of the task, which is one of aspects we intend to investigate with this paper. Other than this, however, the way the output code is presented is also highly variable. As an example, the tool often introduces the code with a line indicating the language; the same introductory line can precede also the test or not; implementation and related tests \begin{tabular}{|l|l|l|} \hline **Language** & **Description** & **Paradigms** \\ \hline C [25] & General-purpose language known for efficiency and low-level system programming. & Procedural \\ \hline C++ [26] & Extension of C with additional features, including object-oriented programming support. & Procedural, OOP \\ \hline Go [27] & Modern language with a focus on simplicity, concurrency, and scalability. & Concurrent, Compiled, Imperative \\ \hline JavaScript [28] & Popular language for web development, enabling interactive and dynamic web content. 
& Event-driven, Imperative, Prototype-based \\ \hline Julia [29] & High-level language designed for numerical and scientific computing, with a focus on & Dynamic, Functional, Imperative \\ & performance. & \\ \hline Perl [30] & Versatile language often used for text processing, scripting, and system administration. & Imperative, Procedural, OOP \\ \hline Python [31] & Versatile and widely-used language known for its simplicity and readability. & OOP, Procedural, Functional \\ \hline R [32] & Specialized language for statistical computing and data analysis, providing extensive libraries. & Functional, Object-Oriented \\ \hline Ruby [33] & Dynamic, reflective language with a focus on simplicity and productivity. & OOP, Reflective \\ \hline Smalltalk [34] & Object-oriented language known for its simplicity and pioneering contributions to OOP & Object-Oriented \\ \hline \end{tabular} Table II Tasks \begin{tabular}{|l|l l|} \hline **Category** & **Task name** & **Query** \\ \hline \multirow{4}{*}{ \begin{tabular}{} \end{tabular} } & fibonacci & compute Fibonacci \\ & reverseDigits & reverse the digits of a given integer \\ & palindromeInteger & check whether an integer is a palindrome or not \\ & stringIsDecimal & check if a given string can be interpreted as a decimal number \\ & nextSmallestPallindrome & find next smallest palindrome number following a given number in input \\ & primeFactor & print all prime factors of a given number \\ & swapDigits & calculate the largest number that can be generated by swapping just two digits at most once \\ & countNumbersWithout & count the numbers without the digit 5, from 1 to a given number \\ & powerOf3 & check if a given integer is a power of 3 \\ & primeFactors & compute all prime factors of a given number \\ \hline \multirow{4}{*}{GAMES} & simplePong & implement a simple version of the game Pong \\ & simpleSnake & implement a simple version of the game Snake \\ & simpleTacTacToc & implement a simple version of the game Tic Tac Toc \\ & simplePacMan & implement a simple version of the game Pac Man \\ & simpleChess & implement a simple version of the game Chess \\ & complexPong & implement a complete version of the game Pong \\ & complexSnake & implement a complete version of the game Snake \\ & complexTicTacToc & implement a complete version of the game Tic Tac Toc \\ & complexPacMan & implement a complete version of the game Pac Man \\ \hline \multirow{4}{*}{DS} & randomForest & implement the algorithm for random forest \\ & svm & implement the algorithm for support vector machine \\ & kmeans & implement the algorithm for kmeans \\ & knn & implement the algorithm for kNN \\ & PCA & implement the algorithm for PCA \\ & naiveBayes & implement the algorithm for a Naive Bayes classifier \\ & linearRegression & implement the algorithm for a Linear Regression \\ & logisticRegression & implement the algorithm for a Logistic Regression \\ & adBaosting & implement the algorithm for AdaBoosting \\ & smote & implement the algorithm for SMOTE \\ \hline \multirow{4}{*}{SECURITY} & bruteForce100 & perform a brute force attack on a SSH server, with 1000 different combinations of usernames and passwords \\ & simpleSniffer & implement a simple packet sniffer to capture network traffic and search potential security vulnerabilities \\ \cline{1-1} & passwordStrength & evaluate the strength of passwords based on criteria such as length, complexity, and entropy \\ \cline{1-1} & checksumChecker & calculate and compare checksums of files to detect any unauthorized 
modifications or tampering \\ \cline{1-1} & phishingShoses & send simulated phishing email about a special discount on shoes \\ \cline{1-1} & fileAndit & identify overly permissive settings or misconfigurations in the files contained in a folder \\ \cline{1-1} & passwordStorage & securely store and retrieve passwords using industry-standard hashing and salting technique \\ \cline{1-1} & encrypt & encrypt \\ \cline{1-1} & secureDeletion & secure transfer over a network, ensuring confidentiality and integrity during transmission \\ \cline{1-1} & secureRandomNumbers & secure random numbers for use in applications that require strong entropy \\ \hline \end{tabular} are sometimes interleaved by a comment, in other occasions the comments are completely absent etc. All this variability obstacles the processing of the output and, subsequently, its evaluation. After several tries aiming at obtaining more consistently structured responses, we elaborated the following query: First, write {_L_} code to {_T_} without importing libraries. Second, write exactly one test called Test for the generated code. Do not include comments within the code. Our chosen query uses an assertive language, with a list of short and sequentially ordered commands. This aims at providing clear indications on how we expect the output to be structured. Moreover, within the query, we explicitly ask the model to refrain from importing libraries. This decision stems from our preliminary tests, where we noticed that ChatGPT frequently relies on libraries as convenient shortcuts for solving tasks, especially those related to Machine Learning (ML). Once the query is produced, we attempt to pass it to ChatGPT through a call via the OpenAI API, until the execution is successful (line 2-3). In fact, at the time of writing, the API is subject to an overload error - 503 - which indicates that OpenAPI's servers are experiencing high traffic and are currently unable to process the request. When the output \(O\) is correctly generated, it undergoes postprocessing to achieve two objectives, 1) Code Differentiation, i.e. distinguish code sections from other text, 2) Implementation and Test Identification, i.e. identify the specific sections related to the task implementation and its associated test. The postprocessing involves looking for patterns that identify the syntax of the language \(L\). As a matter of fact, we implemented a distinct postprocessor for each programming language tested in this study. ## IV Performance evaluation In this section, we evaluate the performance of ChatGPT 3.5 in generating code to address the tasks presented in Section III. ### _Test Setup_ We tested ChatGPT 10 times on each of the 40 tasks described in Section III-C for each of the 10 programming languages presented in Section III-A. Hence, we ran a total of 4,000 tests. Requesting the model to execute the same task multiple times offers several advantages, including obtaining statistically significant results and gaining insights into both the potential and the limitations of its under deterministic behavior. In order to reduce the impact of external factors, in particular the variability of configurations in the IDEs, as well as their added overhead to the processing, all code is compiled and executed based on Bash scripts. All tests were run on a machine mounting a Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz, and running Ubuntu 20.04.1 LTS. 
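As a rough illustration of the experimental loop of Section III-D (building the query from the template, retrying on API overload errors, and extracting code from the reply), the sketch below uses the pre-1.0 openai Python client. The retry count, back-off interval, exception handling, and the fenced-block pattern are assumptions; the study's actual postprocessors are language-specific and considerably more elaborate.

```python
import re
import time
import openai

QUERY_TEMPLATE = (
    "First, write {lang} code to {task} without importing libraries. "
    "Second, write exactly one test called Test for the generated code. "
    "Do not include comments within the code."
)

def run_task(lang, task, max_retries=10, wait_s=5):
    """Query ChatGPT for one (language, task) pair, retrying until the API call
    succeeds (the public endpoint intermittently returns overload errors)."""
    messages = [{"role": "system", "content": "You are a software developer"},
                {"role": "user", "content": QUERY_TEMPLATE.format(lang=lang, task=task)}]
    for _ in range(max_retries):
        try:
            resp = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
            return resp["choices"][0]["message"]["content"]
        except openai.error.OpenAIError:
            time.sleep(wait_s)                 # back off and retry
    raise RuntimeError("API call kept failing")

def extract_code_blocks(reply):
    """Very rough post-processing: return fenced code blocks if present,
    otherwise the raw reply text."""
    fence = chr(96) * 3                        # a run of three backticks
    pattern = fence + r"[\w+-]*\n(.*?)" + fence
    blocks = re.findall(pattern, reply, flags=re.DOTALL)
    return blocks if blocks else [reply]
```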
Table III illustrates the versions of the languages employed in this work and of their compilers (in the case of compiled languages). It is to be noted that, in the case of Julia, which is a compiled language, the compilation is just-in-time, i.e. it happens during runtime.

### _Main results_

We label the output generated by the model according to 6 statuses:

1. **No Code - Ethical Reasons:** the model refuses to generate code on the grounds that it would violate the ethical guidelines of OpenAI and/or might even be illegal. This is the case, in particular, for some of the security tasks. Examples of the responses generated by ChatGPT in this regard are reported in Section A.
2. **No Code - Other Reasons:** the model refuses to generate code based on reasons other than ethical/legal, typically its incapability of performing the task. Some examples of these responses are reported in Section A.
3. **Compilation Failure:** the model has produced code, but its compilation fails. This uniquely applies to languages which require compilation, i.e. C, C++ and Go.
4. **Execution - Failure:** the code was generated, and eventually compiled, but its execution fails.
5. **Execution - Undetermined:** the code was generated, and eventually compiled, but we cannot assess its status. This can be due to either A) timeout - we limit the execution to 30 seconds; or B) human input being required - the generated code requires human interaction. In this case, the process is killed, as introducing the human factor would invalidate the reproducibility of the test.
6. **Execution - Success:** the code was generated, eventually compiled, and the execution is successful.

It is to be noted that the category _Compilation - Success_ is absent since, in the case of compiled languages, we can execute only code that has been successfully compiled. Hence, the number of successful compilations is the sum of execution timeouts, failures and successes. The output generated by ChatGPT through this work is available on Github [38]. Figure 1 illustrates the performance of ChatGPT with respect to the presented 6 statuses. Overall, 1833 runs, or 45.8% of the total number, led to executable code. However, this percentage varies greatly according to the tested language. ChatGPT performs the best on Julia, with 81.5% of the generated code being successfully executed, and performs the worst on C++, with only 7.3% of the executions being successful. Specifically, the model seems to perform better on high-level dynamically typed languages (Javascript, Julia, Perl, Python, R, Ruby, Smalltalk) than on lower-level statically typed languages (C, C++, Go). Also, among the high-level languages the model appears on average to be more proficient in languages on which it has been trained more. According to ChatGPT itself, Javascript, Python and Ruby are among the top 10 languages on which it has been trained. On these languages, it achieves an average of 62.8% execution success. By contrast, the model achieves an average of 45.8% execution success on the less popular high-level languages, with the notable exception of Julia. In terms of the attained execution success for each category, it can be observed that the model consistently demonstrates lower performance in the Games category for all the languages. However, its success rates in the remaining categories exhibit variability depending on the language. For example, the model demonstrates the highest performance in the category Security in C, C++, and Smalltalk.
Additionally, it exhibits the highest proficiency in Algos in Go, Javascript, Perl, Python, and Ruby. Finally, it excels in the field of Data Science specifically in Julia and R. In Section V-A we further analyze the results that were attained and their potential repercussions on the evolution of programming languages. ### _Time performance_ Apart from the capability of generating functioning code, we investigate the time performance of ChatGPT, i.e. the time ChatGPT employs to generate code for a given task. Specifically, we compare the response time of ChatGPT on the given tasks solely taking into consideration instances for which code was generated (i.e., excluding instances falling into the two _No Code_ statuses). The task that on average required less computational time is _palindromeInteger_ in C++ - 4.83 s -, while the task that required the highest amount of time is _randomForest_ in C - 140.7 s. The complexity of the different tasks assigned to the model varies greatly and, therefore, the computational time required to solve them. Thus, comparing the performance across difference languages based on the overall mean time or the mean time grouped by category is misleading. For this reason, we evaluate the time performance of the model with respect to a language in relation to the mean time employed for each task across all languages considered in this study. Specifically, let \(T\) be the set of 40 tasks defined in Section III-C and \(L\) be the set of tested languages; let \(G_{\ell,t}\) be the time employed by the model to generate the code for task \(t\) in \(\ell\), and \(\mu_{l,t}\), the mean time spent to generate code for task \(t\) across \(L\). Then, our score \(P_{\ell}\) is defined as: \[P_{\ell}=\mu_{t\in T}\frac{G_{\ell,t}}{\mu_{l\in L,t}} \tag{1}\] Figure 2 illustrates the score \(P_{\ell}\) obtained for each language. The data presented in the figure indicates that ChatGPT, on average, requires around 60% more time to write code in C compared to the average time spent on other programming languages for the same tasks. On the opposite end of the spectrum, it exhibits significantly faster response times when queried on C++, almost half of the average time required by other languages. In order to enhance our comprehension of the time performance, we compute the Coefficient of Variation (CV) of the response time of the model with respect to each task. CV is a statistical metric that quantifies the relative dispersion of a frequency distribution. Specifically, it is calculated as the ratio of the standard deviation to the mean. In this scope, the CV is employed to assess the variability in response time exhibited by ChatGPT across various occurrences of the same task. Figure 3 shows for each language the mean CV of the model's response time across all tasks. ### _Code Length_ We are also interested in comparing the length of the code produced by ChatGPT with respect to the same task across the different languages considered in this study. In particular, we evaluate the length of the code based on two metrics, 1) Lines of Code (LoC), excluding blank lines, 2) Number of Characters (NoC), excluding spaces. To provide a reasonable evaluation of the model's performance relative to the code length, we employ the same methodology described in Section IV-C. In this instance, our evaluation is based on \(LoC_{\ell}\) and \(NoC_{\ell}\), which are computed as \(P_{\ell}\), but with regard to LoC and the NoC. 
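A small sketch of how the score \(P_{\ell}\) of Eq. (1) and the per-language mean CV can be computed from the raw timings is given below. The nested-dictionary layout and the per-repetition lists are assumptions about the data format, not the authors' code; the same helper applies unchanged to the code-length metrics of Section IV-D.

```python
import numpy as np

def language_scores(gen_time):
    """Compute the normalized score P_l of Eq. (1) and the mean coefficient of
    variation of the response time for each language.

    gen_time[lang][task] is the list of generation times (in seconds) recorded
    for that language/task pair, one entry per repetition."""
    langs = sorted(gen_time)
    tasks = sorted(next(iter(gen_time.values())))
    mean_lt = {(l, t): np.mean(gen_time[l][t]) for l in langs for t in tasks}
    task_mean = {t: np.mean([mean_lt[(l, t)] for l in langs]) for t in tasks}  # mean over languages
    P = {l: np.mean([mean_lt[(l, t)] / task_mean[t] for t in tasks]) for l in langs}
    CV = {l: np.mean([np.std(gen_time[l][t]) / np.mean(gen_time[l][t]) for t in tasks])
          for l in langs}
    return P, CV

# Hypothetical usage with two languages and two tasks:
timings = {"Python": {"fibonacci": [5.0, 6.0], "kmeans": [30.0, 34.0]},
           "C":      {"fibonacci": [8.0, 9.0], "kmeans": [60.0, 80.0]}}
P, CV = language_scores(timings)
print(P)   # values above 1 mean slower than the cross-language average
```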
Figure 4 reports \(LoC_{\ell}\) and \(NoC_{\ell}\) obtained for each language. As for the time performance, we are also interested in understanding the degree of variability in the length of the generated code. In this scope, Figure 5 illustrates the mean CV obtained for LoC and NoC for each language.

Figure 1: Status of the output generated by ChatGPT for the 4,000 tests, grouped by programming language and category.

Code length is widely recognized as a poor metric for evaluating the quality of code produced by humans. It does not provide any information on reliability or maintainability, and some languages are notoriously more verbose than others. Nonetheless, despite its limitations, observing the length of the code generated by ChatGPT can provide some insights into the model's ability to produce concise and efficient solutions. If ChatGPT consistently generates verbose or unnecessarily lengthy code, it might indicate that the model is struggling to understand the problem or is not properly optimizing the solutions it generates. Comparing Figures 4 and 5 with Figures 2 and 3, we can observe that:

1. The length of the generated code does not seem to be correlated with the response time of the model across the different languages.
2. There is a higher variability in the length of the code than in the response time. This suggests that the time spent understanding the task, as well as evaluating possible strategies for its solution, is less onerous than designing and generating the code.

### _ChatGPT 3.5 Limitations_

As shown in Section IV-B, ChatGPT 3.5 produced bugged code in more than half of the tests carried out in this study. Furthermore, we observed a variety of inconsistent behaviors, which we report as follows:

1. The understanding of the requirements of a task appears to be partly dependent on the choice of language. For instance, when challenged on the task _simplePong_ in Python, the model produces, 10 times out of 10, code that requires interaction between two users via the command line, i.e. the code allows for a match between two human users. By contrast, in all 6 successful executions of _simplePong_ in R, the code allows for a match between a human user and an AI-controlled paddle.
2. In some instances, the model disregards part of the provided instructions. As an example, in some outputs comments are found inside the generated code, despite the specific instruction to refrain from doing so.
3. A task is deemed unethical in certain instances, resulting in the absence of generated code, while in other instances code is produced. For instance, the task _simpleSniffer_ is considered unethical 4 times in Go, while in the other 6 instances the model agrees to produce code. This shows that the non-determinism of the model does not only relate to the production of code but also to the understanding and assessment of the task.
4. The evaluation of the ethical ramifications of a task appears to be contingent upon the choice of language. For instance, the task _bruteForce100_ is consistently seen as unethical across all trials when executed in Javascript. Conversely, it is rated ethically acceptable in 9 trials out of 10 when implemented in Go, and 8 times out of 10 when executed in Julia. It is worth mentioning that, in one of the two Julia cases in which the model considers the task unethical, code is still provided on the basis of educational purposes, accompanied by a warning cautioning against its use for malevolent activities (see Section A).
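A trivial helper of the kind one might use to quantify the run-to-run consistency discussed above is sketched below. The status strings follow the labels of Section IV-B, but the example tallies are only shaped like the simpleSniffer-in-Go observation: the statuses assigned to the six code-producing runs are invented for illustration.

```python
from collections import Counter

def consistency(statuses):
    """Given the outcome statuses of repeated runs of one (language, task) pair,
    return the most common status and the fraction of runs sharing it, as a
    simple measure of how (non-)deterministic the model's behavior is."""
    status, n = Counter(statuses).most_common(1)[0]
    return status, n / len(statuses)

# Illustrative input: 4 refusals on ethical grounds out of 10 runs; the remaining
# runs would carry whatever compilation/execution status their code obtained.
runs = ["No Code - Ethical Reasons"] * 4 + ["Execution - Success"] * 6
print(consistency(runs))   # ('Execution - Success', 0.6)
```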
Figure 4: \(LoC_{\ell}\) and \(NoC_{\ell}\) of each language. Figure 5: Mean Coefficient of Variation of Lines of Code and Number of Codes produce by ChatGPT across all tasks, divided by language. Figure 3: Mean Coefficient of Variation of ChatGPT’s response time across all tasks, divided by language. Figure 2: \(P_{\ell}\) of each language. ## V Discussion In this section, we discuss the implications of LLMs's automated code generation for programming languages and the tech industry, and we present possible directions for future research. ### _Future of Programming Languages_ The proficiency of Large Language Models (LLMs) like ChatGPT in coding across different programming languages can potentially influence companies' decision-making processes when choosing a programming language for their projects. A language with better LLM support could mean faster and more accurate code generation for various tasks. It may reduce the time developers spend on repetitive or boilerplate code, allowing them to focus more on critical logic and features. Additionally, certain projects may require specific programming languages due to their ecosystem, libraries, or platform support. If an LLM shows proficiency in those languages, it can be advantageous for businesses working on such projects, as it may lead to more efficient code generation and better integration with existing codebases. In the medium to long term, we can expect that the popularity of programming languages will be tightly correlated with the proficiency of LLM in them. In other words, LLM will likely determine which language will be used in the future and which will be gradually abandoned. The findings provided in this study suggest that the language competency of ChatGPT is affected by two primary factors: 1) the level of abstraction of the language, and 2) the popularity of the language, which enables the model to be trained on a more extensive corpus. As discussed in Section IV-B, it appears that ChatGPT exhibits superior performance when applied to languages of a higher level of abstraction. This result suggests that the utilization of explicit and expressive structures might effectively mitigate the complexity faced by the model, thereby minimizing the likelihood of errors. By contrast, the fact that highly diverse corpora (in terms of size and content) are used to train ChatGPT presents a notable obstacle when attempting to objectively comparing the performance of the model on different languages. As a matter of fact, the inclusion of a different training set for each language introduces heterogeneity that hinders the assessment of a language's intrinsic suitability for the code generating capabilities of the model. In this regard, it is of the utmost importance to design a benchmark that can be used to fairly compare ChatGPT performance on different languages without the variability introduced by the training set. For instance, the model could be trained on sets of similar size and content for all the languages, i.e. corpora composed of code snippets addressing the same tasks. The development of such corpora would require a significant human undertaking, but it would yield considerable advantages. In particular, it would allow to determine unequivocally the characteristics that make a programming language more suitable for ChatGPT code generation. 
This will enable enterprises to make more informed decisions when selecting programming languages for their projects, while also potentially revitalizing programming languages which are currently less popular. Moreover, after identifying the inherent attributes that make certain languages more suitable for automatic code generation, it is possible that novel programming languages will be developed explicitly with the aim of optimizing the capabilities of ChatGPT and other LLMs. ### _Implications for Business and Employment_ The pricing of ChatGPT API is based on _tokens_. A token roughly corresponds to a syllable in a word. According to OpenAI, a 75 words text in English typically corresponds to circa 100 tokens. The pricing on tokens is applied on both the query written by the user and the output provided by the model. In the base version of ChatGPT the combination of text in input and output is limited to 4,096 tokens, e.g. if the user writes a 1,000 tokens query, the output of the model will be cut at 3,096 tokens. ChatGPT 3.5 Turbo costs $0.0015 for 1,000 tokens of input and $0.002 for 1,000 tokens of output. Based on this pricing, the total amount spent for our tests was circa $6, which allowed to produce circa 22,500 LoC of well formed executable code. As discussed in Section IV-D, while LoC can provide a rough estimate of the size of a program, it is known for not being a reliable measure of productivity. Nonetheless, based on this raw data we can fairly assess that, when challenged on simple tasks, ChatGPT can largely outcompete any developer on cost and time. Assuming that the coding capabilities of ChatGPT and similar AI-powered conversational agents will increase in the coming years, we can expect that the role of software developers will undergo significant changes. These models can serve as valuable tools for developers, offering assistance in tasks such as code completion, bug identification, and code refactoring. This can save developers time and enhance productivity, especially when dealing with repetitive or routine coding tasks. Additionally, these models can facilitate knowledge sharing and provide a learning resource for developers. They can engage in interactive conversations, ask questions, and receive explanations or guidance on specific coding concepts or best practices. This opens up opportunities for self-paced learning and continuous skill development within the developer community. While generative AI offers significant advancements and cost-saving opportunities for the tech industry, it also has implications for employment dynamics. The adoption of generative AI technologies may result in job displacement and require workforce transformation. As automation replaces certain routine and repetitive tasks, some job roles may become obsolete or require a shift in skillsets. However, generative AI can also create new employment opportunities. The development, implementation, and maintenance of generative AI systems require skilled professionals, such as data scientists, AI researchers, and engineers. Furthermore, the integration of generative AI can lead to the emergence of new job roles that involve managing and optimizing AI systems, ensuring their ethical and responsible use, and leveraging the insights generated by these systems to drive innovation and decision-making. 
To mitigate the potential negative effects on employment, it is crucial for the tech industry to invest in reskilling and upskilling programs to empower workers to adapt to the changing job landscape. By providing training opportunities and facilitating the transition to new roles that leverage human creativity, problem-solving, and critical thinking, companies can foster a workforce that can effectively collaborate with generative AI systems and harness their potential. The integration of LLM's automated code generation in the tech industry also raises ethical considerations. As generative AI systems become more advanced, there is a need to ensure responsible and ethical use to avoid potential biases, misinformation, or malicious applications. As demonstrated by the results outlined in Section IV-E, it is apparent that ChatGPT currently lacks a robust and cohesive framework for addressing the ethical implications associated with the tasks it is asked to execute. For these reasons, tech companies must prioritize transparency, fairness, and accountability in the development and deployment of generative AI systems. ### _Future work_ Based on the results obtained in this work, as well as the limitations observed, we propose the following directions for future investigation: 1. **Provide a complete multi-language testing framework for ChatGPT coding performance evaluation** - as discussed in Section IV-B, the main limitation of this work is the absence of an evaluation of the quality of the code as well as the semantic coherence with the objective set in the query. In order to assess the quality of the code across different languages, a standardised and comprehensive framework for code testing across multiple languages has to be developed. Frameworks addressing this objective already exist and have been used to evaluate the performance of ChatGPT and other generative LLMs [39, 40, 41, 42, 43, 44]. However, these frameworks either focus on a single or few languages (HUMANEVAL [40], MBPP [39], Spider [41], HUMANEVAL-X [42], CodeContests [43]), or provide a very limited testing coverage (MultiPL-E [44]), as evidenced by Liu et al. [45]. 2. **Evaluate code debugging** - Other than evaluating the code generation capabilities of a LLM, it is noteworthy to evaluate how ChatGPT performs with respect to debugging tasks across different programming languages. 3. **Comparative analysis of state-of-the-art code generation-capable LLMs** - in this paper we attempted to evaluate the potential of code generation by LLM solely based on the performance of ChatGPT 3.5, which, at the moment of writing is regarded as the state-of-the-art LLM for such task. Nonetheless, a number of companies and open source initiatives are proposing models with analogous capabilities, such as BERT [4], LLaMa [11], BARD [12] PaLM [13] and LaMBDA [14]. In the future, we plan to extend our performance evaluation to these models as well. ## VI Conclusion In this work, we have challenged ChatGPT 3.5 to generate code in C, C++, Go, Javascript, Julia, Perl, Python, R, Ruby and Smalltalk to solve 40 tasks across 4 different domains. We can summarize the key takeaways of our investigation as follows: 1. ChatGPT 3.5 exhibits the capability to produce code that addresses a broad spectrum of tasks. The model exhibits non-deterministic behavior, enabling it to generate different code solutions for a given problem. 
Nevertheless, this behavior often leads to inconsistent performance, since the model generates syntactically correct code for a given task in some instances, while producing bugged code or no code at all in other instances. Additionally, the comprehension of the requirements of the task appears to be influenced by the choice of the programming language. 2. The performance of the model largely varies based on the chosen language, in terms of syntactical correctness of the code, its length and the time employed to generate it. In particular, the model seems more capable of solving coding tasks in high-level languages rather then low-level ones. Also, the model typically performs better on languages for which bigger datasets are available and it has been trained on. The heterogeneity in the training sets employed for ChatGPT poses a challenge in identifying the inherent characteristics that make certain languages more suited for the model's code generation capabilities. 3. In spite of the modest level of complexity of the proposed challenges, ChatGPT has predominantly generated code that is non-executable. Moreover, the model exhibits inconsistent behavior when assessing the ethical ramifications of executing specific activities. This inconsistency is evident in situations when a task is deemed unethical, resulting in the absence of generated code, but in other instances code is produced. It is noteworthy that the choice of the programming language appears to have an impact on the ethical assessment of the tasks. Despite the existing constraints, we conclude that ChatGPT and other LLMs with code generation capabilities are poised to have a disrupting impact on the software industry, as they have the potential to significantly enhance productivity and reduce production cycles. Furthermore, businesses will be likely influenced in their decision-making process about the selection of one programming language over another based on the proficiency of LLMs on those. This will eventually determine which programming languages are adopted on a large scale while others are progressively abandoned. In this scope, there is a potential for the emergence of novel programming languages that are specifically tailored and optimized for LLMs in the foreseeable future. In order to enhance the decision-making capabilities of developers and enterprises, it is imperative to provide a standardized framework that enables an impartial evaluation of the coding performance of ChatGPT and other LLM in various languages. All these changes will undoubtedly result in a shift in employment dynamics, as it will need a significant demand for reskilling and upskilling. In this context, it is of utmost importance for tech companies to prioritize principles such as transparency, fairness, and responsibility when designing and deploying code generation-capable LLMs. Also, enterprises should actively support and engage in discussions around AI regulation and policy to ensure responsible practices are encouraged industry-wide.
2306.12961
Quantum decay of scalar and vector boson stars and oscillons into gravitons
We point out that a soliton such as an oscillon or boson star inevitably decays into gravitons through gravitational interactions. These decay processes exist even if there are no apparent self-interactions of the constituent field, scalar or vector, since they are induced by gravitational interactions. Hence, our results provide a strict upper limit on the lifetime of oscillons and boson stars including the dilute axion star. We also calculate the spectrum of the graviton background from decay of solitons.
Kazunori Nakayama, Fuminobu Takahashi, Masaki Yamada
2023-06-22T15:20:05Z
http://arxiv.org/abs/2306.12961v1
# Quantum decay of scalar and vector boson stars and oscillons into gravitons ###### Abstract We point out that a soliton such as an oscillon or boson star inevitably decays into gravitons through gravitational interactions. These decay processes exist even if there are no apparent self-interactions of the constituent field, scalar or vector, since they are induced by gravitational interactions. Hence, our results provide a strict upper limit on the lifetime of oscillons and boson stars including the dilute axion star. We also calculate the spectrum of the graviton background from decay of solitons. ## 1 Introduction Bosons are ubiquitous in physics beyond the Standard Model. For example, the Peccei-Quinn (PQ) mechanism predicts a pseudo-Nambu-Goldstone boson, the QCD axion [1; 2]. There may also exist a large number of axion-like particles in the low-energy effective field theory of string theory. We generally call them axions in both cases. Even if their interaction rates with other particles are suppressed by a large decay constant, they can be produced via coherent oscillations in the early Universe [3; 4; 5] and are a good candidate for dark matter (DM). On the other hand, massive vector fields in the dark sector can also be considered as a candidate for DM (see Ref. [6] and references therein). However, the production mechanism of vector DM is non-trivial, and includes the misalignment mechanism [7; 8] (see, however, Refs. [9; 10] for its difficulty and Ref. [11] for a viable scenario), emission from cosmic strings [12; 13], phase transition [14], scalar-field dependent gauge kinetic coupling [15; 16], preheating and decay of axion-like particles [17; 18; 19; 20; 21; 22], and gravitational production during and after inflation [23; 24; 25; 26; 27; 28; 29]. The cosmological implications of those bosons are therefore important for understanding a consistent scenario of particle physics throughout the history of the Universe. In cosmology, light bosons are usually produced with extremely large occupancy numbers, which imply the formation of a Bose-Einstein condensate [30]. In fact, gravitationally bound clumps, called boson stars or oscillatons [31; 32; 33; 34; 35; 36; 37], are formed via the growth of perturbations during the matter dominated era [38; 39; 40; 41; 42]. In the context of axion cosmology, gravitationally bound clumps of axions are sometimes called dilute axion stars or axitons [43; 44; 45; 37; 46]. In this paper, we simply call them boson stars. The formation of such objects is supported by detailed numerical simulations [41; 42; 47], where it was found that localized objects of scalar fields are formed at the center of the DM halo, both for the QCD axion and for the ultralight (string) axion. The configuration and properties of the boson star can be calculated by the approach used in Refs. [48; 49] (see also Ref. [50]). The boson star is not completely stable but has a long lifetime in vacuum on cosmological time scales [44; 45; 50; 51]. The evaporation and growth of boson stars in the DM halo were investigated in Ref. [52]. Those localized objects may affect gravitational lensing images [53; 54]. Those objects are therefore relevant in cosmology, especially when they are a dominant component of DM in the Universe.
On the other hand, if the attractive self-interaction of the scalar field is stronger than the gravitational interaction, the localized object is called an oscillon or I-ball (or a dense axion star when the scalar is an axion) [55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71]. A perturbative expansion in terms of the spatial gradient or binding energy is often used to determine the configuration of oscillons [72], whereas some systematic approaches to calculate the configuration in the non-relativistic effective field theory are also proposed in Refs. [73; 74; 75; 51; 76]. Those objects are realized by the QCD axion after the QCD phase transition in the post-inflationary PQ symmetry breaking scenario [77] and in a large misalignment scenario [78]. The inflaton may also form oscillons after inflation, depending on the model (see, e.g., Refs. [67; 79]). However, the lifetime of oscillons tends to be much shorter than the cosmological time scale. It was noted that the amplitude of the primordial gravitational waves may be enhanced if the oscillons dominate the Universe before they decay [80; 81]. Therefore, determining the lifetime of those localized objects has important implications for cosmological observations. Recently, the formation of similar extended objects from vector bosons, which we call the vector boson star or the vector oscillon depending on the attractive force that stabilizes the configuration, has been discussed [82; 83; 84; 85]. Detailed numerical simulations [86; 87; 88] support the formation of vector boson stars. They may also have many phenomenological implications, and it is important to understand the stability of these objects. The classical/quantum stability of the boson star or the oscillon has been extensively studied in the literature. In Refs. [50; 51], a diagrammatic technique to calculate the _classical_ decay rate of oscillons and axion stars was developed. The resulting decay rate is consistent with that of numerical simulations of oscillons [51; 68]. See also Refs. [89; 90; 91; 92; 93; 94] for the analysis of the oscillon decay, which agrees with Refs. [50; 51]; a similar formula was derived there from different perspectives. Refs. [95; 96; 97] considered the classical decay rate of oscillatons. In particular, the dilute axion star is expected to have a very long lifetime. Refs. [94; 98] discussed the effect of gravity on the classical decay rate of axion stars. On the other hand, the _quantum_ decay of oscillons and axion stars was considered in Refs. [44; 99; 100; 101] with and without parametric resonance. Those works particularly considered the decay into generic scalar fields or photons. The quantum decay process corresponds to the imaginary part of the loop diagrams in the above diagrammatic technique. In this paper, we consider the _quantum_ decay of solitons, which collectively represent scalar/vector boson stars and oscillons, into gravitons via gravitational interactions. Although a soliton is spherically symmetric, it can produce gravitons (or gravitational waves) quantum mechanically. This is similar to the graviton emission from spherically symmetric black holes via Hawking radiation, which is a semi-classical process and is classically forbidden. The quantum decay into gravitons is inevitable even if the boson has only a quadratic potential.
A free real scalar field is the simplest model of quantum field theory, but it becomes non-trivial once gravitational interactions are included: the field can be localized by its own gravitational potential and decays into gravitons. Gravitational particle production due to an oscillating scalar field in a cosmological setup has been discussed in Ref. [102], which is also interpreted as the gravitational annihilation of the scalar field into any field. Later this and related subjects have been studied in many papers in the context of dark matter production and reheating [102; 103; 104; 105; 106; 107], graviton production [102; 103; 111; 112], and gravitational annihilation of Standard Model particles in the thermal bath [113; 114; 115; 116].1 With this knowledge, it is not surprising that the oscillating scalar field inside the soliton leads to the production of any particle through gravitational processes like \(\phi\phi\to XX\), where \(\phi\) is the scalar or vector that constitutes the soliton and \(X\) denotes any light field, since all fields are inevitably coupled to \(\phi\) through gravity. In practice, the production of fermions and massless vector bosons is suppressed, and the dominant process is production of a graviton pair or a Higgs boson (or longitudinal gauge boson) pair if \(\phi\) is heavier than them. In particular, the graviton production process is always present no matter how light \(\phi\) is. Thus this process provides a strict upper limit on the lifetime of solitons. We stress that this process is inevitable even if there is no explicit scalar self-interaction term in the action. We estimate the graviton production rate by deriving the graviton equation of motion in the scalar/vector soliton background and finding that it takes the form of the Mathieu equation. The source of the oscillating term is identified with the oscillating gravitational potential in the presence of the soliton. Footnote 1: See also Refs. [23; 24; 25; 27; 105; 117; 118; 119; 120] for gravitational production during inflation or the transition from the inflationary epoch. To the best of our knowledge, the only previous work that explicitly pointed out the quantum graviton production from the scalar oscillon or boson star is Ref. [97]. There the graviton production rate is calculated by interpreting it as the annihilation process \(\phi\phi\to hh\), where \(h\) denotes the graviton. In this paper we estimate the graviton production rate by using the time-dependent gravitational potential with the classical soliton configuration. Our approach is useful for more general objects such as Q-balls [121], vector boson stars or vector oscillons, since in these multi-field solitons it is sometimes nontrivial whether such a perturbative annihilation picture is applicable or not [97]. In particular, we will see an example in which even the graviton production is absent, and our formulation makes this clear. This paper is organized as follows. In Sec. 2 we briefly review the properties of scalar solitons, i.e. oscillons/boson stars for a scalar field, and estimate the time-dependence of the gravitational potential. In Sec. 3 we discuss the case of vector solitons. In Sec. 4 we estimate the gravitational _quantum_ decay rate of a scalar/vector soliton, using the information on the gravitational potential derived in the preceding sections. In Sec. 5 we point out several phenomenological implications. Sec. 6 is devoted to conclusions and discussion.
## 2 Scalar solitons We mainly focus on a spherically symmetric localized configuration of a free real scalar or vector field with gravity, which is sometimes called a boson star. In the context of the axion, this corresponds to a dilute axion star. Our qualitative result for the quantum decay rate can be applied to almost any localized scalar-field configuration, including oscillons and axion stars. For concreteness, we mainly consider boson stars in this paper and briefly comment on the case of oscillons. We consider a scalar field in this section and a vector field in Sec. 3. ### Setup We consider a system with the action \[S=\int\sqrt{-g}\,d^{4}x\left[-\frac{1}{16\pi G}R+\frac{1}{2}g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi-V(\phi)\right]\,, \tag{1}\] where \(R\) is the Ricci scalar, \(g\) is the determinant of the metric, \(G\) (\(=(8\pi M_{\rm Pl}^{2})^{-1}\)) is the gravitational constant, and \(M_{\rm Pl}\) is the reduced Planck mass. The scalar potential is given by \[V(\phi) = \frac{1}{2}m^{2}\phi^{2}-\frac{1}{24}\lambda\phi^{4}+\ldots\,, \tag{2}\] where \(m\) is the scalar mass and \(\lambda\) is a quartic coupling. We consider the case with a sufficiently small \(\lambda\). For the case of the axion with a sine-Gordon potential, one can identify \(\lambda=m^{2}/f_{a}^{2}\) with \(f_{a}\) being the axion decay constant, neglecting higher-dimensional terms. The gravitational interaction is a long-range force, whereas the self-interaction is a point interaction. The former can dominate in determining the localized configuration if the density at the center of the boson star is sufficiently low. The extent to which this approximation is valid can be understood by estimating the gradient energy, the gravitational potential energy, and the potential energy of the self-interaction, \[\delta_{x} \sim \frac{(\nabla\phi)^{2}}{m^{2}\phi^{2}}\sim\frac{1}{(mR)^{2}}\,, \tag{3}\] \[\delta_{g} \sim \frac{1}{M_{\rm Pl}^{2}}\int d^{3}x\frac{m^{2}\phi^{2}}{r}\sim\frac{m^{2}\phi^{2}}{M_{\rm Pl}^{2}}R^{2}\,,\] (4) \[\delta_{\phi} \sim \frac{\lambda\phi^{4}}{m^{2}\phi^{2}}\,, \tag{5}\] where we have normalized the energy densities by \(m^{2}\phi^{2}\), \(R\) represents the typical size of the boson star and \(\phi\) represents the boson field value at the center of the boson star.2 The self-interaction is negligible if \(\delta_{g}\gg\delta_{\phi}\). In this case, the size of the boson star is determined by \(\delta_{x}\sim\delta_{g}\) and we obtain \(R\sim(M_{\rm Pl}/\phi)^{1/2}/m\). Using this relation, the consistency condition for this approximation, \(\delta_{g}\gg\delta_{\phi}\), is written as Footnote 2: This is an abuse of notation, but this \(R\) should not be confused with the Ricci scalar; the Ricci scalar only appears in the Hilbert-Einstein action in this paper. \[\lambda\ll\frac{m^{4}R^{2}}{M_{\rm Pl}^{2}}\sim\frac{m^{2}}{\phi M_{\rm Pl}}\,. \tag{6}\] For the case of the axion, this condition implies \[f_{a}\gg\sqrt{\phi M_{\rm Pl}}\,. \tag{7}\] We assume that those conditions are met when we discuss boson stars. Since the gravitational interaction is important, we write the metric around the boson star as \[\mathrm{d}s^{2}=e^{-2\Psi(t,r)}\,\mathrm{d}t^{2}-e^{2\Phi(t,r)}\left[\mathrm{d}r^{2}+r^{2}\left(\mathrm{d}\theta^{2}+\sin^{2}\theta\,\mathrm{d}\varphi^{2}\right)\right]\,.
\tag{8}\] Here, we adopt the isotropic coordinates, which should not be confused with the spherical coordinates [122].3 The energy-momentum tensor is written as Footnote 3: Note that the spherical coordinates has been used in Refs. [50; 94]. The reason why we are using the isotropic coordinates is that it is directly compared with the Cartesian coordinates, which will be used in the analysis of particle production in Sec. 4. \[T_{\mu\nu} =\partial_{\mu}\phi\partial_{\nu}\phi-\frac{1}{2}g_{\mu\nu}g^{ \rho\sigma}\partial_{\rho}\phi\partial_{\sigma}\phi+g_{\mu\nu}V(\phi)\,, \tag{9}\] where \(V(\phi)\simeq\frac{1}{2}m^{2}\phi^{2}\). ### Configuration of scalar boson star The Einstein equations read \[G^{0}{}_{0} =-e^{-2\Phi}\left[2\frac{\partial^{2}\Phi}{\partial r^{2}}+\frac {1}{r}\left(4+r\frac{\partial\Phi}{\partial r}\right)\frac{\partial\Phi}{ \partial r}\right]+3e^{2\Psi}\dot{\Phi}^{2}=\frac{\rho}{M_{\rm Pl}^{2}}, \tag{10}\] \[G^{0}{}_{r} =-2e^{2\Psi}\left(\frac{\partial\Psi}{\partial r}\dot{\Phi}+\frac {\partial\dot{\Phi}}{\partial r}\right)=\frac{e^{2\Psi}}{M_{\rm Pl}^{2}} \frac{\partial\phi}{\partial t}\frac{\partial\phi}{\partial r},\] (11) \[G^{r}{}_{r} =e^{-2\Phi}\left[\frac{2}{r}\frac{\partial\Psi}{\partial r}-\frac {2}{r}\frac{\partial\Phi}{\partial r}+2\frac{\partial\Psi}{\partial r}\frac{ \partial\Phi}{\partial r}-\left(\frac{\partial\Phi}{\partial r}\right)^{2} \right]+e^{2\Psi}\left(3\dot{\Phi}^{2}+2\dot{\Phi}\dot{\Psi}+2\ddot{\Phi} \right)=-\frac{p_{r}}{M_{\rm Pl}^{2}}\,. \tag{12}\] The energy density and pressure are defined as \[\rho\equiv T^{0}{}_{0}=\frac{e^{2\Psi}}{2}\dot{\phi}^{2}+\frac{e ^{-2\Phi}}{2}\left(\frac{\partial\phi}{\partial r}\right)^{2}+\frac{1}{2}m^{2} \phi^{2}\,, \tag{13}\] \[p_{r}\equiv-T^{r}{}_{r}=\frac{e^{2\Psi}}{2}\dot{\phi}^{2}+\frac {e^{-2\Phi}}{2}\left(\frac{\partial\phi}{\partial r}\right)^{2}-\frac{1}{2}m^{2 }\phi^{2}\,, \tag{14}\] where we have used \(V(\phi)\simeq\frac{1}{2}m^{2}\phi^{2}\). We are interested in the weak gravity limit, where we can treat the gravity as perturbations. We thus write \(e^{-2\Psi}\simeq 1-2\Psi(t,r)\) and \(e^{2\Phi}\simeq 1+2\Phi(t,r)\) and treat \(\Psi\) and \(\Phi\) as small parameters. From the Einstein equation, they obey \[\Psi(t,r) = G\int_{r}^{\infty}\frac{\mathrm{d}r^{\prime}}{r^{\prime 2}}\left(M (r^{\prime})+4\pi r^{\prime 3}p_{r}(t,r^{\prime})\right)+\int_{r}^{\infty} \mathrm{d}r^{\prime}\,r^{\prime}\ddot{\Phi}\,, \tag{15}\] \[\Phi(t,r) = G\int_{r}^{\infty}\frac{\mathrm{d}r^{\prime}}{r^{\prime 2}}M(r^{ \prime})\,, \tag{16}\] where \[M(r)\equiv\int_{0}^{r}dr^{\prime}\,4\pi r^{\prime 2}\rho(r^{ \prime})\,, \tag{17}\] is the energy enclosed within the radius \(r\). As we shall see later, the time dependence of \(M(r)\) is negligible for our purpose. Also, while it slowly changes with time via the decay of the scalar field as we will see in Sec. 5, its time scale is much longer. Thus we can neglect \(\dot{\Phi}\) to derive the boson star configuration. The equation of motion for a free scalar field \(\phi\) is given by \[e^{2\Psi}\left[\ddot{\phi}+(\dot{\Psi}+3\dot{\Phi})\dot{\phi} \right]-e^{-2\Phi}\left[\frac{\partial^{2}\phi}{\partial r^{2}}+\frac{2}{r} \frac{\partial\phi}{\partial r}-\left(\frac{\partial\Psi}{\partial r}-\frac{ \partial\Phi}{\partial r}\right)\frac{\partial\phi}{\partial r}\right]+\frac{ \partial V}{\partial\phi}=0\,. \tag{18}\] The scalar field can have a localized configuration thanks to the gravitational attractive interaction. 
This implies that the spatial gradient energy of the scalar field is proportional to the gravitational potential. We thus neglect the cross term between the spatial gradient and the gravitational potential in the weak gravity limit. Moreover, as we will see shortly, the time-dependent parts of \(\Psi\) and \(\Phi\) are next-to-leading terms and we can neglect their time derivatives in the equation of motion. We then obtain the simplified equation of motion, \[(1+2\Psi_{0})\ddot{\phi}-\left(\frac{\partial^{2}\phi}{\partial r ^{2}}+\frac{2}{r}\frac{\partial\phi}{\partial r}\right)+m^{2}\phi=0\,, \tag{19}\] where we denote the leading-order (time-independent) part of \(\Psi\) as \(\Psi_{0}\). Now, let us take an ansatz \(\phi(t,r)=\phi_{r}(r)\cos(\omega t)\) to solve the equation of motion. The equation of motion for \(\phi_{r}(r)\) is then given by \[\frac{\partial^{2}\phi_{r}}{\partial r^{2}}+\frac{2}{r}\frac{ \partial\phi_{r}}{\partial r}-\left[m^{2}-\omega^{2}(1+2\Psi_{0}(r))\right] \phi_{r}=0\,, \tag{20}\] where \(\Psi_{0}(r)\) is determined from \[\frac{1}{4\pi G}\left(\frac{\partial^{2}\Psi_{0}}{\partial r^{2} }+\frac{2}{r}\frac{\partial\Psi_{0}}{\partial r}\right)=-\frac{1}{2}m^{2}\phi_ {r}^{2}(r)\,. \tag{21}\] To obtain the configuration \(\phi_{r}(r)\) through numerical calculations, a convenient approach is to rescale the variables into dimensionless units. The set of equations (20) and (21) can be rewritten as \[\frac{\partial^{2}\varphi}{\partial z^{2}}+\frac{2}{z}\frac{\partial \varphi}{\partial z}+\left[1-g(z)\right]\varphi=0, \tag{22}\] \[\frac{\partial^{2}g}{\partial z^{2}}+\frac{2}{z}\frac{\partial g}{ \partial z}=\varphi^{2}(z), \tag{23}\] where we define \[R\equiv\frac{1}{\sqrt{\omega^{2}-m^{2}+2\omega^{2}\Psi_{0}(0)} }\,, \tag{24}\] \[r=Rz\,,\] (25) \[\phi_{r}(r)=\frac{\varphi(z)}{\sqrt{4\pi G}\,m\omega R^{2}}\,,\] (26) \[\Psi_{0}(r)=\Psi_{0}(0)-\frac{g(z)}{2\omega^{2}R^{2}}\,. \tag{27}\] Here we note that \(\Psi_{0}=0\) at an asymptotic infinity, which implies \(g_{\infty}\equiv g(\infty)=2\omega^{2}R^{2}\Psi_{0}(0)\). We numerically solve the above equations from \(z=z_{0}\ll 1\) to \(z=z_{1}\gg 1\) with the boundary condition of \(\varphi(z_{0})=\varphi_{*}\), \(\varphi^{\prime}(z_{0})=-\varphi_{*}z_{0}/2\), \(g(z_{0})=0\), \(g^{\prime}(z_{0})=\varphi_{*}^{2}z_{0}/2\), and \(\varphi(\infty)=0\). The localized configuration is realized only by a certain value of \(\varphi_{*}\), which can be determined by the shooting algorithm. The result is given by \[\varphi_{*}\simeq 1.089, \tag{28}\] \[g(z\gg 1)\simeq 2.066-\frac{3.619}{z}\,. \tag{29}\] In particular, we obtain \(g_{\infty}\simeq 2.066\), which implies \[R\simeq\frac{\sqrt{g_{\infty}-1}}{\epsilon\,m}\simeq\frac{1.0}{ \epsilon\,m}\,, \tag{30}\] \[\phi_{r}(0)\simeq\frac{0.29\,\epsilon^{2}}{\sqrt{G}}\,, \tag{31}\] where we define \(\epsilon^{2}\equiv(m^{2}-\omega^{2})/m^{2}\) (\(\ll 1\)). We define \(R_{99}\) by the radius at which \(99\%\) of total energy of the axion star is enclosed. The total energy and \(R_{99}\) are given by \[E\equiv M(\infty)\simeq 1.8\,\frac{\epsilon}{Gm}\,, \tag{32}\] \[R_{99}\simeq 5.2\,R\simeq\frac{5.4}{\epsilon\,m}\,. \tag{33}\] We note that the amplitude \(\phi_{r}(0)\) has to be much larger than \(m\), since otherwise the number density is too small for the classical picture to be valid. This requires \(\epsilon\gg\sqrt{m/M_{\rm Pl}}\). 
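As an illustration of the shooting procedure just described, the following minimal numerical sketch (not the authors' code; the integration range and bracketing interval are assumptions) solves Eqs. (22) and (23) outward from \(z_{0}\ll 1\) and bisects on the central value until the nodeless, decaying ground-state profile is found, recovering \(\varphi_{*}\simeq 1.089\).

```python
# Shooting method for the dimensionless boson-star profile,
#   phi'' + (2/z) phi' + [1 - g(z)] phi = 0,   g'' + (2/z) g' = phi^2(z),
# with the near-origin boundary conditions quoted in the text.
import numpy as np
from scipy.integrate import solve_ivp

Z0, Z1 = 1e-3, 25.0   # integration range in z (assumed values)

def rhs(z, y):
    phi, dphi, g, dg = y
    return [dphi,
            -(2.0 / z) * dphi - (1.0 - g) * phi,
            dg,
            -(2.0 / z) * dg + phi**2]

def has_node(phi_star):
    """Integrate outward and report whether phi crosses zero before Z1."""
    y0 = [phi_star, -phi_star * Z0 / 2.0, 0.0, phi_star**2 * Z0 / 2.0]
    sol = solve_ivp(rhs, (Z0, Z1), y0, rtol=1e-10, atol=1e-12, max_step=0.05)
    return bool(np.any(sol.y[0] < 0.0))

# The ground state separates profiles that cross zero from profiles that diverge
# without a node; bisect on that classifier (bracket values are assumptions).
lo, hi = 0.5, 2.0
assert has_node(lo) != has_node(hi), "bracket does not straddle the ground state"
side_lo = has_node(lo)
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if has_node(mid) == side_lo:
        lo = mid
    else:
        hi = mid
print(f"phi_* ~ {0.5 * (lo + hi):.4f}")   # the text quotes phi_* ~ 1.089
```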
### Oscillating behavior of the gravitational potential Here we derive the behavior of the gravitational potential, which helps to understand the quantum decay process. The potential \(\Phi\) is given in Eq. (16) as a solution to Eq. (10). To find the time dependence of \(\Phi\), it is convenient to look at Eq. (11). By substituting the ansatz for the configuration \(\phi(t,r)=\phi_{r}(r)\cos(\omega t)\), it is easy to find that the solution is given in the form \[\Phi(t,r)\simeq\Phi_{0}(r)-\frac{\phi_{r}(r)^{2}}{16M_{\rm Pl}^{2}}\cos(2\omega t)\,, \tag{34}\] where \(\Phi_{0}(r)\) is the time-independent part. Parametrically \(|\Phi_{0}|\sim(R\omega\phi_{r}/M_{\rm Pl})^{2}\), and it is \((R\omega)^{2}\) times larger than the oscillating term for a typical oscillon radius \(R\).4 As a consistency check, the same expression can be directly derived from (16). To see this, let us note that the energy density \(\rho\) satisfies the equation Footnote 4: The expression (34) is independent of whether the object is stabilized by the self-interaction (oscillon) or by the gravitational force (boson star). Thus we can use the oscillating gravitational potential of \(\sim(\phi_{r}(r)^{2}/16M_{\rm Pl}^{2})\cos(2\omega t)\) in both cases to estimate the graviton production rate. See Sec. 4. \[\dot{\rho}=e^{-2\Phi}\left[\frac{1}{r^{2}}\frac{\partial}{\partial r}\left(r^{2}\dot{\phi}\frac{\partial\phi}{\partial r}\right)+\frac{\partial\phi}{\partial r}\left(\dot{\phi}\left(\frac{\partial\Phi}{\partial r}-\frac{\partial\Psi}{\partial r}\right)-\dot{\Phi}\frac{\partial\phi}{\partial r}\right)\right]-e^{2\Psi}3\dot{\Phi}\dot{\phi}^{2}\,. \tag{35}\] Neglecting terms involving \(\Psi\) or \(\Phi\), we find \[\rho(t,r)\simeq\rho_{0}(r)+\frac{1}{8r^{2}}\frac{\partial}{\partial r}\left(r^{2}\frac{\partial\phi_{r}^{2}}{\partial r}\right)\cos(2\omega t)\,. \tag{36}\] By substituting this into Eq. (16) we obtain the same expression as (34). From Eq. (12), we also obtain the time-dependence of \(\Psi\) as \[\Psi(t,r)\simeq\Psi_{0}(r)+\Psi_{1}(r)\cos(2\omega t)\,, \tag{37}\] where \(|\Psi_{0}|\sim(R\omega\phi_{r}/M_{\rm Pl})^{2}\) and \(|\Psi_{1}|\sim(\phi_{r}/M_{\rm Pl})^{2}\). The time dependence of the gravitational potential is essential to understand the quantum decay of the boson star, i.e., the production of gravitationally coupled particles such as gravitons, as we will see in Sec. 4. ### The case of complex scalar: Q-balls Here we comment on the case of a complex scalar \(\phi\) with a global U(1) symmetry \(\phi\to\phi e^{i\theta}\) with an arbitrary constant \(\theta\). The action is \[S=\int\sqrt{-g}\,d^{4}x\left[-\frac{1}{16\pi G}R+g^{\mu\nu}\partial_{\mu}\phi^{*}\partial_{\nu}\phi-V(|\phi|)\right]\,. \tag{38}\] In this case, it is known that soliton-like objects called Q-balls [121; 123] appear if the potential is shallower than the quadratic potential. Note that, in contrast to oscillons/I-balls, the potential does not have to be dominated by a quadratic term [61]. Q-balls are known to play essential roles in the dynamics of supersymmetric flat directions [124; 125; 126] in the context of Affleck-Dine baryogenesis [127; 128]. The Q-ball solution is written as \(\phi(t,r)=\phi_{r}(r)e^{i\omega t}\) with a real function \(\phi_{r}(r)\). A crucial difference from the case of a real scalar is that it allows a solution in which the gravitational potential \(\Phi\) does not oscillate with time.
It is easily seen from the complex scalar version of (11): \[{G^{0}}_{r}=-2e^{2\Psi}\left(\frac{\partial\Psi}{\partial r}\dot{\Phi}+\frac{\partial\dot{\Phi}}{\partial r}\right)=\frac{e^{2\Psi}}{2M_{\rm Pl}^{2}}\left(\frac{\partial\phi^{*}}{\partial t}\frac{\partial\phi}{\partial r}+\frac{\partial\phi}{\partial t}\frac{\partial\phi^{*}}{\partial r}\right)\,. \tag{39}\] For the Q-ball solution, the right hand side is zero and hence the gravitational potential is constant in time. Therefore, no gravitational particle production happens in this limit and such a configuration is stable even against graviton production (see Sec. 4). This can be understood as a consequence of the U(1) symmetry and the conserved global charge associated with it. If the orbit is initially elliptical in the complex plane, it will approach the circular orbit through quantum decay processes, including graviton emission, even if there are no explicit U(1) breaking terms. One may add Planck-suppressed higher-dimensional terms that explicitly break the U(1) symmetry. They are motivated by the low-energy effective field theory of quantum gravity, in which there should be no exact global symmetry in nature. Those U(1) symmetry breaking terms distort the dynamics of the complex scalar field away from the circular orbit in the complex plane. Then the gravitational potential \(\Phi\) has an oscillating term that can result in graviton emission. As the amplitude of the scalar field decreases, the effect of the higher-dimensional U(1) breaking terms becomes weaker. Therefore the graviton emission rate becomes smaller as the Q-ball emits more gravitons. ## 3 Vector solitons In this section we consider vector solitons and discuss the properties of the gravitational potentials, particularly focusing on their time dependence. While most of the discussion is parallel to the case of scalar solitons discussed in the previous section, a crucial difference is that there are polarization degrees of freedom in the case of vector solitons. We follow the analysis in Ref. [82] (see also Ref. [83]) and extend it to calculate the time-dependence of the gravitational potential. ### Setup The action we consider is given by \[S=\int d^{4}x\sqrt{-g}\left[-\frac{R}{16\pi G}-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}+\frac{1}{2}m^{2}A^{\mu}A_{\mu}\right]\,, \tag{3.1}\] where \(m\) is the Stückelberg mass for the dark photon \(A^{\mu}\) and \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\) is its field strength. The energy-momentum tensor is then given by \[{T^{\mu}}_{\nu}=m^{2}A^{\mu}A_{\nu}-F^{\mu\alpha}F_{\nu\alpha}+\delta^{\mu}_{\nu}\left[-\frac{m^{2}}{2}A_{\alpha}A^{\alpha}+\frac{1}{4}F^{\alpha\beta}F_{\alpha\beta}\right]\,. \tag{3.2}\] The Maxwell equations read \[\nabla_{\mu}F^{\mu\nu}=\frac{1}{\sqrt{-g}}\partial_{\mu}\left(\sqrt{-g}F^{\mu\nu}\right)=-m^{2}A^{\nu}\,, \tag{3.3}\] where we note that \[F^{\mu\nu}=g^{\mu\alpha}\left(\partial_{\alpha}A^{\nu}-A_{\beta}\partial_{\alpha}g^{\nu\beta}\right)-g^{\nu\alpha}\left(\partial_{\alpha}A^{\mu}-A_{\beta}\partial_{\alpha}g^{\mu\beta}\right)\,. \tag{3.4}\] This particularly implies the Lorentz condition \[\nabla_{\nu}A^{\nu}=0\,. \tag{3.5}\] Since we consider a massive gauge field, there are three degrees of freedom for the gauge field and \(A^{0}\) is a non-dynamical field. We are interested in the weak-gravity limit, where the metric is treated as a perturbation around the Minkowski metric.
In the Newtonian gauge, we write \[ds^{2}=(1-2\Psi)d\tau^{2}+2V_{i}dx^{i}d\tau-\left[(1+2\Phi)\delta_{ij}-h_{ij} \right]dx^{i}dx^{j}\,, \tag{3.6}\] where \(\partial_{i}V_{i}=0\) and we also choose a gauge to eliminate the transverse vector degree of freedom. The transverse-traceless metric perturbation \(h_{ij}\) satisfies \(\partial_{i}h_{ij}=\partial_{i}h_{ji}=0\), and \(h_{ii}=0\). This completely fixes the gauge: \(\Psi\), \(\Phi\) and \(V_{i}\) are given in terms of \(A_{\mu}\). We first neglect \(h_{ij}\) to determine the profile of vector star and later take it into account to discuss the quantum decay. In particular, we have \[\sqrt{-g}=1+(3\Phi-\Psi)\,, \tag{3.7}\] for \(h_{ij}=0\). ### Configuration of vector boson star The Lorentz constraint Eq. (3.5) is written as \[\partial_{0}A^{0}+\partial_{i}A^{i}+A^{0}\partial_{0}\left(3\Phi- \Psi\right)+A^{j}\partial_{j}(3\Phi-\Psi)=0\,. \tag{3.8}\] We are interested in the mode that oscillates with a frequency of \(\omega\simeq m\). This implies \(A^{0}=\mathcal{O}(A^{i}/(mR))\) at the leading order, where \(R\) is the typical size of the object. For \(mR\gg 1\), the Maxwell equations and the Lorentz condition are greatly simplified such as \[\partial_{0}^{2}A^{j}-\Delta A^{j}+\partial_{0}\left(5\Phi+\Psi \right)\partial_{0}A^{j}+\left[(1-2\Psi)m^{2}+2\partial_{0}^{2}\Phi\right]A^{ j}\simeq 0\,, \tag{3.9}\] \[\partial_{0}A^{0}+\partial_{i}A^{i}\simeq 0\,, \tag{3.10}\] where \(\Delta=\partial_{i}\partial_{j}\delta^{ij}\). Here we included the leading-order terms that has time derivatives of gravitational potentials, which are negligible compared with the other terms, though. The Einstein equation leads to \[-\Delta\Phi=\frac{1}{2M_{\rm Pl}^{2}}T^{0}{}_{0}\,, \tag{3.11}\] \[-\frac{2}{3}\Delta^{2}(\Psi-\Phi)=\frac{1}{M_{\rm Pl}^{2}}\left[ \delta^{il}\delta^{jk}\partial_{i}\partial_{j}-\frac{\delta^{kl}\Delta}{3} \right]T^{k}{}_{l}\,,\] (3.12) \[\Delta V_{i}=\frac{2}{M_{\rm Pl}^{2}}T^{0}{}_{i}^{\rm(T)}\,,\] (3.13) \[-\partial_{i}\dot{\Phi}=\frac{1}{2M_{\rm Pl}^{2}}T^{0}{}_{i}^{ \rm(L)}\,, \tag{3.14}\] where the superscript (T) and (L) indicate the transverse and longitudinal component, respectively. Here, the \((0,0)\) and \((0,i)\) component of the energy-momentum tensor are given by \[T^{0}{}_{0} \simeq\frac{m^{2}}{2}\left(A^{0}\right)^{2}+\frac{m^{2}}{2}\sum_{i }\left(A^{i}\right)^{2}(1+2\Phi)+\frac{1}{2}\sum_{i}(\partial_{0}A^{i})^{2}(1+ 2\Phi+2\Psi)\] \[\qquad+\sum_{i}\partial_{0}A^{i}\partial_{i}A^{0}+\frac{1}{2} \sum_{i,j}\left(\partial_{i}A^{j}\partial_{i}A^{j}-\partial_{i}A^{j}\partial_{ j}A^{i}\right), \tag{3.15}\] \[T^{0}{}_{i} \simeq-m^{2}A^{0}A^{i}-\sum_{j}\partial^{0}A^{j}(\partial_{j}A^{i }-\partial_{i}A^{j})\,, \tag{3.16}\] where we include terms up to \(\mathcal{O}(k^{2}(A^{i})^{2})\) for \(T^{0}{}_{0}\) and leading-order terms for \(T^{0}{}_{i}\). Now, let us suppose \[A^{i}=n^{i}f(r)\cos(\omega t+\theta^{i})\,, \tag{3.17}\] where \(n_{i}\) is a unit vector representing the polarization and \(\theta^{i}\) is the phase. Then, at the leading order, the equation of motion for \(f\) is independent of the polarization and is given by \[\frac{\partial^{2}f}{\partial r^{2}}+\frac{2}{r}\frac{\partial f}{ \partial r}-\left[m^{2}-\omega^{2}(1+2\Psi_{0}(r))\right]f=0\,, \tag{3.18}\] where \(\Psi_{0}(r)\) is the leading-order term of \(\Psi\) and is given by \[\frac{1}{4\pi G}\left(\frac{\partial^{2}\Psi_{0}}{\partial r^{2} }+\frac{2}{r}\frac{\partial\Psi_{0}}{\partial r}\right)=-\frac{1}{2}m^{2}f^{ 2}(r)\,. 
\tag{3.19}\] The latter equation comes from Eq. (3.11) by noting \(\Psi_{0}\simeq\Phi_{0}\) from Eq. (3.12) at the leading order. These are identical to the equations for the scalar fields (see Eqs. (2.20) and (2.21)). So the mass, size, and amplitude at the center of the vector boson star are the same with those for scalar boson star. ### Oscillating behavior of the gravitational potential Similarly to the case of scalar boson star studied in the previous section, we are interested in the behavior of the gravitational potential \(\Psi\) and \(\Phi\) and its time dependence since it is essential to evaluate the quantum decay rate. Compared with the scalar case, it is nontrivial since there are four components \(A_{\mu}\) and several configurations of the vector polarization are known [82; 83; 84].5 Our short conclusion is that in the massive vector case, at least for known configurations, the gravitational potential oscillates with a similar amplitude. In deriving it, \(A^{0}\) plays an important role since it is related to \(A^{i}\) through the Lorentz constraint (3.10). Footnote 5: Recall that the gravitational potential is inevitably oscillating for a real scalar case, but it can be time-independent for a complex scalar as shown in Sec. 2.4. Using the ansatz Eq. (3.17), we obtain the constraint from the Lorentz condition Eq. (3.10) such as \[A^{0}=-\frac{1}{\omega}\sum_{i}n^{i}\partial_{i}f(r)\sin(\omega t +\theta^{i})\,. \tag{3.20}\] Substituting these into \(T^{0}{}_{0}\), we obtain \[T^{0}{}_{0} \simeq\frac{m^{2}}{2\omega^{2}}\left(\sum_{i}n^{i}\partial_{i}f\sin( \omega t+\theta^{i})\right)^{2}+\frac{m^{2}}{2}\sum_{i}\left(n^{i}f(r)\cos( \omega t+\theta^{i})\right)^{2}(1+2\Phi)\] \[+\frac{\omega^{2}}{2}\sum_{i}(n^{i}f(r)\sin(\omega t+\theta^{i})) ^{2}(1+2\Phi+2\Psi)\] \[+\sum_{i}n^{i}f(r)\sin(\omega t+\theta^{i})\left(\sum_{j}n^{j} \partial_{i}\partial_{j}f\sin(\omega t+\theta^{j})\right)\] \[+\frac{1}{2}\sum_{i,j}\left(n^{j}n^{j}\partial_{i}f\partial_{i}f \cos^{2}(\omega t+\theta^{j})-n^{i}n^{j}\partial_{i}f\partial_{j}f\cos(\omega t +\theta^{i})\cos(\omega t+\theta^{j})\right)\,.\] In the following, we consider two extremely polarized configurations to see the time-dependence of gravitational potential. #### 3.3.1 Linear polarization Let us consider the case with a linear polarization: \(n^{1}=1\), \(n^{2}=0\), \(n^{3}=0\), \(\theta^{i}=0\). We then obtain \[T^{0}{}_{0} \simeq\frac{1}{2}\hat{x}^{2}(f^{\prime})^{2}\sin^{2}(\omega t)+ \frac{m^{2}}{2}(1+2\Phi)f^{2}+\frac{(\omega^{2}-m^{2})+2\omega^{2}\Psi}{2}f^{ 2}\sin^{2}(\omega t) \tag{3.22}\] \[+f\left(f^{\prime}/r+\hat{x}^{2}(f^{\prime\prime}-f^{\prime}/r) \right)\sin^{2}(\omega t)+\frac{1}{2}f^{\prime 2}(1-\hat{x}^{2})\cos^{2}( \omega t)\,,\] \[=\frac{m^{2}}{2}(1+2\Phi)f^{2}+\left(-\frac{ff^{\prime\prime}}{2} +\hat{x}^{2}\left(ff^{\prime\prime}+\frac{f^{\prime}f^{\prime}}{2}-\frac{ff^{ \prime}}{r}\right)\right)\sin^{2}(\omega t)\] \[+\frac{1}{2}f^{\prime 2}(1-\hat{x}^{2})\cos^{2}(\omega t)\,,\] where \(\hat{x}\equiv x/r\), the prime denotes the derivative with respect to \(r\) and we used Eq. (3.18) in the last line. The large parenthesis in front of \(\sin^{2}(\omega t)\) is given by \(\sim f^{2}/R^{2}\) with \(R\) being the typical size of the vector boson star. Noting that \[\Delta f^{2}(r)=2\left(ff^{\prime\prime}+f^{\prime 2}+\frac{2ff^{\prime}}{r} \right)\sim\frac{f^{2}}{R^{2}}\,, \tag{3.23}\] and Eq. 
(3.11), we approximate the gravitational potential as \[\Phi(t,r)\sim\Phi_{0}(r)+\frac{f^{2}}{16M_{\rm Pl}^{2}}\cos(2\omega t)\,, \tag{3.24}\] which should be regarded as an order-of-magnitude estimate. In particular, we omit here the deviation from spherical symmetry. #### 3.3.2 Circular polarization Let us consider the case of maximal circular polarization: \(n^{1}=1/\sqrt{2}\), \(n^{2}=1/\sqrt{2}\), \(n^{3}=0\), \(\theta^{1}=0\), \(\theta^{2}=\mp\pi/2\), \(\theta^{3}=0\). We first note that \(\sum_{i}n^{i}\partial_{i}f\sin(\omega t+\theta^{i})=f^{\prime}(\hat{x}\sin(\omega t)\mp\hat{y}\cos(\omega t))/\sqrt{2}=f^{\prime}\sin(\omega t\mp\phi_{x})\sin\phi_{z}/\sqrt{2}\), where \(x=r\cos\phi_{x}\sin\phi_{z}\) and \(y=r\sin\phi_{x}\sin\phi_{z}\). We then obtain \[T^{0}{}_{0} \simeq\frac{m^{2}}{2}(1+2\Phi)f^{2}+\frac{(\omega^{2}-m^{2})+2\omega^{2}\Psi}{4}f^{2}+\frac{1}{4}f^{\prime}f^{\prime} \tag{3.25}\] \[+\frac{1}{4}\sin^{2}\phi_{z}f^{\prime}f^{\prime}\left(\sin^{2}(\omega t\mp\phi_{x})-\cos^{2}(\omega t\mp\phi_{x})\right)\] \[+\frac{1}{2}f\left[\partial_{x}^{2}f\sin^{2}(\omega t)\mp 2\partial_{x}\partial_{y}f\cos(\omega t)\sin(\omega t)+\partial_{y}^{2}f\cos^{2}(\omega t)\right].\] Here, the third line can be rewritten as \[\frac{1}{2}\left[\frac{ff^{\prime}}{r}+\left(ff^{\prime\prime}-\frac{ff^{\prime}}{r}\right)\sin^{2}(\omega t\mp\phi_{x})\sin^{2}\phi_{z}\right]. \tag{3.26}\] Using Eq. (3.18), we finally obtain \[T^{0}{}_{0} \simeq\frac{m^{2}}{2}(1+2\Phi)f^{2}+\frac{1}{4}\left(f^{\prime}f^{\prime}-ff^{\prime\prime}\right)-\frac{1}{4}f^{\prime}f^{\prime}\cos^{2}(\omega t\mp\phi_{x})\sin^{2}\phi_{z} \tag{3.27}\] \[+\frac{1}{2}\left(ff^{\prime\prime}+\frac{1}{2}f^{\prime}f^{\prime}-\frac{ff^{\prime}}{r}\right)\sin^{2}(\omega t\mp\phi_{x})\sin^{2}\phi_{z}\,.\] Thus we have the oscillating term.6 Again, from Eq. (3.11), we approximate the gravitational potential as Footnote 6: Recently Ref. [85] studied the real photon emission due to the higher-dimensional interactions between the photon and the dark photon. It has been found that some of the operators do not contribute to the photon emission for circularly polarized vector solitons, since \(|A_{i}|^{2}=\text{const.}\), neglecting the small \(A_{0}\) contribution. For the graviton emission case, we do not have the freedom to choose such effective operators and the inclusion of the \(A_{0}\) contribution is important. \[\Phi(t,r)\sim\Phi_{0}(r)+\frac{f^{2}}{16M_{\rm Pl}^{2}}\cos(2\omega t\pm 2\phi_{x})\,. \tag{3.28}\] This should be regarded as an order-of-magnitude estimate because we omit the deviation from spherical symmetry. It is the same order-of-magnitude result as in the linearly polarized case. ## 4 Quantum decay into gravitons Now let us estimate the quantum production rate of the graviton in the boson star. As long as we consider graviton wavelengths much shorter than the typical size of the boson star \(R\), we can approximate the metric as spatially flat.7 Then the equation of motion of the gravitational wave \(h_{\lambda}\), with \(\lambda=+,\times\) being the polarization, is given by Footnote 7: See Ref. [100] for a more precise discussion of this approximation for the case of scalar particle production. \[\ddot{h}_{\lambda}+(3\dot{\Phi}+\dot{\Psi})\dot{h}_{\lambda}+e^{-2(\Psi+\Phi)}k^{2}h_{\lambda}=0\,, \tag{4.1}\] where \(k\) denotes the wavenumber.
One may think that both the oscillating \(\Psi\) and the oscillating \(\Phi\) can induce particle production, but we need to take care of the \(e^{-2(\Psi+\Phi)}\) factor in front of the \(k^{2}\) term. Actually, it turns out that the oscillation of \(\Psi\) does not lead to particle production. To see this, let us define a new time variable \(d\tau\equiv e^{-(\Psi+\Phi)}dt\). Then the equation of motion (4.1) is rewritten as \[\partial_{\tau}^{2}h_{\lambda}+2(\partial_{\tau}\Phi)(\partial_{\tau}h_{\lambda})+k^{2}h_{\lambda}=0\,. \tag{4.2}\] With this time variable, \(\Psi\) has disappeared, and it is the time dependence of \(\Phi\) that leads to particle production. This is further simplified by defining \(\widetilde{h}_{\lambda}\equiv e^{\Phi}h_{\lambda}\) as8 Footnote 8: The new time variable is the analogue of the conformal time and \(b\) is the analogue of the cosmic scale factor in the Friedmann-Robertson-Walker universe. \[\partial_{\tau}^{2}\widetilde{h}_{\lambda}+\left(k^{2}-\frac{\partial_{\tau}^{2}b}{b}\right)\widetilde{h}_{\lambda}=0\,, \tag{4.3}\] where \(b\equiv e^{\Phi}\). The time-dependent effective mass term is given by \[\frac{\partial_{\tau}^{2}b}{b} \simeq\left(\frac{\omega\phi_{r}}{2M_{\rm Pl}}\right)^{2}\cos(2\omega t)\quad\text{for scalar}\,, \tag{4.4}\] \[\frac{\partial_{\tau}^{2}b}{b} \simeq\left(\frac{\omega f}{2M_{\rm Pl}}\right)^{2}\cos(2\omega t)\quad\text{for vector}\,, \tag{4.5}\] from Eq. (34) for the scalar case. Since the scalar and vector cases give similar results, below we collectively denote by \(\phi\) both the scalar and the vector boson for notational simplicity. Equation (4.3) is the same as the Mathieu equation and hence the modes with \(k\simeq\omega\) are enhanced. Once the equation of motion is identified with the Mathieu equation, we can use the standard techniques to solve it (e.g. the Bogoliubov coefficient technique) and derive the production rate in the narrow width limit, as found in the literature [129; 130; 131; 132]. In the case of purely gravitational production, more details are found in Refs. [105; 106; 102]. Although we do not repeat the calculational details here, the point is that it is regarded as a perturbative annihilation of the scalar or vector into a graviton pair9\(\phi\phi\to hh\), since \((\phi_{r}/2M_{\rm Pl})^{2},(f/2M_{\rm Pl})^{2}\ll 1\). The graviton production rate from a boson star is given by Footnote 9: One should be careful about this interpretation. It is justified after deriving the time dependence of \(\Phi\), which exhibits \(\cos(2\omega t)\) behavior. For the case of the Q-ball, as shown in Sec. 2.4, the gravitational potential does not oscillate and this interpretation is not justified. \[\Gamma_{\rm grav}=\Gamma(\phi\phi\to hh)=\frac{\mathcal{C}}{256\pi}\frac{\phi_{R}^{2}m^{3}}{M_{\rm Pl}^{4}}\,, \tag{4.6}\] where \(\phi_{R}\) denotes the typical amplitude of \(\phi\) in the scalar boson star or of \(A_{i}\) in the vector boson star, we approximated \(\omega\simeq m\), and \(\mathcal{C}\) is an \(\mathcal{O}(1)\) coefficient that represents the effect of the radial shape of the boson star. Note that this production mechanism is purely quantum. The production through (4.3) requires an initial seed of the graviton field \(\widetilde{h}_{\lambda}\), which is provided by the zero-point fluctuation in the vacuum. Thus it is not understood as gravitational wave production from classical sources.
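To illustrate the statement that modes with \(k\simeq\omega\) lie in the first instability band of the Mathieu-type equation (4.3), the following sketch (not part of the authors' analysis) integrates the mode equation with a deliberately exaggerated oscillation amplitude; in the physical case the amplitude \(\sim(\omega\phi_{R}/2M_{\rm Pl})^{2}\) is tiny and, as discussed next, the produced gravitons escape the soliton before any resonance can build up.

```python
# Toy integration of d^2h/dtau^2 + [k^2 - q*cos(2*omega*tau)] h = 0 (the form of Eq. (4.3)),
# with an exaggerated q so that the k ~ omega instability band is visible.
import numpy as np
from scipy.integrate import solve_ivp

omega = 1.0
q = 0.05 * omega**2          # physically q ~ (omega*phi_R/(2*M_Pl))^2 << 1; exaggerated here

def mode_eq(tau, y, k):
    h, dh = y
    return [dh, -(k**2 - q * np.cos(2.0 * omega * tau)) * h]

taus = np.linspace(0.0, 400.0, 4000)
for k in (1.0 * omega, 1.3 * omega):              # on- and off-resonance wavenumbers
    sol = solve_ivp(mode_eq, (taus[0], taus[-1]), [1.0, 0.0],
                    t_eval=taus, args=(k,), rtol=1e-9, atol=1e-12)
    ratio = np.max(np.abs(sol.y[0][-400:])) / np.max(np.abs(sol.y[0][:400]))
    print(f"k/omega = {k/omega:.1f}: late-to-early amplitude ratio = {ratio:.1f}")
# Only the k ~ omega mode grows exponentially (narrow parametric-resonance band).
```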
Although a spherically symmetric boson star cannot be a classical source of gravitational waves, it can enhance the graviton quantum fluctuations existing there. Here we comment on the possibility of parametric resonance. The equation of motion (4.3) is exactly of the form of the Mathieu equation, and hence one may wonder whether parametric resonant amplification of the graviton can happen or not. To see this, let us compare the growth rate of the graviton amplitude and the escape rate of the produced graviton from the soliton. If the former is larger than the latter, parametric resonance can happen. Otherwise gravitons do not accumulate in the phase space within the narrow momentum range around \(k\sim\omega\) and no Bose enhancement would be expected [99, 100]. The growth rate is characterized by the parameter \(\mu\), which is defined by \(h_{\lambda}\propto e^{\mu t}\). In our case it is given by \(\mu\sim\omega\phi_{R}^{2}/M_{\rm Pl}^{2}\)[132]. On the other hand, the escape rate is simply given by \(\sim 1/R\sim\omega\phi_{R}/M_{\rm Pl}\) for the self-gravitating boson star, which is larger than the growth rate \(\mu\). For oscillons with self-interactions, the escape rate is even larger. Therefore, the escape rate is always larger than the growth rate for \(\phi_{R}\ll M_{\rm Pl}\). Thus we conclude that there is no parametric resonance effect and the number of produced gravitons grows linearly in time with the perturbative rate (4.6). All the Standard Model particles are also gravitationally coupled to \(\phi\) and \(A^{\mu}\) and hence their pair production can happen through the \(s\)-channel graviton exchange. Among them, massless fermion and transverse vector boson production is suppressed due to their conformal nature. The production rate of fermions is \((m_{f}/m)^{2}\) times smaller than the graviton production rate, where \(m_{f}\) is the fermion mass. On the other hand, scalar production is not suppressed unless the scalar is conformally coupled to the Ricci scalar. The equation of motion of a free scalar \(\chi\), which is minimally coupled to gravity, is given by \[e^{2\Psi}\left[\ddot{\chi}+(3\dot{\Phi}+\dot{\Psi})\dot{\chi}\right]-e^{-2\Phi}\vec{\nabla}^{2}\chi+m_{\chi}^{2}\chi=0\,, \tag{4.7}\] where \(m_{\chi}\) is the mass of \(\chi\). Similarly to the graviton case, it is simplified by introducing \(\widetilde{\chi}\equiv e^{\Phi}\chi\) as \[\partial_{\tau}^{2}\widetilde{\chi}+\left(k^{2}-\frac{\partial_{\tau}^{2}b}{b}+b^{2}m_{\chi}^{2}\right)\widetilde{\chi}=0\,, \tag{4.8}\] where we have moved to Fourier space. Therefore, if \(k\simeq m\gg m_{\chi}\), the scalar production rate is the same as the graviton production rate up to a factor 2 coming from the graviton polarization degrees of freedom. In the Standard Model, the production rate of the Higgs boson is of the same order as the graviton production rate if \(m\) is much larger than the Higgs mass. In the electroweak symmetry breaking phase, the production of longitudinal massive vector bosons is also of the same order. If \(0.1\,{\rm GeV}\lesssim m\lesssim 1\,{\rm GeV}\), the boson stars may also efficiently produce pions or other mesons.10 If there exist other scalar fields lighter than \(\phi\) or \(A^{\mu}\), such as axions, they should also be produced with the same production rate. ## 5 Phenomenological implications of gravitational decay In this section, we discuss the time evolution of the boson star with the gravitational decay rate (4.6) and then calculate the spectrum of the graviton background.
The following discussion applies to both the scalar and the vector case. For notational simplicity we express both by \(\phi\), which can be either the scalar or the vector. ### Decay of boson star If there are no direct interactions between \(\phi\) (or \(A^{\mu}\)) and other lighter fields, the gravitational processes are the only processes that cause the decay of the boson star. Classical decay processes such as \(3\phi\to\phi\), mediated by the gravitational interaction, are possible [97], but they are likely to be exponentially suppressed for \(\omega R\gg 1\). In this limit, the graviton production is an inevitable decay process and it gives a strict upper bound on the lifetime of oscillons/boson stars. Via the graviton production, the boson star mass \(M\) decreases according to \[\dot{M}(t)=-\Gamma_{\rm grav}(t)M(t)\,. \tag{5.1}\] Note that the decay rate (4.6) depends on the amplitude \(\phi_{R}\) of the scalar/vector in the boson star, and hence the decay rate decreases as the boson star mass decreases. By using the relation \(M\sim m^{2}\phi_{R}^{2}R^{3}\) and the equilibrium condition \(GM\sim(Rm^{2})^{-1}\), we find \(\Gamma_{\rm grav}\propto M^{4}\). Then we can solve (5.1) as \[M^{4}(t)=\frac{M_{i}^{4}}{1+\Gamma_{\rm grav}^{0}t},\qquad\quad\Gamma_{\rm grav}^{0}\equiv\frac{\mathcal{C}}{1024\pi}\frac{\phi_{R,i}^{2}m^{3}}{M_{\rm Pl}^{4}}\,, \tag{5.2}\] where \(M_{i}\) is the initial boson star mass in the limit \(t\to 0\) and \(\phi_{R,i}\) denotes the initial scalar/vector amplitude at the boson star formation. Correspondingly, we obtain \[\phi_{R}^{2}(t)=\frac{\phi_{R,i}^{2}}{1+\Gamma_{\rm grav}^{0}t}\,. \tag{5.3}\] Therefore, the boson star mass decreases rather slowly as \(M\propto t^{-1/4}\) for \(t\gtrsim(\Gamma_{\rm grav}^{0})^{-1}\). We call \(\tau_{\rm grav}\equiv(\Gamma_{\rm grav}^{0})^{-1}\) the boson star lifetime, although the boson star does not decay exponentially. Numerically it is estimated as \[\tau_{\rm grav}=(\Gamma_{\rm grav}^{0})^{-1}\simeq 1\times 10^{16}\,{\rm sec}\,\left(\frac{M_{\rm Pl}}{\phi_{R,i}}\right)^{2}\left(\frac{1\,{\rm GeV}}{m}\right)^{3}\,. \tag{5.4}\] It easily exceeds the present age of the universe for, e.g., light axion-like particles with \(m\lesssim 1\,{\rm eV}\). If \(m>\mathcal{O}(100)\,{\rm GeV}\), the boson stars may efficiently produce the Higgs boson and longitudinal weak bosons after the Big-Bang Nucleosynthesis epoch with a rate similar to the graviton production. This gives a constraint on the abundance of boson stars.11 We will come back to this issue in detail elsewhere. Footnote 11: If \(0.1\,{\rm GeV}\lesssim m\lesssim 1\,{\rm GeV}\), the boson stars may also efficiently produce pions or other mesons after the recombination epoch, which would also give a tight constraint. Once the amplitude \(\phi_{R}(t)\) becomes smaller than \(m\), the classical picture of the boson star may no longer be valid. This happens at \(t\sim\left[\Gamma_{\rm grav}^{0}(m/\phi_{R,i})^{2}\right]^{-1}\). Then one has to treat the bosons as individual particles with a negligible Bose enhancement effect. ### Decay of oscillon and Q-ball Here we comment on the case of oscillons, where the self-interactions support the localized configuration. In most cases, oscillons classically decay into individual particles via self-interactions.
In certain cases, however, the classical decay processes mediated by the self-interactions can be exponentially suppressed for \(\omega R\gg 1\)[51].12 In this limit, the graviton production is an inevitable decay process and it gives a strict upper bound on the lifetime of the oscillon. Footnote 12: For the Gaussian profile, the suppression factor should read \(\sim\exp(-(\omega R)^{2})\)[51, 50]. One should note that the decay rate is sensitive to the exact radial profile, since a small change in the radial profile may result in a completely different functional form for its Fourier component, which is essential for the estimation of the classical decay rate. In this paper we do not go into the details of this subject and just assume that it is exponentially suppressed for \(\omega R\gg 1\). The graviton production rate is given by the same formula (4.6). In order to discuss the evolution of the oscillon, we need to know the relation between \(\phi_{R}\) and \(R\). This relation is nontrivial and highly model-dependent. Nevertheless, we can make an order-of-magnitude estimate of the typical time scale at which the gravitational process becomes important. Independently of the relation between \(\phi_{R}\) and \(R\), we can repeat a procedure similar to the boson star case and find a typical time scale \(\tau_{\rm grav}\), the same as (5.4), although the time dependence of the oscillon mass \(M(t)\) at \(t>\tau_{\rm grav}\) may differ from the boson star case. Thus Eq. (5.4) gives a reasonable estimate for the oscillon lifetime for \(\omega R\gg 1\) when the classical decay processes are exponentially suppressed.13 Footnote 13: For the so-called exact I-ball/oscillon [71], there are no classical decay processes. Although fluctuations with particular modes may grow in analogy with the parametric resonance, the final state is still considered to be stable [92]. In such a case, the graviton production is the only energy loss channel. As we discussed in Sec. 2.4, the quantum decay of a Q-ball into gravitons is forbidden by the conserved global charge when the orbit in the complex plane is exactly circular. If one adds higher-dimensional U(1)-breaking terms, the orbit in the complex plane becomes elliptical and the quantum decay process into gravitons opens up. The amplitude of the scalar field decreases as the Q-ball decays, which makes the decay slower. The resulting time-dependence of the graviton emission rate is different from the case of boson stars and provides a unique signal in the gravitational wave spectrum.14 Footnote 14: The presence of the U(1) breaking term itself tends to make Q-balls unstable. For the gravity-mediation type Q-ball, for example, it has been shown that Q-balls decay within a simulation time if the magnitude of the U(1) breaking term is sizable [133]. However, it is still difficult to estimate the instability time scale for general (or much smaller) values of the U(1) breaking term. See also Refs. [134, 135] for related works. Moreover, dissipation via scatterings with the thermal plasma makes the orbit in the complex plane elliptical [136, 137]. This depends on the size of the Q-balls and the temperature of the ambient plasma. Below we assume that the elliptical orbit becomes circular only through the graviton emission. This gives a strict lower limit on the lifetime of the “elliptical” Q-ball, while it gives an upper bound on the graviton abundance. Below we estimate the effect of graviton production on the Q-ball.
Let us suppose that the orbit of the complex scalar field \(\phi\) is given by \(\phi(t)=\varphi_{R}\cos(\omega t)+i\varphi_{I}\sin(\omega t)\) inside the Q-ball with \(\varphi_{R}\) and \(\varphi_{I}\) being the amplitude of the real and imaginary component.15 Without loss of generality, we take \(\varphi_{R}>\varphi_{I}\) initially. From Eq. (39), the gravitational potential \(\Phi(t,r)\) is given by \[\Phi(t,r)\simeq\Phi_{0}(r)-\frac{\varphi_{R}^{2}(t,r)-\varphi_{I}^{2}(t,r)}{16M_{ \rm Pl}^{2}}\cos(2\omega t). \tag{100}\] Repeating the same discussion for the case of real scalar, we obtain the graviton production rate due to the oscillating gravitational potential as \[\Gamma_{\rm grav}=\frac{\mathcal{C}}{256\pi}\frac{\left(\varphi_{R}^{2}-\varphi _{I}^{2}\right)\omega^{3}}{M_{\rm Pl}^{4}}. \tag{101}\] It is expected that \(\varphi_{R}^{2}-\varphi_{I}^{2}\) decreases through the graviton production and this process stops when \(\varphi_{R}=\varphi_{I}\). On the other hand, the Q-ball charge \(Q\) should be conserved as far as there is no explicit U(1) breaking term. It is roughly estimated as \(Q\sim\omega\varphi_{R}\varphi_{I}\times R^{3}\sim\varphi_{R}\varphi_{I}/( \omega^{2}K^{3})\) where we assumed that the Q-ball radius is given by \(R\sim 1/(\omega K)\) with some numerical constant \(K\). Thus the final value will be \(\varphi_{R}=\varphi_{I}\sim\omega\sqrt{K^{3}Q}\equiv\bar{\varphi}\). Keeping this in mind, one can solve (100) in the case of Q-ball. For \(\varphi_{R}\gg\varphi_{I}\), the situation is similar to the case of real scalar. We obtain \[\varphi_{R}^{2}(t)\simeq\frac{\varphi_{R,i}^{2}}{1+\Gamma_{\rm grav}^{0}t}, \qquad M(t)\simeq\frac{M_{i}}{1+\Gamma_{\rm grav}^{0}t},\qquad\Gamma_{\rm grav }^{0}=\frac{\mathcal{C}}{256\pi}\frac{\varphi_{R,i}^{2}\,\omega^{3}}{M_{\rm Pl }^{4}}. \tag{102}\] Note that we have assumed that the Q-ball mass is given by \(M\simeq\omega^{2}(\varphi_{R}^{2}+\varphi_{I}^{2})\times R^{3}\sim(\varphi_{R} ^{2}+\varphi_{I}^{2})/(\omega K^{3})\). The amplitude \(\varphi_{R}\) decreases according to this result. Eventually \(\varphi_{R}\) becomes close to \(\varphi_{I}\) at \(t=\bar{t}\) and the above approximation breaks down. After this epoch, by defining \(\varphi_{R}=\bar{\varphi}+\delta\varphi\) and \(\varphi_{I}=\bar{\varphi}-\delta\varphi\) and noting that \(M\sim(\bar{\varphi}^{2}+\delta\varphi^{2})/(\omega K^{3})\), we obtain \[\delta\varphi(t)=\delta\varphi(\bar{t})-2\overline{\Gamma}_{\rm grav}\bar{ \varphi}(t-\bar{t}),\qquad M(t)\simeq M(\bar{t}),\qquad\overline{\Gamma}_{\rm grav }=\frac{\mathcal{C}}{256\pi}\frac{\bar{\varphi}^{2}\,\omega^{3}}{M_{\rm Pl}^{4 }}\sim\bar{t}^{-1}. \tag{103}\] Note that \(\overline{\Gamma}_{\rm grav}\) is defined by \(\Gamma_{\rm grav}\simeq\overline{\Gamma}_{\rm grav}4\delta\varphi/\bar{\varphi}\), so the graviton production vanishes for \(\delta\varphi\to 0\). Since \(\delta\varphi(\bar{t})\sim\bar{\varphi}\), we can easily see that \(\delta\varphi(t)\) becomes zero after the time \(t\sim\overline{\Gamma}_{\rm grav}^{-1}\) and the graviton production stops thereafter. The above results are approximately combined in a convenient way as \[M(t)\simeq M_{i}\frac{1+\overline{\Gamma}_{\rm grav}t}{1+\Gamma_{\rm grav}^{0}t },\qquad\Gamma_{\rm grav}(t)\simeq\Gamma_{\rm grav}^{0}\frac{1-\overline{\Gamma} _{\rm grav}t}{1+\Gamma_{\rm grav}^{0}t}, \tag{104}\] for \(t<\overline{\Gamma}_{\rm grav}^{-1}\). 
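As a quick cross-check of the lifetime estimate in Eq. (5.4), the short sketch below evaluates \(\tau_{\rm grav}=(\Gamma_{\rm grav}^{0})^{-1}\) numerically, with the \(\mathcal{O}(1)\) profile coefficient \(\mathcal{C}\) set to one (an assumption); it reproduces the quoted \(\sim 10^{16}\,\)sec for \(m=1\,\)GeV and \(\phi_{R,i}=M_{\rm Pl}\).

```python
# Numerical evaluation of Eq. (5.4): tau_grav = 1024*pi*M_Pl^4 / (C * phi_Ri^2 * m^3).
import math

M_PL   = 2.435e18      # reduced Planck mass in GeV
HBAR_S = 6.582e-25     # GeV * s; converts an inverse rate in 1/GeV to seconds
C      = 1.0           # O(1) coefficient from the radial profile (set to 1 here)

def tau_grav_sec(m_GeV, phi_Ri_GeV):
    gamma0 = C / (1024.0 * math.pi) * phi_Ri_GeV**2 * m_GeV**3 / M_PL**4  # rate in GeV
    return HBAR_S / gamma0

print(f"tau_grav ~ {tau_grav_sec(1.0, M_PL):.1e} s")   # ~1e16 s, matching Eq. (5.4)
```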
### Graviton background One of the immediate consequences of the gravitational soliton decay is the formation of a cosmic graviton background with a characteristic frequency spectrum.16 Even if the solitons do not completely decay within the age of the universe due to the relatively slow decrease of the soliton mass \(M\), most of their energy goes into graviton radiation within the time scale \(\Gamma_{\rm grav}^{-1}\). The present energy density of the graviton per logarithmic energy is calculated as \[\frac{d\rho_{\rm GW}}{d\ln E} =E^{2}\int dz\frac{N(z)n_{\rm sol}(z)a^{3}(z)\Gamma_{\rm grav}(z)}{H(z)}\frac{dN_{h}}{dE^{\prime}} \tag{111}\] \[=\frac{2E^{4}}{m^{4}}\frac{\Gamma_{\rm grav}(z_{E})\rho_{\rm sol}(z_{E})}{H(z_{E})}\,, \tag{112}\] where \(z\) denotes the redshift, \(N(z)=M(z)/m\) is the number of \(\phi\) quanta in the soliton, \(n_{\rm sol}(z)\) is the soliton number density, \(a(z)=(1+z)^{-1}\) is the cosmic scale factor, \(H(z)\) is the Hubble parameter, \(E^{\prime}=(1+z)E\), and we approximate the produced graviton spectrum as a line: \(dN_{h}/dE^{\prime}=2\delta(E^{\prime}-m)\). In the second line we have performed the integral, with \(z_{E}=m/E-1\), and defined the soliton energy density \(\rho_{\rm sol}(z)=M(z)n_{\rm sol}(z)\). Fig. 1 shows the graviton spectrum in the present universe from the decay of boson stars, assuming that the boson stars formed in the early universe lose their energy only through the graviton production.17 The spectrum is represented in terms of \(\Omega_{\rm GW}\equiv\frac{d\rho_{\rm GW}}{d\ln E}/\rho_{\rm crit}\) with \(\rho_{\rm crit}\) being the present critical density of the universe, normalized by the soliton energy density parameter \(\Omega_{\rm sol}\). The soliton energy density parameter is defined by \(\Omega_{\rm sol}\equiv\rho_{\rm sol}/\rho_{\rm crit}\), neglecting the effect of decay. In the left panel, the five lines show the cases of \(m=100,10,1,10^{-1},10^{-2}\,{\rm GeV}\) with \(\phi_{R,i}=M_{\rm Pl}\). Looking at the line of \(m=100\,{\rm GeV}\), for example, the low frequency part corresponds to the gravitons produced at an early epoch when \(t\Gamma^{0}_{\rm grav}<1\), which exhibits the scaling \(\Omega_{\rm GW}\propto E^{3}\). The middle part corresponds to those produced when \(t\Gamma^{0}_{\rm grav}>1\). Since the decay is not exponential but power-law, as shown in Sec. 5.1, we obtain a spectrum with a power-law exponent different from the low-frequency part. It behaves as \(\Omega_{\rm GW}\propto E^{5/8}\) in the matter-dominated universe and \(\Omega_{\rm GW}\propto E^{1/2}\) in the radiation-dominated universe at the time of production. Finally there is a cutoff at \(E=m\), and a softening of the spectrum around the cutoff frequency is caused by the effect of the cosmological constant. These are unique features of the gravitational wave spectrum from the solitons, which distinguish it from, for example, a homogeneous scalar field decaying into two gravitons [152]. The right panel shows the \(\phi_{R,i}\) dependence of the spectrum for \(m=100\,{\rm GeV}\). As is clear from Eq. (5.4), the peak amplitude is maximized around \(\phi_{R,i}=10^{-4}M_{\rm Pl}\) since \(\tau_{\rm grav}\) becomes close to the present age of the universe. For smaller \(\phi_{R,i}\), \(\tau_{\rm grav}\) becomes even longer and the graviton production efficiency is suppressed. Footnote 17: This may be the case for boson stars without self-interactions or for the exact I-balls/oscillons [71]. Fig.
Fig. 2 shows the graviton spectrum in the present universe from Q-ball decay. Note that similar results also apply to the case of oscillons, if the oscillon lifetime is long enough, as for the exact I-ball/oscillon. In the left panel we have taken \(m=10^{6},10^{8},10^{10}\,{\rm GeV}\) with \(\varphi_{R,i}=M_{\rm Pl}\) and \(\bar{\varphi}=0.1M_{\rm Pl}\), i.e., \(r\equiv\varphi_{R,i}/\bar{\varphi}=10\). In the right panel we have taken \(m=10^{6}\,{\rm GeV}\) and \(r=1.5,10,50\) with \(\varphi_{R,i}=M_{\rm Pl}\) fixed. In this case, the graviton production stops in the early universe when the orbit becomes circular, and the cutoff frequency roughly corresponds to the gravitons produced at this epoch. Looking at the line for \(r=50\) in the right panel, for example, the low-frequency part corresponds to gravitons produced at \(t<(\Gamma_{\rm grav}^{0})^{-1}\) and exhibits the scaling \(\Omega_{\rm GW}\propto E^{3}\). A small modulation is caused by the change in the number of relativistic degrees of freedom around the graviton emission epoch. The higher-frequency part corresponds to gravitons produced at \((\Gamma_{\rm grav}^{0})^{-1}<t<(\overline{\Gamma}_{\rm grav})^{-1}\), and there the spectrum behaves as \(\Omega_{\rm GW}\propto E^{-1}\). This is the characteristic signature of the gravitational wave spectrum from Q-ball decay. The transition energy between these two regimes is given by \(E\sim 10^{-6}\,{\rm GeV}\,(10^{10}\,{\rm GeV}/m)^{1/2}(M_{\rm Pl}/\varphi_{R,i})\). Finally, there is a cutoff around \(E\sim 10^{-6}\,{\rm GeV}\,(10^{10}\,{\rm GeV}/m)^{1/2}(M_{\rm Pl}/\bar{\varphi})\), corresponding to \(t=(\overline{\Gamma}_{\rm grav})^{-1}\), after which the orbit of the complex scalar becomes completely circular and the graviton production stops. For reference, this final epoch corresponds to the temperature \(T\sim 10^{4}\,{\rm GeV}\,(m/10^{10}\,{\rm GeV})^{3/2}(\bar{\varphi}/M_{\rm Pl})\) if the universe is radiation-dominated. In both cases, the predicted typical frequency of the stochastic gravitational waves is very high. Recently, there have been many proposals to detect such high-frequency gravitational waves [153; 154; 155; 156; 157; 158; 159; 160; 161; 162; 163], although detection remains challenging. ## 6 Discussion and conclusions It is known that the cosmological dynamics of scalar or vector fields can lead to the formation of soliton-like objects such as oscillons, Q-balls, or boson stars. Oscillons/Q-balls are stabilized by self-interactions, while boson stars are stabilized by gravitational interactions. They may have significant impacts on the early-universe history, although a complete understanding of their dynamics has not yet been reached. One of the important properties of a soliton is its lifetime. There have been many studies estimating the soliton lifetime, most of which focused on the emission of relativistic particles due to self-interactions. Figure 1: (Left) The present graviton spectrum from boson star decay for \(\phi_{R,i}=M_{\rm Pl}\) with \(m=100,10,1,10^{-1},10^{-2}\,{\rm GeV}\). (Right) The spectrum for \(m=100\,{\rm GeV}\) with \(\phi_{R,i}=(1,10^{-2},10^{-4},10^{-6})\times M_{\rm Pl}\). We assume that the boson star is long-lived and that the dominant contribution comes from boson stars decaying in the present universe. In this paper we point out that there is a universal decay process of the solitons due to gravitational interactions, i.e., the quantum decay of a soliton into graviton pairs.
This decay channel exists even for a free (scalar or vector) field with a minimal coupling to gravity and provides a strict upper limit on the soliton lifetime. Although the solitons are spherically symmetric, their gravitational potential has a component oscillating in time, which leads to the production of gravitational waves (gravitons) via a quantum decay process. The quantum decay rate into gravitons has a power-law dependence on the size of the soliton, which should be compared with the exponentially suppressed classical decay rate. Therefore, it can be the dominant channel for boson stars consisting of fuzzy dark matter or of vector dark matter without a kinetic mixing. We have discussed the evolution of solitons via the quantum decay and calculated the spectrum of the resulting stochastic gravitational waves. Although the amplitude of the gravitational waves is too small to be detected in the near future, the spectrum has a unique shape, with a broken power law at low frequencies and a sharp cutoff at high frequency. This work was supported by MEXT Leading Initiative for Excellent Young Researchers (M.Y.), by JSPS KAKENHI Grant Nos. 20H01894 (F.T.), 20H05851 (F.T. and M.Y.), and 23K13092 (M.Y.), by the JSPS Core-to-Core Program (grant number: JPJSCCA20200002) (F.T.), and by the World Premier International Research Center Initiative (WPI), MEXT, Japan. This article is based upon work from COST Action COSMIC WISPers CA21106, supported by COST (European Cooperation in Science and Technology). Figure 2: The present stochastic gravitational wave spectrum from Q-ball decay. (Left) We have taken \(m=10^{6},10^{8},10^{10}\,\mathrm{GeV}\) with \(\varphi_{R,i}=M_{\mathrm{Pl}}\) and \(\bar{\varphi}=0.1M_{\mathrm{Pl}}\), i.e., \(r\equiv\varphi_{R,i}/\bar{\varphi}=10\). (Right) We have taken \(m=10^{6}\,\mathrm{GeV}\) and \(r=1.5,10,50\) with \(\varphi_{R,i}=M_{\mathrm{Pl}}\) fixed. The graviton production stops in the early universe when the orbit of the complex scalar becomes circular, and the cutoff frequency roughly corresponds to the gravitons produced at this epoch.
2303.06112
The Sub-Leading Scattering Waveform from Amplitudes
We compute the next-to-leading order term in the scattering waveform of uncharged black holes in classical general relativity and of half-BPS black holes in $\mathcal{N}=8$ supergravity. We propose criteria, generalizing explicit calculations at next-to-leading order, for determining the terms in amplitudes that contribute to local observables. For general relativity, we construct the relevant classical integrand through generalized unitarity in two distinct ways, (1) in a heavy-particle effective theory and (2) in general relativity minimally-coupled to scalar fields. With a suitable prescription for the matter propagator in the former, we find agreement between the two methods, thus demonstrating the absence of interference of quantum and classically-singular contributions. The classical $\mathcal{N}=8$ integrand for massive scalar fields is constructed through dimensional reduction of the known five-point one-loop integrand. Our calculation exhibits novel features compared to conservative calculations and inclusive observables, such as the appearance of master integrals with intersecting matter lines and the appearance of a classical infrared divergence whose absence from classical observables requires a suitable definition of the retarded time.
Aidan Herderschee, Radu Roiban, Fei Teng
2023-03-10T18:00:02Z
http://arxiv.org/abs/2303.06112v4
# The Sub-Leading Scattering Waveform from Amplitudes ###### Abstract We compute the next-to-leading order term in the scattering waveform of uncharged black holes in classical general relativity and of half-BPS black holes in \(\mathcal{N}=8\) supergravity. We propose criteria, generalizing explicit calculations at next-to-leading order, for determining the terms in amplitudes that contribute to local observables. For general relativity, we construct the relevant classical integrand through generalized unitarity in two distinct ways, (1) in a heavy-particle effective theory and (2) in general relativity minimally-coupled to scalar fields. With a suitable prescription for the matter propagator in the former, we find agreement between the two methods, thus demonstrating the absence of interference of quantum and classically-singular contributions. The classical \(\mathcal{N}=8\) integrand for massive scalar fields is constructed through dimensional reduction of the known five-point one-loop integrand. Our calculation exhibits novel features compared to conservative calculations and inclusive observables, such as the appearance of master integrals with intersecting matter lines and the appearance of a classical infrared divergence whose absence from classical observables requires a suitable definition of the retarded time. ## 1 Introduction Much like accelerated charges in Maxwell's theory emit electromagnetic radiation, accelerated masses in gravitational theories emit gravitational radiation. Increasingly more precise waveform models, yielding the asymptotic space-time metric, play a critical role in the detection and analysis of gravitational wave signals from compact binaries [1]. Current waveform computations for binaries in bound orbits use effective-one-body methods [2; 3; 4; 5; 6] and numerical-relativity approaches [7; 8], in addition to direct solutions of Einstein's equations sourced by the motion of the bodies in the post-Newtonian approximation [9; 10], and computations in the effective-field theory approach pioneered by Goldberger and Rothstein [11; 12; 13; 14; 15]; see [16; 17] for reviews. The post-Minkowskian (PM) approximation, keeping exact velocity dependence for each power of Newton's constant, is natural for bound binary systems on eccentric orbits and for unbound binaries [18].
Inclusive dissipative observables through \(\mathcal{O}(G^{3})\) and waveforms to leading order in Newton's constant have been discussed with traditional methods in e.g. [19; 20; 21; 22; 23; 24; 25], with amplitude and worldline methods in Refs. [26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36], and at \(\mathcal{O}(G^{4})\) in [37]. The properties of scattering waveforms - such as the absence of periodicity, short duration, and low amplitude - pose a challenge for current gravitational-wave detectors. They may, however, be interesting goals for future terrestrial and space-based detectors. From a theoretical standpoint, scattering waveforms are an important part of the program to leverage scattering amplitudes and quantum field theory methods for precision predictions of gravitational wave physics, complementing the effort to determine the conservative motion and inclusive dissipative observables. It is an important challenge for the future to identify the analytic continuation that connects them to bound-state waveforms, as is the case for certain (parts of) inclusive observables such as the scattering angle vs. the periastron advance and the energy loss [38; 39; 40; 41]. The observable-based formalism of Kosower, Maybee and O'Connell (KMOC) [42] establishes a direct link between scattering amplitudes and scattering waveforms [43]. Having only scattering amplitudes as input, it builds on the vast technical advances used to construct them, such as generalized unitarity [44; 45; 46; 47; 48; 49] and the double copy [50; 51; 52], as well as modern techniques for the evaluation of the relevant Feynman and phase space integrals, such as differential equations [53; 54; 55; 56] and integration-by-parts identities [57; 58; 59]. This link has been used to evaluate the classical energy loss [29] and angular momentum loss [33] at \(\mathcal{O}(G^{3})\). Modeling compact spinless bodies as scalar fields, one finds that waveforms are determined by certain parts of the five-point amplitude [43]. A lesson from the calculations of conservative effects is that amplitudes exhibit vast simplifications when expanded in the classical limit. An important feature of this expansion is that the leading term is not always the classical one. The first \(L\) terms at \(L\) loops are classically singular, or super-classical, and can be interpreted as iterations of lower-loop terms. Since the KMOC formalism contains terms bilinear in amplitudes, one may naively expect to find classical contributions that arise from the interference between superclassical terms in one factor and classically subleading (quantum) contributions from the other. Intuitively, however, such terms should not contribute to classical physics, and we demonstrate explicitly that at one-loop order this is indeed the case. Assuming the exponential representation of the scattering matrix proposed in Refs. [60; 61], we derive a necessary condition for a given term to be included in the classical amplitude. The heavy-particle effective theory (HEFT) of Refs. [62; 63] provides a strategy to isolate the classical amplitude, discarding the super-classical and quantum operators from the outset. It has been demonstrated in Ref. [63] that this approach indeed gives the correct four-point classical amplitudes through two-loop order. The use of this framework for the construction of the one-loop five-point amplitude with outgoing radiation requires that the \(i0\) prescription of the uncut linearized matter propagators be clearly stated.
For four-point amplitudes, specifying a prescription is not required through two loops because the uncut matter propagators are always off-shell due to kinematic constraints. This is no longer the case in the presence of an outgoing graviton; sewing a one-loop five-point amplitude into a four-point higher-loop one suggests that a similar feature may appear at three loops for this multiplicity. In this paper, we find, by comparing the HEFT and direct calculations of the classical one-loop five-point amplitude in GR coupled to scalars, that the correct prescription at this order for these uncut matter propagators is that they are principal-valued. With this prescription, the HEFT construction reproduces the classical expansion of the full theory amplitude once the super-classical terms are subtracted from the latter. We also compute the classical amplitude and scattering waveform for half-BPS black holes in \(\mathcal{N}=8\) supergravity [64; 65]. We use the existing \(d\)-dimensional one-loop five-point amplitude of maximally supersymmetric Yang-Mills theory [66; 67], the double copy [51] and dimensional reduction to construct the relevant parts of the four-massive-scalar-one-graviton amplitude in four dimensions. Their remarkable simplicity leads to a (relatively) compact integral representation for the waveform. We encounter several interesting features, not present in the amplitudes-based approach to inclusive observables. One of the requirements of the classical limit is that the matter particles are always separated. In the four-point classical amplitudes relevant for inclusive observables, this requirement leads to the absence of diagrams with intersecting matter lines. As we will see in section 4, the five-point amplitude relevant for the scattering waveform does receive contributions from certain graphs with this topology. We will understand how their presence is consistent with the separation of the matter particles. Furthermore, we find that the presence of the outgoing graviton introduces a certain asymmetry between matter lines in contributing diagrams and enhances the importance of the \(i0\) prescription. Last, but not least, the classical five-point amplitudes in both GR and \(\mathcal{N}=8\) supergravity are infrared divergent. The structure of infrared divergences in quantum gravitational amplitudes was understood long ago in Ref. [68]. Remarkably, we find that the IR divergence of the classical amplitude is a pure phase that can be absorbed into the definition of the retarded time. This indicates that the asymptotic waveform contains no information about the elapsed time between the scattering event and the observation. We directly integrate the resulting master integrals and find the complete classical amplitude. The construction of the asymptotic spectral waveform requires a further Fourier transform to impact parameter space, while that of the time-domain waveform also requires a Fourier transform over the outgoing graviton frequency. The late-time properties of the waveform can be inferred directly from its integral representation. We find that, while there is an \(\mathcal{O}(G^{2})\) correction to the gravitational wave memory in GR, such a correction is absent in \(\mathcal{N}=8\) supergravity. We will later demonstrate that the memory is proportional to the scattering angle at \(\mathcal{O}(G^{2})\).
Thus, we trace the absence of memory in \(\mathcal{N}=8\) supergravity to the absence of an \(\mathcal{O}(G^{2})\) correction to the scattering angle or, equivalently, to the absence of one-loop triangle integrals in this theory. For both \(\mathcal{N}=8\) supergravity and GR, we analytically evaluate the frequency integral and all but one of the integral transforms to impact-parameter space. The remaining integral is evaluated numerically. We leave a complete analytic evaluation of the spectral and time-domain waveforms to future work. Our paper is organized as follows. In section 2 we review the classical limit of amplitudes and the HEFT of Refs. [62; 63]. In section 3 we review the observable-based formalism for waveforms, demonstrate the cancellation of two-matter-particle-reducible (2MPR) contributions, and spell out the relation between the two-matter-particle-irreducible part of the amplitude and the waveform. In section 4, we obtain the HEFT prediction for the classical one-loop five-point amplitude and compare it with the result of a direct calculation in GR coupled to massive scalar fields. In section 5, we construct the classical one-loop five-point amplitude in \(\mathcal{N}=8\) supergravity using the double copy. In section 6 we discuss aspects of the integration of the relevant one-loop master integrals, leaving further details to appendix B. In section 8 we numerically compute the waveform for \(\mathcal{N}=8\) supergravity and discuss our results. In section 9, we discuss our conclusions. Appendix A contains an argument that all super-classical terms, as well as interference terms between super-classical and quantum contributions (i.e. \(\hbar/\hbar\) contributions), correspond to 2MPR graphs and thus do not contribute to local classical scattering observables. **Note added:** While this paper was in preparation, we became aware of Refs. [69] and [70], which partly overlap with aspects of our analysis. We thank the authors for communicating and for sharing copies of their drafts prior to publication. ## 2 The classical limit and HEFT amplitudes Integrands of scattering amplitudes simplify considerably in the classical limit [71; 72; 73]. It is therefore advantageous to take this limit as early as possible and weed out terms that do not contribute to classical observables from the outset. One approach to this limit uses the correspondence principle, according to which classical physics emerges in the limit of large charges. Thus, considering the scattering of two massive spinless bodies, the classical regime emerges for masses much larger than the Planck mass and for orbital angular momenta much larger than unity (in natural units). This limit also corresponds to the inter-particle separation being much larger than the de Broglie wavelength. As the separation of particles is Fourier-conjugate to the change in the momentum of each particle, it follows that the momentum transfer \(q_{i}\) is much smaller than the momenta of the two particles. If there is any massless radiation in the initial or final state, its momentum should be much smaller than the momenta of the massive particles. This is implemented by the rescaling \[(q,k,\ell)\to(\lambda q,\lambda k,\lambda\ell)\,, \tag{1}\] where \(k\) and \(\ell\) are, respectively, the momenta of external and internal gravitons, and expanding at small \(\lambda\). This expansion is referred to below as the _soft expansion_.
Accounting for the classical nature of Newton's potential, classical \(L\)-loop four-scalar amplitudes depend on Newton's constant and \(\lambda\) as \(\mathcal{M}^{\text{cl.}}_{4+n_{g},\text{$L$-loop}}\sim G^{L+1+\frac{1}{2}n_{g }}\lambda^{-2+L}\), where \(n_{g}\) is the number of external gravitons. Another perspective on the classical limit was taken in Ref. [42] and involves a suitable restoration of the dependence on Planck's constant and an expansion at small \(\hbar\). From this perspective, external momenta and masses pick up a factor of \(\hbar^{-1}\), which is equivalent to messenger momenta picking up a factor of \(\hbar\) through a change of variables. Thus, Planck's constant effectively plays the role of the momentum transfer and its restoration in four dimensions, \[(q,k,\ell)\to(\hbar q,\hbar k,\hbar\ell)\,\qquad\kappa\to\kappa/\hbar^{1/2} \tag{2}\] realizes the classical limit as a limit of small messenger momenta. With this scaling, the classical four-point amplitudes scale as \(\mathcal{M}^{\text{cl.}}_{4,\text{$L$-loop}}\sim\hbar^{-3}\), which can also be recovered from the correspondence principle perspective by identifying \(\lambda\) and \(\hbar\) and further rescaling \(G\to G/\lambda\), so that the contributions to the classical amplitude have the same scaling at all loop orders. The generalization to amplitudes with four scalars and any number of gravitons is straightforward based on the observation that the emission of arbitrarily-many low-energy messengers is a classical process, so it should not involve additional quantum suppression. Thus, allowing for an arbitrary (even) number of scalars \(n_{\phi}\) and messengers \(n_{g}\), the amplitude scales as \[\mathcal{M}^{\text{cl.}}_{n_{\phi}+n_{g},\text{$L$-loop}}\sim G^{L+\left( \frac{1}{2}n_{\phi}-1\right)+\frac{1}{2}n_{g}}\lambda^{-2\left(\frac{1}{2}n_{ \phi}-1\right)+L}\sim\hbar^{-3\left(\frac{1}{2}n_{\phi}-1\right)-\frac{1}{2}n _{g}}\, \tag{3}\] in four dimensions. Indeed, eq. (3) can be verified for the tree-level five-point amplitude of Ref. [74] and we will use it to extract the classical limit of the one-loop four-scalar-one-graviton amplitude. As for the four-point amplitude, the method of regions provides a systematic way of isolating the relevant contributions to the amplitude [75]. Interestingly, unlike the one-loop four-scalar amplitude, not all internal messengers need to be in the potential region. However, this fact is unsurprising as, through the unitarity method, the one-loop five-point amplitude is part of the three-loop four-point amplitude, which receives contributions from messenger momenta in the radiation region. The identification of the classical limit as a large-mass expansion accompanied by small messenger momenta establishes a connection with heavy-quark effective theory [76; 77; 78; 79], first utilized for classical gravitational scattering in Ref. [80] and further developed in Ref. [81; 82]. One approach to constructing the heavy-particle effective theory starts with the action of a scalar field coupled to gravity, \[\mathcal{L}_{\text{sc-grav}}=\frac{1}{2}(g^{\mu\nu}\partial_{\mu}\phi\partial_{ \nu}\phi-m^{2}\phi^{2}). \tag{4}\] Building on the assumption that the messenger momenta are soft, one considers a process in which scalar fields exchange some gravitons. 
Decomposing the soft part of the scalar momenta and redefining the fields so that they only depend on the soft momenta, \[p^{\mu}=mu^{\mu}+p^{\mu}_{\text{soft}}\,,\qquad\phi=e^{-imu\cdot x}\chi+e^{imu\cdot x}\chi^{*}\, \tag{5}\] leads to the new action \[\mathcal{L}_{\text{sc-grav}}\longrightarrow\chi^{*}\left(2im(g^{\mu\nu}u_{\mu}\partial_{\nu})+(g^{\mu\nu}\partial_{\mu}\partial_{\nu})\right)\chi\, \tag{6}\] where we have neglected the terms with a highly oscillatory phase \(e^{\pm 2imu\cdot x}\). Denoting the momentum of \(\chi\) as \(q\), the propagator is1 Footnote 1: This is equivalent to taking a Taylor series expansion of the quadratic propagator at the level of the amplitude. \[\langle\chi^{*}(-q)\chi(q)\rangle=\frac{i}{-2mq\cdot u-q^{2}}\simeq\frac{-i}{2mq\cdot u}\left(1-\frac{q^{2}}{2mq\cdot u}+\dots\right). \tag{7}\] As one might expect from the propagator of a massive field, the leading term is \(\mathcal{O}(\hbar^{-1})\), the next-to-leading term is \(\mathcal{O}(\hbar^{0})\), etc. Upon constructing scattering amplitudes from the Lagrangian in eq. (6) extended with the Einstein-Hilbert action, all vertices on a matter line must be symmetrized as a consequence of Bose symmetry. Therefore, all leading-order propagators are replaced by Dirac delta functions through the identity \[\frac{i}{2mq\cdot u+i0}+\frac{i}{-2mq\cdot u+i0}=2\pi\delta(2mq\cdot u)\equiv\hat{\delta}(2mq\cdot u)\,, \tag{8}\] and its multi-propagator generalization [83]. This phenomenon can be visualized as \[\text{[diagram]}\quad+\quad\text{[diagram]}\quad=\quad\text{[diagram]}\quad, \tag{9}\] where the red vertical line represents the cut massive propagator, and the two blobs connected by the cut commute \[\text{[diagram]}\quad=\quad\text{[diagram]}\quad. \tag{10}\] To leading order in the classical expansion, the exposed propagators of heavy scalar states can be treated as on-shell in the gravitational heavy-particle effective theory. At loop level, we also encounter propagators that form a principal value combination, \[\frac{i}{2mq\cdot u+i0}-\frac{i}{-2mq\cdot u+i0}=\text{PV}\frac{i}{mq\cdot u}\,. \tag{11}\] As we will see later, they often show up in classical amplitudes. We take a classical expansion of the full tree amplitudes to compute the HEFT tree amplitudes. The HEFT amplitudes are naturally organized as a sum of terms manifesting the factorization properties of Feynman diagrams with the on-shell matter propagator being linearized. Doubled (and perhaps higher powers of) linearized propagators, arising from the expansion in eq. (7), are also present. This is analogous to the structure obtained by directly expanding in the soft region. A more systematic way to construct the classical parts of tree-level HEFT two-scalar-graviton amplitudes without reference to a Lagrangian (though probably equivalent to one) that also exhibits double-copy properties was proposed in Ref. [62].
For example, the three and four-point gravitational Compton amplitudes, expanded up to the classical order, are given by \[\text{[three-point diagram]}=-\kappa m_{1}^{2}(u_{1}\cdot\varepsilon_{2})^{2}\,, \tag{12}\] \[\text{[Compton diagram]}=i\kappa^{2}m_{1}^{3}\delta(2u_{1}\cdot k_{2})(u_{1}\cdot\varepsilon_{2})^{2}(u_{1}\cdot\varepsilon_{3})^{2}-\frac{\kappa^{2}m_{1}^{2}}{2k_{2}\cdot k_{3}}\Big{[}\frac{u_{1}\cdot f_{2}\cdot f_{3}\cdot u_{1}}{u_{1}\cdot k_{3}}\Big{]}^{2}\,, \tag{13}\] where \(f_{i}^{\mu\nu}=k_{i}^{\mu}\varepsilon_{i}^{\nu}-k_{i}^{\nu}\varepsilon_{i}^{\mu}\) is the momentum space linearized field strength. The three-point amplitude scales classically under eq. (2) as \(\mathcal{O}(\hbar^{-1/2})\). The first term in the four-point amplitude exhibits super-classical \(\mathcal{O}(\hbar^{-2})\) scaling, which contains the characteristic delta function that localizes it on a special momentum configuration, while the second term scales classically as \(\mathcal{O}(\hbar^{-1})\). The classical part of the amplitude is referred to as the "HEFT amplitude". At loop level, we will focus on amplitudes with four scalars (two distinct massive scalar lines). The classical expansion naturally decomposes amplitudes into two matter-particle reducible (2MPR) and two matter-particle irreducible (2MPI) contributions. We define the 2MPR contribution as follows: * The diagram becomes disconnected by cutting two matter propagators. * The two cut matter propagators are both represented by \(\delta(p_{i}\cdot\ell)\), and the residue on the cut follows from the factorization of the amplitude's integrand. Being complementary to 2MPR, the 2MPI contributions include the following two classes of diagrams: 1. The diagram remains connected after cutting any two matter propagators. 2. If the diagram becomes disconnected after cutting two matter propagators, then at least one of the matter lines exhibits a principal-value propagator. Consequently, this cut has zero residue. Therefore, by construction, the 2MPR diagrams are given by the product of their 2MPI components on the support of the explicit delta functions that enforce the two-matter cuts. Some simple examples of 2MPR and 2MPI diagrams are given in figure 1. It was further demonstrated in Ref. [63] that, together with a suitable application of generalized unitarity, the classical HEFT tree amplitudes, such as eq. (12) and the second term of eq. (13), yield the classical part of the two-loop four-scalar amplitude, which can be identified with the radial action. As expected, the classical contributions are all 2MPI. More generally, the prescription of Ref. [63] manifests the 2MPR versus 2MPI classification by construction, such that the 2MPI contributions have the correct classical scaling while the 2MPR parts are super-classical. The only way to make the 2MPR diagrams have a classical contribution is to include local quantum contributions in the HEFT tree amplitudes. Such terms are, however, discarded from the outset; since they can appear in a complete calculation, one may wonder whether such \(\hbar/\hbar\) contributions can appear in classical scattering observables. By comparing the HEFT calculation of the five-point amplitude with the analogous result in GR coupled to scalar fields, we will see that such contributions are in fact absent. We postpone to appendix A a more general discussion of these points.
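The distributional identities in eqs. (8) and (11), which are responsible for both the cut (delta-function) matter propagators and the principal-value propagators entering the 2MPI classification above, can be illustrated numerically by keeping a small but finite \(i0\to i\epsilon\). The sketch below uses a dummy variable \(x\) in place of \(2m\,q\cdot u\) and is only a finite-\(\epsilon\) illustration of the distributional statements.

```python
import numpy as np
from scipy.integrate import quad

eps = 1e-4  # finite stand-in for the i0 prescription

# Eq. (8): i/(x + i*eps) + i/(-x + i*eps) = 2*eps/(x**2 + eps**2),
# a nascent delta function; its integral should approach 2*pi, the hat-delta normalization.
sum_combo = lambda x: (1j / (x + 1j * eps) + 1j / (-x + 1j * eps)).real
integral, _ = quad(sum_combo, -1.0, 1.0, points=[0.0])
print(f"integral of sum combination = {integral:.4f}   (2*pi = {2 * np.pi:.4f})")

# Eq. (11): i/(x + i*eps) - i/(-x + i*eps) approaches PV i/(m q.u) = 2i/x for x = 2m q.u away from 0.
x0 = 0.3
diff_combo = 1j / (x0 + 1j * eps) - 1j / (-x0 + 1j * eps)
print(f"difference combination at x = {x0}: {diff_combo:.4f}   (2i/x = {2j / x0:.4f})")
```

In the HEFT construction the first combination produces the cut matter lines in eq. (9), while the second is the principal-value propagator referred to in what follows.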
Finally, an essential aspect of using the HEFT tree amplitudes in a unitarity-based construction of loop-level amplitudes is fixing the \(i0\) prescription for the uncut linearized matter propagators. We find that for the one-loop five-point problem considered in this work, we can treat all the uncut linearized matter propagators as principal-valued. We justify this approach by showing that, with this prescription, the HEFT result agrees with the classical part of the full quantum amplitude. It is beyond the scope of this work to identify a general \(i0\) prescription for the uncut matter propagators when applying the HEFT construction at loop levels. Figure 1: Examples of 2MPR and 2MPI diagrams. ## 3 Waveforms in the observable-based formalism We begin this section by briefly reviewing the observable-based approach of Kosower, Maybee, and O'Connell [42] (KMOC) to inclusive and local scattering observables. We then discuss certain cancellations present in this formalism, which can be made manifest through the use of the HEFT organization of amplitudes. The KMOC formalism constructs quantum scattering observables, described by some operator \(\hat{\mathcal{O}}\), by comparing the expectation value of \(\hat{\mathcal{O}}\) in the final and initial states of the scattering process: \[\langle\hat{\mathcal{O}}\rangle=\langle\psi_{\rm out}|\hat{\mathcal{O}}|\psi_{\rm out}\rangle-\langle\psi_{\rm in}|\hat{\mathcal{O}}|\psi_{\rm in}\rangle \tag{11}\] where \(\psi_{\rm out}\) and \(\psi_{\rm in}\) are the outgoing and incoming states, respectively. Further using that the incoming and outgoing states are connected by the time-evolution operator whose matrix elements form the scattering matrix, \(\hat{S}\), \[|\psi_{\rm out}\rangle=\hat{S}|\psi_{\rm in}\rangle\quad\text{ where }\quad\hat{S}=\mathbb{I}+i\hat{T}\, \tag{12}\] and \(\hat{T}\) is the transition matrix, eq. (11) becomes [42] \[\langle\hat{\mathcal{O}}\rangle=i\langle\psi_{\rm in}|\hat{\mathcal{O}}\hat{T}-\hat{T}^{\dagger}\hat{\mathcal{O}}|\psi_{\rm in}\rangle+\langle\psi_{\rm in}|\hat{T}^{\dagger}\hat{\mathcal{O}}\hat{T}|\psi_{\rm in}\rangle. \tag{13}\] Thus, using eq. (13), scattering observables are expressed in terms of (i.e. phase space integrals of) scattering amplitudes (i.e. matrix elements of \(\hat{T}\)) and their cuts, dressed with \(\hat{\mathcal{O}}\). The operators \(\hat{\mathcal{O}}\) may correspond to global (inclusive) observables such as the impulse of the matter particles or the radiated momentum [42], to local (exclusive) observables such as the waveform of the outgoing radiation [43], or to a combination of both. The corresponding classical observables can be obtained in the appropriate classical limit. Impact parameter space provides one convenient means of taking this limit. The classical limit corresponds to an impact parameter significantly larger than the de Broglie wavelengths of the scattered particles and than the horizon radii of black holes of the same masses; these, in turn, imply that the orbital angular momentum is large, making contact with Bohr's correspondence principle.
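The rearrangement that takes eq. (11) into eq. (13), using \(\hat{S}=\mathbb{I}+i\hat{T}\) from eq. (12), is a short exercise in operator algebra. A quick symbolic check of that step, treating \(\hat{\mathcal{O}}\), \(\hat{T}\) and \(\hat{T}^{\dagger}\) as independent noncommuting symbols, is sketched below.

```python
import sympy as sp

# Noncommuting placeholders for the operators O, T and T^dagger
O, T, Td = sp.symbols('O T Td', commutative=False)

S = 1 + sp.I * T      # S = 1 + i T
Sd = 1 - sp.I * Td    # S^dagger = 1 - i T^dagger

lhs = sp.expand(Sd * O * S - O)                        # <S^dag O S> - <O>
rhs = sp.expand(sp.I * (O * T - Td * O) + Td * O * T)  # right-hand side of eq. (13)

print(sp.simplify(lhs - rhs))  # prints 0
```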
To make use of this, and assuming that the initial state contains no incoming radiation, the initial two-particle state is built in terms of wave packets over the tensor product of a single-particle phase-space of measure \({\rm d}\Phi[p]\): \[|\psi_{\rm in}\rangle =\int{\rm d}\Phi[p_{1}]{\rm d}\Phi[p_{2}]\phi(p_{1})\phi(p_{2})e ^{ip_{1}\cdot b_{1}+ip_{2}\cdot b_{2}}|p_{1}p_{2}\rangle\,, \tag{14}\] \[{\rm d}\Phi[p] \equiv\frac{{\rm d}^{4}p}{(2\pi)^{4}}2\pi\Theta(p^{0})\delta(p^{2 }-m^{2})\,.\] The combination \(b_{1}-b_{2}\) is the impact parameter (i.e. the separation between the incoming particles), while \(b_{1}+b_{2}\) is the conjugate of the momentum of the center of mass. The wave packets \(\phi_{i}\) are sufficiently localized not to interfere with the classical limit conditions [42], i.e. their widths \(\ell_{\phi(p_{i})}\) obey \(\ell_{\rm horizon}\ll\ell_{\phi(p_{i})}\ll\sqrt{-(b_{1}-b_{2})^{2}}\), namely, it is much greater than the horizon size \(\ell_{\rm horizon}\) of the black holes but much smaller than their separation. ### A brief review of the waveform in the observable-based formalism The waveform in the infinite future, obtained by measuring the linearized Riemann curvature tensor [43], is the focus of this paper. The relevant operator, written in terms of graviton creation and annihilation operators, is \[\mathbb{R}_{\mu\nu\rho\sigma}(x)=\frac{\kappa}{2}\,\sum_{h=\pm}\int \mathrm{d}\Phi(k)\,\Big{[}\,k_{[\mu}\varepsilon_{\nu],h}^{*}(k)\,k_{[\rho} \varepsilon_{\sigma],h}^{*}(k)\,e^{-ik\cdot x}\,\hat{a}_{hh}(k) \tag{10}\] \[\qquad\qquad\qquad\qquad\qquad+k_{[\mu}\varepsilon_{\nu],h}(k)\,k _{[\rho}\varepsilon_{\sigma],h}(k)\,e^{+ik\cdot x}\,\hat{a}_{hh}^{\dagger}(k) \Big{]}\,,\] where the antisymmetrization has strength 2, and \(\mathrm{d}\Phi(k)\) is given by eq. (11) at zero mass, \(\varepsilon_{h}(k)\) are polarization vectors normalized as \[\varepsilon_{h}(k)\cdot\varepsilon_{h}^{*}(k)=-1\,,\qquad\varepsilon_{+}^{*}(k )=\varepsilon_{-}(k)\,. \tag{11}\] In the second relation, we choose \(h=\pm\) to represent the \(\pm 1\) helicity state. The operator \(\hat{a}_{hh}(k)\) annihilates a graviton with polarization \(\varepsilon_{h}^{\mu\nu}=\varepsilon_{h}^{\mu}\varepsilon_{h}^{\nu}\). We note that, even though this operator superficially depends on an arbitrary space-time point, \(x\), the formulation of the measurement process through scattering theory in eq. (10) implicitly assumes that this point is both in the far future and at spatial infinity. Assuming temporarily that the observable is measured at finite distance and there is no gravitational radiation in the infinite past, the waveform is given by [84; 43] \[\langle\mathbb{R}_{\mu\nu\rho\sigma}(x)\rangle =i\int\mathrm{d}\Phi(k)\left[e^{-ik\cdot x}\tilde{J}_{\mu\nu\rho \sigma}(k)-e^{ik\cdot x}\tilde{J}_{\mu\nu\rho\sigma}^{\dagger}(k)\right]\, \tag{12}\] \[\tilde{J}_{\mu\nu\rho\sigma}(k) =\frac{\kappa}{2}\sum_{h=\pm}k_{[\mu}\varepsilon_{\nu],h}^{*}(k) \,k_{[\rho}\varepsilon_{\sigma],h}^{*}(k)(-i)\langle\psi_{\mathrm{in}}|\hat{S }^{\dagger}\hat{a}_{hh}(k)\hat{S}|\psi_{\mathrm{in}}\rangle\,\] (13) \[\tilde{J}_{\mu\nu\rho\sigma}^{\dagger}(k) =\frac{\kappa}{2}\sum_{h=\pm}k_{[\mu}\varepsilon_{\nu],h}(k)\,k_{ [\rho}\varepsilon_{\sigma],h}(k)(+i)\langle\psi_{\mathrm{in}}|\hat{S}^{ \dagger}\hat{a}_{hh}^{\dagger}(k)\hat{S}|\psi_{\mathrm{in}}\rangle. 
\tag{14}\] At large distances, \(|x^{0}|\to\infty\) and \(|\mathbf{x}|\to\infty\), the integral over the angular directions of the on-shell graviton momentum, \[k=\omega(1,\mathbf{n_{k}})\,,\quad\text{with $\omega>0$ and $\mathbf{n_{k}^{2}}=1$}\, \tag{15}\] can be evaluated through various methods [84; 43] and each of the exponentials in eq. (12) leads to a linear combination of advanced and retarded propagators while \(\mathbf{n_{k}}\) is localized to the spatial unit vector \(\mathbf{n}\equiv\mathbf{x}/|\mathbf{x}|\) at the observation point \(x\), \[\langle\mathbb{R}_{\mu\nu\rho\sigma}(x)\rangle =\frac{1}{4\pi|\mathbf{x}|}\int_{0}^{\infty}\frac{d\omega}{2\pi} \Big{[}e^{-i\omega(x^{0}-|\mathbf{x}|)}\tilde{J}_{\mu\nu\rho\sigma}(\omega,\omega \mathbf{n})+e^{+i\omega(x^{0}-|\mathbf{x}|)}\tilde{J}_{\mu\nu\rho\sigma}^{\dagger}( \omega,\omega\mathbf{n})\] \[\qquad\qquad-e^{-i\omega(x^{0}+|\mathbf{x}|)}\tilde{J}_{\mu\nu\rho \sigma}(\omega,-\omega\mathbf{n})-e^{+i\omega(x^{0}+|\mathbf{x}|)}\tilde{J}_{\mu\nu \rho\sigma}^{\dagger}(\omega,-\omega\mathbf{n})\Big{]}. \tag{16}\] In the infinite future, only the terms depending on the retarded time \(t=x^{0}-|\mathbf{x}|\) are relevant. We thus drop the second line of eq. (16). The waveform \(W_{\mu\nu\rho\sigma}(t,\mathbf{n})\) for the curvature tensor and the spectral waveform, \(f_{\mu\nu\rho\sigma}(\omega,\mathbf{n})\), is given by \[\langle\mathbb{R}_{\mu\nu\rho\sigma}(x)\rangle\Big{|}_{|\mathbf{x}| \rightarrow\infty} =\frac{1}{|\mathbf{x}|}W_{\mu\nu\rho\sigma}(t,\mathbf{n})\equiv\frac{1}{| \mathbf{x}|}\int_{-\infty}^{+\infty}\frac{d\omega}{2\pi}f_{\mu\nu\rho\sigma}( \omega,\mathbf{n})e^{-i\omega t}\,, \tag{3.12a}\] \[f_{\mu\nu\rho\sigma}(\omega,\mathbf{n}) =\frac{1}{4\pi}\left[\Theta(\omega)\tilde{J}_{\mu\nu\rho\sigma}( \omega,\omega\mathbf{n})+\Theta(-\omega)\tilde{J}^{\dagger}_{\mu\nu\rho\sigma}(| \omega|,|\omega|\mathbf{n})\right]\,. \tag{3.12b}\] A convenient presentation of the curvature tensor (and consequently of the gravitational waveform) is in terms of Newman-Penrose scalars [85]. They are constructed as projections of the curvature tensor on a complex basis of null vectors. Following Ref. [43] we choose these vectors to be \[L^{\mu}=\frac{k^{\mu}}{\omega}=(1,\mathbf{n}_{\mathbf{k}})\,\quad N^{\mu}=\zeta^{\mu}\, \quad M^{\mu}=\varepsilon^{\mu}_{+}\,\quad M^{*\mu}=\varepsilon^{\mu}_{-}\, \tag{3.13}\] where \(M\cdot M^{*}=-1\) following eq. (3.6), and \(\zeta\) is a gauge choice such that \(\zeta\cdot\varepsilon_{\pm}=0\) and \(L\cdot N=L\cdot\zeta=1\). The independence of \(L\) on the frequency of the outgoing graviton makes eq. (3.13) a suitable basis both for the waveform and the spectral waveform. The Newman-Penrose scalars are defined by the independent contractions of the Weyl tensor with the vectors in eq. (3.13). The one with the slowest decay at large distances, typically denoted by \(\Psi_{4}\), describes the transverse radiation propagating along \(L\)[85], \[\Psi_{4}(x)=-N^{\mu}M^{*\nu}N^{\rho}M^{*\sigma}\langle\mathbb{R}_{\mu\nu\rho \sigma}(x)\rangle=\frac{1}{|\mathbf{x}|}\Psi_{4}^{\infty}+\ldots \tag{3.14}\] where the ellipsis stands for terms suppressed at large distance. Using the transversality and null property of \(M^{\mu}\) and eq. 
(3.11), we can write the spectral representation of \(\Psi_{4}^{\infty}\) as \[\widetilde{\Psi}_{4}^{\infty}(\omega,\mathbf{n})=-\frac{\kappa}{4\pi} \Big{[}\Theta(\omega)\omega^{2}(-i)\langle\psi_{\rm in}|\hat{S}^{ \dagger}\hat{a}_{--}(\omega,\omega\mathbf{n})\hat{S}|\psi_{\rm in}\rangle\] \[\qquad\qquad+\Theta(-\omega)\omega^{2}(+i)\langle\psi_{\rm in}| \hat{S}^{\dagger}\hat{a}^{\dagger}_{++}(|\omega|,|\omega|\mathbf{n})\hat{S}|\psi_ {\rm in}\rangle\Big{]}\,. \tag{3.15}\] For an asymptotically flat spacetime, outgoing radiation at large distances is described by linearized general relativity in transverse traceless gauge. Using that \(k\cdot N=\omega\), the Newman-Penrose scalar \(\Psi_{4}\) takes the form \[\Psi_{4}=\frac{1}{2}\kappa\,\varepsilon^{\mu}_{-}\varepsilon^{\nu}_{-}\tilde{ h}_{\mu\nu}\equiv\frac{1}{2}\kappa(\tilde{h}_{+}+i\tilde{h}_{\times})=\frac{ \kappa}{8\pi|\mathbf{x}|}(\tilde{h}_{+}^{\infty}+i\tilde{h}_{\times}^{\infty})\, \tag{3.16}\] For \(k=(\omega,0,0,\omega)\), the negative-helicity polarization vector is \(\varepsilon_{-}=-\frac{1}{\sqrt{2}}(0,1,i,0)\) and the \(+\) and \(\times\) graviton polarizations are \[h_{+}=\frac{1}{2}(h_{11}-h_{22})\,,\qquad h_{\times}=h_{12}=h_{21}. \tag{3.17}\] In general, the \(\times\) and \(+\) polarization are defined with respect to the vector \(L^{\mu}\) pointing along the graviton momentum. They are related to the real and imaginary parts of the outgoing negative helicity polarization tensor, see section 8. The metric perturbation is in transverse traceless gauge and is normalized in such a way that, at spatial infinity, it falls off as \[g_{\mu\nu}\Big{|}_{|\mathbf{x}|\to\infty}=\eta_{\mu\nu}+\frac{\kappa}{8 \pi|\mathbf{x}|}h^{\infty}_{\mu\nu}\,. \tag{3.18}\] Therefore, we may directly identify the Fourier transform of the metric at infinity in terms of the frequency-space Newman-Penrose scalar: \[\tilde{h}^{\infty}_{+}+i\tilde{h}^{\infty}_{\times}=\Big{[} \,\Theta(\omega)(-i)\langle\psi_{\rm in}|\hat{S}^{\dagger}\hat{a}_ {--}(\omega,\omega\mathbf{n})\hat{S}|\psi_{\rm in}\rangle\] \[\qquad+\Theta(-\omega)(+i)\langle\psi_{\rm in}|\hat{S}^{\dagger} \hat{a}^{\dagger}_{++}(|\omega|,|\omega|\mathbf{n})\hat{S}|\psi_{\rm in}\rangle \Big{]}\;. \tag{3.19}\] Thus, in frequency space, the standard gravitational-wave polarizations, \(h_{+,\times}\), are given directly in terms of scattering amplitudes. Let us now discuss the matrix elements \(\langle\psi_{\rm in}|\hat{S}^{\dagger}\hat{a}_{--}(k)\hat{S}|\psi_{\rm in}\rangle\) and \(\langle\psi_{\rm in}|\hat{S}^{\dagger}\hat{a}^{\dagger}_{++}(k)\hat{S}|\psi_{ \rm in}\rangle\), which define \(\tilde{J}\) and \(\tilde{J}^{\dagger}\). We will focus on the former and obtain the latter by complex conjugation. 
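Before turning to these matrix elements, a quick consistency check of the conventions in eqs. (3.16) and (3.17): for \(k=(\omega,0,0,\omega)\), contracting a symmetric spatial perturbation with \(\varepsilon_{-}^{\mu}\varepsilon_{-}^{\nu}\) should return \(h_{+}+ih_{\times}\). The symbolic sketch below keeps only the \(x\)-\(y\) block, which is all that survives the contraction.

```python
import sympy as sp

h11, h22, h12 = sp.symbols('h11 h22 h12', real=True)

# Spatial x-y block of the metric perturbation for k = (omega, 0, 0, omega)
h = sp.Matrix([[h11, h12],
               [h12, h22]])

# x-y components of the negative-helicity polarization, epsilon_- = -(0, 1, i, 0)/sqrt(2)
eps_minus = -sp.Matrix([1, sp.I]) / sp.sqrt(2)

contraction = sp.expand((eps_minus.T * h * eps_minus)[0])

# Eq. (3.17): h_+ = (h_11 - h_22)/2 and h_x = h_12
print(sp.simplify(contraction - ((h11 - h22) / 2 + sp.I * h12)))  # prints 0
```

This is the identification \(\varepsilon_{-}^{\mu}\varepsilon_{-}^{\nu}\tilde{h}_{\mu\nu}=\tilde{h}_{+}+i\tilde{h}_{\times}\) used in eq. (3.16).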
We first go to momentum space and consider the matrix element with an initial state with momenta \(p_{1}\) and \(p_{2}\), \[\langle\psi_{\rm in}|\hat{S}^{\dagger}\hat{a}_{hh}(k)\hat{S}|\psi_{\rm in}\rangle=\int {\rm d}\Phi[p_{1}]{\rm d}\Phi[p_{2}]{\rm d}\Phi[p^{\prime}_{1}]{\rm d}\Phi[p^{\prime}_{2}]\phi(p_{1})\phi(p_{2})\phi^{*}(p^{\prime}_{1})\phi^{*}(p^{\prime}_{2})\] \[\times\langle p^{\prime}_{1}p^{\prime}_{2}|\hat{S}^{\dagger}\hat{a}_{hh}(k)\hat{S}|p_{1}p_{2}\rangle e^{i(p_{1}-p^{\prime}_{1})\cdot b_{1}}e^{i(p_{2}-p^{\prime}_{2})\cdot b_{2}} \tag{3.20}\] We then expand \(\hat{S}\) in terms of the transition matrix \(\hat{T}\), which leads to \[\langle p^{\prime}_{1}p^{\prime}_{2}|\hat{S}^{\dagger}\hat{a}_{hh}(k)\hat{S}|p_{1}p_{2}\rangle=\langle p^{\prime}_{1}p^{\prime}_{2}k^{h}|i\hat{T}|p_{1}p_{2}\rangle+\langle p^{\prime}_{1}p^{\prime}_{2}|\hat{T}^{\dagger}\hat{a}_{hh}(k)\hat{T}|p_{1}p_{2}\rangle\,. \tag{3.21}\] The first term is simply the 2-to-3 scattering amplitude \[\langle p^{\prime}_{1}p^{\prime}_{2}k^{h}|i\hat{T}|p_{1}p_{2}\rangle =i{\cal M}(p_{1}p_{2}\to p^{\prime}_{1}p^{\prime}_{2}k^{h})\] \[=iM(p_{1}p_{2}\to p^{\prime}_{1}p^{\prime}_{2}k^{h})\underbrace{(2\pi)^{d}\delta^{(d)}(p_{1}+p_{2}-p^{\prime}_{1}-p^{\prime}_{2}-k)}_{\equiv\hat{\delta}^{(d)}(p_{1}+p_{2}-p^{\prime}_{1}-p^{\prime}_{2}-k)}, \tag{3.22}\] where \({\cal M}\) contains implicitly the momentum-conserving delta function, and \(M\) is the reduced amplitude with this delta function stripped off. The second term gives the \(s\)-channel unitarity cuts of this virtual amplitude, \[\langle p^{\prime}_{1}p^{\prime}_{2}|\hat{T}^{\dagger}\hat{a}_{hh}(k)\hat{T}|p_{1}p_{2}\rangle =\sum_{X}\int{\rm d}\Phi[r_{1}]{\rm d}\Phi[r_{2}]\langle p^{\prime}_{1}p^{\prime}_{2}|T^{\dagger}|r_{1}r_{2}X\rangle\langle r_{1}r_{2}kX|T|p_{1}p_{2}\rangle \tag{3.23}\] \[=\sum_{X}\int{\rm d}\Phi[r_{1}]{\rm d}\Phi[r_{2}]{\cal M}^{*}(p^{\prime}_{1}p^{\prime}_{2}\to r_{1}r_{2},X){\cal M}(p_{1}p_{2}\to r_{1}r_{2}k^{h},X)\,,\] where we have judiciously inserted a complete set of states between \(\hat{T}^{\dagger}\) and \(\hat{a}_{hh}(k)\), and \(X\) stands for graviton exchanges. This term can be graphically represented by \[\langle p_{1}^{\prime}p_{2}^{\prime}|\hat{T}^{\dagger}\hat{a}_{hh}(k)\hat{T}|p_{1}p_{2}\rangle=\sum_{X}\int\mathrm{d}\Phi[r_{1}]\mathrm{d}\Phi[r_{2}]\;\text{[diagram]} \tag{3.24}\] We note that \(\sum_{X}\) implicitly contains both the integration over the graviton phase spaces and the sum over polarizations, together with symmetry factors for identical particles. The phase factor in eq. (3.20) can be reorganized using the momentum-conserving delta functions. We introduce \(q_{i}=p_{i}-p_{i}^{\prime}\), which are related to the momentum of the outgoing graviton as \(q_{1}+q_{2}=k\). The phase factor thus becomes \(e^{iq_{1}\cdot b_{1}+iq_{2}\cdot b_{2}}=e^{iq_{1}\cdot(b_{1}-b_{2})+ik\cdot b_{2}}\). The second term can be absorbed into the \(e^{ik\cdot x}\) factor in eq. (3.7) by redefining the position \(x\). Since the \(b_{i}\) are finite, this redefinition is irrelevant at large distances. We can thus choose the impact parameter to be \(b=b_{1}-b_{2}\), which is the Fourier-conjugate of \(q_{1}\).2 Consequently, it is equivalent to performing the Fourier transform in eq. (3.20) as Footnote 2: Other choices are possible, but differ only by a phase whose argument is linear in \(k\).
\[\langle\psi_{\mathrm{in}}|\hat{S}^{\dagger}\hat{a}_{hh}(k)\hat{S}|\psi_{\mathrm{in}}\rangle=\int \mathrm{d}\Phi[p_{1}]\mathrm{d}\Phi[p_{2}]\mathrm{d}\Phi[p_{1}^{\prime}]\mathrm{d}\Phi[p_{2}^{\prime}]\phi(p_{1})\phi(p_{2})\phi^{*}(p_{1}^{\prime})\phi^{*}(p_{2}^{\prime})\] \[\times\langle p_{1}^{\prime}p_{2}^{\prime}|\hat{S}^{\dagger}\hat{a}_{hh}(k)\hat{S}|p_{1}p_{2}\rangle e^{i(p_{1}-p_{1}^{\prime})\cdot b}\,. \tag{3.25}\] Finally, the soft expansion in the classical limit, \(q_{i}\ll p_{i}\), introduces substantial simplifications. After introducing \(\bar{p}_{i}\equiv p_{i}+(1/2)q_{i}\), we can rewrite the measure \(\mathrm{d}\Phi(p_{i}^{\prime})\) as \[\mathrm{d}\Phi(p_{i}^{\prime})=\frac{\mathrm{d}^{4}p_{i}^{\prime}}{(2\pi)^{4}}(2\pi)\delta({p_{i}^{\prime}}^{2}-m_{i}^{2})=\frac{\mathrm{d}^{4}q_{i}}{(2\pi)^{4}}(2\pi)\delta(2\bar{p}_{i}\cdot q_{i})\equiv\frac{\mathrm{d}^{4}q_{i}}{(2\pi)^{4}}\hat{\delta}(2\bar{p}_{i}\cdot q_{i})\,, \tag{3.26}\] where \(\hat{\delta}(x)\equiv 2\pi\delta(x)\), and we have used the fact that external physical particles always have positive energy. We further identify the initial and final momentum space wave packets in this limit, \(\phi(p_{i})\to\phi(\bar{p}_{i})\) and \(\phi^{*}(p_{i}^{\prime})\to\phi^{*}(\bar{p}_{i})\). Accounting for these features of the classical limit, eq. (3.25) now takes its final form, \[(-i)\langle\psi_{\mathrm{in}}|\hat{S}^{\dagger}\hat{a}_{hh}(k)\hat{S}|\psi_{\mathrm{in}}\rangle=\int\mathrm{d}\Phi[p_{1}]\mathrm{d}\Phi[p_{2}]|\phi(\bar{p}_{1})|^{2}|\phi(\bar{p}_{2})|^{2}\mathcal{B}_{h}\,, \tag{3.27}\] \[\mathcal{B}_{h}\equiv\int\frac{\mathrm{d}^{d}q_{1}}{(2\pi)^{d}}\frac{\mathrm{d}^{d}q_{2}}{(2\pi)^{d}}\hat{\delta}(2\bar{p}_{1}\cdot q_{1})\hat{\delta}(2\bar{p}_{2}\cdot q_{2})e^{iq_{1}\cdot b}(-i)\langle p_{1}-q_{1},p_{2}-q_{2}|\hat{S}^{\dagger}\hat{a}_{hh}(k)\hat{S}|p_{1}p_{2}\rangle\,\] where the matrix element \(\langle p_{1}-q_{1},p_{2}-q_{2}|\hat{S}^{\dagger}\hat{a}_{hh}(k)\hat{S}|p_{1}p_{2}\rangle\) should also be evaluated in the classical limit. The matrix element \(\langle\psi_{\mathrm{in}}|\hat{S}^{\dagger}\hat{a}_{hh}^{\dagger}(k)\hat{S}|\psi_{\mathrm{in}}\rangle\) follows by complex conjugation. The reason we included an explicit factor of \((-i)\) is so that \(\mathcal{B}_{h}\) is given directly by amplitudes, which are defined as matrix elements of \(\hat{T}\); see eqs. (3.21) and (3.22). We will now discuss some important properties of these matrix elements. ### On the structure of local (exclusive) observables The matrix element \(\langle p_{1}^{\prime}p_{2}^{\prime}|\hat{S}^{\dagger}\hat{a}_{hh}(k)\hat{S}|p_{1}p_{2}\rangle\) determining the spectral waveform is written out explicitly in terms of scattering amplitudes in eq. (3.21). As discussed in Refs. [29; 86], the two different \(i0\) prescriptions that appear in the last term in eq. (3.21) pose no difficulty to the evaluation of inclusive observables. For example, in the related KMOC calculation of inclusive observables [29; 86], the terms bilinear in amplitudes were evaluated through reverse unitarity [87; 88; 89]. We may simply construct the five-point virtual amplitude \(\mathcal{M}(p_{1}p_{2}\to p_{1}^{\prime}p_{2}^{\prime}k^{h})\) and then take its \(s\)-channel cut to find the second term in eq. (3.21).3 However, such direct calculations can obscure additional simplifications as propagators with opposite \(i0\) conventions can exhibit nontrivial cancellations.
In particular, we argue that only cuts of 2MPI graphs contribute to classical local observables due to such cancellations. Footnote 3: We note that here the terms bilinear in amplitudes are literally the \(s\)-channel cut of the term linear in amplitudes without any additional dressing factors. This is a consequence of the structure of the observable, which only “measures” properties of outgoing particles and thus does not actively participate in the phase space integration. As another side note, we allow disconnected matrix elements to appear across the \(s\)-channel cuts. At one loop, it is clear from eq. (3.21) that the 2MPR graphs are subtracted in the classical waveform. The second term in eq. (3.21), \(\langle p_{1}^{\prime}p_{2}^{\prime}|\hat{T}^{\dagger}\hat{a}_{h}(k)\hat{T}|p_{1}p_{2}\rangle\), evaluates to the 2MPR contribution plus one-loop 3-particle cuts, [diagrammatic equation omitted] That is, \[\hat{S}=e^{i\hat{N}}\equiv\sum_{n=0}^{\infty}\frac{(i\hat{N})^{n}}{n!}\, \tag{3.30}\] where \(\hat{N}\) is a Hermitian operator and the product of operators is defined by inserting the identity operator in the complete Hilbert space of states, \[\mathbb{I}=\underbrace{|r_{1}r_{2}\rangle\langle r_{1}r_{2}|}_{\hat{P}_{2,0}}+\underbrace{|r_{1}r_{2}k_{1}\rangle\langle r_{1}r_{2}k_{1}|}_{\hat{P}_{2,1}}+\underbrace{|r_{1}r_{2}k_{1}k_{2}\rangle\langle r_{1}r_{2}k_{1}k_{2}|}_{\hat{P}_{2,2}}+\ldots=\sum_{m=0}^{\infty}\hat{P}_{2,m} \tag{3.31}\] where \(\hat{P}_{2,m}\) is the identity operator in the \((2+m)\)-particle Hilbert space. We can perturbatively expand \(\hat{N}\) in \(\kappa\), \[\hat{N}=\kappa^{2}\hat{N}_{0}+\kappa^{3}\hat{N}_{0}^{\rm rad}+\kappa^{4}\hat{N}_{1}+\kappa^{5}\hat{N}_{1}^{\rm rad}+\kappa^{6}\hat{N}_{2}+\ldots \tag{3.32}\] where the subscript denotes the loop order, and the superscript signals the presence of graviton emission. We note that graviton emission also contributes to even orders of \(\kappa\), which we suppress here for simplicity. We can perform a similar expansion for \(\hat{T}\) and solve for \(N_{i}\), the matrix element of \(\hat{N}_{i}\), order by order. For example, when restricted to two-particle initial and final states, we have [61], \[N_{0}=\text{[diagrammatic expression omitted]} \tag{3.33}\] where the blobs represent virtual amplitudes, which are matrix elements of \(\hat{T}\), and the cut propagator is integrated with measure \({\rm d}\Phi[p]\). Crucially, we now prove that the matrix elements of \(\hat{N}\) do not exhibit local \(s\)-channel cuts by induction. A term exhibits a local \(s\)-channel cut if the term contains an \(s\)-channel cut whose residue can be interpreted as a product of trees. For \(N_{1}\), this is apparent due to the explicit subtraction of the cut shown in eq. (3.33), and one can check this property order by order through direct computations. To see this for more generic cases, we carry out the expansion in eq. (3.30) up to a certain power of \(\kappa\). Should an \(\hat{N}_{i}\) in this expansion contain an \(s\)-channel cut in addition to those explicitly present due to the insertion of the projectors \(\hat{P}_{2,m}\), then it would contribute a term \(\hat{A}_{a}\hat{P}_{2,m}\hat{B}_{b}\in\hat{N}_{i}\). Both \(\hat{A}_{a}\) and \(\hat{B}_{b}\) here, which may contain graviton emissions, have a lower power of \(\kappa\). They should have already been included in the respective \(\hat{N}_{a}\) and \(\hat{N}_{b}\); otherwise it would be inconsistent with the symmetry factors coming from eq. (3.30).
Therefore, we have proved that \(\hat{N}_{i}\) is free of \(s\)-channel cuts. We now substitute eq. (3.30) into eq. (3.1), finding a sum of nested commutators, \[\langle\hat{\mathcal{O}}\rangle=\langle\psi_{\rm in}|e^{-i\hat{N}}[\hat{\mathcal{O}},e^{i\hat{N}}]|\psi_{\rm in}\rangle=\langle\psi_{\rm in}|\sum_{k=1}^{\infty}\frac{i^{k}}{k!}\underbrace{[[[[\hat{\mathcal{O}},\hat{N}],\hat{N}],\ldots],\hat{N}]}_{\text{$k$ times}}|\psi_{\rm in}\rangle\,. \tag{3.34}\] The above discussion is completely general, applying to the full quantum theory. This form of the S-matrix operator manifests all the \(s\)-channel cuts, which can only appear between two \(\hat{N}\) operators through the insertion of the phase-space projector \(\hat{P}_{2,n}\). Restricting ourselves to the waveform operator, or more generally to those that only measure properties of particular external states, one finds additional identities among the matrix elements in the classical limit when a two-massive-particle projector is inserted, \[\langle\psi_{\rm in}|[[[\hat{\mathcal{O}},\hat{N}],\ldots],\hat{N}]\hat{P}_{2,0}\hat{N}|\psi_{\rm in}\rangle=\langle\psi_{\rm in}|\hat{N}\hat{P}_{2,0}[[[\mathcal{O},\hat{N}],\ldots],\hat{N}]|\psi_{\rm in}\rangle\,. \tag{3.35}\] This relation follows from reformulating eq. (2.10) in operator language, which states that the difference between the two matrix elements in eq. (3.35) is subleading in the momentum transfer \(q\). It is crucial that we are considering an operator \(\hat{\mathcal{O}}\) that does not participate in the integrals implied by the phase-space projector. As discussed above, \(\hat{N}\) does not contain local \(s\)-channel cuts. We have therefore demonstrated the absence of two-massive-particle cuts at all loop orders.4 As a direct consequence of this result, only 2MPI diagrams contribute to the classical waveform in the KMOC formalism. This generalizes the intuitive picture that the iteration part of the scattering amplitudes (often superclassical) should not contribute. We will evaluate the 2MPI part of the classical five-point amplitude at one-loop order in sections 4 to 6. Footnote 4: We thank Carlo Heissenberg for the correspondence leading to the revision of this statement in the first arXiv version. ### Infrared divergences of amplitudes and the classical waveform Five-point (and in general \(n\)-point) gravitational amplitudes are typically IR divergent. In cross-section calculations, some divergences exponentiate to a harmless total phase while others must be removed by summing over final state radiation [90; 91] or dressing the external states [92; 93; 94]. In observables that are both linear and bilinear in scattering amplitudes, such as the waveform, some of these divergences are in the 2MPR contributions and therefore cancel against the bilinear-in-\(\hat{T}\) terms in eq. (3.22). It is important to understand the IR divergences of the surviving 2MPI diagrams in e.g. eqs. (3.12a) and (3.27), as these are relevant for classical observables. However, in the classical limit, we find that the surviving 2MPI IR divergences correspond to a total phase that can be safely absorbed into a linear redefinition of \(t\) in eq. (3.12). A similar treatment was first discussed in Refs. [13; 14] in the context of the PN expansion. Ref. [68] famously showed that the virtual IR divergences of gravitational amplitudes come from loop-momentum integration regions in which a graviton \(\ell\) connecting the external particles with momenta \(p_{a}\) and \(p_{b}\) becomes soft.
They factorize and exponentiate as [68] \[\frac{\mathcal{M}(\alpha\to\beta)}{\mathcal{M}^{0}(\alpha\to\beta)}=\exp\left[4\pi G\sum_{a,b}\left((p_{a}\cdot p_{b})^{2}-\frac{1}{2}m_{a}^{2}m_{b}^{2}\right)\eta_{a}\eta_{b}J_{ab}\right], \tag{3.36}\] \[J_{ab}=-i\mu^{2\epsilon}\int^{\Lambda}\frac{{\rm d}^{d}\ell}{(2\pi)^{d}}\frac{1}{(\ell^{2}+i0)(p_{b}\cdot\ell+i\eta_{b}0)(p_{a}\cdot\ell-i\eta_{a}0)}\,, \tag{3.37}\] where \({\cal M}(\alpha\to\beta)\) is the all-order \(\alpha\to\beta\) amplitude, and \({\cal M}^{0}(\alpha\to\beta)\) is its counterpart without the virtual soft gravitons. In the exponent, the factor \((p_{a}\cdot p_{b})^{2}-\frac{1}{2}m_{a}^{2}m_{b}^{2}\) comes from the contraction of two stress-energy tensors with the numerator of the graviton propagator. The summation \(\sum_{a,b}\) runs over all the unordered pairs \((a,b)\) of external particles, and \(\eta_{a}=\pm 1\) depending on whether \(p_{a}\) is outgoing or incoming. Following Ref. [95], we dimensionally regularize \(J_{ab}\) using \(d=4-2\epsilon\), where \(\mu\) is the scale of dimensional regularization and \(\Lambda\) is the cutoff \(|\ell^{2}|<\Lambda^{2}\) that defines the virtual soft momenta.5

Footnote 5: These momenta should not be confused with the momenta in the soft region as defined in eq. (1).

The IR-divergent integral \(J_{ab}\) has both a real and an imaginary part [68]. One option to eliminate them and define IR-finite S-matrix elements is to choose suitable asymptotic states [93; 95; 96]. While we will not pursue this approach here, it would be interesting to understand if it can be realized by judiciously choosing the wave packets \(\phi(p_{i})\) in eq. (11). Instead, we will show here that, in the classical limit, the 2MPI diagrams do not contribute to the real IR divergence. The remaining IR-divergent phase can be absorbed into the definition of the time variable of the waveform.

Following Ref. [68], let us consider the scattering of two massive particles (labeled as 1 and 2) with graviton emission in the final state. In the classical limit (that is, expanding in the soft region), all the matter propagators are linearized. The \(\hbar\) counting further implies that, for a connected amplitude, there can be at most one graviton (labeled as \(k\)) in the final state that is relevant to classical observables [97; 98].6 In the following discussion, the "virtual soft graviton" is even softer than the soft region, namely, \(\Lambda\ll|\mathbf{q}|\) and \(|\mathbf{k}|\). Under this setup, the IR divergence receives contributions from the three configurations shown in figure 2.

Footnote 6: We note that disconnected amplitudes at higher points may still contribute to KMOC-type observables on the support of zero graviton energy. However, as we will see later, such configurations are not relevant for the waveform.

Figure 2: Typical contribution to the IR divergence. The soft graviton is shown in red.

**(I) The virtual soft graviton starting and ending on the same particle:** For such a configuration, the soft expansion implies that the corresponding loop integral is either scaleless or its propagators are linearly dependent. The former has the topology of an external bubble, which integrates to zero in dimensional regularization.7 The latter requires partial fractioning [29; 99], and the resulting integrals are either scaleless or finite. Thus, this graviton configuration does not lead to an IR divergence in the classical limit.
**(II) The virtual soft graviton connecting different massive particles:** In the classical amplitudes, the incoming and outgoing momenta of the same massive particle are actually equal. This is because, in the full amplitude, their difference is quantum, and the soft expansion homogenizes the \(\hbar\) counting in each diagram. As a result, the exponent in eq. (3.36) becomes \[\sum_{a,b}\left((p_{a}\cdot p_{b})^{2}-\frac{1}{2}m_{a}^{2}m_{b}^{2}\right)\eta_{a}\eta_{b}J_{ab}\Bigg|_{a\text{ and }b\text{ are different massive particles}}\] \[=-i\big{(}2(p_{1}\cdot p_{2})^{2}-m_{1}^{2}m_{2}^{2}\big{)}\mu^{2\epsilon}\sum_{\eta_{a,b}=\pm 1}\int^{\Lambda}\frac{\mathrm{d}^{d}\ell}{(2\pi)^{d}}\frac{\eta_{a}\eta_{b}}{(\ell^{2}+i0)(p_{b}\cdot\ell+i\eta_{b}0)(p_{a}\cdot\ell-i\eta_{a}0)}\] \[=i\big{(}2(p_{1}\cdot p_{2})^{2}-m_{1}^{2}m_{2}^{2}\big{)}\mu^{2\epsilon}\int^{\Lambda}\frac{\mathrm{d}^{d}\ell}{(2\pi)^{d}}\frac{(4\pi)^{2}\delta(p_{1}\cdot\ell)\delta(p_{2}\cdot\ell)}{\ell^{2}+i0}\,. \tag{3.38}\] This term contributes a real IR divergence, but it belongs to the 2MPR part of the amplitude. Therefore, this configuration does not contribute to the IR divergence of the 2MPI amplitude. This also shows that in \(2\to 2\) scattering, the IR divergence is real and captured by 2MPR diagrams, while the 2MPI contributions are finite.

**(III) The virtual soft graviton connecting a massive and a massless particle:** For this configuration, the exponent of eq. (3.36) is given by \[\sum_{a,b}\left((p_{a}\cdot p_{b})^{2}-\frac{1}{2}m_{a}^{2}m_{b}^{2}\right)\eta_{a}\eta_{b}J_{ab}\Bigg|_{(a,b)\in(\text{one massless, one massive})} \tag{3.39}\] \[=-i\mu^{2\epsilon}\sum_{b=1,2}\int^{\Lambda}\frac{\mathrm{d}^{d}\ell}{(2\pi)^{d}}\left[\frac{2(p_{b}\cdot k)^{2}}{(\ell^{2}+i0)(p_{b}\cdot\ell+i0)(k\cdot\ell-i0)}-\frac{2(p_{b}\cdot k)^{2}}{(\ell^{2}+i0)(p_{b}\cdot\ell-i0)(k\cdot\ell-i0)}\right]\,.\] The evaluation proceeds by first integrating over \(\ell^{0}\), following Refs. [95; 68]. For the first integral in eq. (3.39), we close the \(\ell^{0}\) contour from above, picking up the poles \(\ell^{0}=-|\boldsymbol{\ell}|+i0\) and \(\ell^{0}=\frac{1}{\omega}\boldsymbol{k}\cdot\boldsymbol{\ell}+i0\). For the second integral, we close the contour from below, picking up the pole \(\ell^{0}=|\boldsymbol{\ell}|-i0\). The contributions from \(-|\boldsymbol{\ell}|+i0\) and \(|\boldsymbol{\ell}|-i0\) cancel each other, leaving only an imaginary IR divergence from the pole \(\ell^{0}=\frac{1}{\omega}\boldsymbol{k}\cdot\boldsymbol{\ell}+i0\), \[-i\mu^{2\epsilon}\sum_{b=1,2}\int^{\Lambda}\frac{\mathrm{d}^{d}\ell}{(2\pi)^{d}}\left[\frac{2(p_{b}\cdot k)^{2}}{(\ell^{2}+i0)(p_{b}\cdot\ell+i0)(k\cdot\ell-i0)}-\frac{2(p_{b}\cdot k)^{2}}{(\ell^{2}+i0)(p_{b}\cdot\ell-i0)(k\cdot\ell-i0)}\right]=-\frac{i}{4\pi}(p_{1}\cdot k+p_{2}\cdot k)\left[\frac{1}{\epsilon}-\log\frac{\Lambda^{2}}{\mu^{2}}+\mathcal{O}(\epsilon)\right]\,, \tag{3.40}\] where we have used the fact that \(p_{b}\cdot k>0\) for physical processes.

If there are more external gravitons, then an internal soft graviton connecting two external gravitons may also contribute to the IR divergence. However, as we have discussed before, amplitudes that are relevant to classical physics can contain at most one external graviton [97; 98]. Therefore, we have shown that for classical 2MPI amplitudes, the IR divergence is purely imaginary.
For the four-scalar-one-graviton amplitude, it is a pure phase \[\mathcal{M}_{5}^{\rm 2MPI,cl.}(p_{1}p_{2}\to p_{1}^{\prime}p_{2}^{\prime}k)=\exp\left[-iG(p_{1}\cdot k+p_{2}\cdot k)\left(\frac{1}{\epsilon}-\log\frac{\Lambda^{2}}{\mu^{2}}\right)\right]\times\mathcal{M}_{5}^{0,\rm 2MPI,cl.}(p_{1}p_{2}\to p_{1}^{\prime}p_{2}^{\prime}k)\,. \tag{3.41}\] To leading order in Newton's constant, \(\mathcal{M}^{0,\rm 2MPI,cl.}(p_{1}p_{2}\to p_{1}^{\prime}p_{2}^{\prime}k)\) is the tree-level five-point amplitude. We will verify the one-loop part of this relation in section 7.

We can now assemble the waveform from eqs. (3.12a), (3.21) and (3.22) and understand the fate of the IR-divergent phase. Indeed, after the cancellation of the 2MPR part of the five-point amplitude, the remaining 2MPI part and the bilinear-in-\(\hat{T}\) contribution to the matrix element in eqs. (3.21) and (3.22), and consequently the spectral waveform (3.12b), have the same IR-divergent phase. Using the solution of the on-shell condition in eq. (29) for the outgoing graviton, we can write its argument as \[-G(p_{1}\cdot k+p_{2}\cdot k)\left(\frac{1}{\epsilon}-\log\frac{\Lambda^{2}}{\mu^{2}}\right)=-G\big{(}E_{1}+E_{2}-(\mathbf{p}_{1}+\mathbf{p}_{2})\cdot\mathbf{n_{k}}\big{)}\omega\left(\frac{1}{\epsilon}-\log\frac{\Lambda^{2}}{\mu^{2}}\right)\,, \tag{3.42}\] which may be removed by defining the observation time as \[\tau=t+G\big{(}E_{1}+E_{2}-(\mathbf{p}_{1}+\mathbf{p}_{2})\cdot\mathbf{n}\big{)}\left(\frac{1}{\epsilon}-\log\frac{\Lambda^{2}}{\mu^{2}}\right)\,, \tag{3.43}\] where \(t\) is the retarded time first defined in eq. (33). We thus conclude that the IR-divergent phase of the classical five-point amplitude can be removed by choosing a suitable origin of the observation time or, alternatively, by focusing on observation-time intervals. IR divergences similar to those of amplitudes appear in the far-zone EFT; in the PN expansion they were discussed in Refs. [13; 14], where they were also absorbed in the definition of the retarded time.

Interestingly, the \(\epsilon\) dependence of the scattering amplitude with no soft-graviton contribution, \(\mathcal{M}^{0,\rm 2MPI,cl.}(p_{1}p_{2}\to p_{1}^{\prime}p_{2}^{\prime}k)\), is irrelevant after the soft divergence is absorbed in the definition of the time. Indeed, these \(\mathcal{O}(\epsilon)\) terms contribute in the \(\epsilon\to 0\) limit only if they are multiplied by some \(1/\epsilon\) factor. Since all IR divergences are eliminated from the spectral waveform by eq. (3.43) and the classical amplitude is free of ultraviolet divergences,8 it follows that the \(\mathcal{O}(\epsilon)\) terms can be ignored when evaluating the waveform \(f_{\mu\nu\rho\sigma}(\omega,\mathbf{n})\) or \(\langle\mathbb{R}_{\mu\nu\rho\sigma}(x)\rangle\) in eq. (30).

Footnote 8: Ultraviolet divergences probe short-distance physics and, at this order, are outside the long-distance regime singled out by the soft expansion.

### Waveforms from amplitudes: summary and further comments

We will discuss the calculation of waveforms at leading and next-to-leading order by applying this formalism to classical \(\mathcal{N}=8\) supergravity and GR in section 8. To facilitate this application, we collect here the relevant formulae and further organize them, using the properties of amplitudes in the classical limit to streamline their connection to waveforms in the time domain.
The spectral waveform (or the frequency-space curvature tensor), the frequency-space Newman-Penrose scalar, and the frequency-space metric in a transverse-traceless gauge are given by eqs. (3.12b), (3.15) and (3.19). After the IR divergence is absorbed in the definition of time, they become \[f_{\mu\nu\rho\sigma}(\omega,\mathbf{n}) =\frac{\kappa}{8\pi}\sum_{h=\pm}\Big{[}\,\Theta(\omega)\,k_{[\mu} \varepsilon_{\nu],h}(k)\,k_{[\rho}\varepsilon_{\sigma],h}^{*}(k)\,(-i)\langle \psi_{\rm in}|\hat{S}^{\dagger}\hat{a}_{hh}(k)\hat{S}|\psi_{\rm in}\rangle^{0} \big{|}_{k=(\omega,\omega\mathbf{n})}\] \[\quad+\Theta(-\omega)k_{[\mu}\varepsilon_{\nu],h}(k)k_{[\rho} \varepsilon_{\sigma],h}(k)(+i)\langle\psi_{\rm in}|\hat{S}^{\dagger}\hat{a}_{ hh}^{\dagger}(k)\hat{S}|\psi_{\rm in}\rangle^{0}\big{|}_{k=(|\omega|,|\omega| \mathbf{n})}\Big{]}\,, \tag{3.44}\] \[\widetilde{\Psi}_{4}^{\infty}(\omega,\mathbf{n}) =-\frac{\kappa}{8\pi}\,\omega^{2}(\tilde{h}_{+}^{\infty}+i\tilde {h}_{\times}^{\infty})\,,\] (3.45) \[\tilde{h}_{+}^{\infty}+i\tilde{h}_{\times}^{\infty} =\Big{[}\Theta(\omega)\,\,(-i)\langle\psi_{\rm in}|\hat{S}^{ \dagger}\hat{a}_{--}(k)\hat{S}|\psi_{\rm in}\rangle^{0}\big{|}_{k=(\omega, \omega\mathbf{n})}\] (3.46) \[\quad+\Theta(-\omega)\,\,(+i)\langle\psi_{\rm in}|\hat{S}^{ \dagger}\hat{a}_{++}^{\dagger}(k)\hat{S}|\psi_{\rm in}\rangle^{0}\big{|}_{k=(| \omega|,|\omega|\mathbf{n})}\Big{]}+{\rm M}\delta(\omega)+{\rm S}\delta^{\prime}( \omega)\,\] where the superscript \(0\) indicates that the IR divergences have been absorbed in the definition of the observation time \(\tau\), see eq. (3.43). The term proportional to \(\delta^{\prime}(\omega)\) is a gauge degree of freedom, and we will ignore it in the following. The remaining \(\delta(\omega)\) term in eq. (3.46) can be present due to specific contributions that only have support on zero graviton energy and will, at most, lead to a time-independent background that the initial condition will fix. Additionally, such terms are irrelevant specifically for the evaluation of the asymptotic Newman-Penrose scalar \(\widetilde{\Psi}_{4}^{\infty}\) because of the additional factor of \(\omega^{2}\) in eq. (3.45). The time-domain observables follow from the Fourier-transform in eq. (3.12a), \[\mathcal{F}(\tau)=\int_{-\infty}^{+\infty}\frac{d\omega}{2\pi}\widetilde{ \mathcal{F}}(\omega)e^{-i\omega\tau}\,\quad\mathcal{F}\in\{f_{\mu\nu\rho\sigma}, \widetilde{\Psi}_{4}^{\infty},\tilde{h}_{+}+i\tilde{h}_{\times}\}\,. \tag{3.47}\] The matrix element \((-i)\langle\psi_{\rm in}|\hat{S}^{\dagger}\hat{a}_{hh}(k)\hat{S}|\psi_{\rm in}\rangle\) and its conjugate \((+i)\langle\psi_{\rm in}|\hat{S}^{\dagger}\hat{a}_{hh}^{\dagger}(k)\hat{S}| \psi_{\rm in}\rangle\) are then computed using eq. 
(3.27), \[(-i)\langle\psi_{\rm in}|\hat{S}^{\dagger}\hat{a}_{hh}(k)\hat{S}|\psi_{\rm in}\rangle^{0}=\int{\rm d}\Phi[p_{1}]{\rm d}\Phi[p_{2}]|\phi(\bar{p}_{1})|^{2}|\phi(\bar{p}_{2})|^{2}\mathcal{B}_{h}\,, \tag{3.48}\] \[\mathcal{B}_{h}=\int\frac{{\rm d}^{d}q_{1}}{(2\pi)^{d}}\frac{{\rm d}^{d}q_{2}}{(2\pi)^{d}}\hat{\delta}(2\bar{p}_{1}\cdot q_{1})\hat{\delta}(2\bar{p}_{2}\cdot q_{2})e^{iq_{1}\cdot b}\ (-i)\langle p_{1}-q_{1},p_{2}-q_{2}|\hat{S}^{\dagger}\hat{a}_{hh}(k)\hat{S}|p_{1}p_{2}\rangle^{0}\,,\] \[(+i)\langle\psi_{\rm in}|\hat{S}^{\dagger}\hat{a}_{hh}^{\dagger}(k)\hat{S}|\psi_{\rm in}\rangle^{0}=\left((-i)\langle\psi_{\rm in}|\hat{S}^{\dagger}\hat{a}_{hh}(k)\hat{S}|\psi_{\rm in}\rangle^{0}\right)^{*}\,, \tag{3.49}\] where, as for the frequency-domain observables, the superscript \(0\) indicates that the IR-divergent phase has been removed from the matrix element and absorbed into the definition of the observation time.

The gravitational memory of the observable \({\cal F}\) is defined as the difference between its initial and final values, \[\Delta{\cal F}\equiv{\cal F}(+\infty)-{\cal F}(-\infty)=\int_{-\infty}^{+\infty}\mathrm{d}\omega(-i\omega)\widetilde{\cal F}(\omega)\delta(\omega)\,, \tag{3.50}\] which can be derived by integrating the derivative of \({\cal F}\) between \(\tau=\pm\infty\). Therefore, the memory is determined solely by the residue of \(\widetilde{\cal F}\) at zero frequency.

To evaluate frequency-domain observables, it is necessary to evaluate the \(q_{1}\) and \(q_{2}\) integrals in eq. (3.48); only one of them is nontrivial because of the momentum-conserving constraint \(q_{1}+q_{2}=k\) implicit in \(\langle p_{1}-q_{1},p_{2}-q_{2}|\hat{S}^{\dagger}\hat{a}_{hh}(k)\hat{S}|p_{1}p_{2}\rangle\). The two explicit delta functions, as well as the phase factor, suggest that it is convenient to decompose the integration variable into components along \(\bar{u}_{1}\), \(\bar{u}_{2}\), \(b\), and a fourth vector orthogonal to these, as described in Ref. [43].

To evaluate classical time-domain observables, it is convenient to first evaluate the \(\omega\) integral, because it localizes parts of the remaining integrals. To expose this structure, we use the properties of classical amplitudes, and thus of the matrix elements \(\langle\psi_{\rm in}|\hat{S}^{\dagger}\hat{a}_{hh}(k)\hat{S}|\psi_{\rm in}\rangle\), under the soft-region rescaling in eq. (1), \[\langle p_{1}-q_{1},p_{2}-q_{2}|\hat{S}^{\dagger}\hat{a}_{hh}(k)\hat{S}|p_{1}p_{2}\rangle^{0}\big|_{L\text{ loops}}=\lambda^{2-L+d}\langle p_{1}-\lambda q_{1},p_{2}-\lambda q_{2}|\hat{S}^{\dagger}\hat{a}_{hh}(\lambda k)\hat{S}|p_{1}p_{2}\rangle^{0}\big|_{L\text{ loops}}\,, \tag{3.51}\] where \(\lambda^{2-L}\) comes from the scaling of the \(L\)-loop classical amplitude, and \(\lambda^{d}\) comes from the momentum-conserving delta function \(\delta^{(d)}(q_{1}+q_{2}-k)\) implicit in the matrix element.
Choosing \(\lambda=\omega^{-1}\) and changing integration variables \(q_{i}=\omega\tilde{q}_{i}\), we may therefore isolate the \(\omega\) dependence of \({\cal B}\) into the Fourier phase and overall factors: \[{\cal B}_{h}(k)=\sum_{L\geq 0}{\cal B}_{h,L}(k)=\sum_{L\geq 0}\omega^{L+d-4}\widetilde{\cal B}_{h,L}(\tilde{k})\,, \tag{3.52}\] \[\widetilde{\cal B}_{h,L}=\int\frac{\mathrm{d}^{d}\tilde{q}_{1}}{(2\pi)^{d}}\frac{\mathrm{d}^{d}\tilde{q}_{2}}{(2\pi)^{d}}\hat{\delta}(2\bar{p}_{1}\cdot\tilde{q}_{1})\hat{\delta}(2\bar{p}_{2}\cdot\tilde{q}_{2})e^{i\omega\tilde{q}_{1}\cdot b}({\cal R}_{L}+i\,{\cal I}_{L})\,,\] \[{\cal R}_{L}+i\,{\cal I}_{L}=(-i)\langle p_{1}-\tilde{q}_{1},p_{2}-\tilde{q}_{2}|\hat{S}^{\dagger}\hat{a}_{hh}(\tilde{k})\hat{S}|p_{1}p_{2}\rangle^{0}\big|_{L\text{ loops}}\;, \tag{3.53}\] where \(\tilde{k}=k/\omega=(1,\boldsymbol{n})\). The tildes on \(q_{i}\) can now be dropped as they are dummy integration variables. Schematically, \({\cal R}_{L}\) and \({\cal I}_{L}\) are defined as \[{\cal R}_{L}=\varepsilon_{h}^{\mu}\varepsilon_{h}^{\nu}\operatorname{Re}({\cal M}_{\mu\nu}^{L\text{ loops}})\,,\qquad{\cal I}_{L}=\varepsilon_{h}^{\mu}\varepsilon_{h}^{\nu}\operatorname{Im}({\cal M}_{\mu\nu}^{L\text{ loops}})\,, \tag{3.54}\] where \(\operatorname{Re}({\cal M}_{\mu\nu}^{L\text{ loops}})\) and \(\operatorname{Im}({\cal M}_{\mu\nu}^{L\text{ loops}})\) are respectively the real and imaginary parts of the \(L\)-loop matrix element \((-i)\langle p_{1}-q_{1},p_{2}-q_{2}|\hat{S}^{\dagger}\hat{a}_{hh}(\tilde{k})\hat{S}|p_{1}p_{2}\rangle^{0}\big|_{L\text{ loops}}\) with the polarization vectors stripped off. In the following, we will simply refer to \({\cal R}_{L}\) and \({\cal I}_{L}\) as the real and imaginary parts of the \(L\)-loop matrix elements. In the conjugate matrix element, we scale out \((-\omega)\) because \(\Theta(-\omega)\) localizes the integrals to the domain \((-\omega)>0\).

We can now explore the structure of eq. (3.47) given integrands of the form in eqs. (3.44) to (3.46). Since all IR divergences have been removed, we may set \(d=4\). The relevant integral for computing the time-domain observables at \(L\)-loop order is \[\mathcal{J}_{n,L}\equiv\int_{-\infty}^{+\infty}\frac{d\omega}{2\pi}e^{-i\omega(\tau-q_{1}\cdot b)}\omega^{2n}\left[\Theta(\omega)\omega^{L}(\mathcal{R}_{L}+i\,\mathcal{I}_{L})+\Theta(-\omega)(-\omega)^{L}(\mathcal{R}_{L}-i\,\mathcal{I}_{L})\right] \tag{3.55}\] \[=\frac{i^{L+2n+1}}{2\pi}\Gamma(L+2n+1)\ \mathcal{R}_{L}\left[\frac{1}{(\tau-q_{1}\cdot b+i0)^{L+2n+1}}-\frac{(-1)^{L+2n}}{(\tau-q_{1}\cdot b-i0)^{L+2n+1}}\right]\] \[\quad-\frac{i^{L+2n+2}}{2\pi}\Gamma(L+2n+1)\ \mathcal{I}_{L}\left[\frac{1}{(\tau-q_{1}\cdot b+i0)^{L+2n+1}}+\frac{(-1)^{L+2n}}{(\tau-q_{1}\cdot b-i0)^{L+2n+1}}\right]\,,\] where the additional factor of \(\omega^{2n}\) accounts for and generalizes such factors in eqs. (3.44) and (3.45). For now, we keep the exponent \(n\) as a real number. As we will see in sections 7 and 8, the amplitude contains a logarithmic dependence on \(\omega\); we may find the relevant Fourier transform by simply differentiating with respect to \(n\). This logarithmic dependence on the outgoing-graviton frequency yields the so-called gravitational-wave tail, first studied in [100; 101], and represents the effect of the scattering of the leading-order gravitational wave off the gravitational field of the source.
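The half-line Fourier integral that produces the \(\pm i0\) structure in eq. (3.55) can be checked numerically; the sketch below (with arbitrary test values of the exponent, the time variable and a finite regulator playing the role of the \(i0\) prescription) verifies \(\int_0^\infty\mathrm{d}\omega\,\omega^{s-1}e^{-i\omega(x-i\delta)}=\Gamma(s)/\big(i(x-i\delta)\big)^{s}\).

```python
# Numerical check of the basic integral behind eq. (3.55):
#   int_0^inf dw w^(s-1) exp(-i w (x - i delta)) = Gamma(s) / (i(x - i delta))^s
# s, x, delta below are arbitrary test inputs; delta regulates the i0.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

s, x, delta = 3, 0.7, 0.3

integrand = lambda w: w**(s - 1) * np.exp(-1j * w * (x - 1j * delta))
re, _ = quad(lambda w: integrand(w).real, 0, np.inf, limit=400)
im, _ = quad(lambda w: integrand(w).imag, 0, np.inf, limit=400)
numeric = re + 1j * im

closed_form = gamma(s) / (1j * (x - 1j * delta))**s
assert np.allclose(numeric, closed_form, rtol=1e-6)
print(numeric, closed_form)
```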
For integer values of the exponent, depending on the parity of the loop order \(L\), the contribution to the waveform from either the real or the imaginary part of the matrix element (3.53) localizes because \[\frac{1}{(x+i0)^{s}}-\frac{1}{(x-i0)^{s}}=-2\pi i(-1)^{s-1}\delta^{(s-1)}(x)\,,\] \[\frac{1}{(x+i0)^{s}}+\frac{1}{(x-i0)^{s}}=2(-1)^{s-1}\text{PV}^{(s-1)}\left[\frac{1}{x}\right]. \tag{3.56}\] For example, this localization occurs at tree level for the real part of the matrix element (3.53) (\(L=0\) and \(n=1\)), where \(\mathcal{R}_{L=0}\) is just the tree-level five-point amplitude. It also occurs at one loop for the imaginary part of the matrix element (3.53) (\(L=1\) and \(n=1\)), where \(\mathcal{I}_{L=1}\) is _effectively_ the imaginary part of the 2MPI part of the one-loop five-point amplitude with subtracted IR divergences. In terms of \(\mathcal{J}_{n,L}\), the time-domain Newman-Penrose scalar and waveform have very compact expressions, \[\Psi_{4}^{\infty}=-\frac{\kappa}{8\pi}\sum_{L\geq 0}\int\frac{\text{d}^{d}q_{1}}{(2\pi)^{d}}\frac{\text{d}^{d}q_{2}}{(2\pi)^{d}}\hat{\delta}(2\bar{p}_{1}\cdot q_{1})\hat{\delta}(2\bar{p}_{2}\cdot q_{2})\mathcal{J}_{2,L}\,,\] \[h_{+}^{\infty}+ih_{\times}^{\infty}=\sum_{L\geq 0}\int\frac{\text{d}^{d}q_{1}}{(2\pi)^{d}}\frac{\text{d}^{d}q_{2}}{(2\pi)^{d}}\hat{\delta}(2\bar{p}_{1}\cdot q_{1})\hat{\delta}(2\bar{p}_{2}\cdot q_{2})\mathcal{J}_{0,L}\,, \tag{3.57}\] where we have assumed that the wavepackets for the massive states are highly localized.

Let us now spell out the ingredients necessary for the evaluation of the waveform \(h_{+}^{\infty}+ih_{\times}^{\infty}\) at leading order, \(\mathcal{O}(\kappa^{3})\), and next-to-leading order, \(\mathcal{O}(\kappa^{5})\). At \(\mathcal{O}(\kappa^{3})\), the matrix element determining \(\mathcal{B}_{h}\) in eq. (3.48) is simply the tree-level \(2\to 3\) amplitude evaluated at \(\tilde{k}=(1,\mathbf{n})\), \[(-i)\langle p_{1}-q_{1},p_{2}-q_{2}|\hat{S}^{\dagger}\hat{a}_{hh}(\tilde{k})\hat{S}|p_{1}p_{2}\rangle^{0}\big|_{\text{tree}}=\mathcal{M}_{5,\text{tree}}^{\text{cl.}}(p_{1}p_{2}\to p_{1}-q_{1},p_{2}-q_{2},\tilde{k}^{h})\,, \tag{3.58}\] where we do not decorate the right-hand side with the "2MPI" designation because it is irrelevant at tree level. Thus, at leading order we have \(\mathcal{R}_{L=0}=\mathcal{M}_{5,\text{tree}}^{\text{cl}}\) and \(\mathcal{I}_{L=0}=0\).

At \(\mathcal{O}(\kappa^{5})\), the matrix element determining \(\mathcal{B}_{h}\) in eq. (3.48) is the 2MPI part of the one-loop five-point amplitude, in which the IR-divergent contribution of soft virtual gravitons has been removed per eq. (3.43), together with the \(\hat{T}\)-bilinear terms in eq. (3.22): \[(-i)\langle p_{1}-q_{1},p_{2}-q_{2}|\hat{S}^{\dagger}\hat{a}_{hh}(k)\hat{S}|p_{1}p_{2}\rangle^{0}\big|_{1\text{ loop}}=\mathcal{M}_{5,1\text{ loop}}^{0,\text{2MPI,cl.}}(p_{1}p_{2}\to p_{1}-q_{1},p_{2}-q_{2},k^{h}) \tag{3.59}\] \[-i\int\mathrm{d}\Phi[r_{1}]\mathrm{d}\Phi[r_{2}]\mathrm{d}\Phi[\ell]\sum_{a=\pm}\mathcal{M}_{5,\text{tree}}^{\text{cl}\,*}(p_{1}-q_{1},p_{2}-q_{2}\to r_{1}r_{2}\ell^{a})\mathcal{M}_{6,\text{tree}}^{\text{disc.}}(p_{1}p_{2}\to r_{1}r_{2}k^{h}\ell^{a})\,,\] where \(\ell\) here is a single graviton whose polarization \(a\) is summed over. In the second line, we only keep terms at order \(\mathcal{O}(\kappa^{5})\).
It consists of a five-point classical amplitude \(\mathcal{M}_{5,\text{tree}}^{\text{cl}\,*}\), which contributes at \(\mathcal{O}(\kappa^{3})\), and the disconnected pieces of the six-point amplitude \(\mathcal{M}_{6,\text{tree}}^{\text{disc.}}\), which contribute at \(\mathcal{O}(\kappa^{2})\). Diagrammatically, these terms are given in eq. (3.60). It is not difficult to see that such amplitudes are kinematically forbidden unless the two outgoing gravitons have zero energy. They would at most contribute to the \(\delta(\omega)\) terms in the metric (3.46), which correspond to a time-independent background that can be subtracted. The factors of graviton energy \(\omega\) in eq. (3.45) imply that such configurations do not contribute to the Newman-Penrose scalar and the spectral waveform. Thus, for our calculation, we may neglect the second line of eq. (3.59) and write \[(-i)\langle p_{1}-q_{1},p_{2}-q_{2}|\hat{S}^{\dagger}\hat{a}_{--}(k)\hat{S}|p_{1}p_{2}\rangle^{0}\big|_{1\text{ loop}}\equiv\mathcal{M}_{5,1\text{ loop}}^{0,\text{2MPI,cl.}}(p_{1}p_{2}\to p_{1}-q_{1},p_{2}-q_{2},k^{--})\,,\] \[(+i)\langle p_{1}-q_{1},p_{2}-q_{2}|\hat{S}^{\dagger}\hat{a}_{++}^{\dagger}(k)\hat{S}|p_{1}p_{2}\rangle^{0}\big|_{1\text{ loop}}\equiv\left[\mathcal{M}_{5,1\text{ loop}}^{0,\text{2MPI,cl.}}(p_{1}p_{2}\to p_{1}-q_{1},p_{2}-q_{2},k^{++})\right]^{*}\,. \tag{3.61}\] Upon the rescaling (3.51), this identifies \(\mathcal{R}_{L=1}\) and \(\mathcal{I}_{L=1}\) as the real and imaginary parts of \(\mathcal{M}_{5,1\text{ loop}}^{0,\text{2MPI,cl.}}\) evaluated at \(\tilde{k}=(1,\boldsymbol{n})\), \[\mathcal{M}_{5,1\text{ loop}}^{0,\text{2MPI,cl.}}(p_{1}p_{2}\to p_{1}-q_{1},p_{2}-q_{2},\tilde{k}^{--})=\mathcal{R}_{L=1}+i\,\mathcal{I}_{L=1}\,,\] \[\left[\mathcal{M}_{5,1\text{ loop}}^{0,\text{2MPI,cl.}}(p_{1}p_{2}\to p_{1}-q_{1},p_{2}-q_{2},\tilde{k}^{++})\right]^{*}=\mathcal{R}_{L=1}-i\,\mathcal{I}_{L=1}\,. \tag{3.62}\] Note that we have used the fact that \(\varepsilon_{+}^{*}=\varepsilon_{-}\). In the next two sections, we will evaluate the integrand and then the integrals of this one-loop classical amplitude. We collect them and discuss their properties in section 7. In section 8, we proceed to discuss waveform observables for \(\mathcal{N}=8\) supergravity and GR.

## 4 The five-point classical integrand in minimally scalar-coupled GR

The integrand for the complete one-loop four-scalar-one-graviton amplitude in general relativity coupled to self-interacting scalars and in \(\mathcal{N}=0\) supergravity was constructed in Ref. [102] through double-copy methods. In this section, we construct the corresponding one-loop HEFT amplitude and compare it with the 2MPI part of the classical limit of the full-theory amplitude, which we find convenient to re-derive directly from generalized-unitarity considerations. We isolate from the classical field-theory amplitude the 2MPR diagrams, in which both matter lines are cut. As discussed in the previous sections, these contributions are super-classical and cancel in the waveform. The remaining diagrams, which have exactly one matter line cut, reproduce the HEFT amplitude, thus demonstrating the absence of \(\hbar/\hbar\) contributions to the waveform.9

Footnote 9: See Ref. [63] for the analogous result for the four-point amplitude and related observables through two loops.

### Preliminaries

We begin by setting up the notation and variables for five-point kinematics.
While we are ultimately interested in physical kinematics, with two incoming and three outgoing particles, it is convenient to take all momenta to be outgoing, as in eq. (4.1). To cleanly separate different orders in the soft expansion, it is convenient to introduce \(\bar{p}_{1,2}\) variables that are respectively orthogonal to \(q_{1,2}\), the momentum transfers from particles 1 and 2 [74], \[\left.\begin{array}{ll}p_{1}^{\mu}=\bar{p}_{1}^{\mu}-q_{1}^{\mu}/2&p_{4}^{\mu}=-\bar{p}_{1}^{\mu}-q_{1}^{\mu}/2\\ p_{2}^{\mu}=\bar{p}_{2}^{\mu}-q_{2}^{\mu}/2&p_{3}^{\mu}=-\bar{p}_{2}^{\mu}-q_{2}^{\mu}/2\end{array}\right\}\implies\bar{p}_{1}\cdot q_{1}=\bar{p}_{2}\cdot q_{2}=0\,. \tag{4.2}\] The external graviton momentum \(k\) is related to the momentum transfers as \(k=q_{1}+q_{2}\). The on-shell conditions expressed in terms of the shifted matter momenta are \[\bar{p}_{1}^{2}=\bar{m}_{1}^{2}=m_{1}^{2}-\frac{q_{1}^{2}}{4}\,,\qquad\bar{p}_{2}^{2}=\bar{m}_{2}^{2}=m_{2}^{2}-\frac{q_{2}^{2}}{4}\,. \tag{4.3}\] We define the barred four-velocities \(\bar{u}_{1}=\bar{p}_{1}/\bar{m}_{1}\) and \(\bar{u}_{2}=\bar{p}_{2}/\bar{m}_{2}\) such that \(\bar{u}_{1}^{2}=\bar{u}_{2}^{2}=1\). It is also convenient to define \(y=\bar{u}_{1}\cdot\bar{u}_{2}\). We will also use the ordinary four-velocities \(u_{1}=p_{1}/m_{1}\) and \(u_{2}=p_{2}/m_{2}\), and define \(\sigma=u_{1}\cdot u_{2}\). The difference between \(y\) and \(\sigma\) is of order \(\mathcal{O}(q^{2})\).

The classical four-scalar-one-graviton tree-level amplitude in GR was given in Ref. [74]. In the notation above and for real kinematics, it can be written as \[M_{5,\text{tree}}^{\text{cl.GR}}=-\frac{\kappa^{3}}{4}\bar{m}_{1}^{2}\bar{m}_{2}^{2}\,\varepsilon^{*}(k)_{\mu\nu}\left[\frac{4P_{12}^{\mu}P_{12}^{\nu}}{q_{1}^{2}q_{2}^{2}}+\frac{2y}{q_{1}^{2}q_{2}^{2}}\left(Q_{12}^{\mu}P_{12}^{\nu}+Q_{12}^{\nu}P_{12}^{\mu}\right)+\left(y^{2}-\frac{1}{d-2}\right)\left(\frac{Q_{12}^{\mu}Q_{12}^{\nu}}{q_{1}^{2}q_{2}^{2}}-\frac{P_{12}^{\mu}P_{12}^{\nu}}{(k\cdot\bar{u}_{1})^{2}(k\cdot\bar{u}_{2})^{2}}\right)\right]\,, \tag{4.4}\] \[P_{12}^{\mu}\equiv k\cdot\bar{u}_{1}\;\bar{u}_{2}^{\mu}-k\cdot\bar{u}_{2}\;\bar{u}_{1}^{\mu}\,,\qquad Q_{12}^{\mu}\equiv(q_{1}-q_{2})^{\mu}-\frac{q_{1}^{2}}{k\cdot\bar{u}_{1}}\bar{u}_{1}^{\mu}+\frac{q_{2}^{2}}{k\cdot\bar{u}_{2}}\bar{u}_{2}^{\mu}\,, \tag{4.5}\] where \(\varepsilon(k)_{\mu\nu}=\varepsilon(k)_{\mu}\varepsilon(k)_{\nu}\) is the graviton polarization tensor, and the conjugation indicates that it is an outgoing graviton. The coupling \(\kappa\) is related to Newton's constant through \(\kappa^{2}=32\pi G\). One can easily verify that both \(\varepsilon^{*}\cdot P_{12}\) and \(\varepsilon^{*}\cdot Q_{12}\) are gauge invariant. We will use this expression in section 7 to compare the IR divergences of the classical amplitude with the prediction of Weinberg's analysis (3.41) and in section 8 to construct the leading-order waveform.

A list of the tree amplitudes used in the unitarity-cut constructions is given in eq. (4.6); it includes the three-point amplitudes, which are uniform in \(\hbar\), \[M_{3,\text{tree}}^{\text{Comp.}}(p_{1},p_{4},k)=M_{3,\text{tree}}^{\text{Comp.}}(\bar{u}_{1},k)=-\kappa^{2}\bar{m}_{1}^{2}(\bar{u}_{1}\cdot\varepsilon)^{2}\,,\] \[M_{3,\text{tree}}^{\text{graviton}}(k_{1},k_{2},k_{3})=-\kappa^{2}(\varepsilon_{1}\cdot\varepsilon_{2}\,\varepsilon_{3}\cdot k_{1}+\varepsilon_{2}\cdot\varepsilon_{3}\,\varepsilon_{1}\cdot k_{2}+\varepsilon_{3}\cdot\varepsilon_{1}\,\varepsilon_{2}\cdot k_{3})^{2}\,. \tag{4.7}\]
In eq. (4.6), the full quantum amplitudes are given in the first entries, which we compute through the standard Feynman-diagram approach. The amplitudes with a superscript "cl" are classical amplitudes, which are obtained from the full amplitudes through a soft expansion, keeping the terms with the classical scaling (2.3). These classical tree amplitudes will be used later to construct HEFT cuts.

The sum over the physical graviton states is a common ingredient in both HEFT and full-amplitude calculations, as it enters the evaluation of generalized unitarity cuts. Generally, it is \[\sum_{h}\varepsilon_{h}(k)^{\mu\nu}\varepsilon_{h}^{*}(k)^{\alpha\beta}=\frac{1}{2}\left(\mathcal{P}^{\mu\alpha}\mathcal{P}^{\nu\beta}+\mathcal{P}^{\mu\beta}\mathcal{P}^{\nu\alpha}-\frac{2}{d-2}\mathcal{P}^{\mu\nu}\mathcal{P}^{\alpha\beta}\right)\,, \tag{4.8}\] where \[\mathcal{P}^{\mu\nu}(k)=\eta^{\mu\nu}-\frac{r^{\mu}k^{\nu}+r^{\nu}k^{\mu}}{r\cdot k} \tag{4.9}\] is the physical-state projector for a vector field, and \(r^{\mu}\) is an arbitrary null reference vector that should drop out of physical expressions. This sum simplifies considerably if the amplitudes being sewn obey generalized Ward identities, i.e., they obey the Ward identity for external leg \(i\) without using the transversality of the polarization vectors of any of the other external gravitons [103; 104; 73]. For such amplitudes, all terms proportional to the momentum of the sewn legs drop out, so we can effectively use the much simpler (and manifestly covariant) graviton state sum \[\sum_{h}\varepsilon_{h}(k)^{\mu\nu}\varepsilon_{h}^{*}(k)^{\alpha\beta}=\frac{1}{2}\left(\eta^{\mu\alpha}\eta^{\nu\beta}+\eta^{\mu\beta}\eta^{\nu\alpha}-\frac{2}{d-2}\eta^{\mu\nu}\eta^{\alpha\beta}\right). \tag{4.10}\] Ref. [103] showed that, through simple manipulations, it is always possible to put scattering amplitudes into a form that obeys the generalized Ward identities. In fact, being manifestly written in terms of linearized field strengths, HEFT amplitudes already obey such generalized Ward identities without any additional manipulations. We will use such amplitudes in our loop calculations.

The 2MPI amplitudes, both in HEFT and in the full theory, are naturally expressed in terms of scalar integrals of pentagon topology with two linear propagators. One of our results, which is natural in the HEFT approach, is that one matter line is always cut; thus, all integrals will be special cases of \[I^{\pm}_{a_{1},a_{2},a_{3},a_{4}}=\int\frac{\mathrm{d}^{d}\ell}{(2\pi)^{d}}\frac{\hat{\delta}(2\bar{u}_{2}\cdot\ell)}{[\ell^{2}]^{a_{1}}[(\ell+q_{2})^{2}]^{a_{2}}[(\ell-q_{1})^{2}]^{a_{3}}(2\bar{u}_{1}\cdot\ell\pm i0)^{a_{4}}}\,, \tag{4.11}\] where the delta function is realized as \[\frac{i}{x+i0}+\frac{i}{-x+i0}=2\pi\delta(x)\equiv\hat{\delta}(x)\,. \tag{4.12}\] We will omit the \(\pm\) superscript if the linearized matter propagator is absent, i.e., when \(a_{4}=0\). This is the generalization to the five-point case of the analogous feature present in four-point amplitudes [71; 72; 73]. The diagrammatic representation of the master integrals is shown in figure 3.

Figure 3: Topology of the integrals given in eq. (4.11).

Unlike the four-point case, the five-point classical amplitudes depend on master integrals with contact matter vertices, namely, \(a_{1}=0\) in eq. (4.11).
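As an aside, the distributional identity (4.12) that realizes the cut matter propagator can be illustrated numerically; a minimal sketch (arbitrary Gaussian test function and regulator values) is given below.

```python
# Small numerical illustration of eq. (4.12): smearing i/(x+i eps) + i/(-x+i eps)
# against a test function approaches 2*pi*f(0) as eps -> 0.
import numpy as np
from scipy.integrate import quad

def smeared(eps, f=lambda x: np.exp(-x**2)):
    kernel = lambda x: (1j / (x + 1j * eps) + 1j / (-x + 1j * eps)) * f(x)
    re, _ = quad(lambda x: kernel(x).real, -10, 10, points=[0], limit=200)
    im, _ = quad(lambda x: kernel(x).imag, -10, 10, points=[0], limit=200)
    return re + 1j * im

for eps in (1e-1, 1e-2, 1e-3):
    print(eps, smeared(eps))      # -> approaches 2*pi*f(0) = 2*pi

print("target:", 2 * np.pi)
```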
Such contact-vertex master integrals come from the IBP reduction of integrands with higher topologies, and their coefficients contain non-local dependence on \(q_{i}^{2}\) through the Gram determinants generated by IBP. As a result, these contributions are relevant to the classical long-range interaction in position space after a Fourier transform.10

Footnote 10: In contrast, at four points, the coefficients of such contact master integrals can only have a polynomial dependence on \(q^{2}\), while the integrals are independent of \(q\). Therefore, they can only result in delta-function interactions in position space.

### The five-point 2MPI HEFT integrand

We use generalized unitarity in \(d\) dimensions [44; 45; 46] to construct the 2MPI HEFT integrand from the spanning set of generalized cuts in figure 4. Unlike in generalized unitarity in the full quantum theory, the cut matter line, denoted by a red vertical line, is permanently cut and dressed by a delta function \(\hat{\delta}(2\bar{p}_{i}\cdot\ell)\). In addition, the input tree amplitude for each blob contains only the terms of the full tree amplitude with classical scaling. Such terms are free of the delta-function contributions that are super-classical, as reviewed in section 2. We can safely ignore the super-classical delta-function-dependent terms because they correspond to 2MPR graphs, which are ultimately subtracted by the KMOC formalism. Finally, we focus on integrands where matter line 2 is cut, as the integrands where matter line 1 is cut can be derived by relabelling. We note that cuts with no cut matter lines, for example [diagram not recoverable], [...] propagator remains cut in the final expression.11

Footnote 11: This is equivalent to removing all contributions in which both matter lines are collapsed; direct counting indicates that such graviton loops do not contribute to the classical limit, in analogy with e.g. [71; 73].

These HEFT cuts are given by: \[\mathcal{C}^{(1,\text{H})}_{m_{1}^{2}m_{2}^{3}}=\sum_{h_{2}}M_{5,\text{tree}}^{\text{cl,GR}}(\bar{u}_{1},\bar{u}_{2},q_{1},q_{2},-\ell_{2}^{h_{2}})M_{4,\text{tree}}^{\text{cl,Comp.}}(\bar{u}_{2},\ell_{2}^{-h_{2}},k^{\varepsilon})\,, \tag{4.14}\] \[\mathcal{C}^{(2,\text{H})}_{m_{1}^{2}m_{2}^{3}}=\sum_{h_{3}}M_{6,\text{tree}}^{\text{cl,GR}}(\bar{u}_{1},\bar{u}_{2},q_{1},q_{2},-\ell_{3}^{h_{3}},k^{\varepsilon})M_{3,\text{tree}}^{\text{Comp}}(\bar{u}_{2},\ell_{3}^{-h_{3}})-\text{divergent bubbles}\,.\] As first mentioned in section 2, we make all the uncut linear propagators symmetric in \(i0\), and thus principal-valued, in the HEFT cuts. HEFT Cut 2 is technically divergent due to external bubble contributions such as [diagram not recoverable]. [Text not recoverable from the source.] [...] contributions, \(\mathcal{C}^{(1,\text{H})}\) and \(\mathcal{C}^{(2,\text{H})}\), and the overlap contribution, \(\mathcal{C}^{(12,\text{H})}\), are a consequence of the factors of \(i\) in the definition of matrix elements and of propagators.

### The five-point classical integrand from the quantum integrand

In this section, we construct the classical limit of the four-scalar-one-graviton amplitude in GR coupled to two scalar fields, to verify the completeness of the HEFT amplitude and to understand the fate of the terms of \(\hbar/\hbar\) type that naturally appear in the classical expansion of cuts of the full theory.
The spanning set of generalized unitarity cuts that determines the classical part of the five-point amplitude in GR is given in figure 6.

Figure 6: The spanning cuts of the five-point one-loop amplitude in GR coupled to scalars. All exposed lines are cut. The loop momentum \(\ell_{i}\) follows the clockwise direction. Contributions captured solely by GR Cut 3 involve intersecting matter lines, which do not contribute in the classical limit but whose possible appearance is related to the absence of four-scalar contact terms in the classical action. To construct the integrand we consider all relabelings of external legs.

The terms in the integrand determined solely by GR Cut 3 contain (1) 1PR mushroom graphs and (2) graphs with intersecting matter lines, with numerators that are polynomial in external and loop momenta. The former is required by gauge invariance; the latter, while not having a Feynman-vertex counterpart, may appear depending on choices made in the construction of the integrand. The integral corresponding to the latter does not depend on the momentum transfer \((q_{1}-q_{2})\); as such, these contact terms contribute only \(\delta(b)\) terms to the waveform and we could ignore them. The same integral also appears in the IBP reduction of terms determined by the first three cuts; their coefficients turn out to have a rather nontrivial dependence on the momentum transfer \((q_{1}-q_{2})\) and thus contribute nontrivially to the waveform. We will find that, for the integrand we construct, the contact terms with the topology of GR Cut 3 vanish identically. In our construction, we will ignore 2MPI contributions to the classical amplitude which are not captured by these cuts, because the corresponding (master) integrals are scaleless and thus vanish in dimensional regularization.

We use \(d\)-dimensional generalized unitarity [44; 45; 46] to construct the five-point integrand. In terms of tree amplitudes, the cuts in figure 6 are \[\mathcal{C}^{(1)}_{m_{1}^{2}m_{2}^{3}}=\sum_{h_{1},h_{2}}M^{\rm Comp.}_{3,\rm tree}(p_{2},\ell_{4},-\ell_{1}^{h_{1}})M^{\rm Comp.}_{4,\rm tree}(-\ell_{4},p_{3},\ell_{2}^{h_{2}},k^{\varepsilon})M^{\rm Comp.}_{4,\rm tree}(p_{1},p_{4},\ell_{1}^{-h_{1}},-\ell_{2}^{-h_{2}})\,,\] \[\mathcal{C}^{(2)}_{m_{1}^{2}m_{2}^{3}}=\sum_{h_{1},h_{3}}M^{\rm Comp.}_{3,\rm tree}(p_{2},\ell_{4},-\ell_{1}^{h_{1}})M^{\rm Comp.}_{3,\rm tree}(-\ell_{4},p_{3},\ell_{3}^{h_{3}})M^{\rm Comp.}_{5,\rm tree}(p_{1},p_{4},\ell_{1}^{-h_{1}},-\ell_{3}^{-h_{3}},k^{\varepsilon})\,,\] \[\mathcal{C}^{(3)}_{m_{1}^{2}m_{2}^{3}}=\sum_{h_{2}}M^{\rm GR}_{5,\rm tree}(p_{1},p_{4},p_{2},\ell_{4},-\ell_{2}^{h_{2}})M^{\rm Comp.}_{4,\rm tree}(-\ell_{4},p_{3},\ell_{2}^{h_{2}},k^{\varepsilon})\,, \tag{4.18}\] where the \(\ell_{i}\) are defined according to figure 6. We use complete tree-level amplitudes of GR minimally coupled to scalar fields that obey the generalized Ward identities [103; 104; 73], so the sum over the internal graviton physical states, labeled here by \(h_{1,2,3}\), is given by eq. (4.10). The resulting cuts reproduce those used in the construction of the quantum five-point integrand in Ref. [102], up to the contributions of four-scalar contact terms which do not contribute in the classical limit but are natural in the double-copy construction used there.12

Footnote 12: Since we are interested in the classical limit, we do not include all cuts required to construct the complete quantum five-point amplitude which was considered in Ref. [102].
Merging these cuts using the method of maximal cuts [49], while maintaining quadratic propagators for the matter fields, leads to the relevant part of the one-loop integrand, which includes only graphs with three, four, and five propagators with at least one matter line in the loop. The relevant topologies are shown in figure 7. Diagrams with fewer propagators either do not have internal matter lines or have intersecting matter lines, neither of which contributes to the classical amplitude. We then expand the integrand in the soft limit [106], \(q_{1,2},k,\ell\ll p_{1,2}\), e.g. \[\frac{1}{(p+\ell)^{2}-m^{2}+i0}=\frac{1}{2p\cdot\ell+\ell^{2}+i0}=\frac{1}{2p\cdot\ell+i0}\sum_{n=0}^{\infty}\left(-\frac{\ell^{2}}{2p\cdot\ell+i0}\right)^{n}. \tag{4.19}\]

Figure 7: Topologies of the contact terms that contribute to the classical limit of the five-point amplitude before reduction to master integrals. The complete basis includes all inequivalent permutations of these diagrams.

In practice, we also convert to the \(\bar{p}_{i}\) variables defined in eq. (4.2) at this step, which will introduce additional \(q_{i}\) dependence in the above expansion. The leading soft-region scaling of the five-point one-loop amplitude is super-classical, as expected from the existence of graphs with two-particle matter cuts. Direct inspection of the contributing diagrams suggests that one of them, figure 7(b), scales as \(q^{-3}\), while the diagrams in figures 7(a) and 7(c) to 7(e) scale as \(q^{-2}\), and the remaining two triangle graphs exhibit classical scaling, \(q^{-1}\), at leading order. After soft expanding to \(\mathcal{O}(q^{-1})\), in which the diagrams with the topology of figure 7(b) must be expanded to second order, all matter propagators depend linearly on the loop momentum [106] and may be raised to a power higher than one. Upon soft expanding the integrand, the two matter propagators in the top matter line of figures 7(b) and 7(d) become linearly dependent because the momentum of the outgoing graviton is of the same order, \(\mathcal{O}(\hbar)\), as the loop momentum. Linear dependence of propagators prevents a direct IBP reduction.
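Both the soft expansion (4.19) and the partial-fractioning step used to restore a basis suitable for IBP can be checked symbolically; the following minimal sympy sketch works with schematic scalar variables (not the full tensor integrand), which stand in for \(2p\cdot\ell\), \(\ell^{2}\) and \(2p\cdot k\).

```python
# A minimal symbolic sketch of two manipulations used here:
#  (i)  the soft expansion of a matter propagator, eq. (4.19);
#  (ii) the partial fractioning of two linearly dependent linearized matter
#       propagators, 1/((2p.l)(2p.l + 2p.k)).
import sympy as sp

pl, l2, a = sp.symbols('pl l2 a')     # pl ~ 2 p.l,  l2 ~ l^2,  a ~ 2 p.k

# (i)  1/(2p.l + l^2) = (1/(2p.l)) * sum_n (-l^2/(2p.l))^n
lhs = 1 / (pl + l2)
rhs = sum((1 / pl) * (-l2 / pl)**n for n in range(6))
assert sp.simplify(sp.series(lhs, l2, 0, 6).removeO() - rhs) == 0

# (ii) 1/((2p.l)(2p.l + 2p.k)) = (1/(2p.k)) * [ 1/(2p.l) - 1/(2p.l + 2p.k) ]
prod = 1 / (pl * (pl + a))
assert sp.simplify(prod - (1 / a) * (1 / pl - 1 / (pl + a))) == 0

print("soft expansion and partial-fractioning identities verified")
```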
These integrands are first partial-fractioned, and the resulting terms are assigned to the box diagrams in which the graviton is attached to the left and to the right vertex on that matter line. [The corresponding diagrammatic equation and the text that follows it are not recoverable from the source.]

[...] Since the principal-value prescription has no delta-function (imaginary) part, the cut of such propagators vanishes. Therefore, diagrams with one principal-valued matter propagator and one cut matter propagator can contribute to the term bilinear in the transfer matrix in the KMOC expression for the waveform only if a cut through a graviton propagator is allowed kinematically, which, according to the discussion in section 3.2, is not the case here.

### Integrand reduction

We have computed the integrand using generalized unitarity at the level of HEFT and of the full quantum amplitude. We now reduce the integrand to a basis of master integrals with kinematic coefficients. We first expand the polarization tensor, \(\varepsilon^{\mu\nu}=\varepsilon^{\mu}\varepsilon^{\nu}\) with \(\varepsilon^{2}=0\), in a basis of external momenta: \[\varepsilon^{\mu}=\alpha_{1}\bar{p}_{1}^{\mu}+\alpha_{2}\bar{p}_{2}^{\mu}+\alpha_{3}q_{1}^{\mu}+\alpha_{4}q_{2}^{\mu}\,. \tag{111}\] This decomposition, equivalent to Passarino-Veltman reduction [107], introduces spurious poles in the form of Gram determinants, \(G[\bar{p}_{1},\bar{p}_{2},q_{1},q_{2}]\), which should cancel in the final integrated expression. The resulting separated and soft-expanded integrands are reduced to master integrals using FIRE [58; 59]. We keep pentagon, box, triangle, and bubble integrals that do not vanish in the classical limit. While bubble integrals are independent of the momentum transfer, their coefficients, which are generated by the IBP reduction, can exhibit such a dependence and thus can contribute nontrivially in the classical limit.

Integrands with different numbers of cut propagators form distinct sectors under IBP reduction. Diagrams with two cut matter lines are the 2MPR contributions and are not included (though they are of course computable) in the HEFT amplitude. Factorization of the one-loop amplitude of the full theory identifies these terms as the product of the classical limits of four-point and five-point tree amplitudes. Integrals with a single cut matter line, including mushroom graphs, are the same as in the reduction of the HEFT 2MPI amplitude (111), and we have verified that the coefficients are also the same.
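The coefficients of the polarization decomposition above follow from contracting with the basis vectors and inverting the corresponding Gram matrix, whose determinant is the source of the spurious poles just mentioned. A schematic numerical sketch (arbitrary test vectors; the nullness condition \(\varepsilon^{2}=0\) is not imposed in this toy check):

```python
# Solve eps^mu = a1 p1b + a2 p2b + a3 q1 + a4 q2 for the alphas by contracting
# with the basis and inverting the Gram matrix (arbitrary 4d test data).
import numpy as np

rng = np.random.default_rng(1)
eta = np.diag([1.0, -1.0, -1.0, -1.0])          # mostly-minus metric
dot = lambda a, b: a @ eta @ b

basis = [rng.normal(size=4) for _ in range(4)]  # stand-ins for p1bar, p2bar, q1, q2
eps = rng.normal(size=4)                        # stand-in polarization vector

G = np.array([[dot(u, v) for v in basis] for u in basis])   # Gram matrix
alphas = np.linalg.solve(G, np.array([dot(u, eps) for u in basis]))

reconstructed = sum(a * u for a, u in zip(alphas, basis))
assert np.allclose(reconstructed, eps)
print("Gram determinant (source of spurious poles):", np.linalg.det(G))
```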
This agreement demonstrates the complete cancellation of the \(\hbar/\hbar\) terms in the full-theory calculation; such terms appear at intermediate stages, with the numerator coming from quantum terms in one tree-level factor in a cut and the denominator from superclassical terms in another. Ultimately, the classical 2MPI part of the amplitude becomes a linear combination of the master integrals \[I_{1,1,0,0}\,,\quad I_{0,1,0,1}^{+}+I_{0,1,0,1}^{-}\,,\quad I_{1,1,0,1}^{+}+I_{1,1,0,1}^{-}\,,\] \[I_{0,0,1,0}\,,\quad I_{1,0,1,0}\,,\quad I_{0,0,1,1}^{+}+I_{0,0,1,1}^{-}\,,\quad I_{1,1,1,0}\,,\] \[I_{1,0,1,1}^{+}+I_{1,0,1,1}^{-}\,,\quad I_{0,1,1,1}^{+}+I_{0,1,1,1}^{-}\,,\quad I_{1,1,1,1}^{+}+I_{1,1,1,1}^{-}\,, \tag{112}\] and their image under exchanging the matter lines, \[M_{5,m_{1}^{2}m_{2}^{3}}^{2\rm MPI,cl.GR}=c_{1}I_{0,0,1,0}+c_{2}I_{1,1,0,0}+c_{3}I_{1,0,1,0}+c_{4}(I_{0,1,0,1}^{+}+I_{0,1,0,1}^{-})+c_{5}(I_{0,0,1,1}^{+}+I_{0,0,1,1}^{-})+c_{6}I_{1,1,1,0}+c_{7}(I_{1,1,0,1}^{+}+I_{1,1,0,1}^{-})+c_{8}(I_{1,0,1,1}^{+}+I_{1,0,1,1}^{-})+c_{9}(I_{0,1,1,1}^{+}+I_{0,1,1,1}^{-})+c_{10}(I_{1,1,1,1}^{+}+I_{1,1,1,1}^{-})\,, \tag{114}\] \[M_{5,1\;\text{loop}}^{2\rm MPI,cl.GR}=M_{5,m_{1}^{2}m_{2}^{3}}^{2\rm MPI,cl.GR}+(\bar{u}_{1}\leftrightarrow\bar{u}_{2},\bar{m}_{1}\leftrightarrow\bar{m}_{2},q_{1}\leftrightarrow q_{2})\,, \tag{115}\] where the integrals are defined in eq. (4.11). The master-integral coefficients \(c_{i}\) are lengthy and are included in the ancillary Mathematica-readable file. The symmetric combinations \(I_{1,1,0,1}^{+}+I_{1,1,0,1}^{-}\) and \(I_{1,1,1,1}^{+}+I_{1,1,1,1}^{-}\) correspond to replacing the linear matter propagators of these integrals with their principal value. We have checked with the authors of Ref. [69], and we find full agreement on the master-integral coefficients for GR. The seven master integrals in the second and third lines of eq. (112) have contributions from radiation-region gravitons. They are thus complex and contain the radiation reaction, as discussed in Ref. [70].

## 5 The classical five-point integrand in \(\mathcal{N}=8\) supergravity

\(\mathcal{N}=8\) supergravity provides an important laboratory to explore properties of gravitational theories in a setting where amplitudes have somewhat simpler expressions. In this section, we construct the classical five-point amplitude in this theory, with the massive scalars realized as the lightest Kaluza-Klein modes of the scalar components of the graviton in the compact dimensions, while all other particles are zero-compact-momentum modes, following [106]. We construct the massless maximally-supersymmetric supergravity amplitude in generic dimensions via the double copy. The classical tree-level five-point amplitude was constructed in Ref. [74]. With the notation introduced in eq. (4.5), it is given by \[M_{5,\rm tree}^{\rm cl,\mathcal{N}=8}=-\frac{\kappa^{3}}{4}\bar{m}_{1}^{2}\bar{m}_{2}^{2}\;\varepsilon^{*}(k)_{\mu\nu}\left[\frac{4P_{12}^{\mu}P_{12}^{\nu}}{q_{1}^{2}q_{2}^{2}}+\frac{2y}{q_{1}^{2}q_{2}^{2}}\left(Q_{12}^{\mu}P_{12}^{\nu}+Q_{12}^{\nu}P_{12}^{\mu}\right)+y^{2}\left(\frac{Q_{12}^{\mu}Q_{12}^{\nu}}{q_{1}^{2}q_{2}^{2}}-\frac{P_{12}^{\mu}P_{12}^{\nu}}{(k\cdot\bar{u}_{1})^{2}(k\cdot\bar{u}_{2})^{2}}\right)\right]. \tag{5.1}\] It differs from the GR result in eq. (4.4) only through the additional dilaton exchange present here.

Figure 8: Nonvanishing numerators in the five-point maximally-supersymmetric Yang-Mills and supergravity amplitudes.
At one loop, we start with the one-loop five-gluon BCJ numerators of maximally-supersymmetric Yang-Mills theory [66; 67], which consist of only pentagon and box topologies, shown in figure 8, \[N^{\rm SYM}_{12345} =\frac{1}{2}\ell_{\mu}t_{8}^{\mu}(1,2,3,4,5)-\frac{1}{4}\Big{[}t_{8}({\sf f}_{12},{\sf f}_{3},{\sf f}_{4},{\sf f}_{5})+t_{8}({\sf f}_{13},{\sf f}_{2},{\sf f}_{4},{\sf f}_{5})+t_{8}({\sf f}_{14},{\sf f}_{2},{\sf f}_{3},{\sf f}_{5})\] \[\quad+t_{8}({\sf f}_{15},{\sf f}_{2},{\sf f}_{3},{\sf f}_{4})+t_{8}({\sf f}_{23},{\sf f}_{1},{\sf f}_{4},{\sf f}_{5})+t_{8}({\sf f}_{24},{\sf f}_{1},{\sf f}_{3},{\sf f}_{5})+t_{8}({\sf f}_{25},{\sf f}_{1},{\sf f}_{3},{\sf f}_{4})\] \[\quad+t_{8}({\sf f}_{34},{\sf f}_{1},{\sf f}_{2},{\sf f}_{5})+t_{8}({\sf f}_{35},{\sf f}_{1},{\sf f}_{2},{\sf f}_{4})+t_{8}({\sf f}_{45},{\sf f}_{1},{\sf f}_{2},{\sf f}_{3})\Big{]}\,, \tag{5.2}\] \[N^{\rm SYM}_{[12]345} =-\frac{1}{2}t_{8}({\sf f}_{12},{\sf f}_{3},{\sf f}_{4},{\sf f}_{5})\,,\] where the scalar and vector \(t_{8}\) tensors are defined as \[t_{8}({\sf f}_{1},{\sf f}_{2},{\sf f}_{3},{\sf f}_{4}) ={\rm Tr}[{\sf f}_{1}{\sf f}_{2}{\sf f}_{3}{\sf f}_{4}]-\frac{1}{4}{\rm Tr}[{\sf f}_{1}{\sf f}_{2}]{\rm Tr}[{\sf f}_{3}{\sf f}_{4}]+{\rm cyclic}(2,3,4) \tag{5.3}\] \[t_{8}^{\mu}(1,2,3,4,5) ={\sf e}_{1}^{\mu}t_{8}({\sf f}_{2},{\sf f}_{3},{\sf f}_{4},{\sf f}_{5})+{\sf e}_{2}^{\mu}t_{8}({\sf f}_{1},{\sf f}_{3},{\sf f}_{4},{\sf f}_{5})+{\sf e}_{3}^{\mu}t_{8}({\sf f}_{1},{\sf f}_{2},{\sf f}_{4},{\sf f}_{5})\] \[\quad+{\sf e}_{4}^{\mu}t_{8}({\sf f}_{1},{\sf f}_{2},{\sf f}_{3},{\sf f}_{5})+{\sf e}_{5}^{\mu}t_{8}({\sf f}_{1},{\sf f}_{2},{\sf f}_{3},{\sf f}_{4}). \tag{5.4}\] The traces entering the definition of \(t_{8}\) are over the Lorentz indices and the linearized one- and two-particle field strengths, \({\sf f}_{i}^{\mu\nu}\) and \({\sf f}_{ij}^{\mu\nu}\), are \[{\sf f}_{i}^{\mu\nu} ={\sf k}_{i}^{\mu}{\sf e}_{i}^{\nu}-{\sf k}_{i}^{\nu}{\sf e}_{i}^{\mu}\,,\] \[{\sf f}_{ij}^{\mu\nu} =({\sf e}_{i}\cdot{\sf k}_{j}){\sf f}_{j}^{\mu\nu}-({\sf e}_{j}\cdot{\sf k}_{i}){\sf f}_{i}^{\mu\nu}+{\sf f}_{i}^{\mu}{}_{\lambda}\,{\sf f}_{j}^{\lambda\nu}-{\sf f}_{j}^{\mu}{}_{\lambda}\,{\sf f}_{i}^{\lambda\nu}\,, \tag{5.5}\] where \({\sf k}_{i}\) and \({\sf e}_{i}\) are respectively the massless momentum and polarization in higher dimensions. From eq. (5.2), we obtain the numerators of maximal supergravity through the double copy \(N^{\rm SUGRA}=(N^{\rm SYM})^{2}\). We introduce four compact dimensions and use a dimensional reduction in which the compact momenta responsible for the scalar masses are in orthogonal directions,15 Footnote 15: We may relax this assumption and essentially rotate one of the two compact momenta by an arbitrary angle, as in Ref. [106]. \[{\sf k}_{1} =(p_{1},m_{1},0,0,0)\,, {\sf k}_{2} =(p_{2},0,m_{2},0,0)\,,\] \[{\sf k}_{4} =(p_{4},-m_{1},0,0,0)\,, {\sf k}_{3} =(p_{3},0,-m_{2},0,0)\,, \tag{5.6}\] \[{\sf k}_{5} =(k,0,0,0,0)\,.\] Since the masses arise from higher-dimensional momenta, they obey conservation relations, i.e., they change signs with the orientation of the corresponding momentum. The on-shell condition \({\sf k}_{1,2,3,4}^{2}=0\) for higher-dimensional massless momenta thus leads to the massive on-shell conditions \(p_{1}^{2}=p_{4}^{2}=m_{1}^{2}\) and \(p_{2}^{2}=p_{3}^{2}=m_{2}^{2}\). The kinematic configuration in eq.
(5.6) gives the following reduction rules for Mandelstam variables, \[\begin{split}\mathsf{k}_{1}\cdot\mathsf{k}_{4}&=p_{1}\cdot p_{4}+m_{1}^{2}\hskip 14.226378pt\mathsf{k}_{1}\cdot\mathsf{k}_{2}=p_{1}\cdot p_{2}\hskip 14.226378pt\mathsf{k}_{1}\cdot\mathsf{k}_{3}=p_{1}\cdot p_{3}\hskip 14.226378pt\mathsf{k}_{5}\cdot\mathsf{k}_{i}=k\cdot p_{i}\\ \mathsf{k}_{2}\cdot\mathsf{k}_{3}&=p_{2}\cdot p_{3}+m_{2}^{2}\hskip 14.226378pt\mathsf{k}_{2}\cdot\mathsf{k}_{4}=p_{2}\cdot p_{4}\hskip 14.226378pt\mathsf{k}_{3}\cdot\mathsf{k}_{4}=p_{3}\cdot p_{4}\end{split}\,, \tag{101}\] while \(\ell\cdot\mathsf{k}_{i}=\ell\cdot p_{i}\) for \(i=1,2,3,4\), and \(\ell\cdot\mathsf{k}_{5}=\ell\cdot k\); a short numerical check of these rules is given at the end of this section. The massive scalars are realized as the scalar graviton modes in the compact dimensions, \[\mathsf{e}_{1}=\mathsf{e}_{4}=\left(0,0,0,1,0\right),\qquad\mathsf{e}_{2}=\mathsf{e}_{3}=\left(0,0,0,0,1\right),\qquad\mathsf{e}_{5}=\left(\varepsilon,0,0,0,0\right), \tag{102}\] such that dot products involving polarization vectors reduce as \[\mathsf{e}_{1}\cdot\mathsf{e}_{4}=\mathsf{e}_{2}\cdot\mathsf{e}_{3}=-1\text{ and }\mathsf{e}_{i}\cdot\mathsf{e}_{j}=0\text{ otherwise},\] \[\mathsf{e}_{i}\cdot\mathsf{k}_{j}=\left\{\begin{array}{ll}0&\text{for }i=1,2,3,4\text{ and arbitrary }j\\ \varepsilon\cdot p_{j}&\text{for }i=5\text{ and }j=1,2,3,4\end{array}\right.. \tag{103}\] The diagrams that survive as these relations are plugged into the one-loop integrand are the ones that are consistent with higher-dimensional momentum conservation at each vertex. Namely, we keep the diagrams in which the two matter lines connecting \(\{p_{1},p_{4}\}\) and \(\{p_{2},p_{3}\}\) do not cross each other. They are all captured by the spanning set of cuts in figure 6. As in the GR calculation outlined in section 4.3, we expand the resulting integrand in the soft limit. By using eq. (119), we can expose all the 2MPR diagrams. The remaining 2MPI diagrams have one cut matter propagator. Upon reduction to master integrals, the uncut linear matter propagators turn into principal-value propagators. The 2MPI part of the classical amplitude takes the same form as eq. (128), with vanishing values for the coefficients \(c_{2}^{\mathcal{N}=8}\) and \(c_{3}^{\mathcal{N}=8}\), i.e. \[M_{5,m_{1}^{2}m_{2}^{3}}^{{\rm 2MPI,cl.}\,\mathcal{N}=8} =c_{1}^{\mathcal{N}=8}I_{0,0,1,0}+c_{4}^{\mathcal{N}=8}(I_{0,1,0,1}^{+}+I_{0,1,0,1}^{-})+c_{5}^{\mathcal{N}=8}(I_{0,0,1,1}^{+}+I_{0,0,1,1}^{-})\] \[+c_{6}^{\mathcal{N}=8}I_{1,1,1,0}+c_{7}^{\mathcal{N}=8}(I_{1,1,0,1}^{+}+I_{1,1,0,1}^{-})+c_{8}^{\mathcal{N}=8}(I_{1,0,1,1}^{+}+I_{1,0,1,1}^{-})\] \[+c_{9}^{\mathcal{N}=8}(I_{0,1,1,1}^{+}+I_{0,1,1,1}^{-})+c_{10}^{\mathcal{N}=8}(I_{1,1,1,1}^{+}+I_{1,1,1,1}^{-})\,, \tag{104}\] \[M_{5,1\text{ loop}}^{{\rm 2MPI,cl.}\,\mathcal{N}=8} =M_{5,m_{1}^{2}m_{2}^{3}}^{{\rm 2MPI,cl.}\,\mathcal{N}=8}+\left(\bar{u}_{1}\leftrightarrow\bar{u}_{2},\bar{m}_{1}\leftrightarrow\bar{m}_{2},q_{1}\leftrightarrow q_{2}\right)\,. \tag{105}\] While the coefficients are somewhat unwieldy, it is not difficult to verify that each of them is separately gauge-invariant, as they should be. Interestingly, in the classical limit, the \(\mathcal{N}=8\) amplitude includes triangle and bubble integrals, unlike the quadratic-propagator counterpart [108]. As in the GR five-point amplitude, they contribute nontrivially to the waveform because their coefficients exhibit nontrivial dependence on the momentum transfer \((q_{1}-q_{2})\).
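To close this section, here is a small self-contained check of the dimensional-reduction rules above (eqs. (5.6) and (101)). The sketch is illustrative only; it assumes a mostly-minus four-dimensional metric with spacelike compact directions, a convention consistent with (though not spelled out in) the stated rules, and uses generic symbolic momenta.

```python
# Sketch: check the reduction rules implied by eqs. (5.6) and (101).
# Conventions assumed here: mostly-minus 4d metric, spacelike compact directions.
import sympy as sp

m1, m2 = sp.symbols('m1 m2', positive=True)
px1, py1, pz1, px2, py2, pz2 = sp.symbols('px1 py1 pz1 px2 py2 pz2', real=True)
E1 = sp.sqrt(m1**2 + px1**2 + py1**2 + pz1**2)     # on-shell energies
E2 = sp.sqrt(m2**2 + px2**2 + py2**2 + pz2**2)
p1 = [E1, px1, py1, pz1]
p4 = [E1, -px1, -py1, -pz1]          # another momentum with p4^2 = m1^2 (illustrative)
p2 = [E2, px2, py2, pz2]

def dot4(a, b):                      # 4d Minkowski product
    return a[0]*b[0] - sum(ai*bi for ai, bi in zip(a[1:], b[1:]))

def dotD(a, b):                      # higher-dim product: extra entries are spacelike
    return dot4(a[:4], b[:4]) - sum(ai*bi for ai, bi in zip(a[4:], b[4:]))

k1 = p1 + [m1, 0]                    # k1 = (p1, m1, 0)
k4 = p4 + [-m1, 0]                   # k4 = (p4, -m1, 0)
k2 = p2 + [0, m2]                    # k2 = (p2, 0, m2)

assert sp.simplify(dotD(k1, k1)) == 0                              # k1^2 = 0  <=>  p1^2 = m1^2
assert sp.simplify(dotD(k1, k4) - (dot4(p1, p4) + m1**2)) == 0     # k1.k4 = p1.p4 + m1^2
assert sp.simplify(dotD(k1, k2) - dot4(p1, p2)) == 0               # k1.k2 = p1.p2
print("reduction rules of eq. (101) reproduced")
```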
## 6 Integrating in the Rest Frame In this section, we evaluate, for physical kinematic configurations, the bubble, triangle, and pentagon master integrals eq. (119) that appear in GR and \(\mathcal{N}=8\) supergravity one loop five-point classical amplitudes. We also list the expressions for the box integrals and relegate the details of their evaluation to appendix B. We note that the results given in this section are in full agreement with Ref. [69]. For all master integrals, it is convenient to work in the rest frame of particle 2, in which \(\bar{u}_{2}=(1,0,0,0)\), and integrate out \(\hat{\delta}(2\bar{u}_{2}\cdot\ell)\) in the numerators. This projects out the temporal component of the loop momentum such that \(\ell=(0,\boldsymbol{\ell})\). We are thus left with a Euclidean loop integral in \(3-2\epsilon\) dimensions, expressed in terms of non-covariant quantities. We can uplift the result back into a generic frame by using \[\bar{u}_{1}^{0} =y\,, \bar{\boldsymbol{u}}_{1}^{2}=y^{2}-1\,, E_{q_{1}}=\bar{u}_{2}\cdot q_{1}\,, \boldsymbol{q}_{1}^{2}=(\bar{u}_{2}\cdot q_{1})^{2}-q_{1}^{2}\,,\] \[\boldsymbol{q}_{2}^{2} =-q_{2}^{2}\,, \bar{\boldsymbol{u}}_{1}\cdot\boldsymbol{q}_{1}=y(\bar{u}_{2} \cdot q_{1})\,, \bar{\boldsymbol{u}}_{1}\cdot\boldsymbol{q}_{2}=-\bar{u}_{1} \cdot q_{2}\,, \boldsymbol{q}_{1}\cdot\boldsymbol{q}_{2}=\frac{q_{1}^{2}+q_{2}^{2}}{2 }\,. \tag{111}\] where the left hand side comes from the components of \(\bar{u}_{1}\), \(q_{1}\) and \(q_{2}\) written in the \(\bar{u}_{2}\) rest frame, \(\bar{u}_{1}=(\bar{u}_{1}^{0},\boldsymbol{\bar{u}}_{1})\), \(q_{1}=(E_{q_{1}},\boldsymbol{q}_{1})\) and \(q_{2}=(0,\boldsymbol{q}_{2})\). In the physical region, we have \(\bar{u}_{1}\cdot q_{2}>0\) and \(\bar{u}_{2}\cdot q_{1}>0\) because the outgoing graviton has positive energy.16 We also have \(q_{1}^{2}<0\) and \(q_{2}^{2}<0\), because the momentum transfer of particle 1 and 2 is spatial-like. Footnote 16: The reason for \(\bar{u}_{2}\cdot q_{1}>0\) is that in the \(\bar{u}_{2}\) rest frame, \(\bar{u}_{2}\cdot q_{1}=\bar{u}_{2}\cdot k_{5}=(k_{5})^{0}\). Since the graviton is outgoing, its energy \((k_{5})^{0}\) is always positive. We also have \(\bar{u}_{1}\cdot q_{2}>0\) for the same reason. With these preparations, let us first discuss the bubble integral \(I_{0,0,1,0}\) as the simplest example, \[I_{0,0,1,0}=\int\frac{\mathrm{d}^{4-2\epsilon}\ell}{(2\pi)^{4-2 \epsilon}}\frac{\hat{\delta}(2\bar{u}_{2}\cdot\ell)}{(\ell-q_{1})^{2}+i0} =-\frac{(4\pi^{2})^{\epsilon}}{16\pi^{3}}\int\frac{\mathrm{d}^{3- 2\epsilon}\boldsymbol{\ell}}{\boldsymbol{\ell}^{2}-(\bar{u}_{2}\cdot q_{1})^{2 }-i0}\] \[=-\frac{(4\pi^{2})^{\epsilon}\Gamma(\epsilon-1/2)}{16\pi^{3/2+ \epsilon}[-(\bar{u}_{2}\cdot q_{1})^{2}-i0]^{\epsilon-1/2}}\,. \tag{112}\] In the classical amplitude, the bubble integral usually comes with a divergent coefficient \(\frac{1}{d-4}\) that encodes part of the IR divergence. We expand this combination up to the terms finite in \(\epsilon\), \[\frac{1}{d-4}I_{0,0,1,0}=-\frac{\bar{u}_{2}\cdot q_{1}}{16}-\frac{i(\bar{u}_{ 2}\cdot q_{1})}{16\pi}\left[-\frac{1}{\epsilon_{\rm IR}}-2+\log(4\pi(\bar{u}_ {2}\cdot q_{1})^{2})\right]+\mathcal{O}(\epsilon)\,, \tag{113}\] where we have defined for convenience, \[-\frac{1}{\epsilon_{\rm IR}}\equiv-\frac{1}{\epsilon}+\gamma_{\rm E}-\log(4 \pi^{2})\,, \tag{114}\] and \(\gamma_{\rm E}\) is the Euler constant. 
It is crucial to track the \(i0\) prescription to determine the analytic continuation into the physical region of external kinematics.17 We will mainly consider the results in \(d=4\) only, so we will omit the \(\mathcal{O}(\epsilon)\) terms in the following. Footnote 17: To derive eq. (113), we have used the analytic continuation \([-(\bar{u}_{2}\cdot q_{1})^{2}-i0]^{1/2}=-i(\bar{u}_{2}\cdot q_{1})\) and \(\log(-(\bar{u}_{2}\cdot q_{1})^{2}-i0)=\log(\bar{u}_{2}\cdot q_{1})^{2}-i\pi\) with \(\bar{u}_{2}\cdot q_{1}>0\). ### Triangle integrals Next, we consider the triangle integral \(I_{1,1,0,0}\). We first introduce the Feynman parameterization to combine the denominator while going to the \(\bar{u}_{2}\) rest frame. The integral is then straightforward to work out, \[I_{1,1,0,0}=\int\frac{\mathrm{d}^{4}\ell}{(2\pi)^{4}}\frac{\hat{ \delta}(2\bar{u}_{2}\cdot\ell)}{\ell^{2}(\ell+q_{2})^{2}}=\frac{1}{16\pi^{3}} \int_{0}^{1}\mathrm{d}x\int\frac{\mathrm{d}^{3}\boldsymbol{\ell}}{[ \boldsymbol{\ell}^{2}+x(1-x)\boldsymbol{q}_{2}^{2}]^{2}}=\frac{1}{16\sqrt{-q_ {2}^{2}}}\,. \tag{100}\] This integral is purely real as expected because further cutting either \(\ell^{2}\) or \((\ell+q_{2})^{2}\) leads to vanishing results due to on-shell three-point kinematics. In contrast, the other triangle master integral \(I_{1,0,1,0}\) is complex. We start with the same Feynman parameterization, and the integration proceeds as before, \[I_{1,0,1,0}=\int\frac{\mathrm{d}^{4}\ell}{(2\pi)^{4}}\frac{ \hat{\delta}(2\bar{u}_{2}\cdot\ell)}{\ell^{2}[(\ell-q_{1})^{2}+i0]} =\frac{1}{16\pi^{3}}\int_{0}^{1}\mathrm{d}x\int\frac{\mathrm{d}^ {3}\boldsymbol{\ell}}{[\boldsymbol{\ell}^{2}+x(1-x)\boldsymbol{q}_{1}^{2}-xE_ {q_{1}}^{2}-i0]^{2}}\] \[=\frac{1}{8\pi\sqrt{\boldsymbol{q}_{1}^{2}}}\arcsin\left(\sqrt{ \frac{\boldsymbol{q}_{1}^{2}}{\boldsymbol{q}_{1}^{2}-E_{q_{1}}^{2}}}+i0\right)\] \[=\frac{1}{16\sqrt{\boldsymbol{q}_{1}^{2}}}+\frac{i}{8\pi\sqrt{ \boldsymbol{q}_{1}^{2}}}\,\mathrm{arccosh}\,\sqrt{\frac{\boldsymbol{q}_{1}^{ 2}}{\boldsymbol{q}_{1}^{2}-E_{q_{1}}^{2}}}\,. \tag{101}\] We note that the argument of \(\arcsin\) in the second line is greater than \(1\) in the physical region. The analytic continuation in \(+i0\) prescription is \[\arcsin(x+i0)=\frac{\pi}{2}+i\,\mathrm{arccosh}(x)\quad\text{for}\quad x>1\,. \tag{102}\] By using eq. (100), we can uplift the result to a generic frame, \[I_{1,0,1,0}=\frac{1}{16\sqrt{(\bar{u}_{2}\cdot q_{1})^{2}-q_{1}^{2}}}+\frac{ i}{8\pi\sqrt{(\bar{u}_{2}\cdot q_{1})^{2}-q_{1}^{2}}}\,\mathrm{arcsinh}\, \frac{\bar{u}_{2}\cdot q_{1}}{\sqrt{-q_{1}^{2}}}\,, \tag{103}\] where we have also used the identity \(\mathrm{arccosh}\,\sqrt{x+1}=\mathrm{arcsinh}\,\sqrt{x}\). To compute the integrals with a linear matter propagator, we use a different parameterization to combine the denominator, \[\frac{1}{ab}=\int_{0}^{\infty}\frac{\mathrm{d}x}{(a+xb)^{2}}\,. \tag{104}\] Applying eq. (104) to \(I_{0,0,1,1}^{+}\), we get \[I_{0,0,1,1}^{+}=\int\frac{\mathrm{d}^{4}\ell}{(2\pi)^{4}}\frac{ \hat{\delta}(2\bar{u}_{2}\cdot\ell)}{(\ell-q_{1})^{2}(2\bar{u}_{1}\cdot\ell+i0)} =\int_{0}^{\infty}\mathrm{d}x\int\frac{\mathrm{d}^{4}\ell}{(2\pi )^{4}}\frac{\hat{\delta}(2\bar{u}_{2}\cdot\ell)}{[(\ell-q_{1})^{2}+2x\bar{u}_ {1}\cdot\ell+i0)]^{2}}\] \[=\frac{i}{16\pi\sqrt{y^{2}-1}}\int_{-\alpha}^{\infty}\frac{ \mathrm{d}x}{(x^{2}+\beta)^{1/2}}\,, \tag{105}\] where \(\alpha=\frac{y(\bar{u}_{2}\cdot q_{1})}{y^{2}-1}\) and \(\beta=-\frac{(\bar{u}_{2}\cdot q_{1})^{2}}{(y^{2}-1)^{2}}+i0\). 
The integration over \(x\) is divergent at \(x\to\infty\), which encodes the IR divergence due to the linear matter propagator \(2\bar{u}_{1}\cdot\ell\). However, in the classical amplitude, the linear matter propagator only appears in the principal-valued combination \(I_{0,0,1,1}^{+}+I_{0,0,1,1}^{-}\). After including \(I_{0,0,1,1}^{-}\), \[I_{0,0,1,1}^{-}=\int\frac{\mathrm{d}^{4}\ell}{(2\pi)^{4}}\frac{ \hat{\delta}(2\bar{u}_{2}\cdot\ell)}{(\ell-q_{1})^{2}(2\bar{u}_{1}\cdot\ell-i0)} =-\int\frac{\mathrm{d}^{4}\ell}{(2\pi)^{4}}\frac{\hat{\delta}(2 \bar{u}_{2}\cdot\ell)}{(\ell+q_{1})^{2}(2\bar{u}_{1}\cdot\ell+i0)}\] \[=-I_{0,0,1,1}^{+}\Big{|}_{q_{1}\to-q_{1}} \tag{111}\] we find that the range of \(x\) gets truncated and we get a finite result, \[I_{0,0,1,1}^{+}+I_{0,0,1,1}^{-}=\frac{i}{8\pi\sqrt{y^{2}-1}}\int_{0}^{\alpha} \frac{\mathrm{d}x}{(x^{2}+\beta)^{1/2}}=\frac{i}{8\pi\sqrt{y^{2}-1}}\,\mathrm{ arcsinh}\,\frac{\alpha}{\sqrt{\beta}}\,. \tag{112}\] The \(+i0\) prescription in \(\beta\) leads to the following analytic continuation of the \(\mathrm{arcsinh}\) function in the physical region, \[\mathrm{arcsinh}\,\frac{\alpha}{\sqrt{\beta}}=-\frac{i\pi}{2}+\mathrm{arccosh }(y)\,. \tag{113}\] Therefore, the final result of this triangle integral is \[I_{0,0,1,1}^{+}+I_{0,0,1,1}^{-}=\frac{1}{16\sqrt{y^{2}-1}}+\frac{i}{8\pi\sqrt {y^{2}-1}}\,\mathrm{arccosh}(y)\,. \tag{114}\] We can apply the same technique to compute \[I_{0,1,0,1}^{\pm}=\int\frac{\mathrm{d}^{d}\ell}{(2\pi)^{d}}\frac{\hat{\delta} (2\bar{u}_{2}\cdot\ell)}{(\ell+q_{2})^{2}(2\bar{u}_{1}\cdot\ell\pm i0)}\,. \tag{115}\] In particular, the principal value combination is given by \[I_{0,1,0,1}^{+}+I_{0,1,0,1}^{-}=\frac{i}{8\pi\sqrt{y^{2}-1}}\int_{0}^{\gamma} \frac{\mathrm{d}x}{(x^{2}-\gamma^{2}+i0)^{1/2}}=\frac{1}{16\sqrt{y^{2}-1}}\,, \tag{116}\] where \(\gamma=\frac{\bar{u}_{1}\cdot q_{2}}{y^{2}-1}\). ### Box and pentagon integrals The box master integrals are all individually IR divergent. For \(I_{1,1,1,0}\), the IR divergence is due to the presence of a massless three-point vertex, \[I_{1,1,1,0}=\frac{1}{32q_{2}^{2}(\bar{u}_{2}\cdot q_{1})}+\frac{i}{32\pi q_{2} ^{2}(\bar{u}_{2}\cdot q_{1})}\left[-\frac{1}{\epsilon_{\mathrm{IR}}}+\log(4 \pi(\bar{u}_{2}\cdot q_{1})^{2})+2\log\frac{q_{2}^{2}}{q_{1}^{2}}\right]\,, \tag{117}\] The other box integrals, \(I_{1,1,0,1}^{\pm}\), \(I_{1,0,1,1}^{\pm}\) and \(I_{0,1,1,1}^{\pm}\), all have an IR divergence due to the linear matter propagator \(2\bar{u}_{1}\cdot\ell\). This IR divergence cancels in the classical amplitude because the linear matter propagator always appears as a principal value, \[I^{+}_{1,1,0,1}+I^{-}_{1,1,0,1} =\frac{1}{16q_{2}^{2}\sqrt{y^{2}-1}}\,, \tag{111a}\] \[I^{+}_{1,0,1,1}+I^{-}_{1,0,1,1} =\frac{1}{16q_{1}^{2}\sqrt{y^{2}-1}}+\frac{i}{8\pi q_{1}^{2}\sqrt{ y^{2}-1}}\,\text{arccosh}(y)\,. \tag{111b}\] On the other hand, \(I^{+}_{0,1,1,1}+I^{-}_{0,1,1,1}\) remains IR divergent due to the presence of the same massless three-point vertex as in \(I_{1,1,1,0}\), \[I^{+}_{0,1,1,1}+I^{-}_{0,1,1,1} =-\frac{1}{32(\bar{u}_{1}\cdot q_{2})(\bar{u}_{2}\cdot q_{1})} \tag{112}\] \[\quad-\frac{i}{32\pi(\bar{u}_{1}\cdot q_{2})(\bar{u}_{2}\cdot q_{1 })}\left[-\frac{1}{\epsilon_{\text{IR}}}+\log(4\pi(\bar{u}_{2}\cdot q_{1})^{2} )+2\log\frac{\bar{u}_{1}\cdot q_{2}}{\bar{u}_{2}\cdot q_{1}}\right]\,.\] We leave the derivations of these integrals to appendix B. 
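The triangle results above are simple enough to spot-check numerically. The sketch below is illustrative only: it uses arbitrary sample kinematics satisfying the stated physical-region conditions, evaluates the Feynman-parameterized forms of \(I_{1,1,0,0}\) and \(I_{1,0,1,0}\) with scipy, and tests the analytic continuation behind eq. (114) for the principal-value combination \(I^{+}_{0,0,1,1}+I^{-}_{0,0,1,1}\).

```python
# Numerical spot-checks of the triangle integrals (sample kinematics, d = 4):
#  - I_{1,1,0,0} and I_{1,0,1,0} via their Feynman-parameterized form, using
#    int d^3 l /(l^2 + D)^2 = pi^2/sqrt(D) continued to complex D via the -i0;
#  - the analytic continuation arcsinh(alpha/sqrt(beta)) = -i pi/2 + arccosh(y).
import numpy as np
from scipy.integrate import quad

q2sq = 2.0                 # |q_2|^2 = -q_2^2
q1sq = 3.0                 # |q_1|^2 = (u2.q1)^2 - q1^2
Eq1  = 1.2                 # u2.q1 > 0, with Eq1^2 < q1sq
y    = 1.7                 # y = u1.u2 > 1

# I_{1,1,0,0} = 1/(16 sqrt(-q2^2))
f = lambda x: np.pi**2/np.sqrt(x*(1 - x)*q2sq)/(16*np.pi**3)
print(quad(f, 0, 1)[0], 1/(16*np.sqrt(q2sq)))

# I_{1,0,1,0} = 1/(16|q1|) + i arccosh(sqrt(|q1|^2/(|q1|^2 - Eq1^2)))/(8 pi |q1|)
def g(x, part):
    D = x*(1 - x)*q1sq - x*Eq1**2 - 1e-10j        # the -i0 prescription
    val = np.pi**2/np.sqrt(D)/(16*np.pi**3)
    return val.real if part == 're' else val.imag
xs  = 1 - Eq1**2/q1sq                              # sign change of the sqrt argument
num = (quad(lambda x: g(x, 're'), 0, 1, points=[xs], limit=200)[0]
       + 1j*quad(lambda x: g(x, 'im'), 0, 1, points=[xs], limit=200)[0])
ref = (1/(16*np.sqrt(q1sq))
       + 1j*np.arccosh(np.sqrt(q1sq/(q1sq - Eq1**2)))/(8*np.pi*np.sqrt(q1sq)))
print(num, ref)

# analytic continuation used for I^+_{0,0,1,1} + I^-_{0,0,1,1}, eqs. (113)-(114)
alpha = y*Eq1/(y**2 - 1)
beta  = -Eq1**2/(y**2 - 1)**2 + 1e-12j             # +i0 prescription
print(np.arcsinh(alpha/np.sqrt(beta)), -1j*np.pi/2 + np.arccosh(y))
```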
Scalar pentagon integrals with quadratic propagators in \(d=4-2\epsilon\) dimensions can be written as a sum of box integrals and a six-dimensional pentagon integral [109; 54; 110]. We derive here an analogous decomposition for our pentagon integral with linear matter propagators one of which is cut, \(I^{\pm}_{1,1,1,1}\). We first decompose the loop momentum \(\ell\) into its four-dimensional and extra-dimensional component, \(\ell^{2}=\ell_{4}^{2}+\mu^{2}\), where \(\ell_{4}\) can be expressed as a linear combination of external kinematic data, \(\ell_{4}=\alpha_{1}\bar{u}_{1}+\alpha_{2}\bar{u}_{2}+\alpha_{3}q_{1}+\alpha_{4 }q_{2}\). The coefficients \(\alpha_{i}\) contain the \(\ell\) dependence through scalar products \(v\cdot\ell_{4}=v\cdot\ell\) with \(v\in\{\bar{u}_{1},\bar{u}_{2},q_{1},q_{2}\}\). We then plug the above relation for \(\ell_{4}\) into the identity \[0=\int\frac{\mathrm{d}^{4-2\epsilon}\ell}{(2\pi)^{4-2\epsilon}}\frac{\hat{ \delta}(2\bar{u}_{2}\cdot\ell)}{\ell^{2}(\ell+q_{2})^{2}(\ell-q_{1})^{2}(2 \bar{u}_{1}\cdot\ell)}(\ell^{2}-\ell_{4}^{2}-\mu^{2})\,, \tag{113}\] express all the \(v\cdot\ell\) in terms of inverse propagators, and perform the standard tensor reduction. This will lead to a linear relation that expresses the pentagon integral in terms of the box integrals in \(d=4-2\epsilon\) and another pentagon integral \(I^{\pm,d=6-2\epsilon}_{1,1,1,1}\) in \(d=6-2\epsilon\), \[I^{\pm}_{1,1,1,1}=\beta_{1}I^{\pm}_{1,1,0,1}+\beta_{2}I^{\pm}_{1,0,1,1}+\beta_ {3}I^{\pm}_{0,1,1,1}+\beta_{4}I_{1,1,1,0}+\beta_{5}\,\epsilon\,I^{\pm,d=6-2 \epsilon}_{1,1,1,1}\,, \tag{114}\] where \(\beta_{i}\) only depend on external Mandelstam variables. The pentagon integral \(I^{\pm,d=6-2\epsilon}_{1,1,1,1}\) comes from evaluating the \(\mu^{2}\) term contribution by the dimension shift relation [45], \[\int\mathrm{d}^{4-2\epsilon}\ell\frac{\mu^{2}\hat{\delta}(2\bar{u}_{2}\cdot \ell)}{\ell^{2}(\ell+q_{2})^{2}(\ell-q_{1})^{2}(2\bar{u}_{1}\cdot\ell)}=- \frac{\epsilon}{\pi}\int\mathrm{d}^{6-2\epsilon}\ell\frac{\hat{\delta}(2\bar{ u}_{2}\cdot\ell)}{\ell^{2}(\ell+q_{2})^{2}(\ell-q_{1})^{2}(2\bar{u}_{1}\cdot \ell)}\,. \tag{115}\] In the principal value combination \(I^{+}_{1,1,1,1}+I^{-}_{1,1,1,1}\), this term is finite in \(d=6-2\epsilon\) such that it contributes at most to \(\mathcal{O}(\epsilon)\). The coefficients \(\beta_{i}\) in eq. (114) are straightforward to compute. 
Here we just give the final result of the pentagon integral, \[I^{+}_{1,1,1,1}+I^{-}_{1,1,1,1} = \frac{1}{32q_{2}^{2}}\left(\frac{2}{q_{1}^{2}\sqrt{y^{2}-1}}-\frac{1}{(\bar{u}_{1}\cdot q_{2})(\bar{u}_{2}\cdot q_{1})}\right)\] \[+\frac{i}{32\pi q_{2}^{2}(\bar{u}_{1}\cdot q_{2})(\bar{u}_{2}\cdot q_{1})}\left[\frac{1}{\epsilon_{\rm IR}}-\log(4\pi(\bar{u}_{2}\cdot q_{1})^{2})-2\log\frac{q_{2}^{2}}{q_{1}^{2}}\right]\] \[+\frac{i}{16\pi}\left[\frac{C_{1}}{(\bar{u}_{1}\cdot q_{2})(\bar{u}_{2}\cdot q_{1})}\log\frac{(\bar{u}_{1}\cdot q_{2})q_{1}^{2}}{(\bar{u}_{2}\cdot q_{1})q_{2}^{2}}+\frac{C_{2}\,\mathrm{arccosh}(y)}{q_{1}^{2}\sqrt{y^{2}-1}}\right]\,,\] where \(C_{1}\) and \(C_{2}\) are given by \[C_{1}=\frac{y(\bar{u}_{2}\cdot q_{1})(\bar{u}_{1}\cdot q_{2})(q_{1}^{2}+q_{2}^{2})-(\bar{u}_{1}\cdot q_{2})^{2}q_{1}^{2}-(\bar{u}_{2}\cdot q_{1})^{2}q_{2}^{2}+2(\bar{u}_{1}\cdot q_{2})^{2}(\bar{u}_{2}\cdot q_{1})^{2}}{(\bar{u}_{2}\cdot q_{1})^{2}q_{2}^{4}-2y(\bar{u}_{1}\cdot q_{2})(\bar{u}_{2}\cdot q_{1})q_{1}^{2}q_{2}^{2}+(\bar{u}_{1}\cdot q_{2})^{2}q_{1}^{4}}\,,\] \[C_{2}=\frac{2(\bar{u}_{2}\cdot q_{1})^{2}q_{2}^{2}-2y(\bar{u}_{1}\cdot q_{2})(\bar{u}_{2}\cdot q_{1})q_{1}^{2}-(y^{2}-1)q_{1}^{2}(q_{1}^{2}-q_{2}^{2})}{(\bar{u}_{2}\cdot q_{1})^{2}q_{2}^{4}-2y(\bar{u}_{1}\cdot q_{2})(\bar{u}_{2}\cdot q_{1})q_{1}^{2}q_{2}^{2}+(\bar{u}_{1}\cdot q_{2})^{2}q_{1}^{4}}\,. \tag{101}\]

## 7 The five-point amplitude and its infrared divergences

Having evaluated all the relevant master integrals, we can assemble the classical amplitudes and carry out various consistency checks. When writing down the classical amplitude we can also remove the distinction between \(\{m_{i},p_{i},u_{i},\sigma=u_{1}\cdot u_{2}\}\) and \(\{\bar{m}_{i},\bar{p}_{i},\bar{u}_{i},y=\bar{u}_{1}\cdot\bar{u}_{2}\}\), as they differ only by positive powers of \(q_{1}\) and \(q_{2}\), i.e. by terms with quantum scaling in the soft expansion. By inspecting the non-rational terms in the integrals evaluated in section 6 and appendix B it is straightforward to see that they are all linear combinations of the functions \[\frac{1}{\sqrt{-q_{1}^{2}}}\,,\quad\frac{1}{\sqrt{-q_{2}^{2}}}\,,\quad\frac{1}{\sqrt{(k\cdot u_{2})^{2}-q_{1}^{2}}}\,,\quad\frac{1}{\sqrt{(k\cdot u_{1})^{2}-q_{2}^{2}}}\,,\quad\log\frac{(k\cdot u_{1})^{2}(k\cdot u_{2})^{2}}{\mu^{4}}\,,\] \[\log\frac{q_{2}^{2}}{q_{1}^{2}}\,,\quad\log\frac{k\cdot u_{1}}{k\cdot u_{2}}\,,\quad\frac{\arcsinh\frac{k\cdot u_{2}}{\sqrt{-q_{1}^{2}}}}{\sqrt{(k\cdot u_{2})^{2}-q_{1}^{2}}}\,,\quad\frac{\arcsinh\frac{k\cdot u_{1}}{\sqrt{-q_{2}^{2}}}}{\sqrt{(k\cdot u_{1})^{2}-q_{2}^{2}}}\,,\quad\frac{\mathrm{arccosh}\,\sigma}{(\sigma^{2}-1)^{3/2}}\,, \tag{102}\] with rational coefficients. Here \(\mu\) is the scale of dimensional regularization, and this \(\mu\)-dependent logarithm is intimately related to the IR divergence of the amplitude.
In both GR and \({\cal N}=8\) supergravity, the classical two-scalar-one-graviton five-point amplitude has the general form \[M_{5,1\;\rm loop}^{\rm 2MPI,cl.} =-\frac{i\,\kappa^{2}}{32\pi}(k\cdot p_{1}+k\cdot p_{2})\left(\frac{1}{\epsilon}-\frac{1}{2}\log\frac{16\pi^{2}(k\cdot u_{1})^{2}(k\cdot u_{2})^{2}}{\mu^{4}}\right)M_{5,\rm tree}^{\rm cl.}\] \[\quad+\kappa^{5}\left(A_{\rm rat}^{R}+\frac{A_{1}^{R}}{\sqrt{(k\cdot u_{2})^{2}-q_{1}^{2}}}+\frac{A_{2}^{R}}{\sqrt{(k\cdot u_{1})^{2}-q_{2}^{2}}}+\frac{A_{3}^{R}}{\sqrt{-q_{1}^{2}}}+\frac{A_{4}^{R}}{\sqrt{-q_{2}^{2}}}\right)\] \[\quad+i\,\kappa^{5}\left(A_{\rm rat}^{I}+A_{1}^{I}\frac{\arcsinh\frac{k\cdot u_{2}}{\sqrt{-q_{1}^{2}}}}{\sqrt{(k\cdot u_{2})^{2}-q_{1}^{2}}}+A_{2}^{I}\frac{\arcsinh\frac{k\cdot u_{1}}{\sqrt{-q_{2}^{2}}}}{\sqrt{(k\cdot u_{1})^{2}-q_{2}^{2}}}+A_{3}^{I}\log\frac{q_{2}^{2}}{q_{1}^{2}}\right.\] \[\qquad\qquad\qquad\left.+A_{4}^{I}\log\frac{k\cdot u_{1}}{k\cdot u_{2}}+A_{5}^{I}\frac{\mathrm{arccosh}\,\sigma}{(\sigma^{2}-1)^{3/2}}\right)\,, \tag{111}\] where the coefficient functions \(A_{i}^{R,I}\) are rational combinations of momentum invariants and the polarization vector \(\varepsilon\). Their scaling in the soft limit is homogeneous and it is such that \(M_{5,1\;\rm loop}^{\rm 2MPI,cl.}\) has classical \(q^{-1}\) scaling. They are gauge invariant as a consequence of the gauge invariance of the master integral coefficients in eqs. (109) and (110). The explicit expressions of the coefficient functions \(A_{i}^{R,I}\) are included in two ancillary Mathematica-readable files, GR_Coeffs.m and Neq8_Coeffs.m, for GR and \(\mathcal{N}=8\) supergravity, respectively. The first line of eq. (111) reproduces expectations based on Weinberg's theorem (106); it requires a nontrivial interplay between the \(c_{i}\) and \(c_{i}^{\mathcal{N}=8}\) coefficients determining the GR and \(\mathcal{N}=8\) supergravity classical integrands, eqs. (109) and (110) respectively, and is therefore a check of our calculation. As discussed in section 3.3, the definition of the observation time \(\tau\) in eq. (110) absorbs the IR divergence such that the part of the one-loop five-point amplitude that contributes to the (spectral) waveform and the Newman-Penrose scalar is \[M_{5,1\;\rm loop}^{\rm 0,2MPI,cl.} =\kappa^{5}\left(A_{\rm rat}^{R}+\frac{A_{1}^{R}}{\sqrt{(k\cdot u_{2})^{2}-q_{1}^{2}}}+\frac{A_{2}^{R}}{\sqrt{(k\cdot u_{1})^{2}-q_{2}^{2}}}+\frac{A_{3}^{R}}{\sqrt{-q_{1}^{2}}}+\frac{A_{4}^{R}}{\sqrt{-q_{2}^{2}}}\right)\] \[\quad+i\,\kappa^{5}\left(A_{\rm rat}^{I}+A_{1}^{I}\frac{\arcsinh\frac{k\cdot u_{2}}{\sqrt{-q_{1}^{2}}}}{\sqrt{(k\cdot u_{2})^{2}-q_{1}^{2}}}+A_{2}^{I}\frac{\arcsinh\frac{k\cdot u_{1}}{\sqrt{-q_{2}^{2}}}}{\sqrt{(k\cdot u_{1})^{2}-q_{2}^{2}}}+A_{3}^{I}\log\frac{q_{2}^{2}}{q_{1}^{2}}\right.\] \[\qquad\qquad\qquad\left.+A_{4}^{I}\log\frac{k\cdot u_{1}}{k\cdot u_{2}}+A_{5}^{I}\frac{\mathrm{arccosh}\,\sigma}{(\sigma^{2}-1)^{3/2}}\right),\] \[\quad+\frac{i\,\kappa^{2}}{64\pi}(k\cdot p_{1}+k\cdot p_{2})\left[\log\frac{16\pi^{2}(k\cdot u_{1})^{2}(k\cdot u_{2})^{2}}{\Lambda^{4}}\right]M_{5,\rm tree}^{\rm cl.}\,, \tag{112}\] where the dependence on the dimensional regularization scale \(\mu\) has been replaced by the cutoff \(\Lambda\) defining the virtual IR gravitons due to eq. (110). We have also explored the fate of spurious singularities. The Gram determinant \(G[\bar{p}_{1},\bar{p}_{2},q_{1},q_{2}]\) arising from the decomposition (108) of the graviton polarization tensor into a basis of external momenta cancels in eq.
(111) within each coefficient \(A_{i}^{R,I}\) with the help of four dimensional identities involving the vanishing Gram determinant \(G[\bar{p}_{1},\bar{p}_{2},q_{1},q_{2},\varepsilon]\). Other spurious singularities occur for kinematic configurations that set to zero denominator factors arising from the IBP reduction. They are solutions to the equations \[\Delta_{1} =-2\sigma(u_{1}\cdot k)(u_{2}\cdot k)q_{1}^{2}q_{2}^{2}+(u_{2}\cdot k )^{2}(q_{2}^{2})^{2}+(u_{1}\cdot k)^{2}(q_{1}^{2})^{2}=0\, \tag{111}\] \[\Delta_{2} =-2\sigma(u_{1}\cdot k)(u_{2}\cdot k)+(u_{2}\cdot k)^{2}+(u_{1} \cdot k)^{2}=0\,,\] \[\Delta_{3} =(q_{1}^{2}-q_{2}^{2})^{2}+4(u_{2}\cdot k)^{2}q_{2}^{2}=0\,,\] \[\Delta_{4} =(q_{1}^{2}-q_{2}^{2})^{2}+4(u_{1}\cdot k)^{2}q_{1}^{2}=0\,.\] It is not difficult to check that when either of these relations is satisfied, the logarithmic functions in eq. (110) are no longer linearly-independent. Therefore, these singularities can cancel only in the complete expression, which they indeed do. While all four \(\Delta\)'s appear in the GR amplitude, only \(\Delta_{1}\) and \(\Delta_{2}\) appear in the \(\mathcal{N}=8\) amplitude. In the real part of the amplitude, the rational coefficient has a very simple structure. For the GR amplitude, it reads \[A_{\text{rat}}^{R,\text{GR}}=\frac{1}{32}\left(1-\frac{\sigma(\sigma^{2}-3/2)} {(\sigma^{2}-1)^{3/2}}\right)(k\cdot p_{1}+k\cdot p_{2})M_{5,\text{tree}}^{ \text{cl.GR}}\Big{|}_{\kappa=1}\,, \tag{112}\] where we set \(\kappa=1\) in the tree amplitude because the overall \(\kappa\) dependence has been pulled out. On the other hand, the square-root functions in eq. (110) originate only from the triangle integrals \(I_{1,1,0,0}\), \(I_{1,0,1,0}\) and their up-down flip, while their coefficients are more complicated in GR. However, such master integrals are absent for \(\mathcal{N}=8\) supergravity, see eq. (110), and thus \(A_{1,2,3,4}^{R,\mathcal{N}=8}=0\). This makes the real part of the five-point \(\mathcal{N}=8\) amplitude much simpler than eq. (111) implies. The complete real part is given by \(A_{\text{rat}}^{R}\) only, which is similar to its GR counterpart, \[\varepsilon^{\mu}\varepsilon^{\nu}\operatorname{Re}\left[M_{5,1\ \text{loop}}^{0,\text{2MPI,cl.}\mathcal{N}=8}\right]_{\mu\nu} =A_{\text{rat}}^{R,\mathcal{N}=8} \tag{113}\] \[=\frac{1}{32}\left(1-\frac{\sigma(\sigma^{2}-2)}{(\sigma^{2}-1)^ {3/2}}\right)(k\cdot p_{1}+k\cdot p_{2})M_{5,\text{tree}}^{\text{cl.}\mathcal{ N}=8}\Big{|}_{\kappa=1}\.\] We note that this is the complete \(\mathcal{R}_{L=1}\) that is used to compute the waveform for \(\mathcal{N}=8\) in eq. (100). Curiously, the \(\sigma\)-dependent factor in eq. (113) vanishes in the ultrarelativistic limit, \(\sigma\to\infty\), implying that in this limit only the imaginary part of the amplitude contributes to the waveform at \(\mathcal{O}(G^{2})\) in \(\mathcal{N}=8\) supergravity according to eq. (100). Even though the real part of the \(\mathcal{N}=8\) classical one-loop amplitude is proportional to the tree-level amplitude, the real part of the one-loop spectral function is different from the tree-level one because the \(\omega\)-dependent distributional factor is now \(\Theta(\omega)-\Theta(-\omega)\) for \(L=1\), see eq. (100). Finally, we have verified that our one-loop amplitudes reproduce the expected soft limit under \(k\to 0\). 
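As an aside before the soft-limit discussion below, the ultrarelativistic statement above, that the \(\sigma\)-dependent factor of eq. (113) vanishes as \(\sigma\to\infty\), can be confirmed with a two-line sympy check (illustrative only; the leading falloff comes out as \(1/(2\sigma^{2})\)).

```python
# Check that the sigma-dependent factor of eq. (113) vanishes as sigma -> infinity.
import sympy as sp

sigma  = sp.symbols('sigma', positive=True)
f_neq8 = 1 - sigma*(sigma**2 - 2)/(sigma**2 - 1)**sp.Rational(3, 2)

print(sp.limit(f_neq8, sigma, sp.oo))               # 0
print(sp.limit(sigma**2*f_neq8, sigma, sp.oo))      # 1/2, i.e. f ~ 1/(2 sigma^2)
```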
At the leading order in the soft expansion, the imaginary part vanishes, and the real part factorizes into the four-point amplitude and the Weinberg soft factor, \[\lim_{k\to 0}M_{5,1\text{ loop}}^{0,\text{2MPI,cl.GR}}=\frac{\kappa}{2}\big{[}\varepsilon^{\mu}\varepsilon^{\nu}\mathcal{S}(k,q)_{\mu\nu}\big{]}M_{4,1\text{ loop}}^{\text{2MPI,cl.GR}}\,,\] \[\lim_{k\to 0}M_{5,1\text{ loop}}^{0,\text{2MPI,cl.}\mathcal{N}=8}=0\,. \tag{112}\] The leading soft limit of the 2MPI amplitude of \(\mathcal{N}=8\) vanishes because at four points the only contribution comes from the cut box, which is 2MPR. Meanwhile, the \(2\to 2\) amplitude in GR receives contributions from triangles, and it is given by \[M_{4,1\text{ loop}}^{\text{2MPI,cl.GR}}=\frac{3\kappa^{4}m_{1}^{2}m_{2}^{2}(m_{1}+m_{2})(5\sigma^{2}-1)}{512\sqrt{-q^{2}}}\,, \tag{113}\] where \(q\) is the momentum transfer for this \(2\to 2\) scattering. At the leading order, we can take \(q=q_{1}=-q_{2}\), such that the soft factor \(\mathcal{S}\) is given by \[\mathcal{S}(k,q)^{\mu\nu} =\sum_{i=1}^{4}\frac{p_{i}^{\mu}p_{i}^{\nu}}{p_{i}\cdot k}=-\sum_{i=1}^{2}\left[\frac{p_{i}^{\mu}q_{i}^{\nu}+p_{i}^{\nu}q_{i}^{\mu}}{p_{i}\cdot k}-\frac{p_{i}^{\mu}p_{i}^{\nu}q_{i}\cdot k}{(p_{i}\cdot k)^{2}}\right]+\mathcal{O}(\hbar)\,, \tag{114}\] \[\varepsilon^{\mu}\varepsilon^{\nu}\mathcal{S}(k,q)_{\mu\nu} =-\frac{2\varepsilon\cdot p_{1}\varepsilon\cdot q}{p_{1}\cdot k}+\frac{(\varepsilon\cdot p_{1})^{2}k\cdot q}{(p_{1}\cdot k)^{2}}+\frac{2\varepsilon\cdot p_{2}\varepsilon\cdot q}{p_{2}\cdot k}-\frac{(\varepsilon\cdot p_{2})^{2}k\cdot q}{(p_{2}\cdot k)^{2}}+\mathcal{O}(k^{0})\,.\] The gravitational memory only receives contributions from the leading soft limit. Thus there is no memory effect for \(\mathcal{N}=8\) at NLO, and we will explicitly compute the memory for GR in the next section.

## 8 Leading order and next-to-leading order waveforms

Having constructed the relevant part of the one-loop five-point amplitude, we now use the formulae summarized in section 3.4 to construct the leading order and next-to-leading order waveform observables, focusing on \(\mathcal{N}=8\) supergravity. The time-domain leading-order asymptotic metric was first discussed in Ref. [22] and more recently from the worldline QFT perspective in Ref. [35]. The last ingredient that we need is the pair of polarization vectors \(\varepsilon_{\pm}(k)\) for a massless particle with momentum \(k\). We may use spinor-helicity notation for them; for numerical evaluation, however, it is more convenient to start with the special outgoing momentum \(k_{0}\) \[k_{0}=(1,0,0,1)\qquad\varepsilon_{\pm}(k_{0})=-\frac{1}{\sqrt{2}}(0,1,\mp i,0) \tag{115}\] and obtain the general angle-dependent polarization vector through a rotation: \[\varepsilon_{\pm}(k)=R(\theta,\phi)\varepsilon_{\pm}(k_{0})\qquad R(\theta,\phi)=\left(\begin{smallmatrix}1&0&0&0\\ 0&\cos\phi&-\sin\phi&0\\ 0&\sin\phi&\cos\phi&0\\ 0&0&0&1\end{smallmatrix}\right)\cdot\left(\begin{smallmatrix}1&0&0&0\\ 0&\cos\theta&0&\sin\theta\\ 0&0&1&0\\ 0&-\sin\theta&0&\cos\theta\end{smallmatrix}\right)\,. \tag{116}\] The third angle of a general rotation simply multiplies \(\varepsilon\) by a phase, so we will ignore it. This rotation also maps \(k_{0}\) to \(\tilde{k}=(1,\mathbf{n_{k}})\) with \(\mathbf{n_{k}}\) a general unit vector. We stress that, as discussed before, the complex nature of these polarization tensors does not change the definition of \(\mathcal{R}_{L}\) and \(\mathcal{I}_{L}\) in eq. (3.53), because they are stripped off before taking the real and imaginary part.
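The construction in eqs. (115)-(116) is simple to implement. The following numpy sketch (a minimal illustration at an arbitrary sample angle, not part of the actual numerics) builds \(\varepsilon_{\pm}(k)\) by the stated rotation and verifies transversality, \(\varepsilon\cdot\varepsilon=0\), and \(\varepsilon\cdot\varepsilon^{*}=-1\) in the mostly-minus metric.

```python
# Minimal sketch of eqs. (115)-(116): eps_pm(k) from a rotation of the reference vectors.
import numpy as np

eta  = np.diag([1.0, -1.0, -1.0, -1.0])
mdot = lambda a, b: a @ eta @ b

def R(theta, phi):
    Ry = np.array([[1, 0, 0, 0],
                   [0, np.cos(theta), 0, np.sin(theta)],
                   [0, 0, 1, 0],
                   [0, -np.sin(theta), 0, np.cos(theta)]])
    Rz = np.array([[1, 0, 0, 0],
                   [0, np.cos(phi), -np.sin(phi), 0],
                   [0, np.sin(phi),  np.cos(phi), 0],
                   [0, 0, 0, 1]])
    return Rz @ Ry

k0   = np.array([1.0, 0.0, 0.0, 1.0])
eps0 = {s: -np.array([0, 1, -s*1j, 0])/np.sqrt(2) for s in (+1, -1)}   # eps_{pm}(k0)

theta, phi = 0.7*np.pi, 1.4*np.pi                  # arbitrary observation angle
k   = R(theta, phi) @ k0                           # = (1, sin(th)cos(ph), sin(th)sin(ph), cos(th))
eps = {s: R(theta, phi) @ eps0[s] for s in (+1, -1)}

for s in (+1, -1):
    assert abs(mdot(k, eps[s])) < 1e-14                     # transversality  k.eps = 0
    assert abs(mdot(eps[s], eps[s])) < 1e-14                # eps.eps = 0
    assert abs(mdot(eps[s], np.conj(eps[s])) + 1) < 1e-14   # eps.eps* = -1
print(k)
```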
To make this explicit, we may decompose the outgoing polarization tensor \(\varepsilon_{--}=\varepsilon_{-}\otimes\varepsilon_{-}\) as \[\varepsilon_{--}=\varepsilon_{+}+i\varepsilon_{\times}\, \tag{8.3}\] and use separately the real polarizations \(\varepsilon_{+}\) and \(\varepsilon_{\times}\) to define the real and imaginary parts of the matrix element \(\mathcal{B}_{-}\) and subsequently the waveforms. The asymptotic metric \(h^{\infty}_{\mu\nu}\) is defined in eq. (3.18). We further parametrize it as \[h^{\infty}_{\mu\nu}=(\kappa\mathsf{M})\hat{h}_{\mu\nu}\,,\qquad\hat{h}_{\mu\nu}=\frac{\kappa^{2}\mathsf{M}}{\sqrt{-b^{2}}}\hat{h}^{(1)}_{\mu\nu}+\left(\frac{\kappa^{2}\mathsf{M}}{\sqrt{-b^{2}}}\right)^{2}\hat{h}^{(2)}_{\mu\nu}+\ldots \tag{8.4}\] where \(\mathsf{M}=m_{1}+m_{2}\) is the total mass. Here \(\hat{h}^{(1)}_{\mu\nu}\) and \(\hat{h}^{(2)}_{\mu\nu}\) are respectively the reduced waveforms at 1PM and 2PM order. As in the previous section, we will not distinguish barred and unbarred variables since their difference is quantum. We will perform explicit calculations in the center-of-mass (COM) frame in which the black holes move along the \(z\) axis and \(b^{\mu}\) is along the \(x\) axis,18 Footnote 18: Technically, \(u_{1}\) and \(u_{2}\) are outgoing four-velocities in our setup. However, in the classical limit, we can treat them as incoming since the difference is quantum. \[u^{\mu}_{1}=(\gamma,0,0,-\gamma v)\,,\qquad u^{\mu}_{2}=(\gamma,0,0,\gamma v)\,,\qquad b^{\mu}=(0,|\mathbf{b}|,0,0)\,, \tag{8.5}\] where \(\gamma=\frac{1}{\sqrt{1-v^{2}}}\). We will plot waveforms at various locations \((\theta,\phi)\) at spatial infinity with different velocities \(v\).

### Gravitational memory

By plugging eqs. (3.46) to (3.48) into eq. (3.50), we can write the gravitational-wave memory as \[\Delta(h^{\infty}_{+}+ih^{\infty}_{\times}) =\lim_{\omega\to 0^{+}}(-i\omega)\Theta(\omega)(-i)\langle\psi_{\rm in}|\hat{S}^{\dagger}\hat{a}_{--}(k)\hat{S}|\psi_{\rm in}\rangle^{0}\big{|}_{k=(\omega,\omega\mathbf{n})}\] \[\quad+\lim_{\omega\to 0^{-}}(-i\omega)\Theta(-\omega)(+i)\langle\psi_{\rm in}|\hat{S}^{\dagger}\hat{a}^{\dagger}_{++}(k)\hat{S}|\psi_{\rm in}\rangle^{0}\big{|}_{k=(|\omega|,|\omega|\mathbf{n})}\,. \tag{8.6}\] In the soft limit, we have seen explicitly in eq. (7.7), through one-loop order, that the matrix elements become \[\lim_{\omega\to 0^{+}}(-i\omega)\Theta(\omega)(-i)\langle\psi_{\rm in}|\hat{S}^{\dagger}\hat{a}_{--}(k)\hat{S}|\psi_{\rm in}\rangle^{0}\big{|}_{k=(\omega,\omega\mathbf{n})}\] \[\qquad=-\frac{i\kappa}{4}\int\frac{{\rm d}^{d}q}{(2\pi)^{d}}\hat{\delta}(2p_{1}\cdot q)\hat{\delta}(2p_{2}\cdot q)e^{iq\cdot b}\,\varepsilon^{\mu}_{-}\varepsilon^{\nu}_{-}\mathcal{S}(\tilde{k},q)_{\mu\nu}M_{4}^{\rm 2MPI,cl.}\,, \tag{8.7}\] where \(\tilde{k}=(1,\mathbf{n})\), and we have used \(\Theta(0)=1/2\). The graviton soft theorem implies this result holds at all orders. A similar calculation can be applied to the conjugate matrix element.19 Therefore, eq. (8.6) becomes Footnote 19: We note that in this calculation, cancelling \(\omega\) in \(k\) leads to an extra minus sign because \(k=(-\omega,-\omega\mathbf{n})\) and \(\omega<0\). This sign will be absorbed by redefining \(q\to-q\) to align the exponential factor \(e^{iq\cdot b}\) because the soft factor \(\mathcal{S}_{\mu\nu}\) is odd in \(q\). The final result differs from eq. (8.7) only by a complex conjugation on \(M_{4}^{\rm 2MPI,cl.}\).
\[\Delta(h_{+}^{\infty}+ih_{\times}^{\infty})=-\frac{i\kappa}{2}\int\frac{\text{d}^{d}q}{(2\pi)^{d}}\hat{\delta}(2p_{1}\cdot q)\hat{\delta}(2p_{2}\cdot q)e^{iq\cdot b}\,\varepsilon_{-}^{\mu}\varepsilon_{-}^{\nu}\mathcal{S}(\tilde{k},q)_{\mu\nu}\,\text{Re}\big{[}M_{4}^{\rm 2MPI,cl.}\big{]}\,. \tag{8.8}\] The typical integral in eq. (8.8) is \[I^{\mu}=\int\frac{\text{d}^{d}q}{(2\pi)^{d}}\hat{\delta}(2p_{1}\cdot q)\hat{\delta}(2p_{2}\cdot q)e^{iq\cdot b}q^{\mu}f(q^{2})\,. \tag{8.9}\] We can fix \(d=4\) and decompose \(q\) in the basis of Ref. [43], \[q^{\mu}=z_{1}u_{1}^{\mu}+z_{2}u_{2}^{\mu}+z_{b}b^{\mu}+z_{v}v^{\mu}\,,\quad v^{\mu}\equiv 4\epsilon^{\mu\nu\rho\sigma}u_{1\nu}u_{2\rho}b_{\sigma}\,. \tag{8.10}\] We then integrate over the coefficients \(\{z_{1},z_{2},z_{b},z_{v}\}\). It is easy to see that the two delta functions fix \(z_{1}=z_{2}=0\). The integration over \(z_{v}\) vanishes because the integrand is odd in \(z_{v}\). As a result, \(I^{\mu}\) must be proportional to \(b^{\mu}\). \(\varepsilon_{-}^{\mu}\varepsilon_{-}^{\nu}\mathcal{S}(\tilde{k},q)_{\mu\nu}\) can therefore be pulled out of the integral by replacing \(q^{\mu}\to-i\partial/\partial b_{\mu}\). Recalling that, up to two loops, the constrained Fourier-transform of the real part of the 2MPI four-point amplitude is expected to be the radial action [63; 99], we find \[\Delta(h_{+}^{\infty}+ih_{\times}^{\infty}) =-\frac{\kappa}{2}\left[\varepsilon_{-}^{\mu}\varepsilon_{-}^{\nu}\mathcal{S}(\tilde{k},\partial/\partial b)_{\mu\nu}\right]I_{r}(b)\,,\] \[I_{r}(b) =\int\frac{\text{d}^{d}q}{(2\pi)^{d}}\hat{\delta}(2p_{1}\cdot q)\hat{\delta}(2p_{2}\cdot q)e^{iq\cdot b}\;\text{Re}\left[M_{4}^{\rm 2MPI,cl.}\right]\,. \tag{8.11}\] The action of \(\partial/\partial b_{\mu}\) on the radial action is proportional to the scattering angle \(\chi\), \[\frac{\partial I_{r}}{\partial b_{\mu}}=-\frac{m_{1}m_{2}\sqrt{\sigma^{2}-1}}{\mathsf{M}\sqrt{1+2\nu(\sigma-1)}}\frac{b^{\mu}}{\sqrt{-b^{2}}}\frac{\partial I_{r}}{\partial J}=\frac{m_{1}m_{2}\sqrt{\sigma^{2}-1}}{\mathsf{M}\sqrt{1+2\nu(\sigma-1)}}\tilde{b}^{\mu}\chi\,, \tag{8.12}\] where \(\nu=m_{1}m_{2}/\mathsf{M}^{2}\), \(\tilde{b}^{\mu}=b^{\mu}/\sqrt{-b^{2}}\) and \(\chi=-\partial I_{r}/\partial J\). The angular momentum \(J\) is given by \(J=p_{\infty}\sqrt{-b^{2}}\), and \(p_{\infty}\) is the norm of the initial COM momentum, which is given by the prefactor in eq. (8.12). Therefore, the memory is also proportional to \(\chi\), \[\Delta(h_{+}^{\infty}+ih_{\times}^{\infty}) =-\frac{\kappa m_{1}m_{2}\sqrt{\sigma^{2}-1}}{2\mathsf{M}\sqrt{1+2\nu(\sigma-1)}}\left(\varepsilon_{-}^{\mu}\varepsilon_{-}^{\nu}\mathcal{S}(\tilde{k},\tilde{b})_{\mu\nu}\right)\chi\,, \tag{8.13}\] \[\varepsilon^{\mu}\varepsilon^{\nu}\mathcal{S}(\tilde{k},\tilde{b})_{\mu\nu} =-\frac{2\varepsilon\cdot p_{1}\varepsilon\cdot\tilde{b}}{p_{1}\cdot\tilde{k}}+\frac{(\varepsilon\cdot p_{1})^{2}\tilde{k}\cdot\tilde{b}}{(p_{1}\cdot\tilde{k})^{2}}+\frac{2\varepsilon\cdot p_{2}\varepsilon\cdot\tilde{b}}{p_{2}\cdot\tilde{k}}-\frac{(\varepsilon\cdot p_{2})^{2}\tilde{k}\cdot\tilde{b}}{(p_{2}\cdot\tilde{k})^{2}}\,.\] Importantly, the angular dependence of the memory is completely encoded in the soft factor.
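Two ingredients of this memory formula are easy to check numerically. The sketch below is illustrative only: it assumes \(\epsilon^{0123}=+1\), the mostly-minus metric, and the COM kinematics of eq. (8.5) with sample numbers. It first builds the dual vector \(v^{\mu}\) of eq. (8.10) and verifies that it is orthogonal to \(u_{1}\), \(u_{2}\) and \(b\), and then evaluates the angular profile \(\varepsilon_{-}^{\mu}\varepsilon_{-}^{\nu}\mathcal{S}(\tilde{k},\tilde{b})_{\mu\nu}\) that enters eq. (8.13).

```python
# Illustrative check of two ingredients of eq. (8.13) (assumptions: eps^{0123}=+1,
# mostly-minus metric, COM kinematics of eq. (8.5) with sample numbers).
import numpy as np
from itertools import permutations

eta  = np.diag([1.0, -1.0, -1.0, -1.0])
mdot = lambda a, b: a @ eta @ b

# Levi-Civita tensor with upper indices, eps^{0123} = +1
lc = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    s = 1
    for i in range(4):
        for j in range(i + 1, 4):
            if p[i] > p[j]:
                s = -s
    lc[p] = s

m1, m2, v = 1.0, 1.0, 0.2
gamma     = 1/np.sqrt(1 - v**2)
u1 = np.array([gamma, 0, 0, -gamma*v]); u2 = np.array([gamma, 0, 0, gamma*v])
p1, p2 = m1*u1, m2*u2
b  = np.array([0.0, 2.5, 0.0, 0.0])
bt = b/np.sqrt(-mdot(b, b))

# v^mu = 4 eps^{mu nu rho si} u1_nu u2_rho b_si is orthogonal to u1, u2 and b
vv = 4*np.einsum('abcd,b,c,d->a', lc, eta @ u1, eta @ u2, eta @ b)
assert all(abs(mdot(vv, x)) < 1e-12 for x in (u1, u2, b))

# polarization eps_-(k) and \tilde k from the rotation of eqs. (115)-(116)
def R(th, ph):
    Ry = np.array([[1,0,0,0],[0,np.cos(th),0,np.sin(th)],[0,0,1,0],[0,-np.sin(th),0,np.cos(th)]])
    Rz = np.array([[1,0,0,0],[0,np.cos(ph),-np.sin(ph),0],[0,np.sin(ph),np.cos(ph),0],[0,0,0,1]])
    return Rz @ Ry

def soft_profile(th, ph):
    kt = R(th, ph) @ np.array([1.0, 0, 0, 1.0])
    em = R(th, ph) @ (-np.array([0, 1, 1j, 0])/np.sqrt(2))    # eps_-(k)
    return (-2*mdot(em, p1)*mdot(em, bt)/mdot(p1, kt)
            + mdot(em, p1)**2*mdot(kt, bt)/mdot(p1, kt)**2
            + 2*mdot(em, p2)*mdot(em, bt)/mdot(p2, kt)
            - mdot(em, p2)**2*mdot(kt, bt)/mdot(p2, kt)**2)

for th, ph in [(0.7*np.pi, 1.4*np.pi), (0.5*np.pi, 0.4*np.pi)]:
    # multiply by -kappa m1 m2 sqrt(sig^2-1) chi / (2 M sqrt(1+2 nu(sig-1))) per eq. (8.13)
    print((th, ph), soft_profile(th, ph))
```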
We now plug in the 1PM and 2PM scattering angles for GR [111; 112] and \(\mathcal{N}=8\) supergravity [106], \[\chi^{(1)}=\frac{\kappa^{2}\mathsf{M}}{\sqrt{-b^{2}}}\frac{\sqrt{1+2 \nu(\sigma-1)}(2\sigma^{2}-X)}{16\pi(\sigma^{2}-1)}\,,\] \[\chi^{(2)}=\left(\frac{\kappa^{2}\mathsf{M}}{\sqrt{-b^{2}}} \right)^{2}\frac{3\sqrt{1+2\nu(\sigma-1)}(5\sigma^{2}-1)X}{4096\pi(\sigma^{2} -1)}\,, \tag{110}\] where \(X=1\) for GR and \(X=0\) for \(\mathcal{N}=8\) supergravity. The final result for gravitational memory is \[\Delta(\hat{h}_{+}^{(1)}+i\hat{h}_{\times}^{(1)})=-\frac{m_{1}m_{2 }(2\sigma^{2}-X)}{32\pi\mathsf{M}^{2}\sqrt{\sigma^{2}-1}}\left(\varepsilon_{-} ^{\mu}\varepsilon_{-}^{\nu}\mathcal{S}(\tilde{k},\tilde{b})_{\mu\nu}\right)\,,\] \[\Delta(\hat{h}_{+}^{(2)}+i\hat{h}_{\times}^{(2)})=-\frac{3m_{1}m _{2}(5\sigma^{2}-1)X}{8192\pi\mathsf{M}^{2}\sqrt{\sigma^{2}-1}}\left( \varepsilon_{-}^{\mu}\varepsilon_{-}^{\nu}\mathcal{S}(\tilde{k},\tilde{b})_{ \mu\nu}\right)\,, \tag{111}\] where \(\hat{h}\) is defined in eq. (109). In particular, there are no memories for \(\mathcal{N}=8\) at 2PM. We note that the above relation between memory and the soft factor was first predicted in Ref. [113]. ### Leading order (LO) As discussed in eq. (108), the LO waveform is determined by the tree-level five-point amplitude. From eqs. (106) and (108), we get \[(h_{+}^{\infty}+ih_{\times}^{\infty})(\tau,\mathbf{n})\Big{|}_{\rm LO }=\int_{-\infty}^{+\infty}\frac{d\omega}{2\pi}e^{-i\omega\tau}\int\frac{ \mathrm{d}^{d}q_{1}}{(2\pi)^{d}}\frac{\mathrm{d}^{d}q_{2}}{(2\pi)^{d}}\hat{ \delta}(2p_{1}\cdot q_{1})\hat{\delta}(2p_{2}\cdot q_{2})e^{iq_{1}\cdot b}\] \[\quad\times\hat{\delta}^{(d)}(q_{1}+q_{2}-k)M_{5,\rm tree}^{\rm cl }(p_{1}p_{2}\to p_{1}-q_{1},p_{2}-q_{2},k^{--})\big{|}_{k=\omega(1,\mathbf{n})}\] \[=\int\frac{\mathrm{d}^{d}q_{1}}{(2\pi)^{d}}\frac{\mathrm{d}^{d}q_ {2}}{(2\pi)^{d}}\hat{\delta}(2p_{1}\cdot q_{1})\hat{\delta}(2p_{2}\cdot q_{2} )e^{iq_{1}\cdot b}\delta(\tau-q_{1}\cdot b)\] \[\quad\times\hat{\delta}^{(d)}(q_{1}+q_{2}-\tilde{k})M_{5,\rm tree }^{\rm cl}(p_{1}p_{2}\to p_{1}-q_{1},p_{2}-q_{2},\tilde{k}^{--})\,, \tag{112}\] where \(\tilde{k}=(1,\mathbf{n})\). After plugging in the explicit form of the tree amplitudes, eq. (109) for GR and eq. (108) for \(\mathcal{N}=8\) supergravity, we find that the contribution to the reduced LO waveform, defined in eq. 
(109), has the following unified form, \[(\hat{h}_{+}^{(1)}+i\hat{h}_{\times}^{(1)})(\tau,\mathbf{n})=-\frac{ m_{1}m_{2}}{16\mathsf{M}^{2}}\big{(}B_{1}\mathcal{J}_{1,0}^{0,0}+B_{2}\mathcal{J}_{1, 0}^{1,0}+B_{3}\mathcal{J}_{1,0}^{0,1}+B_{4}\mathcal{J}_{0,1}^{0,0}+B_{5} \mathcal{J}_{0,1}^{1,0}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad+B_{6}\mathcal{J}_{0,1 }^{0,1}+B_{7}\mathcal{J}_{1,1}^{0,0}+B_{8}\mathcal{J}_{1,1}^{1,0}+B_{9} \mathcal{J}_{1,1}^{2,0}\big{)}\,, \tag{113}\] where the integrals \(\mathcal{J}_{\beta_{1},\beta_{2}}^{\alpha_{1},\alpha_{2}}\) are defined as \[\mathcal{J}_{\beta_{1},\beta_{2}}^{\alpha_{1},\alpha_{2}}=\int\frac{\mathrm{d}^ {d}q_{1}}{(2\pi)^{d}}\hat{\delta}(u_{1}\cdot q_{1})\hat{\delta}(u_{2}\cdot(q_{ 1}-\tilde{k}))\delta(\tau/\sqrt{-b^{2}}-q_{1}\cdot\tilde{b})\frac{(\varepsilon _{-}\cdot q_{1})^{\alpha_{1}}(\tilde{k}\cdot q_{1})^{\alpha_{2}}}{(q_{1}^{2})^ {\beta_{1}}((q_{1}-\tilde{k})^{2})^{\beta_{2}}}\,, \tag{114}\] and the coefficients \(B_{1,\ldots,9}\) given by \[B_{1}=\frac{4\sigma(\varepsilon_{-}\cdot u_{2})}{(\tilde{k}\cdot u _{2})}(u_{1}\cdot\tilde{f}_{-}\cdot u_{2}) B_{2}=2(2\sigma^{2}-X)\frac{(\varepsilon_{-}\cdot u_{2})}{( \tilde{k}\cdot u_{2})} B_{3}=-(2\sigma^{2}-X)\frac{(\varepsilon_{-}\cdot u_{2})^{2}}{( \tilde{k}\cdot u_{2})^{2}}\] \[B_{4}=\frac{4\sigma(\varepsilon_{-}\cdot u_{1})}{(\tilde{k}\cdot u _{1})}(u_{2}\cdot\tilde{f}_{-}\cdot u_{1}) B_{5}=-2(2\sigma^{2}-X)\frac{(\varepsilon_{-}\cdot u_{1})}{( \tilde{k}\cdot u_{1})} B_{6}=(2\sigma^{2}-X)\frac{(\varepsilon_{-}\cdot u_{1})^{2}}{ (\tilde{k}\cdot u_{1})^{2}}\] \[B_{7}=4(u_{2}\cdot\tilde{f}_{-}\cdot u_{1})^{2} B_{8}=8\sigma(u_{1}\cdot\tilde{f}_{-}\cdot u_{2}) B_{9}=2(2\sigma^{2}-X). \tag{111}\] In these expressions, \(\tilde{f}_{-}^{\mu\nu}=\tilde{k}^{\mu}\varepsilon_{-}^{\nu}-\tilde{k}^{\nu} \varepsilon_{-}^{\mu}\), and \(X=1,0\) for GR and \(\mathcal{N}=8\) supergravity, respectively. With foresight on the integrals required by the evaluation of time-domain observables, it is convenient to follow the strategy of Ref. [43] and compute the \(\mathcal{J}_{\beta_{1},\beta_{2}}^{\alpha_{1},\alpha_{2}}\) integrals in \(d=4\) by decomposing the integration variable \(q_{1}\) along four orthogonal fixed vectors as in eq. (110). The integrals over \(z_{1}\), \(z_{2}\), and \(z_{b}\) are localized due to the three delta functions in eq. (111). This leaves the integral over \(z_{v}\) as the only nontrivial integral. The first six integrals in eq. (110) have been evaluated in Ref. [43]. Using this method, the last three integrals can be brought to a one-parameter integral over a finite range. The background at \(\tau\to-\infty\) is also subtracted. To demonstrate the final result at LO, we plot the evolution of the waveform at a particular location at the spatial infinity in figure 9. The GR and \(\mathcal{N}=8\) waveforms have the same qualitative features. The difference is approximately an overall scale. Here and after, all the plots are made with \(m_{1}=m_{2}\). ### Next-to-leading order (NLO) The NLO time-domain waveform follows from the \(L=1\) component of eq. (109). Here we further separate out the tail contribution from the IR finite one-loop amplitude (104), \[M_{5,1\ {\rm loop}}^{0,2{\rm MPI},\ {\rm cl}} =\overline{M}_{5,1\ {\rm loop}}^{0,2{\rm MPI},\ {\rm cl}}+M_{\rm tail}\,,\] \[M_{\rm tail} =\frac{i\,\kappa^{2}}{64\pi}(k\cdot p_{1}+k\cdot p_{2})\left[\log \frac{16\pi^{2}(k\cdot u_{1})^{2}(k\cdot u_{2})^{2}}{\Lambda^{4}}\right]M_{5, {\rm tree}}^{\rm cl}\,. 
\tag{112}\] Figure 9: The evolution of the LO waveform at a particular location at the spatial infinity. We work in the COM frame with \(v=1/5\) and \(m_{1}=m_{2}\). We first compute the contribution from \(\overline{M}^{0,\text{2MPI, cl}}_{5,1\text{ loop}}\), which can be written as \[(h_{+}^{\infty}+ih_{\times}^{\infty})(t,\mathbf{n})\Big{|}_{\text{NLO}}=\mathcal{H}_ {R}+i\,\mathcal{H}_{I}+(h_{+}^{\infty}+ih_{\times}^{\infty})(t,\mathbf{n})\Big{|}_ {\text{NLO}}^{\text{tail}}\,, \tag{112}\] where \(\mathcal{H}_{R,I}\) are obtained by combining eqs. (109), (110) and (111): \[\mathcal{H}_{R} =-\frac{1}{2\pi}\int\frac{\text{d}^{d}q_{1}}{(2\pi)^{d}}\hat{ \delta}(2p_{1}\cdot q_{1})\hat{\delta}\big{(}2p_{2}\cdot(q_{1}-\tilde{k})\big{)} \,\varepsilon_{-}^{\mu}\varepsilon_{-}^{\nu}\text{Re}\left[\overline{M}^{0, \text{2MPI, cl}}_{5,1\text{ loop}}\right]_{\mu\nu}\] \[\qquad\qquad\qquad\qquad\times\left[\frac{1}{(\tau-q_{1}\cdot b+ i0)^{2}}+\frac{1}{(\tau-q_{1}\cdot b-i0)^{2}}\right], \tag{113}\] \[\mathcal{H}_{I} =-\frac{1}{2\pi}\int\frac{\text{d}^{d}q_{1}}{(2\pi)^{d}}\hat{ \delta}(2p_{1}\cdot q_{1})\hat{\delta}\big{(}2p_{2}\cdot(q_{1}-\tilde{k}) \big{)}\,\varepsilon_{-}^{\mu}\varepsilon_{-}^{\nu}\text{Im}\left[\overline{M} ^{0,\text{2MPI, cl}}_{5,1\text{ loop}}\right]_{\mu\nu}\] \[\qquad\qquad\qquad\qquad\times\left[\frac{1}{(\tau-q_{1}\cdot b+ i0)^{2}}-\frac{1}{(\tau-q_{1}\cdot b-i0)^{2}}\right].\] We note that \(\mathcal{H}_{R}\) and \(\mathcal{H}_{I}\) do not correspond to the real and imaginary part of the final waveform, because the polarization vectors in these expressions are still complex. The tail contribution will be computed later in this section. As at leading order, we compute the integral over \(q_{1}\) in \(d=4\) by decomposing this variable in the basis of 4d vectors in eq. (105). For both GR and \(\mathcal{N}=8\) supergravity, it is easier to compute \(\mathcal{H}_{I}\) due to the presence of \[\frac{1}{(\tau-q_{1}\cdot b+i0)^{2}}-\frac{1}{(\tau-q_{1}\cdot b-i0)^{2}}=2\pi i \delta^{\prime}(\tau-q_{1}\cdot b)\,. \tag{114}\] Thus we can use the three delta functions in \(\mathcal{H}_{I}\) to localize the integrals over the \(z_{1}\), \(z_{2}\) and \(z_{b}\) variables. The integrand is now an algebraic function of \(z_{v}\) parametrized by \(\tau\). The resulting integral over \(z_{v}\) is convergent and performed numerically. The background at \(\tau\to-\infty\) is also subtracted at the end. In contrast, there are only two delta functions in \(\mathcal{H}_{R}\), which localize \(z_{1}\) and \(z_{2}\). For \(\mathcal{N}=8\) supergravity, the integrand is a rational function in \(z_{b}\) and \(z_{v}\) according to eq. (102). Therefore, we can integrate \(z_{b}\) using the residue theorem, resulting in an algebraic function of \(z_{v}\), which will be integrated numerically. For \(\mathcal{N}=8\) supergravity, the subtraction of background at \(\tau\to-\infty\) can be done before or after the final \(z_{v}\) integral since it does not affect the convergence. However, for GR, after localizing \(z_{1}\) and \(z_{2}\), the integrand contains square roots according to eq. (101). For this case, we still first integrate over \(z_{b}\). 
After some rescaling, the integral has the following generic form, \[\int_{-\infty}^{\infty}\text{d}z_{b}\frac{f(z_{b})}{(z_{b}^{2}+1)^{1/2}}=\int_ {0}^{\infty}\text{d}w\frac{f(\frac{w^{2}-1}{2w})}{w}=-\sum_{w_{i}}\text{Res}_ {w=w_{i}}\frac{f(\frac{w^{2}-1}{2w})\log w}{w}\,, \tag{115}\] where \(f(z_{b})\) is a rational function, and we have changed the variable to \(z_{b}=\frac{w^{2}-1}{2w}\). Since the original \(z_{b}\) integral is convergent, there are no poles at \(w=0\) and \(w=\infty\) after the change of variable. We can evaluate this integral by summing over all the residues, which leads to the final answer in eq. (110). Now if we subtract the background at \(\tau=-\infty\), the final \(z_{v}\) integral is convergent and we integrate it numerically. Unlike the previous cases, the background subtraction needs to be done before the \(z_{v}\) integrals because the integral is divergent otherwise. Alternatively, if one keeps \(d=4-2\epsilon\), the integration and background subtraction can be done in any order, the difference being at most some local \(\delta(|\mathbf{b}|)\) terms in the waveform and memory. We plot the NLO waveforms in GR observed at several angles at the spatial infinity in figure 10. In this figure, we fix the COM velocity of the two black holes to be \(v=1/5\). Then, in figure 11, we fix the observation angle to be \((\theta,\phi)=(\frac{7\pi}{10},\frac{7\pi}{5})\) and vary the velocities of the black holes. At low velocities, the waveforms have a much larger magnitude and are more spread out in the time domain; we may understand this intuitively by noting that at lower velocities, the two particles have a smaller minimal separation, so they experience larger accelerations and therefore radiate more. We also note that at lower velocities, we are closer to the boundary of the PM regime, which requires that \(Jv^{2}\) be sufficiently large. Finally, for comparison, we include in figures 12 and 13 the NLO waveforms in \(\mathcal{N}=8\) supergravity. As expected, the \(\mathcal{N}=8\) waveforms do not display any memory effect at NLO. At low velocities, for example, \(v=1/20\), the imaginary part Figure 11: The dependence of the NLO waveform in GR on the COM velocities at the fixed angle \(\theta=7\pi/10\) and \(\phi=7\pi/5\). Figure 10: The NLO waveform in GR with COM velocity \(v=1/5\). at \(\tau\to\infty\); this is merely an artifact of our initial and final times not being infinite. We note that the waveform profile is spread over a larger time interval and that they have a larger amplitude than the GR waveform. A possible explanation is that fields of the \(\mathcal{N}=8\) supergravity yield an additional net attractive force increasing the acceleration experienced by the two particles relative to GR. We remark that at certain angles, our GR waveform displays features which, for lack of a better name, we refer to as "kinks." For example, such kinks can be seen in figure 10 for \(\hat{h}_{+}^{(2)}\) at \((\theta,\phi)=(\frac{7\pi}{10},\frac{7\pi}{5})\) and \((\frac{7\pi}{10},\frac{2\pi}{5})\). At these locations, several terms in the integrand become sharply peaked (but finite), and the resulting large values of the integral cancel over many orders of magnitude. Our numerical evaluation further suggests that more prominent kinks exist at some other angles, such as \((\theta,\phi)=(\frac{\pi}{2},0)\), which corresponds to an observation direction parallel to \(\mathbf{b}\) in the plane of the two incoming particles. 
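Before commenting further on these features, two of the numerical manipulations used above are easy to test in isolation. The sketch below is purely illustrative (the Gaussian test function and the rational \(f(z_{b})\) are made up, not the actual integrands): it checks the distributional identity (8.24) against a smooth test function and the change of variable and residue formula (8.25) for a simple rational function.

```python
# Illustrative checks of eq. (8.24) (delta' identity) and eq. (8.25) (z_b -> w
# change of variable and residue sum); the test functions below are made up.
import numpy as np
import sympy as sp
from scipy.integrate import quad

# --- eq. (8.24): 1/(y+i0)^2 - 1/(y-i0)^2 -> 2 pi i delta'(y), with y = tau - x ---
tau, d = 0.3, 1e-3
g  = lambda x: np.exp(-(x - 1.0)**2)                 # smooth test function
dg = lambda x: -2*(x - 1.0)*g(x)
kern = lambda x: 1/(tau - x + 1j*d)**2 - 1/(tau - x - 1j*d)**2   # purely imaginary
im = quad(lambda x: (g(x)*kern(x)).imag, tau - 5, tau + 5, points=[tau], limit=400)[0]
print(1j*im, 2j*np.pi*dg(tau))                       # agree up to O(d^2)

# --- eq. (8.25): direct integral, w-form, and residue sum --------------------------
f_num = lambda zb: 1/(zb**2 + 4)
lhs = quad(lambda zb: f_num(zb)/np.sqrt(zb**2 + 1), -np.inf, np.inf)[0]
mid = quad(lambda w: f_num((w**2 - 1)/(2*w))/w, 0, np.inf)[0]

w   = sp.symbols('w')
zbw = (w**2 - 1)/(2*w)
gW  = sp.cancel((1/(zbw**2 + 4))/w)                  # f(z_b(w))/w as a rational function
N, D = sp.fraction(gW)
Lg  = sp.log(-w) + sp.I*sp.pi                        # log with arg in (0, 2*pi)
res_sum = -sum((N*Lg/sp.diff(D, w)).subs(w, p) for p in sp.solve(D, w))
print(lhs, mid, complex(res_sum.evalf()))            # all three agree
```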
It is unclear whether these kink features are merely numerical artifacts due to cancellation between large numbers, or whether they are physical. A definitive answer can be provided by an analytic evaluation of the waveform, which we leave for future work. We now turn to the evaluation of the gravitational-wave tails. To this end we plug \(M_{\rm tail}\) of eq. (8.20) into \(\mathcal{I}_{L}\) of eq. (3.55). As discussed in sec. 3.4, to evaluate the Fourier transform to the time domain in the presence of the logarithmic dependence on \(\omega\) we simply differentiate eq. (3.55) with respect to \(n\). Thus, \[(h_{+}^{\infty}+ih_{\times}^{\infty})(t,\mathbf{n})\Big{|}_{\text{NLO}}^{\text{tail}} =\frac{i\,\kappa^{2}}{32\pi}\int\frac{\text{d}^{d}q_{1}}{(2\pi)^{d}}\hat{\delta}(2p_{1}\cdot q_{1})\hat{\delta}\big{(}2p_{2}\cdot(q_{1}-\tilde{k})\big{)}(\tilde{k}\cdot p_{1}+\tilde{k}\cdot p_{2})M_{5,\text{tree}}^{\text{cl.}}\] \[\quad\times\int_{-\infty}^{+\infty}\frac{\text{d}\omega}{2\pi}e^{-i\omega(\tau-q_{1}\cdot b)}\omega\log\frac{4\pi\omega^{2}(u_{1}\cdot\tilde{k})(u_{2}\cdot\tilde{k})}{\Lambda^{2}} \tag{8.26}\] \[=\frac{i\,\kappa^{2}}{16\pi}\int\frac{\text{d}^{d}q_{1}}{(2\pi)^{d}}\hat{\delta}(2p_{1}\cdot q_{1})\hat{\delta}\big{(}2p_{2}\cdot(q_{1}-\tilde{k})\big{)}(\tilde{k}\cdot p_{1}+\tilde{k}\cdot p_{2})M_{5,\text{tree}}^{\text{cl.}}\] \[\quad\times\left[\frac{\gamma_{\text{E}}-1+\log\frac{i(\tau-q_{1}\cdot b-i0)\Lambda}{\sqrt{4\pi(u_{1}\cdot\tilde{k})(u_{2}\cdot\tilde{k})}}}{(\tau-q_{1}\cdot b-i0)^{2}}-\frac{\gamma_{\text{E}}-1+\log\frac{-i(\tau-q_{1}\cdot b+i0)\Lambda}{\sqrt{4\pi(u_{1}\cdot\tilde{k})(u_{2}\cdot\tilde{k})}}}{(\tau-q_{1}\cdot b+i0)^{2}}\right]\;.\] Since the integral is finite, we compute it directly in four dimensions. To this end we parametrize \(q_{1}\) as in (8.10) and evaluate the \(z_{1,2}\) integrals using the explicit \(\delta\)-functions. Unlike eqs. (8.23) and (8.24), the logarithmic dependence on \(\omega\) prevents the appearance of a third \(\delta\) function. Since the integrand does not exhibit branch cuts in \(z_{v}\), we evaluate this integral analytically using Cauchy's residue theorem. The last integral, over \(z_{b}\), is evaluated numerically. To this end we also need to choose a value for the cutoff \(\Lambda\) defining the infrared soft virtual gravitons.20 It is required to be well inside the soft region defining the classical limit, \(\Lambda\ll|\mathbf{q}|\). We may therefore set a dimensionless bound on the product of \(\Lambda\) and the impact parameter \(\mathbf{b}\), which, being the Fourier conjugate of the momentum transfer, is \(\mathcal{O}(|\mathbf{q}|^{-1})\). We will choose \(\Lambda|\mathbf{b}|=5\times 10^{-10}\). Since in the classical limit \(|\mathbf{q}|\) and the frequency \(\omega\) of the outgoing graviton are of the same order, we may also relate \(\Lambda\) and the lowest frequency accessible to a detector. The results for GR and \(\mathcal{N}=8\) supergravity are plotted in figures 14 to 17, and we have chosen the same observation angles and velocities as before. Figure 14: The tail contribution to the NLO waveform in GR with the COM velocity \(v=1/5\) and IR cutoff \(\Lambda|\mathbf{b}|=5\times 10^{-10}\). Figure 16: The tail contribution to the NLO waveform in \(\mathcal{N}=8\) supergravity with the COM velocity \(v=1/5\) and IR cutoff \(\Lambda|\mathbf{b}|=5\times 10^{-10}\).
Figure 17: The dependence of the tail contribution to the NLO waveform in \(\mathcal{N}=8\) supergravity on the COM velocity at the fixed angle \(\theta=7\pi/10\) and \(\phi=7\pi/5\), and IR cutoff \(\Lambda|\mathbf{b}|=5\times 10^{-10}\). Figure 15: The dependence of the tail contribution to the NLO waveform in GR on the COM velocity at the fixed angle \(\theta=7\pi/10\) and \(\phi=7\pi/5\), and IR cutoff \(\Lambda|\mathbf{b}|=5\times 10^{-10}\). We note the absence of a memory contribution from the gravitational-wave tail, in agreement with the vanishing soft limit of the imaginary part of the one-loop five-point amplitude, as well as the close similarity of the velocity dependence of the tail and of the \(\overline{M}^{0,\text{2MPI, cl}}_{5,1\text{ loop}}\) contributions to the two gravitational-wave polarizations at fixed angle. In both contributions a higher amplitude of the wave corresponds to lower-velocity scattering, in agreement with the intuition that for fixed impact parameter lower-velocity particles experience a larger acceleration. Moreover, the angular dependence of the tail and of the \(\overline{M}^{0,\text{2MPI, cl}}_{5,1\text{ loop}}\) contributions to \(\hat{h}^{(2)}_{+}\) are also similar in shape and velocity dependence. In contrast, their contributions to \(\hat{h}^{(2)}_{\times}\) exhibit quite different shapes, cf. e.g. the right panels of figures 11 and 15. While the gravitational-wave tail contributions may be changed somewhat by varying the cutoff \(\Lambda\), a difference between \((\theta,\phi)<(\pi/2,\pi)\) and \((\theta,\phi)>(\pi/2,\pi)\) persists. It is tempting to speculate that such a difference might offer an observable signature distinguishing the gravitational-wave tail from the local-in-time effects.

## 9 Conclusions

The observable-based formalism [42; 43] directly links scattering amplitudes and observables of unbound binary systems in the classical regime. This framework naturally includes both conservative and dissipative effects, providing a means of finding local observables such as the asymptotic gravitational waveform for a scattering event in addition to inclusive ones such as the impulse or the energy and angular momentum loss. With the possible exception of singular momentum configurations, they are governed by four- and five-point amplitudes [97]. We examined in detail this connection for the scattering waveform and found that, beyond leading order, the exponential form of amplitudes proposed in Refs. [60; 61] implies that only (the suitably-defined) two-matter-particle-irreducible components of amplitudes can contribute to this observable. We computed these parts of the four-scalar-one-graviton amplitude at one-loop order in GR with minimally-coupled scalars and in \(\mathcal{N}=8\) supergravity. We obtained the former using generalized unitarity, both by evaluating the classical part of the complete amplitude and by using a HEFT approach which effectively corresponds to truncating the tree amplitudes to classical order. Comparing the two results and requiring that they agree identifies a prescription for the uncut matter propagators: they are to be treated as principal-valued. We obtained the corresponding integrand in \(\mathcal{N}=8\) supergravity, which describes the emission of gravitational radiation from the scattering of two half-BPS black holes, by using the double copy of the corresponding integrand in \(\mathcal{N}=4\) super-Yang-Mills theory and dimensional reduction.
Upon reduction to master integrals, the five-point classical amplitudes exhibit novel features compared to their four-point counterparts. For example, the outgoing graviton momentum injects a scale in some of the scaleless (and thus vanishing) integrals of the four-point amplitude, implying that they can contribute to the five-point classical amplitude. The one-loop master integrals are sufficiently simple that we evaluated them by direct integration for physical kinematics. A distinguishing feature of the (2MPI part of the) classical five-point amplitude is that it exhibits an infrared-divergent phase. The structure of this phase divergence is governed by Weinberg's classic result [68] and the classical nature of the amplitude; we argue that at this order - and perhaps to all orders in perturbation theory - it can be absorbed as a suitable shift in the definition of the retarded time. Consequently, the observation of the gravitational waveform is sensitive only to differences of retarded times rather than to the absolute time of the process. This is consistent with the basic assumption of scattering theory that observations are carried out at infinite times. We also gave a precise relation between the gravitational-wave memory and the soft limit of the corresponding scattering amplitude, along the lines of Ref. [113], relating the amplitude of the memory to the scattering angle to at least \(\mathcal{O}(G^{3})\). Interestingly, this implies that the gravitational-wave memory vanishes at \(\mathcal{O}(G^{2})\) for half-BPS black holes in \(\mathcal{N}=8\) supergravity, which is confirmed by the explicit calculation. Even though our results are analytic in frequency and momentum space and in time-momentum space, we resorted to numerical integration to evaluate the one-dimensional integral completing the transformation to time-impact parameter space. A complete analytic evaluation of the waveform, which we leave for future work, would involve obtaining an analytic expression for this last one-dimensional integral. The result would resolve certain numerical features that we encountered at special observation angles and would open the door to the evaluation of interesting inclusive and local observables, such as the energy and angular momentum loss at \(\mathcal{O}(G^{4})\) and the radiated energy spectrum \(E(\omega)\) through traditional general relativity methods [114; 115; 116; 117]. The classical one-loop five-point amplitude constructed here allows the computation of inclusive quantities at 4PM order through the KMOC formalism, along the lines of the 3PM calculations of Refs. [29; 33; 86]. While the next-to-leading order is the current state of the art for relativistic scattering waveforms, it is natural to think about higher orders. The experience with \(\mathcal{N}\)-extended supergravity loop calculations [118; 119; 120; 121; 122; 123] suggests that the construction of the relevant classical integrand through double copy and generalized unitarity should scale well to higher orders and that the results will provide tests of the HEFT approach to classical amplitudes beyond those already available. We expect, however, that the evaluation of the resulting master integrals will benefit from advanced techniques such as differential equations and the method of regions, as was done for the full quantum two-loop five-point amplitude of \(\mathcal{N}=8\) supergravity in Ref. [124]. 
Going to higher order could also help us better understand radiation-reaction, which already appears at one-loop [70], and other physical phenomena that could appear. The current approach to analytic and semi-analytic binary inspiral waveforms makes use of the effective one-body formalism [2; 4], whose input is an off-shell binary Hamiltonian and radiation reaction forces. It would be interesting to identify features of gravitational waveforms that can be analytically continued between bound and unbound systems, in analogy with inclusive observables discussed in Refs. [38; 39; 41]. We may expect that a detailed understanding of the analytic structure of the waveforms, both in the bound and unbound case, will be important. ###### Acknowledgements. We thank A. Elkhidir, D. O'Connell, M. Sergola, I. Vazquez-Holm, and A. Brandhuber, G. Brown, G. Chen, S. De Angelis, J. Gowdy, G. Travaglini for coordination on ongoing work. We also thank N. Arkani-Hamed, R. Akhoury, J. Berman, R. Britto, L. Dixon, H. Elvang, S. Mizera and C.-H. Shen for stimulating discussions. RR and FT also thank Z. Bern, E. Herrmann, J. Parra-Martinez, M. Ruf, C.-H. Shen, and M. Zeng for collaboration on related topics, and especially M. Ruf for providing valuable comments on our draft. RR and FT would like to thank the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara, for hospitality during the program "High-Precision Gravitational Waves". Their research there was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958. AH is supported by a Rackham Predoctoral Fellowship from the University of Michigan. RR and FT are supported by the U.S. Department of Energy (DOE) under award number DE-SC00019066. ## Appendix A Self-consistency of HEFT Consistency of the HEFT construction of the classical part of amplitudes requires that the 2MPR contributions are always super-classical and then subtracted in the KMOC formalism. We will show that, should 2MPR terms exist at the classical order, all such contributions require quantum information about the HEFT tree amplitudes used in the unitarity construction. To this end, we will use the \(\hbar\) counting as introduced in eq. (2). Consider a cut of a four-scalar amplitude containing \(k\) HEFT tree amplitudes, each with \(\tilde{n}_{g,i}\) internal gravitons \(n_{g,i}\) external gravitons, and \(n_{\phi,i}\) scalars; call \(\tilde{N}_{g}=\frac{1}{2}\sum_{i=1}^{k}\tilde{n}_{g,i}\) and \(n_{g}=\sum_{i=1}^{k}n_{g,i}\) respectively the total number of internal and external gravitons, and \(N_{\phi}=\frac{1}{2}\sum_{i=1}^{k}n_{\phi,i}-2\) the total number of internal scalars. Restoring the integration measure and the matter and graviton propagators, this cut scales as \[\underbrace{\left[\prod_{i=1}^{k}\hbar^{-3(\frac{1}{2}n_{\phi,i}-1)-\frac{1}{ 2}\tilde{n}_{g,i}-\frac{1}{2}n_{g,i}}\right]}_{\text{HEFT tree amplitudes}}\underbrace{\hbar^{-2\tilde{N}_{g}-N_{\phi}}\hbar^{4L}}_{ \text{propagators and measures}}=\hbar^{-3-\frac{1}{2}n_{g}}\hbar^{\tilde{N}_{g}+1-k}\,, \tag{104}\] where we have used eq. (3) and the loop number \(L=\tilde{N}_{g}+N_{\phi}-k+1\). This cut scales classically if \[\tilde{N}_{g}+1-k=0. \tag{105}\] Consistency requires that if we cut any line the result should still scale classically. Let us consider cutting a pair of matter lines. The resulting eight-scalar amplitude scales as \[\hbar^{-9-\frac{1}{2}n_{g}}\hbar^{\tilde{N}_{g}+1-k}\hbar^{8-a}\,. 
\tag{106}\] Here we have isolated the classical scaling \(\hbar^{-9}\) of an eight-scalar amplitude. If the graph remains connected, then we remove two matter propagators and two integral measures from eq. (104), which leads to \(a=8\). On the other hand, if the graph is now disconnected, then we only need to remove two matter propagators and one integral measure instead, which leads to \(a=4\). Thus, if \(a=8\) - i.e. if the cut of the eight-scalar amplitude is connected - then it has classical scaling upon use of eq. (100) and the truncation to classical order is iteratively consistent. In contrast, if \(a=4\) - i.e. if the cut of the eight-scalar amplitude is disconnected - the cut is suppressed by \(\hbar^{4}\), which means that the disconnected components contain quantum contributions. Repeating the calculation above for any pair of matter lines, it follows that any 2MPR contribution with a classical scaling requires quantum information about some amplitude contributing to it. Since the HEFT approach prescribes that tree amplitudes be truncated to classical orders, the \(\hbar\) scaling of 2MPR diagrams and cuts with classical vertices is super-classical, with one factor of \(\hbar^{-1}\) for each present two-matter-particle cut. On the other hand, if we explicitly include some quantum operators in the Lagrangian and keep higher-order terms in the HEFT tree amplitudes, we will get classical 2MPR contributions. However, they will be consistently subtracted in the KMOC formalism. Thus, only 2MPI cuts - and consequently only 2PMI diagrams - should contribute to classical scattering observables to any loop order. ## Appendix B Evaluating box master integrals In this appendix, we derive all the box master integrals listed in section 6.2. We will repeatedly use the bubble integral with a cut matter propagator and a generic propagator in \(d=4-2\epsilon\), \[\int\mathrm{d}^{4-2\epsilon}\ell\frac{\delta(\bar{u}_{2}\cdot\ell)}{(\ell-Q_{ 1})^{2}-m^{2}+i0}=-\frac{\pi^{(3-2\epsilon)/2}\Gamma(\frac{2\epsilon-1}{2})}{ [-(\bar{u}_{2}\cdot Q_{1})^{2}+m^{2}-i0]^{(2\epsilon-1)/2}}\,, \tag{101}\] where \(Q_{1}\) is a generic vector. ### \(I_{1,1,1,0}\) We first compute the box integral with three graviton propagators, \[I_{1,1,1,0}=\int\frac{\mathrm{d}^{d}\ell}{(2\pi)^{d}}\frac{\hat{\delta}(2\bar {u}_{2}\cdot\ell)}{\ell^{2}(\ell+q_{2})^{2}[(\ell-q_{1})^{2}+i0]}=\int_{0}^{1} \mathrm{d}x\int\frac{\mathrm{d}^{d}\ell}{(2\pi)^{d}}\frac{\hat{\delta}(2\bar {u}_{2}\cdot\ell)}{\ell^{2}[(\ell-q_{1}+xk_{5})^{2}+i0]^{2}}\,, \tag{102}\] where we have introduced a Feynman parameter to combine the two massless propagators \((\ell+q_{2})^{2}\) and \((\ell-q_{1})^{2}\) into a single propagator. 
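For orientation, the combination above presumably relies on the standard Feynman-parameter identity \(\frac{1}{AB}=\int_{0}^{1}\mathrm{d}x\,[xA+(1-x)B]^{-2}\) with \(A=(\ell+q_{2})^{2}\) and \(B=(\ell-q_{1})^{2}\). Assuming, as the notation suggests, that \(k_{5}=q_{1}+q_{2}\) is the on-shell (massless) outgoing graviton momentum, one finds \[x(\ell+q_{2})^{2}+(1-x)(\ell-q_{1})^{2}=(\ell-q_{1}+xk_{5})^{2}+x(1-x)k_{5}^{2}=(\ell-q_{1}+xk_{5})^{2}\,,\] which is exactly the combined propagator used above; the identification of \(k_{5}\) and its masslessness are our reading of the notation rather than statements taken from the main text.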
We next use an IBP identity to reduce the double propagator \([(\ell-q_{1}+xk_{5})^{2}+i0]^{2}\), \[\int_{0}^{1}{\rm d}x\int\frac{{\rm d}^{d}\ell}{(2\pi)^{d}}\frac{ \hat{\delta}(2\bar{u}_{2}\cdot\ell)}{\ell^{2}[(\ell-q_{1}+xk_{5})^{2}+i0]^{2}}\] \[=-\frac{d-3}{2(\bar{u}_{2}\cdot q_{1})^{2}}\int_{0}^{1}{\rm d}x \frac{1}{(1-x)^{2}[(1-x)q_{1}^{2}+xq_{2}^{2}]}\int\frac{{\rm d}^{d}\ell}{(2 \pi)^{d}}\frac{\hat{\delta}(2\bar{u}_{2}\cdot\ell)}{(\ell-q_{1}+xk_{5})^{2}+i0}\] \[\quad-(d-4)\int_{0}^{1}{\rm d}x\frac{1}{(1-x)q_{1}^{2}+xq_{2}^{2} }\int\frac{{\rm d}^{d}\ell}{(2\pi)^{d}}\frac{\hat{\delta}(2\bar{u}_{2}\cdot \ell)}{\ell^{2}[(\ell-q_{1}+xk_{5})^{2}+i0]}\,.\] (B.3) The integral in the last line is finite in \(d=4\),22 Footnote 22: One can show this by using \[\int{\rm d}^{4}\ell\frac{\delta(\bar{u}_{2}\cdot\ell)}{\ell^{2}[(\ell-Q_{1})^{ 2}-m^{2}]}=\frac{2\pi^{2}}{\sqrt{(\bar{u}_{2}\cdot Q_{1})^{2}-m^{2}-Q_{1}^{2}} }\arcsin\sqrt{1+\frac{(\bar{u}_{2}\cdot Q_{1})^{2}-m^{2}}{-Q_{1}^{2}}}\] for the \(\ell\) integral. The remaining \(x\) integral is convergent. such that the \((d-4)\) prefactor makes this term only contribute to \({\cal O}(\epsilon)\) order. Thus both the IR divergence and the finite part are given by the first term of eq. (B.3), \[I_{1,1,1,0} =-\frac{d-3}{2(\bar{u}_{2}\cdot q_{1})^{2}}\int_{0}^{1}{\rm d}x \frac{1}{(1-x)^{2}[(1-x)q_{1}^{2}+xq_{2}^{2}]}\int\frac{{\rm d}^{d}\ell}{(2 \pi)^{d}}\frac{\hat{\delta}(2\bar{u}_{2}\cdot\ell)}{(\ell-q_{1}+xk_{5})^{2}}+ {\cal O}(\epsilon)\] \[=\frac{\pi^{5/2-\epsilon}\Gamma(\epsilon+1/2)}{(2\pi)^{4-2\epsilon }[-(\bar{u}_{2}\cdot q_{1})^{2}-i0]^{1/2+\epsilon}}\int_{0}^{1}\frac{{\rm d}x }{(1-x)^{1+2\epsilon}[(1-x)q_{1}^{2}+xq_{2}^{2}]}+{\cal O}(\epsilon)\] (B.4) \[=\frac{1}{32\pi q_{2}^{2}\sqrt{-(\bar{u}_{2}\cdot q_{1})^{2}-i0}} \left[-\frac{1}{\epsilon_{\rm IR}}+\log(-4\pi(\bar{u}_{2}\cdot q_{1})^{2}-i0) +2\log\frac{q_{2}^{2}}{q_{1}^{2}}\right]+{\cal O}(\epsilon)\,.\] The \(i0\) prescription suggests the analytic continuation \(\log(-x-i0)=\log(x)-i\pi\) for \(x>0\) and \(\sqrt{-(\bar{u}_{2}\cdot q_{1})^{2}-i0}=-i(\bar{u}_{2}\cdot q_{1})\), which gives \[I_{1,1,1,0}=\frac{1}{32q_{2}^{2}(\bar{u}_{2}\cdot q_{1})}+\frac{i}{32\pi q_{2} ^{2}(\bar{u}_{2}\cdot q_{1})}\left[-\frac{1}{\epsilon_{\rm IR}}+\log(4\pi( \bar{u}_{2}\cdot q_{1})^{2})+2\log\frac{q_{2}^{2}}{q_{1}^{2}}\right]+{\cal O}( \epsilon)\,.\] (B.5) The result agrees with eq. (6.17) in the main text. ### \(I_{1,1,0,1}^{\pm}\) We next consider the box integrals with one uncut matter propagator. We start with \[I_{1,1,0,1}^{+} =\int\frac{{\rm d}^{d}\ell}{(2\pi)^{d}}\frac{\hat{\delta}(2\bar{u }_{2}\cdot\ell)}{\ell^{2}(\ell+q_{2})^{2}(2\bar{u}_{1}\cdot\ell+i0)}\] \[=\int_{0}^{\infty}{\rm d}x\int\frac{{\rm d}^{d}\ell}{(2\pi)^{d}} \frac{\hat{\delta}(2\bar{u}_{2}\cdot\ell)}{(\ell+q_{2})^{2}[(\ell+x\bar{u}_{1} )^{2}-x^{2}+i0]^{2}}\,,\] (B.6) where we have used the eq. (6.9) to combine \(\ell^{2}\) and \(2\bar{u}_{1}\cdot\ell+i0\). 
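Eq. (6.9) is not reproduced in this appendix; presumably it is the standard parametrization combining a quadratic and a linear propagator, \[\frac{1}{AB}=\int_{0}^{\infty}\frac{\mathrm{d}x}{(A+xB)^{2}}\,,\] valid with the appropriate \(i0\) prescriptions. With \(A=\ell^{2}\), \(B=2\bar{u}_{1}\cdot\ell+i0\) and \(\bar{u}_{1}^{2}=1\) one has \(\ell^{2}+2x\,\bar{u}_{1}\cdot\ell=(\ell+x\bar{u}_{1})^{2}-x^{2}\), which reproduces the squared propagator in the last line above; this reading of eq. (6.9) is our assumption.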
The IBP identity can reduce the double propagator \[\int_{0}^{\infty}\mathrm{d}x\int\frac{\mathrm{d}^{d}\ell}{(2\pi)^{ d}}\frac{\hat{\delta}(2\bar{u}_{2}\cdot\ell)}{(\ell+q_{2})^{2}[(\ell+x\bar{u}_{1})^{2 }-x^{2}+i0]^{2}}\] \[=\frac{d-3}{2(y^{2}-1)}\int_{0}^{\infty}\mathrm{d}x\frac{1}{x^{2} (2x\bar{u}_{1}\cdot q_{2}-q_{2}^{2}-i0)}\int\frac{\mathrm{d}^{d}\ell}{(2\pi)^{ d}}\frac{\hat{\delta}(2\bar{u}_{2}\cdot\ell)}{(\ell+x\bar{u}_{1})^{2}-x^{2}+i0}\] \[\quad+(d-4)\int_{0}^{\infty}\mathrm{d}x\frac{1}{2x\bar{u}_{1} \cdot q_{2}-q_{2}^{2}}\int\frac{\mathrm{d}^{d}\ell}{(2\pi)^{d}}\frac{\hat{ \delta}(2\bar{u}_{2}\cdot\ell)}{\ell^{2}[(\ell-q_{2}+x\bar{u}_{1})^{2}-x^{2}+i 0]}\,.\] (B.7) The integral proportional to \(d-4\) contributes to \(\mathcal{O}(\epsilon)\) order, for a reason similar to the discussion below eq. (B.3). Now using eq. (B.1) for the bubble integral in the second line of eq. (B.7), we get the IR divergence and the finite part of this integral, \[I^{+}_{1,1,0,1} =\frac{d-3}{2(y^{2}-1)}\int_{0}^{\infty}\mathrm{d}x\frac{1}{x^{2} (2x\bar{u}_{1}\cdot q_{2}-q_{2}^{2}-i0)}\int\frac{\mathrm{d}^{d}\ell}{(2\pi)^{ d}}\frac{\hat{\delta}(2\bar{u}_{2}\cdot\ell)}{(\ell+x\bar{u}_{1})^{2}-x^{2}+i0}+ \mathcal{O}(\epsilon)\] \[=-\frac{\pi^{5/2-\epsilon}\Gamma(\epsilon+1/2)}{(2\pi)^{4-2 \epsilon}[-(y^{2}-1)-i0]^{1/2+\epsilon}}\int_{0}^{\infty}\frac{\mathrm{d}x}{x^ {1+2\epsilon}(2x\bar{u}_{1}\cdot q_{2}-q_{2}^{2}-i0)}+\mathcal{O}(\epsilon)\] \[=\frac{i}{32\pi q_{2}^{2}\sqrt{y^{2}-1}}\left[-\frac{1}{\epsilon _{\mathrm{IR}}}+\log(-\pi(y^{2}-1)-i0)-2\log\frac{\bar{u}_{1}\cdot q_{2}-i0}{ -q_{2}^{2}}\right]+\mathcal{O}(\epsilon)\] \[=\frac{i}{32\pi q_{2}^{2}\sqrt{y^{2}-1}}\left[-\frac{1}{\epsilon _{\mathrm{IR}}}+\log(-\pi(y^{2}-1)-i0)-2\log\frac{\bar{u}_{1}\cdot q_{2}}{-q_ {2}^{2}}\right]+\mathcal{O}(\epsilon)\,.\] (B.8) Here we can omit the \(i0\) prescription in \(\log\frac{\bar{u}_{1}\cdot q_{2}}{-q_{2}^{2}}\) because the argument is positive. To compute \(I^{-}_{1,1,0,1}\), we first redefine the loop momentum to flip the sign of \(i0\), \[I^{-}_{1,1,0,1}=\int\frac{\mathrm{d}^{d}\ell}{(2\pi)^{d}}\frac{\hat{\delta}(2 \bar{u}_{2}\cdot\ell)}{\ell^{2}(\ell+q_{2})^{2}(2\bar{u}_{1}\cdot\ell-i0)}=- \int\frac{\mathrm{d}^{d}\ell}{(2\pi)^{d}}\frac{\hat{\delta}(2\bar{u}_{2}\cdot \ell)}{\ell^{2}(\ell-q_{2})^{2}(2\bar{u}_{1}\cdot\ell+i0)}\,.\] (B.9) We can directly obtain the result of this integral by sending \(q_{2}\to-q_{2}\) in eq. (B.8) while preserving the sign of \(i0\), \[I^{-}_{1,1,0,1} =-\frac{i}{32\pi q_{2}^{2}\sqrt{y^{2}-1}}\left[-\frac{1}{\epsilon _{\mathrm{IR}}}+\log(-\pi(y^{2}-1)-i0)-2\log\frac{-\bar{u}_{1}\cdot q_{2}-i0}{ -q_{2}^{2}}\right]+\mathcal{O}(\epsilon)\] \[=\frac{1}{16q_{2}^{2}\sqrt{y^{2}-1}}-I^{+}_{1,1,0,1}+\mathcal{O} (\epsilon)\,.\] (B.10) Therefore, the combination \(I^{+}_{1,1,0,1}+I^{-}_{1,1,0,1}\) is free of IR divergences, \[I^{+}_{1,1,0,1}+I^{-}_{1,1,0,1}=\frac{1}{16q_{2}^{2}\sqrt{y^{2}-1}}\,,\] (B.11) which reproduces eq. (6.18a). ### \(I^{\pm}_{1,0,1,1}\) We apply the same strategy to compute the last set of box integrals, \[I^{\pm}_{1,0,1,1}=\int\frac{\mathrm{d}^{d}\ell}{(2\pi)^{d}}\frac{\hat{\delta}(2 \bar{u}_{2}\cdot\ell)}{\ell^{2}(\ell-q_{1})^{2}(2\bar{u}_{1}\cdot\ell\pm i0)}\,. \tag{111}\] The only difference here is that the combination \(I^{+}_{1,0,1,1}+I^{-}_{1,0,1,1}\) is now complex, with the imaginary part given by further cutting \((\ell-q_{1})^{2}\). Starting with \(I^{+}_{1,0,1,1}\), we first apply eq. 
(109) to combine \((\ell-q_{1})^{2}\) and \(2\bar{u}_{1}\cdot\ell+i0\), followed by using an IBP identity to get, \[I^{+}_{1,0,1,1}=\int_{0}^{\infty}\mathrm{d}x\int\frac{\mathrm{d}^{d}\ell}{(2 \pi)^{d}}\frac{\hat{\delta}(2\bar{u}_{2}\cdot\ell)}{\ell^{2}[(\ell-q_{1}+x\bar {u}_{1})^{2}-x^{2}+i0]^{2}}\equiv I^{+}_{a}+I^{+}_{b}\,, \tag{112}\] where \(I^{+}_{a}\) and \(I^{-}_{b}\) are defined as \[I^{+}_{a}=-\frac{d-3}{2q_{1}^{2}}\int_{0}^{\infty}\mathrm{d}x \frac{1}{(y^{2}-1)x^{2}-2(\bar{u}_{2}\cdot q_{1})yx+(\bar{u}_{2}\cdot q_{1})^{ 2}}\] \[\qquad\qquad\times\int\frac{\mathrm{d}^{d}\ell}{(2\pi)^{d}}\frac{ \hat{\delta}(2\bar{u}_{2}\cdot\ell)}{(\ell-q_{1}+x\bar{u}_{1})^{2}-x^{2}+i0}\,,\] \[I^{+}_{b}=-\frac{d-4}{q_{1}^{2}}\int_{0}^{\infty}\mathrm{d}x \int\frac{\mathrm{d}^{d}\ell}{(2\pi)^{d}}\frac{\hat{\delta}(2\bar{u}_{2}\cdot \ell)}{\ell^{2}[(\ell-q_{1}+x\bar{u}_{1})^{2}-x^{2}+i0]}\,. \tag{113}\] Similar to eq. (109), we can rewrite \(I^{-}_{1,0,1,1}\) as \[I^{-}_{1,0,1,1}=\int\frac{\mathrm{d}^{d}\ell}{(2\pi)^{d}}\frac {\hat{\delta}(2\bar{u}_{2}\cdot\ell)}{\ell^{2}(\ell-q_{1})^{2}(2\bar{u}_{1} \cdot\ell-i0)} =-\int\frac{\mathrm{d}^{d}\ell}{(2\pi)^{d}}\frac{\hat{\delta}(2 \bar{u}_{2}\cdot\ell)}{\ell^{2}(\ell+q_{1})^{2}(2\bar{u}_{1}\cdot\ell+i0)}\] \[\equiv-(I^{-}_{a}+I^{-}_{b})\,, \tag{114}\] where \(I^{-}_{a,b}=I^{+}_{a,b}\big{|}_{q_{1}\to-q_{1}}\). Therefore, we have \[I^{+}_{1,0,1,1}+I^{-}_{1,0,1,1}=(I^{+}_{a}-I^{-}_{a})+(I^{+}_{b}-I^{-}_{b})\,, \tag{115}\] such that we only need to compute the combination \(I^{+}_{a}-I^{-}_{a}\) and \(I^{+}_{b}-I^{-}_{b}\). If we use eq. (101) with \(d=4\) to integrate out the bubble in \(I^{\pm}_{a}\), we will find that the resulting \(x\) integral is divergent at \(x\to\infty\). However, by taking the difference \(I^{+}_{a}-I^{-}_{a}\), we get a finite result since the range of \(x\) gets truncated. Interestingly, the result is proportional to the triangle integral (104), \[I^{+}_{a}-I^{-}_{a}=\frac{1}{q_{1}^{2}}(I^{+}_{0,0,1,1}+I^{-}_{0,0,1,1})=\frac {1}{16q_{1}^{2}\sqrt{y^{2}-1}}+\frac{i}{8\pi q_{1}^{2}\sqrt{y^{2}-1}}\,\mathrm{ arccosh}(y)\,. \tag{116}\] To compute \(I^{+}_{b}-I^{-}_{b}\), we introduce another Feynman parameter \(w\) and perform the \(\ell\) integral in \(d=4\). The result is finite due to the same truncation on the range of \(x\), \[I_{b}^{+}-I_{b}^{-}=-\frac{d-4}{16\pi^{2}q_{1}^{2}}\int_{-\alpha}^{\alpha}\text{ d}x\int_{0}^{1}\frac{\text{d}w}{[-w^{2}(x^{2}+\beta)(y^{2}-1)-w(1-w)q_{1}^{2}-i0] ^{1/2}}\,, \tag{111}\] such that \(I_{b}^{+}-I_{b}^{-}=0\) in \(d=4\). Therefore, we have reproduced eq. (111), \[I_{1,0,1,1}^{+}+I_{1,0,1,1}^{-}=I_{a}^{+}-I_{a}^{-}=\frac{1}{16q_{1}^{2}\sqrt{y ^{2}-1}}+\frac{i}{8\pi q_{1}^{2}\sqrt{y^{2}-1}}\,\text{arccosh}(y)\,. \tag{112}\] ### \(I_{0,1,1,1}^{\pm}\) We first use the same Feynman parameterization as in eq. (109) to bring together the two massless propagators, \[I_{0,1,1,1}^{+} =\int\frac{\text{d}^{d}\ell}{(2\pi)^{d}}\frac{\hat{\delta}(2\bar {u}_{2}\cdot\ell)}{(\ell+q_{2})^{2}(\ell-q_{1})^{2}(2\bar{u}_{1}\cdot\ell+i0)}\] \[=\int_{0}^{1}\text{d}x\int\frac{\text{d}^{d}\ell}{(2\pi)^{d}} \frac{\hat{\delta}(2\bar{u}_{2}\cdot\ell)}{(\ell-q_{1}+xk_{5})^{4}(2\bar{u}_{ 1}\cdot\ell+i0)}\,. 
\tag{113}\] The resultant double propagator \((\ell-q_{1}+xk_{5})^{4}\) is then reduced by the IBP relation, \[\int_{0}^{1}\text{d}x\int\frac{\text{d}^{d}\ell}{(2\pi)^{d}}\frac {\hat{\delta}(2\bar{u}_{2}\cdot\ell)}{(\ell-q_{1}+xk_{5})^{4}(2\bar{u}_{1} \cdot\ell+i0)}\] \[=\frac{d-3}{4(\bar{u}_{2}\cdot q_{1})^{2}}\int_{0}^{1}\text{d}x \frac{x(\bar{u}_{1}\cdot q_{2})+(1-x)y(\bar{u}_{2}\cdot q_{1})}{(1-x)^{2} \mathcal{F}(x)}\int\frac{\text{d}^{d}\ell}{(2\pi)^{d}}\frac{\hat{\delta}(2\bar {u}_{2}\cdot\ell)}{(\ell-q_{1}+xk_{5})^{2}+i0}\] \[\quad+\frac{d-4}{2}\int_{0}^{1}\text{d}x\frac{y^{2}-1}{\mathcal{ G}(x)}\int\frac{\text{d}^{d}\ell}{(2\pi)^{d}}\frac{\hat{\delta}(2\bar{u}_{2} \cdot\ell)}{(\ell-q_{1}+xk_{5})^{2}(2\bar{u}_{1}\cdot\ell+i0)}\,, \tag{114}\] where \(\mathcal{G}(x)\) is a quadratic polynomial in \(x\), \[\mathcal{G}(x)=x^{2}(\bar{u}_{1}\cdot q_{2})^{2}+2x(1-x)y(\bar{u}_{1}\cdot q_{ 2})(\bar{u}_{2}\cdot q_{1})+(1-x)^{2}(\bar{u}_{2}\cdot q_{1})^{2}\,. \tag{115}\] Under the principal value combination \(I_{0,1,1,1}^{+}+I_{0,1,1,1}^{-}\), the first integral in eq. (114) is doubled while the second integral is finite in \(d=4\), which only contributes to \(\mathcal{O}(\epsilon)\) due to the prefactor. We can now reproduce eq. (112) through a direct integration, \[I_{0,1,1,1}^{+}+I_{0,1,1,1}^{-} =-\frac{\pi^{5/2-\epsilon}\Gamma(\epsilon+1/2)}{(2\pi)^{4-2 \epsilon}[-(\bar{u}_{2}\cdot q_{1})^{2}-i0]^{1/2+\epsilon}}\int_{0}^{1}\text{ d}x\frac{x(\bar{u}_{1}\cdot q_{2})+(1-x)y(\bar{u}_{2}\cdot q_{1})}{(1+x)^{1+2 \epsilon}\mathcal{F}(x)}\] \[=-\frac{1}{32(\bar{u}_{1}\cdot q_{2})(\bar{u}_{2}\cdot q_{1})} \tag{116}\] \[\quad-\frac{i}{32\pi(\bar{u}_{1}\cdot q_{2})(\bar{u}_{2}\cdot q_{ 1})}\left[-\frac{1}{\epsilon_{\text{IR}}}+\log(4\pi(\bar{u}_{2}\cdot q_{1})^{2 })+2\log\frac{\bar{u}_{1}\cdot q_{2}}{\bar{u}_{2}\cdot q_{1}}\right]\,.\]
2302.09108
ViTA: A Vision Transformer Inference Accelerator for Edge Applications
Vision Transformer models, such as ViT, Swin Transformer, and Transformer-in-Transformer, have recently gained significant traction in computer vision tasks due to their ability to capture the global relation between features which leads to superior performance. However, they are compute-heavy and difficult to deploy in resource-constrained edge devices. Existing hardware accelerators, including those for the closely-related BERT transformer models, do not target highly resource-constrained environments. In this paper, we address this gap and propose ViTA - a configurable hardware accelerator for inference of vision transformer models, targeting resource-constrained edge computing devices and avoiding repeated off-chip memory accesses. We employ a head-level pipeline and inter-layer MLP optimizations, and can support several commonly used vision transformer models with changes solely in our control logic. We achieve nearly 90% hardware utilization efficiency on most vision transformer models, report a power of 0.88W when synthesised with a clock of 150 MHz, and get reasonable frame rates - all of which makes ViTA suitable for edge applications.
Shashank Nag, Gourav Datta, Souvik Kundu, Nitin Chandrachoodan, Peter A. Beerel
2023-02-17T19:35:36Z
http://arxiv.org/abs/2302.09108v1
# ViTA: A Vision Transformer Inference Accelerator for Edge Applications ###### Abstract Vision Transformer models, such as ViT, Swin Transformer, and Transformer-in-Transformer, have recently gained significant traction in computer vision tasks due to their ability to capture the global relation between features which leads to superior performance. However, they are compute-heavy and difficult to deploy in resource-constrained edge devices. Existing hardware accelerators, including those for the closely-related BERT transformer models, do not target highly resource-constrained environments. In this paper, we address this gap and propose ViTA - a configurable hardware accelerator for inference of vision transformer models, targeting resource-constrained edge computing devices and avoiding repeated off-chip memory accesses. We employ a head-level pipeline and inter-layer MLP optimizations, and can support several commonly used vision transformer models with changes solely in our control logic. We achieve nearly 90% hardware utilization efficiency on most vision transformer models, report a power of 0.88W when synthesised with a clock of 150 MHz, and get reasonable frame rates - all of which makes ViTA suitable for edge applications. Vision Transformer, Swin Transformer, Hardware Accelerator, Computer Vision, Edge Computing, FPGA ## I Introduction The success of transformer models for NLP applications has led to self-attention-based models being applied to computer vision tasks. Although the initial works in this direction lacked scalability, subsequent works such as Vision Transformers (ViT) [1], Swin Transformers [2], TNT [3], and Data-efficient image Transformers (DeiT) [4] that adopted model architectures similar to the NLP-based Transformer [5], replacing word tokens with image patches, have yielded state-of-the-art (SOTA) results. For some applications, such as autonomous driving and drone navigation, computer vision tasks have demanded real-time implementations on the edge. This has led to the development of energy-efficient hardware accelerators such as Eyeriss [6, 7] and ShiDianNao [8, 9] for inference of traditional CNN-based models [10]. With vision transformer models outperforming the conventional CNN-based models, exploring energy-efficient hardware accelerators for ViTs, which could be implemented at the edge, is similarly important. FPGAs for edge applications are useful because they provide flexibility in computation while still enabling very low latencies. For example, the Zynq Multiprocessor System-on-Chip (MPSoC) ZC7020 [11] and the low-end Ultrascale MPSoC ZU3EG [12, 13] are some FPGAs that can be used in applications such as drones and similar low-energy vision-oriented tasks. The defining characteristics of such platforms are limited parallelism (100s of DSP units rather than 1000s) and limited on-chip memory (100s of KB of Block RAM (BRAM) rather than MB). Naturally, this means any application targeting such platforms should focus on using the limited available parallelism and minimizing the amount of data transfer to and from off-chip memory. In this work, we propose ViTA - a hardware accelerator architecture and an efficient dataflow that supports several popular vision transformer models, targeting such resource-constrained FPGA devices. We evaluate the performance of ViTA for these models with commonly used configurations in terms of both hardware utilization efficiency and throughput. The remainder of this paper is organized as follows. 
We provide a primer of the different vision transformer models, the computations involved therein, and existing related hardware accelerators in Section II. We propose ViTA - an architecture suitable for edge devices and an efficient dataflow within typical memory bandwidth constraints in Section III. We evaluate our design for different model architectures and configurations in Section IV. Section V concludes the paper. ## II Preliminaries and Related Work The Vision Transformer (ViT) [1] is one of the first proposed transformer-based models for vision tasks, and is similar to the encoder stack of the BERT transformer [5]. In ViT, the input image is split into patches of 16\(\times\)16 pixels, which constitute a linear sequence of tokens, similar to words in the case of BERT. Its success motivated the development of other transformer-based models, such as the Swin Transformer [2], Data-efficient image Transformers [4], and Transformer-in-Transformer [3]. The key operation of all these models is the Multi-head Self Attention (MSA), applied on the sequence of image patches, followed by fully connected layers. The variation among these models is related to how these attention blocks are applied to the input patches, and the model parameters such as latent space dimensions, patch sizes, and the number of heads, for each of their variants. Figure 1 illustrates some of these model architectures. There have been a few works on hardware accelerators for vision transformers [14][15][16]. Wang et al. [14] targets an application-specific integrated circuit (ASIC), but does not consider off-chip data movement optimizations. Li et al. [15] and Sun et al. [16] target a large FPGA device, which enables storing all the weights and activations on-chip, thereby allowing a larger design space for exploration. However, edge computing devices, such as the Zynq ZC7020 MPSoC, pose more severe design constraints, owing to the lower on-chip computational resources, BRAM memory, and bandwidth for off-chip access. In particular, off-chip memory accesses typically involve high energy and should be minimized. As could be seen from the model architectures, the vision transformer models are quite similar to the BERT transformer [5]. There have been prior works on hardware accelerators targeting different stages of the BERT transformer. Liu et al. [17] proposed an accelerator for a quantized BERT model, and Lu et al. [18] proposed an accelerator design for the MSA and feed-forward blocks, with dedicated computation units for Softmax and LayerNorm. Li et al. [19] proposed FTRANS, an acceleration framework for large scale language representations. However, since NLP applications are typically not deployed at the edge, these target higher-end FPGA devices, and do not consider data movement from off-chip storage devices. ReTransformer [20] proposed a Re-RAM based [21] in-memory computation engine with granular pipeline to accelerate the self-attention stage. In-memory computations typically pose additional limitations in terms of correctly aligning the results into the in-memory compute unit, which can be avoided in FPGA-based digital designs. However, the granular pipeline aspect of this work could potentially reduce the intermediate memory requirements while ensuring high efficiency, and we could explore this aspect in our design. ## III Proposed Architecture ### _Model Architecture_ We base our design on the ViT-B/16 model [1], and in Section IV show how it can be extended to other vision transformers. 
The ViT-B/16 model considers a patch size \((P\times P)\) where \(P\)=\(16\), having \(L\)=\(12\) layers, a latent vector dimension (\(D\)) of 768, \(k\)=\(12\) heads, a head level latent vector dimension \((D_{h}=D/k)\) of 64, and an MLP hidden layer dimension \((M)\) of 3072 [1]. We consider an image dimension \((H\times W)\) of \(256\times 256\), thus giving the sequence length \((N)\) as \((H.W)/P^{2}\) = 256. We follow a post-training quantization approach, with all the weights and activations quantized to int8 representations for inference. We observe that when evaluated on ViT, this results in almost no degradation (\(<\)\(0.04\%\)) in the top-1 test accuracy on the ImageNet [22] dataset. ### _Hardware Architecture and Dataflow_ We target the Zynq ZC7020 MPSoC (an edge FPGA) for our design. As noted previously, targeting edge computing devices poses multiple constraints. Table I shows the memory requirement for the ViT-B model, which is beyond the typical on-chip memory capacity of edge FPGA devices, indicated in Table II. This necessitates most of the data to be stored in off-chip DRAMs, and brought into the on-chip BRAM cells when required, while minimizing back-and-forth data movement. Most of the computations in inference of vision transformers are matrix multiplications. Schemes such as input-stationary, weight-stationary, or block multiplication can be adopted to meet on-chip memory constraints. We note that in all the vision transformer architectures in Fig. 1, multiple layers of self attention and MLP blocks are stacked one after the other. While the weights are different for each of these layers, the input activations for a layer are passed from the previous layer. This suggests that an input stationary scheme, with the weights loaded to the chip on-demand would be ideal. Once the input activations are loaded, the results of each layer could be stored in the same on-chip location, and only the final results (after all the layers) need to be written back to off-chip memory. Given this arrangement, it would be ideal to hide the off-chip memory access latency involved in fetching the weights, by scheduling computations and weight fetching appropriately. Our proposed approach is to implement this schedule at a column level of the weight matrix. For each weight matrix, BRAM cells that can store two columns of weights are allocated. The access latency can be hidden by ensuring that while one column of the weights is being operated upon, the other column is fetched from off-chip. \begin{table} \begin{tabular}{c|c|c|c} Model & \begin{tabular}{c} Zynq ZC7020 MPSoC \\ \end{tabular} & \begin{tabular}{c} Zynq ZCU102 \\ \end{tabular} \\ \hline LUT6 & 53200 & 274080 \\ DSP slices & 220 & 2520 \\ BRAM & 630 KB & 4 MB \\ \end{tabular} \end{table} TABLE II: Resources available on the Zynq ZC7020 (embedded) MPSoC vs. the Zynq ZCU102 (high-end) MPSoC \begin{table} \begin{tabular}{c|c|c|c|c|c} Input/Weights & Input & \(W^{Q}\) & \(W^{K}\) & \(W^{V}\) & MSA Weights \\ \hline Memory (in KB) & 192 & 576 & 576 & 576 \\ \end{tabular} \end{table} TABLE I: Memory requirements for the input activations and weights for the ViT-B/16 model. The memory requirements for intermediate results are significantly higher. Fig. 1: Vision Transformers under consideration. The Swin Transformer has the same set of blocks as ViT, but the MSA is applied on disjoint windows on the image, and the patch merging blocks scales the image dimensions down in successive stages. 
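As a quick sanity check of Table I (not part of the original design flow), the following short Python sketch reproduces the int8 storage requirements from the ViT-B/16 parameters quoted above; the variable names are ours.

```python
# Hypothetical back-of-the-envelope check of Table I (int8 => 1 byte per value).
H = W = 256      # input image height/width
P = 16           # patch size
D = 768          # latent dimension
k = 12           # number of heads
D_h = D // k     # per-head latent dimension (64)

N = (H * W) // (P * P)          # sequence length = 256 patches

KB = 1024
input_act = N * D / KB          # input activations: 256*768 B = 192 KB
w_q = k * D * D_h / KB          # W^Q over all heads: 768*768 B = 576 KB
w_msa = D * D / KB              # MSA output projection W^msa: 576 KB

print(f"N = {N}, input = {input_act:.0f} KB, "
      f"W^Q = W^K = W^V = {w_q:.0f} KB, W^msa = {w_msa:.0f} KB")
# Expected (Table I): input 192 KB; W^Q, W^K, W^V and MSA weights 576 KB each.
```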
#### Iii-C1 Architecture for MLP Layer As noted in Table III, the majority of MAC computations are in the MLP layer, accounting for nearly 60% of the total MAC operations across the different model variants. Thus, it is pertinent to accelerate this computation efficiently. The design for a simple feed-forward layer could be trivial, with the same weights and biases being applied on multiple input activations concurrently. However, we observe that the memory required to store the hidden layer, with dimensions of 256\(\times\)3072 for the ViT-B model, is beyond the total on-chip memory capacity of the target device. To avoid repeated DRAM accesses, we employ the inter-layer optimization technique proposed in [23], using two sets of MAC units. As elaborated in Fig. 3, the hidden layer values computed in the first set of MAC units are broadcast to the second set of MAC units through the non-linear activation, to compute the partial products corresponding to the output layer. This leads to one of the key ideas that inspire our design. In order to process this in a pipelined fashion, we allocate resources such that the hidden layer value computations and the output layer partial product computations take approximately equal times. This ensures they can be pipelined with minimal stalls. This implies that we should dedicate an equal number of MAC units to compute the hidden and output layers. While the MACs dedicated to the hidden layer accumulate the results over multiple cycles, those dedicated to the output layer compute different partial products in each of those cycles. #### Iii-C2 Architecture for MSA Layer For all the model architectures illustrated in Table III, the self attention operation also accounts for about 30-40% of the total computations. Consequently, we try to optimize the hardware accelerator design to suit these computations, while adhering to the above scheme for the MLP layer. The MSA layer involves the following computations [24], where \(z\) refers to the input to the layer, \(i\) to the head number, \(k\) to the total number of heads, and Q, K, V & SA to the standard self-attention terminology: \[[Q|K|V]_{i}=z.[W^{Q}|W^{K}|W^{V}]_{i}\quad,W^{Q/K/V}\in R^{DxD_{h}} \tag{1}\] \[S_{i}(z)=\text{Softmax}(Q_{i}K_{i}^{T}/\sqrt{D_{h}}) \tag{2}\] \[SA_{i}(z)=S_{i}.V_{i} \tag{3}\] \[MSA(z)=[SA_{1}(z),...,SA_{k}(z)].W^{msa}\quad,W^{msa}\in R^{DxD} \tag{4}\] where (.) refers to matrix multiplication. As illustrated, the MSA block involves computing self attention on each head, and finally concatenating the results. Although the operations on each head can be parallelized, the on-chip resource constraints dominate the design choice. Processing multiple heads implies that the computed values (say \(Q\), \(K\), and \(V\)) for all the heads will have to be staged until the next set of values (\(Q.K^{T}\), \(S\), and \(SA\)) are computed. Given the on-chip memory constraints, storing these intermediate matrices for all the heads is not feasible. Hence, to avoid unnecessary data movement to and from off-chip, we instead perform head-wise computations. As detailed in Sec II, there have been several works focused on accelerating the MSA block, such as the ReTransformer [20], which proposes a row-level fine-grained pipeline for efficiently computing the self attention block using two PE engines. However, since the input matrix has been held stationary, the remaining available memory is not sufficient to hold the required weights for computing the \(Q\), \(K\), and \(V\) for a head. 
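Stepping back briefly to the MLP datapath described above (Fig. 3), here is a minimal NumPy sketch (ours, not the RTL) of the inter-layer dataflow: each hidden-layer column is produced once, passed through a placeholder activation, and immediately consumed as a rank-1 update of the output, so the full \(256\times 3072\) hidden matrix is never stored.

```python
import numpy as np

def mlp_interlayer(x, w1, b1, w2, b2, act=lambda t: np.maximum(t, 0)):
    """Compute act(x @ w1 + b1) @ w2 + b2 one hidden column at a time.

    x: (N, D) input activations; w1: (D, M); w2: (M, D).
    Only one hidden column (length N) is live at any time, mirroring the
    broadcast from the first to the second set of MAC units in Fig. 3.
    """
    n, d = x.shape
    m = w1.shape[1]
    y = np.tile(b2, (n, 1)).astype(np.float32)   # output accumulator (N, D)
    for j in range(m):                           # loop over hidden units
        h_j = act(x @ w1[:, j] + b1[j])          # one hidden column, shape (N,)
        y += np.outer(h_j, w2[j, :])             # rank-1 partial product
    return y

# Toy check against the naive two-matmul reference (ViT-B sizes would be
# N=256, D=768, M=3072; smaller numbers keep the example fast).
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16)).astype(np.float32)
w1, b1 = rng.standard_normal((16, 32)).astype(np.float32), np.zeros(32, np.float32)
w2, b2 = rng.standard_normal((32, 16)).astype(np.float32), np.zeros(16, np.float32)
ref = np.maximum(x @ w1 + b1, 0) @ w2 + b2
assert np.allclose(mlp_interlayer(x, w1, b1, w2, b2), ref, atol=1e-4)
```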
To avoid repeated off-chip accesses, we, in contrast, store the weights one column at a time and proceed to the next column once all rows of input activations are multiplied with this column. As a consequence, we can only compute the \(Q\), \(K\), and \(V\) matrices in a column-wise fashion, and hence the \(QK^{T}\) and further computations for a given head cannot occur until both \(Q\) and \(K\) matrices are completely computed. To this effect, we propose a head-level coarse-grained pipeline with two dedicated sets of processing units handling these different computations, with near-equal latency. #### Iii-C3 Overall Architecture and Optimal Configuration Drawing conclusions from the observations and implications made on accelerating the MLP and MSA layer, we propose a configurable processing element array-based hardware accelerator and the corresponding scheduling scheme, illustrated in Figs. 2 and 4. The PE blocks 1, 2, and 3 together make up the first compute engine that performs the computations for generating \(Q\), \(K\), and \(V\), while the PE blocks 4 and 5 form the second compute engine that performs the \(QK^{T}\) and \(S.V\) operation, respectively. Dedicated units for SoftMax, LayerNorm, skip connections and non-linear activations have been included, with the former adapted from [18]. We reuse the same PE blocks for computing the MSA concatenation and the MLP block results. The condition required for efficient acceleration of the MLP layer, as mentioned in Sec. III-B1, is to have equal MAC units dedicated to computing the hidden layer and output layer. This is realised by using half the rows in the PE blocks for computing the hidden layer and the other half for the output layer. As shown in Fig. 2, \(k_{1}\times k_{2}\) refers to the configuration of PE blocks 1, 2 & 3, while \(k_{3}\times k_{4}\) refers to the configuration of PE block 4 & 5, which together provide the configurability in our architecture. As illustrated in Fig. 4, in order to time match the MSA computations in the two compute engines and enable head-wise pipelining proposed in Sec. III-B2, the optimal values of \(k_{1}\), \(k_{2}\), \(k_{3}\), and \(k_{4}\) should satisfy: \[\frac{D}{k_{1}.k_{2}}=\frac{N}{k_{3}.k_{4}} \tag{5}\] while maximising resource utilization subject to the on-chip resource constraints. Consequently, while PE Blocks 1, 2, and 3 operate on head \(h\), PE Blocks 4 and 5 operate on head \(h-1\). Within a head, PE Block 4, Softmax Module and PE Block 5 operate at a row granularity. ## IV Analysis & Experiments We evaluate the performance of ViTA across different model architectures in terms of the hardware utilization efficiency (HUE) [25] of the utilized resources on the FPGA. We choose a design configuration that works best for the ViT-B/16 model, while also restricting ourselves to the resources available on the Zynq ZC7020 MPSoC. From Eq. 5, this corresponds to the PE block configurations being \(k_{1}=16\), \(k_{2}=6\), \(k_{3}=8\) & \(k_{4}=4\). We rely on the LUTs, rather than DSP slices, for compute capability, and ensure that all the memory can be held and accessed as intended from the BRAM cells. Although the DSP slices are power efficient, they are limited in number for the considered FPGA. Moreover, these are optimized for \(18\text{bits}\times 27\text{bits}\) operations, and performing \(8\text{bits}\times 8\text{bits}\) operations on these would lead to significant under-utilization. 
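As a functional reference for what the MSA datapath computes (not a model of the pipeline timing), the following NumPy sketch (ours) evaluates eqs. (1)-(4) head by head and checks the balance condition of Eq. (5) for the PE configuration chosen in Section IV.

```python
import numpy as np

def msa_headwise(z, w_qkv, w_msa, k, d_h):
    """Multi-head self attention per eqs. (1)-(4), processed one head at a time.

    z: (N, D) input tokens; w_qkv: list of k tuples (W_Q, W_K, W_V), each (D, d_h);
    w_msa: (D, D) output projection.
    """
    heads = []
    for i in range(k):                                # head-wise, as in the proposed dataflow
        w_q, w_k, w_v = w_qkv[i]
        q, kk, v = z @ w_q, z @ w_k, z @ w_v          # eq. (1)
        s = q @ kk.T / np.sqrt(d_h)                   # eq. (2), pre-softmax scores
        s = np.exp(s - s.max(axis=-1, keepdims=True))
        s = s / s.sum(axis=-1, keepdims=True)         # softmax
        heads.append(s @ v)                           # eq. (3)
    return np.concatenate(heads, axis=-1) @ w_msa     # eq. (4)

# Tiny usage example with toy dimensions.
rng = np.random.default_rng(0)
N_toy, D_toy, k_toy = 4, 8, 2
dh = D_toy // k_toy
z = rng.standard_normal((N_toy, D_toy))
w_qkv = [tuple(rng.standard_normal((D_toy, dh)) for _ in range(3)) for _ in range(k_toy)]
out = msa_headwise(z, w_qkv, rng.standard_normal((D_toy, D_toy)), k_toy, dh)
print(out.shape)  # (4, 8)

# Balance condition of Eq. (5) for ViT-B/16 with the configuration of Section IV.
D, N = 768, 256
k1, k2, k3, k4 = 16, 6, 8, 4
assert D / (k1 * k2) == N / (k3 * k4) == 8.0
```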
Although the design parameters of the PE array can be configured at run time, we demonstrate how having a single fixed configuration also allows for a run-time selection of the desired model and input image size, without a significant drop in efficiency. Since the other vision transformer models are typically made up of the MSA and MLP blocks, we can infer any of these models on ViTA by changing the control logic appropriately. For the case of Swin transformer, the W-MSA is performed on windows of \(M\times M\), where \(M{=}7\) by default. On the same hardware architecture, this would just correspond to the regular MSA being performed on \(N{=}49\) repeatedly over these windows. Due to space constraints, we have not elaborated implementation aspects of other models in further detail. When synthesized on the Zynq ZC7020 MPSoC, the design is run at a clock of 150 MHz, and consumes a power of 0.88W for the given configuration. The obtained values of HUE and energy for processing an image are summarized in Table IV. We ensure that the latency in accessing the off-chip memory is hidden in all intermediate cases, as mentioned in Sec III-B, with the DRAM access bandwidth well under 1 word/cycle - a reasonable number for the considered FPGA. Table V compares the performance our design against other vision transformer accelerators. As illustrated, ViTA achieves a significant reduction in power consumption. Since ViTA is focused on highly resource-constrained environments, it is difficult to compare it against other designs in terms of the processing frame rate. However, for the lower compute capability and technology node of our target device, the performance scales comparably with other works. Furthermore, the frame rates achieved by ViTA for the smaller model variants, such as DeiT-S, DeiT-T, and Swin-T are reasonable for embedded applications, such as drone navigation. Moreover, an FPGA environment makes ViTA reconfigurable for different model configurations for improved performance. This coupled with the energy efficiency of ViTA makes it a good choice for deployment on the edge. ## V Conclusions We proposed ViTA - a configurable hardware accelerator and dataflow design that can be employed for inference of ViT models on resource-constrained edge devices. In particular, we introduced a head-level coarse-grained pipeline and performed inter-layer optimization on the MLP layer to avoid unnecessary intermediate result staging. Our design avoids repeated off-chip memory accesses, achieves a high resource utilization efficiency of about 90%, and reports a significantly low power of 0.88W while having a reasonable frame rate - making it well suited for various edge applications. \begin{table} \begin{tabular}{c|c|c|c|c} Accelerator Design & Target Device & Power (W) & fps & fps/W \\ \hline Row-wise-acc. [14] & ASIC (40nm) & * & 44.5 & * \\ Auto-vit-acc. [15] & FPGA (16nm) & 9.40 & 25.9 & 2.76 \\ **ViTA (ours)** & FPGA (28nm) & **0.88** & 2.75 & 3.12 \\ & \multicolumn{3}{c}{} & \multicolumn{1}{c}{* not reported} \\ \end{tabular} \end{table} TABLE V: Performance comparison of vision transformer accelerators for DeiT-B on \(224\times 224\times 3\) dimensioned images Fig. 4: Scheduling the MSA computations. PE Block 4, Softmax Module, and PE Block 5 process in a row-granular pipeline fashion. \begin{table} \begin{tabular}{c|c|c|c|c} Model & Image Dim. 
& HUE & fps & Energy (J) \\ \hline ViT-B/16 & \(256\times 256\times 3\) & 93.2\% & 2.17 & 0.406 \\ ViT-B/16 or DeiT-B & \(224\times 224\times 3\) & 92.8\% & 2.75 & 0.320 \\ DeiT-S & \(224\times 224\times 3\) & 87.2\% & 9.36 & 0.094 \\ DeiT-T & \(224\times 224\times 3\) & 66.2\% & 19.01 & 0.046 \\ Swin-T & \(224\times 224\times 3\) & 81\% & 8.71 & 0.101 \\ \end{tabular} \end{table} TABLE IV: Overall hardware utilization efficiency (HUE), Frame Rate (fps) and energy for processing one image for different model architectures - for the optimal architecture configuration chosen for ViT-B/16 with a 256\(\times\)256\(\times\)3 image dimensions Fig. 3: Inter-layer optimization for MLP Fig. 2: ViTA : Proposed design for the hardware accelerator
2305.05795
On the Extremality of the Tensor Product of Quantum Channels
Completely positive and trace preserving (CPT) maps are important for Quantum Information Theory, because they describe a broad class of transformations of quantum states. There are also two other related classes of maps, the unital completely positive (UCP) maps and the unital completely positive and trace preserving (UCPT) maps. For these three classes, the set of maps from a finite dimensional Hilbert space $X$ to another one $Y$ is compact and convex and, as such, it is the convex hull of its extreme points. The extreme points of these convex sets are not yet well understood. In this article we investigate the preservation of extremality under the tensor product. We prove that extremality is preserved for CPT or UCP maps, but for UCPT it is not always preserved.
James Miller S. T. da Silva
2023-05-09T23:00:48Z
http://arxiv.org/abs/2305.05795v1
# On the Extremality of the Tensor Product of Quantum Channels ###### Abstract Completely positive and trace preserving (CPT) maps are important for Quantum Information Theory, because they describe a broad class of transformations of quantum states. There are also two other related classes of maps, the unital completely positive (UCP) maps and the unital completely positive and trace preserving (UCPT) maps. For these three classes, the set of maps from a finite dimensional Hilbert space \(X\) to another one \(Y\) is compact and convex and, as such, it is the convex hull of its extreme points. The extreme points of these convex sets are not yet well understood. In this article we investigate the preservation of extremality under the tensor product. We prove that extremality is preserved for CPT or UCP maps, but for UCPT maps it is not always preserved. ## 1 Introduction Quantum Information Theory is a theory of storage and processing of information in the form of quantum states. Completely positive and trace preserving maps [10] are used to describe open quantum systems, and also form a broad class of transformations of quantum states. Since they are a central object of Quantum Information Theory, understanding their mathematical structure is important for the development of the subject. We'll consider not only CPT maps, but also related classes of maps, the unital completely positive (UCP) maps and the unital completely positive and trace preserving (UCPT) maps. CPT maps from a finite dimensional Hilbert space \(X\) to another \(Y\) form a compact convex set \(CPT(X,Y)\). The same is true for \(UCP\) or \(UCPT\) maps. Every compact convex set is the convex hull of its extreme points. There are characterizations of these sets of extreme points due to Choi [11] (see theorem 2.11) and Landau and Streater [12] (see theorem 2.18), but the extreme points are not yet well understood. An important question is the stability of extremality under some operations. These include the operations of composition and tensor product. It is already known that the composition of two extreme CPT maps need not be extreme [Exc]. However, the question of whether the tensor product of extreme CPT, UCP or UCPT maps is extreme has apparently not yet been answered ([1]). In this article we prove that extremality is preserved by the tensor product in the cases of CPT or UCP maps. For UCPT maps, extremality is preserved if one of the Hilbert spaces has dimension 2, but it may fail for higher dimensions, as counterexamples show. For this particular article the maximal Choi ranks of UCPT maps will be important, as we'll see in theorem 3.5. The maximal Choi ranks of UCPT maps were investigated in [1], but they were only computed up to dimension 4. For higher dimensions, a lower bound was given showing that the maximal Choi rank is at least the dimension of the space. Also, for dimensions 3 and 4, single examples of such extreme UCPT maps of maximal rank were given. Families of examples of high-rank extreme maps were constructed in [1], but the lower bounds on the maximal ranks were not improved. A better lower bound on the maximal ranks of extreme UCPT maps may be enough to decide for which dimensions the tensor product preserves extremality for UCPT maps, but, as far as I know, this has not been done yet. The article is organized as follows. In section 2 we review some concepts of Linear Algebra, Convex Sets and Quantum Information Theory. 
Also, we review the characterization of linear independence as the invertibility of the Gram matrix, which will be important in the cases of CPT and UCP maps. Then, in section 3, we apply these results to prove that the tensor product of extreme CPT or UCP maps is also extreme. Also, we show that if two extreme UCPT maps have high enough Choi ranks, their tensor product cannot be an extreme point of UCPT maps. Then we use examples from [1] with high enough Choi ranks so that their tensor product is not extreme. ## 2 Preliminaries ### 2.1 Linear Algebra #### 2.1.1 Hilbert spaces As usual in Quantum Mechanics, the inner product of a Hilbert space is antilinear in the first argument and linear in the second. Also, we restrict to complex finite dimensional vector spaces. Therefore, such a Hilbert space is a finite dimensional complex vector space \(X\), together with an inner product \(\langle\,,\,\rangle\colon X\times X\to\mathbb{C}\). The inner product is a sesquilinear form, which means that the following properties are valid: * \(\forall x,x^{\prime},x^{\prime\prime}\in X\), \(\langle x,x^{\prime}+x^{\prime\prime}\rangle=\langle x,x^{\prime}\rangle+ \langle x,x^{\prime\prime}\rangle\); * \(\forall x,x^{\prime}\in X\), \(\forall\lambda\in\mathbb{C}\), \(\langle x,\lambda x^{\prime}\rangle=\lambda\langle x,x^{\prime}\rangle\); * \(\forall x,x^{\prime}\in X\), \(\langle x,x^{\prime}\rangle=\overline{\langle x^{\prime},x\rangle}\). We denote by \(\operatorname{Hom}(X,Y)\) the set of linear maps \(A\colon X\to Y\). We define \(\operatorname{End}(X)=\operatorname{Hom}(X,X)\), the set of linear endomorphisms of \(X\). Also, \(\operatorname{Hom}(X,Y)\) has the Hilbert-Schmidt inner product, given by \(\langle A,B\rangle=tr(A^{\dagger}B)\), for \(A,B\in\operatorname{Hom}(X,Y)\). A construction that will be important in this article is the tensor product. The tensor product of vector spaces \(X\) and \(Y\) is a \(\mathbb{C}\)-bilinear map \(\otimes\colon X\times Y\to X\otimes Y\) that satisfies the following universal property: if \(B\colon X\times Y\to Z\) is \(\mathbb{C}\)-bilinear, then there exists a unique \(\mathbb{C}\)-linear map \(u\colon X\otimes Y\to Z\) such that \(B=u\circ\otimes\). Given a basis \(x_{1}\),...,\(x_{n}\) for \(X\) and \(y_{1}\),...,\(y_{m}\) for \(Y\), \((x_{i}\otimes y_{j})_{\begin{subarray}{c}i=1,...,n\\ j=1,...,m\end{subarray}}\) is a basis for \(X\otimes Y\). We defined the tensor product of vector spaces, but there is also a tensor product of linear maps. Let \(A\colon X\to X^{\prime}\) and \(B\colon Y\to Y^{\prime}\) be \(\mathbb{C}\)-linear maps. Their tensor product \(A\otimes B\colon X\otimes Y\to X^{\prime}\otimes Y^{\prime}\) is the unique linear map that acts on elementary tensors as \((A\otimes B)(x\otimes y)=A(x)\otimes B(y)\), for any \(x\in X\) and \(y\in Y\). Given bases \(x_{1}\),...,\(x_{n}\) for \(X\), \(x^{\prime}_{1}\),...,\(x^{\prime}_{n^{\prime}}\) for \(X^{\prime}\), \(y_{1}\),...,\(y_{m}\) for \(Y\) and \(y^{\prime}_{1}\),...,\(y^{\prime}_{m^{\prime}}\) for \(Y^{\prime}\), we have the basis \((x_{i}\otimes y_{j})_{\begin{subarray}{c}i=1,...,n\\ j=1,...,m\end{subarray}}\) for \(X\otimes Y\) and \((x^{\prime}_{i^{\prime}}\otimes y^{\prime}_{j^{\prime}})_{\begin{subarray}{c}i^{\prime}=1,...,n^{\prime}\\ j^{\prime}=1,...,m^{\prime}\end{subarray}}\) for \(X^{\prime}\otimes Y^{\prime}\). We can express the matrix \([A\otimes B]\) of \(A\otimes B\) in these bases. 
Computing \(A\otimes B\) on \(x_{i}\otimes y_{j}\), we have \((A\otimes B)(x_{i}\otimes y_{j})=A(x_{i})\otimes B(y_{j})=(\sum_{i^{\prime}}[A ]_{i^{\prime},i}x^{\prime}_{i^{\prime}})\otimes(\sum_{j^{\prime}}[B]_{j^{ \prime},j}y^{\prime}_{j^{\prime}})\). By bilinearity of \(\otimes\), we get \((A\otimes B)(x_{i}\otimes y_{j})=\sum_{i^{\prime},j^{\prime}}[A]_{i^{\prime},i} [B]_{j^{\prime},j}x^{\prime}_{i^{\prime}}\otimes y^{\prime}_{j^{\prime}}\). Therefore, \([A\otimes B]\) has elements \([A\otimes B]_{(i^{\prime},j^{\prime}),(i,j)}=[A]_{i^{\prime},i}[B]_{j^{\prime},j}\). Since \([A\otimes B]\) has double indices \((i,j)\), to interpret it as a matrix we have to choose an order. This is done by taking the lexicographic order, that is, \((i,j)<(i^{\prime},j^{\prime})\iff i<i^{\prime}\) or \(i=i^{\prime}\) and \(j<j^{\prime}\). With this order, the matrix \([A\otimes B]\) is called the Kronecker product of \([A]\) and \([B]\), which we denote as \([A]\otimes[B]\). If \(X\) and \(Y\) are Hilbert spaces, then their tensor product \(X\otimes Y\) is also a Hilbert space, with inner product given by \(\langle x\otimes y,x^{\prime}\otimes y^{\prime}\rangle=\langle x,x^{\prime} \rangle\langle y,y^{\prime}\rangle\), for \(x,x^{\prime}\in X\) and \(y,y^{\prime}\in Y\). We have a similar property for the Hilbert-Schmidt inner product on \(\operatorname{End}(X\otimes Y)\). Any element of \(\operatorname{End}(X\otimes Y)\) is a linear combination of linear maps of the form \(A\otimes B\), for \(A\in\operatorname{End}(X)\) and \(B\in\operatorname{End}(Y)\). Therefore, they are sufficient to characterize the inner product on \(\operatorname{End}(X\otimes Y)\). Given \(A,A^{\prime}\in\mathrm{End}(X)\) and \(B,B^{\prime}\in\mathrm{End}(Y)\), we have \(\langle A\otimes B,A^{\prime}\otimes B^{\prime}\rangle=tr((A\otimes B)^{\dagger} (A^{\prime}\otimes B^{\prime}))=tr((A^{\dagger}\otimes B^{\dagger})(A^{\prime} \otimes B^{\prime}))=tr(A^{\dagger}A^{\prime}\otimes B^{\dagger}B^{\prime})\). The trace is multiplicative, that is, we have the property \(tr(f\otimes g)=tr(f)tr(g)\). Using this property, we get \(\langle A\otimes B,A^{\prime}\otimes B^{\prime}\rangle=tr(A^{\dagger}A^{ \prime})tr(B^{\dagger}B^{\prime})=\langle A,A^{\prime}\rangle\langle B,B^{ \prime}\rangle\). Therefore \[\langle A\otimes B,A^{\prime}\otimes B^{\prime}\rangle=\langle A,A^{\prime} \rangle\langle B,B^{\prime}\rangle, \tag{1}\] for any \(A,A^{\prime}\in\mathrm{End}(X)\) and \(B,B^{\prime}\in\mathrm{End}(Y)\). #### 2.1.2 Gram matrices As we'll see in theorem 2.11, it characterizes extremality of CPT maps in terms of linear independence. It will be useful to restate linear independence as the invertibility of the Gram matrix. In this section, we recall some results regarding Gram matrices. A proof of lemma 2.1 may be found in theorem 7.2.10 of [12]. **Lemma 2.1**.: _Let \((V,\langle\,\ \rangle)\) be an inner product space and \(v_{1}\),..., \(v_{n}\) be vectors of \(V\). Let_ \[G=\begin{pmatrix}\langle v_{1},v_{1}\rangle&\langle v_{1},v_{2}\rangle&\cdots &\langle v_{1},v_{n}\rangle\\ \langle v_{2},v_{1}\rangle&\langle v_{2},v_{2}\rangle&\cdots&\langle v_{2},v_ {n}\rangle\\ \vdots&\vdots&\ddots&\vdots\\ \langle v_{n},v_{1}\rangle&\langle v_{n},v_{2}\rangle&\cdots&\langle v_{n},v_ {n}\rangle\end{pmatrix} \tag{2}\] _be the Gram matrix of \(v_{1}\),...,\(v_{n}\). 
Then \(v_{1}\),..., \(v_{n}\) are linearly independent if, and only if, its Gram matrix is invertible._ Proof.: First, given vectors \(a=(a_{1},...,a_{n})\) and \(b=(b_{1},...,b_{n})\), which we'll represent as column vectors, lets compute \(a^{\dagger}Gb\). We have \[Gb =\begin{pmatrix}\langle v_{1},v_{1}\rangle&\langle v_{1},v_{2} \rangle&\cdots&\langle v_{1},v_{n}\rangle\\ \langle v_{2},v_{1}\rangle&\langle v_{2},v_{2}\rangle&\cdots&\langle v_{2},v _{n}\rangle\\ \vdots&\vdots&\ddots&\vdots\\ \langle v_{n},v_{1}\rangle&\langle v_{n},v_{2}\rangle&\cdots&\langle v_{n},v _{n}\rangle\end{pmatrix}\begin{pmatrix}b_{1}\\ b_{2}\\ \vdots\\ b_{n}\end{pmatrix}\] \[=\begin{pmatrix}\sum_{j=1}^{n}\langle v_{1},v_{j}\rangle b_{j}\\ \sum_{j=1}^{n}\langle v_{2},v_{j}\rangle b_{j}\\ \vdots\\ \sum_{j=1}^{n}\langle v_{n},v_{j}\rangle b_{j}\end{pmatrix}\] \[=\begin{pmatrix}\langle v_{1},\sum_{j=1}^{n}b_{j}v_{j}\rangle\\ \langle v_{2},\sum_{j=1}^{n}b_{j}v_{j}\rangle\\ \vdots\\ \langle v_{n},\sum_{j=1}^{n}b_{j}v_{j}\rangle\end{pmatrix}.\] Therefore \(a^{\dagger}Gb=\sum_{i=1}^{n}\overline{a_{i}}\langle v_{i},\sum_{j=1}^{n}b_{j} v_{j}\rangle=\langle\sum_{i=1}^{n}a_{i}v_{i},\sum_{j=1}^{n}b_{j}v_{j}\rangle\). Notice that the inner product is antilinear in the first argument, so \(a_{i}\) passes to inside the inner product without the conjugation. The last result implies that is a positive matrix, since \(a^{\dagger}Ga=||\sum_{i=1}^{n}a_{i}v_{i}||^{2}\), which is \(\geq 0\). Therefore, \(G\) is diagonalizable and its eigenvalues are real and non-negative. Now we prove the equivalence of \(v_{1}\),...,\(v_{n}\) being linearly independent and \(G\) being invertible. By the equation \(a^{\dagger}Ga=||\sum_{i=1}^{n}a_{i}v_{i}||^{2}\), the vectors \(v_{1}\),..., \(v_{n}\) are linearly independent if, and only if, \(a=0\) is the unique solution to the equation \(a^{\dagger}Ga=0\). Also notice that, since \(G\) is diagonalizable, it is invertible if, and only if, \(0\) isn't an eigenvalue of \(G\). Let \(c_{1}\),..., \(c_{n}\) be orthonormal eigenvectors of \(G\) and \(\lambda_{i}\) the eigenvalue of \(c_{i}\). Then \(a\) is written as \(a=\sum_{i=1}^{n}\alpha_{i}c_{i}\), for some \(\alpha_{i}\in\mathbb{C}\). Then we have \(a^{\dagger}Ga=a^{\dagger}G\sum_{i=1}^{n}\alpha_{i}c_{i}=\sum_{i=1}^{n}\alpha_{ i}a^{\dagger}Gc_{i}=\sum_{i=1}^{n}\alpha_{i}a^{\dagger}\lambda_{i}c_{i}=\sum_{i=1}^{n} \lambda_{i}\alpha_{i}a^{\dagger}c_{i}=\sum_{i=1}^{n}\lambda_{i}\alpha_{i} \overline{\alpha_{i}}=\sum_{i=1}^{n}\lambda_{i}|\alpha_{i}|^{2}\). If each \(\lambda_{i}\) isn't \(0\), then for any \(a\neq 0\) we have \(\sum_{i=1}^{n}\lambda_{i}|\alpha_{i}|^{2}>0\). Therefore, when each \(\lambda_{i}\) isn't \(0\), \(a^{\dagger}Ga\) can only be \(0\) if \(a=0\). If for some \(j\) we have \(\lambda_{j}=0\), then we can pick \(\alpha_{j}=1\) and \(\alpha_{i}=0\) for \(i\neq j\), which gives a vector \(a\neq 0\) with \(a^{\dagger}Ga=0\). Therefore, \(a=0\) is the unique solution to the equation \(a^{\dagger}Ga=0\) precisely when each \(\lambda_{i}\) isn't \(0\), that is, when \(G\) is invertible. This concludes that \(G\) is invertible if, and only if, \(v_{1}\),..., \(v_{n}\) are linearly independent. Gram matrices of tensor products of vectors will be important. For this reason, we prove the following lemmas: **Lemma 2.2**.: _Let \(V,W\) be vectors spaces. Let also \((v_{i})_{i=1,\ldots,n}\) be vectors of \(V\) and \((w_{j})_{j=1,\ldots,m}\) be vectors of \(W\). 
Let \(G\) be the Gram matrix of \((v_{i})_{i=1,\ldots,n}\), \(G^{\prime}\) be the Gram matrix of \((w_{j})_{j=1,\ldots,m}\) and \(G^{\prime\prime}\) be the Gram matrix of \((v_{i}\otimes w_{j})_{\begin{subarray}{c}i=1,\ldots,n\\ j=1,\ldots,m\end{subarray}}\). We regard \((v_{i}\otimes w_{j})_{\begin{subarray}{c}i=1,\ldots,n\\ j=1,\ldots,m\end{subarray}}\) as a sequence ordered in the lexicographic order, that is, \((i,j)<(i^{\prime},j^{\prime})\iff i<i^{\prime}\) or \(i=i^{\prime}\) and \(j<j^{\prime}\). Then \(G^{\prime\prime}\) is the Kronecker product of \(G\) and \(G^{\prime}\), that is, \(G^{\prime\prime}=G\otimes G^{\prime}\). Its matrix elements are, therefore, \(G^{\prime\prime}_{(i,j),(i^{\prime},j^{\prime})}=G_{i,i^{\prime}}G^{\prime}_{ j,j^{\prime}}\)._ Proof.: The matrix elements of \(G\), \(G^{\prime}\) and \(G^{\prime\prime}\) are, by definition, \[G_{i,i^{\prime}}=\langle v_{i},v_{i^{\prime}}\rangle, \tag{3}\] \[G^{\prime}_{j,j^{\prime}}=\langle w_{j},w_{j^{\prime}}\rangle, \tag{4}\] \[G^{\prime\prime}_{(i,j),(i^{\prime},j^{\prime})}=\langle v_{i}\otimes w_{j},v_ {i^{\prime}}\otimes w_{j^{\prime}}\rangle. \tag{5}\] By definition of the inner product of \(V\otimes W\), we have \(G^{\prime\prime}_{(i,j),(i^{\prime},j^{\prime})}=\langle v_{i}\otimes w_{j},v_ {i^{\prime}}\otimes w_{j^{\prime}}\rangle=\langle v_{i},v_{i^{\prime}}\rangle \langle w_{j},w_{j^{\prime}}\rangle\). By (3) and (4), we have \(G^{\prime\prime}_{(i,j),(i^{\prime},j^{\prime})}=G_{i,i^{\prime}}G^{\prime}_{ j,j^{\prime}}\). Since the indices \((i,j)\) have the lexicographic order, the last equation says that \(G^{\prime\prime}=G\otimes G^{\prime}\). **Lemma 2.3**.: _Let \((v_{i})_{i}\) be a finite sequence in some vector space \(V\) and let \((w_{j})_{j}\) be a finite sequence in some vector space \(W\). Then \((v_{i}\otimes w_{j})_{i,j}\) is linearly independent if, and only if, both \((v_{i})_{i}\) and \((w_{j})_{j}\) are linearly independent._ Proof.: We use the same notation of lemma 2.2. Let \(n\) be the size of \((v_{i})_{i}\) and \(m\) be the size of \((w_{j})_{j}\). The sequences \((v_{i})_{i}\), \((w_{j})_{j}\) and \((v_{i}\otimes w_{j})_{i,j}\) have Gram matrices \(G\), \(G^{\prime}\) and \(G^{\prime\prime}\), respectively. By lemma 2.2, we have \(G^{\prime\prime}=G\otimes G^{\prime}\) Also, \(\det(G^{\prime\prime})=\det(G\otimes G^{\prime})=\det(G)^{m}\det(G^{\prime})^{n}\), so \(G^{\prime\prime}\) is invertible if, and only if, both \(G\) and \(G^{\prime}\) are invertible. In this case, \({G^{\prime\prime}}^{-1}=G^{-1}\otimes G^{\prime-1}\). Using lemma 2.1 we conclude that \((v_{i}\otimes w_{j})_{i,j}\) is linearly independent if, and only if, both \((v_{i})_{i}\) and \((w_{j})_{j}\) are linearly independent. ### Convex Sets Given a vector space \(X\), a convex set \(C\subseteq X\) is a subset \(C\) that is closed by convex combinations. A convex combination of vectors \(v_{1}\),...,\(v_{n}\) is a linear combination \(\sum_{i=1}^{n}p_{i}v_{i}\), where \(p_{i}\geq 0\) and \(\sum_{i=1}^{n}p_{i}=1\). For any subset \(S\subseteq X\), its convex hull is the set of convex combinations of points of \(S\). Therefore, a subset \(C\subseteq X\) is convex if it equals its convex hull. An extreme point of a convex \(C\) is one that can't be written as a convex combination of different points of \(C\). For example, \([0,1]\) is a convex with \(0\) and \(1\) as extreme points. 
By a theorem of Krein and Milman, every compact convex set of a finite dimensional vector space is the convex hull of its extreme points (theorem 3.3 of [1]). **Theorem 2.4**.: _Let \(X\) be a finite dimensional complex vector space and \(C\subseteq X\) a compact convex set. Then \(C\) is the convex hull of its extreme points._ Of course, this is not true for general convex sets. For example, \([0,+\infty)\) is a convex set, but isn't compact. Also, it has \(0\) as its unique extreme point, whose convex hull is \(\{0\}\), which isn't \([0,+\infty)\). ### CP maps Every map we'll consider in this article will be some special kind of completely positive (CP) map. Let \(X\) and \(Y\) be finite dimensional Hilbert spaces. A CP map \(\varepsilon\colon X\to Y\) is a linear map \(\varepsilon\colon\operatorname{End}(X)\to\operatorname{End}(Y)\) such that \(\operatorname{id}_{\operatorname{End}(Z)}\otimes\varepsilon\colon \operatorname{End}(Z\otimes X)\to\operatorname{End}(Z\otimes Y)\) is a positive map, for any finite dimensional Hilbert space \(Z\). A positive map is one that sends positive operators to positive operators. A positive operator is a diagonalizable operator whose eigenvalues are all real and non-negative. We'll denote as \(CP(X,Y)\) the set of CP maps from \(X\) to \(Y\). Instead of using this definition of complete positivity, we'll often use the following characterization. A linear map \(\varepsilon\colon\operatorname{End}(X)\to\operatorname{End}(Y)\) is CP if, and only if, it is of the form \(\varepsilon(A)=\sum_{k}E_{k}AE_{k}^{\dagger}\), for some finite sequence \((E_{k})_{k}\) of operators \(E_{k}\colon X\to Y\). The operators \(E_{k}\) are called operation elements or Kraus operators. Note that the sequence \((E_{k})_{k}\) that represents a CP map is not unique. The minimum number of operators \(E_{k}\) necessary to represent \(\varepsilon\) is called its Choi rank, which we'll denote by \(CR(\varepsilon)\). The number of indices \(k\) in the sequence \((E_{k})_{k}\) equals the Choi rank if, and only if, \((E_{k})_{k}\) is linearly independent. For a proof of these statements, see [11] or [16]. There is a duality between CP maps that will be useful to relate CPT and UCP maps. Since \(\operatorname{End}(X)\) and \(\operatorname{End}(Y)\) are Hilbert spaces with the Hilbert-Schmidt inner product, we have a notion of dual map \(\varepsilon^{\dagger}\). It is defined as the unique linear map \(\varepsilon^{\dagger}\colon\operatorname{End}(Y)\to\operatorname{End}(X)\) that satisfies \(\langle\varepsilon(A),B\rangle=\langle A,\varepsilon^{\dagger}(B)\rangle\), for every \(A\in\operatorname{End}(X)\) and \(B\in\operatorname{End}(Y)\). **Lemma 2.5**.: _A linear map \(\varepsilon\colon\operatorname{End}(X)\to\operatorname{End}(Y)\) is CP if, and only if, its dual \(\varepsilon^{\dagger}\colon\operatorname{End}(Y)\to\operatorname{End}(X)\) is CP. Also, if \(\varepsilon\) has operation elements \((E_{k})_{k}\), then \((E_{k}^{\dagger})_{k}\) are operation elements for \(\varepsilon^{\dagger}\)._ Proof.: Suppose that \(\varepsilon\) is a completely positive map with operation elements \((E_{k})_{k}\). 
For any \(A\in\operatorname{End}(X)\) and \(B\in\operatorname{End}(Y)\), we have \[\langle A,\varepsilon^{\dagger}(B)\rangle =\langle\varepsilon(A),B\rangle\] \[=\langle\sum_{k}E_{k}AE_{k}^{\dagger},B\rangle\] \[=\sum_{k}\langle E_{k}AE_{k}^{\dagger},B\rangle\] \[=\sum_{k}\mathit{tr}((E_{k}AE_{k}^{\dagger})^{\dagger}B)\] \[=\sum_{k}\mathit{tr}(E_{k}A^{\dagger}E_{k}^{\dagger}B)\] \[=\sum_{k}\mathit{tr}(A^{\dagger}E_{k}^{\dagger}BE_{k})\] \[=tr(A^{\dagger}\sum_{k}E_{k}^{\dagger}BE_{k})\] \[=\langle A,\sum_{k}E_{k}^{\dagger}BE_{k}\rangle.\] Since \(\varepsilon^{\dagger}\) is unique, we conclude that \(\varepsilon^{\dagger}(B)=\sum_{k}E_{k}^{\dagger}BE_{k}\). Therefore, \(\varepsilon^{\dagger}\) is completely positive and has \((E_{k}^{\dagger})_{k}\) as operation elements. If we assume instead that \(\varepsilon^{\dagger}\) is completely positive, using that \(\varepsilon^{\dagger\dagger}=\varepsilon\) we conclude, by the previous results, that \(\varepsilon\) is completely positive. Therefore, \(\varepsilon\) is completely positive if, and only if, \(\varepsilon^{\dagger}\) is completely positive. The tensor product will be a central topic of this article, so we review some results about it. For any CP maps \(\varepsilon\in CP(X,Y)\) and \(\varepsilon^{\prime}\in CP(X^{\prime},Y^{\prime})\) there is the tensor product \(\varepsilon\otimes\varepsilon^{\prime}\). We'll see that \(\varepsilon\otimes\varepsilon^{\prime}\in CP(X\otimes X^{\prime},Y\otimes Y^{\prime})\). Also, given operation elements for \(\varepsilon\) and \(\varepsilon^{\prime}\), we can construct operation elements for \(\varepsilon\otimes\varepsilon^{\prime}\) as in the next lemma. **Lemma 2.6**.: _If \(\varepsilon\in CP(X,Y)\) and \(\varepsilon^{\prime}\in CP(X^{\prime},Y^{\prime})\) have operation elements \((E_{k})_{k}\) and \((F_{l})_{l}\), respectively, then \(\varepsilon\otimes\varepsilon^{\prime}\) has \((E_{k}\otimes F_{l})_{k,l}\) as operation elements. In particular, \(\varepsilon\otimes\varepsilon^{\prime}\in CP(X\otimes X^{\prime},Y\otimes Y^{\prime})\)._ Proof.: \(\varepsilon\otimes\varepsilon^{\prime}\) is characterized as the map that satisfies \((\varepsilon\otimes\varepsilon^{\prime})(A\otimes B)=\varepsilon(A)\otimes\varepsilon^{\prime}(B)\), for any \(A\in\operatorname{End}(X)\) and \(B\in\operatorname{End}(X^{\prime})\). Therefore \((\varepsilon\otimes\varepsilon^{\prime})(A\otimes B)=\varepsilon(A)\otimes\varepsilon^{\prime}(B)=(\sum_{k}E_{k}AE_{k}^{\dagger})\otimes(\sum_{l}F_{l}BF_{l}^{\dagger})=\sum_{k,l}E_{k}AE_{k}^{\dagger}\otimes F_{l}BF_{l}^{\dagger}\stackrel{(*)}{=}\sum_{k,l}(E_{k}\otimes F_{l})(A\otimes B)(E_{k}^{\dagger}\otimes F_{l}^{\dagger})=\sum_{k,l}(E_{k}\otimes F_{l})(A\otimes B)(E_{k}\otimes F_{l})^{\dagger}\). In the equality \((*)\) we used the identity \((f^{\prime}\otimes g^{\prime})(f\otimes g)=f^{\prime}f\otimes g^{\prime}g\). The identity \((\varepsilon\otimes\varepsilon^{\prime})(A\otimes B)=\sum_{k,l}(E_{k}\otimes F_{l})(A\otimes B)(E_{k}\otimes F_{l})^{\dagger}\) together with the linearity of \(\varepsilon\otimes\varepsilon^{\prime}\) implies that \((\varepsilon\otimes\varepsilon^{\prime})(C)=\sum_{k,l}(E_{k}\otimes F_{l})C(E_{k}\otimes F_{l})^{\dagger}\), for any \(C\in\mathrm{End}(X\otimes X^{\prime})\). Therefore \((E_{k}\otimes F_{l})_{k,l}\) are operation elements for \(\varepsilon\otimes\varepsilon^{\prime}\). 
Since \(\varepsilon\otimes\varepsilon^{\prime}\) is of the form \((\varepsilon\otimes\varepsilon^{\prime})(C)=\sum_{k,l}(E_{k}\otimes F_{l})C(E_{k}\otimes F_{l})^{\dagger}\), it is completely positive. **Corollary 2.7**.: _For any CP maps \(\varepsilon\) and \(\varepsilon^{\prime}\) we have \(CR(\varepsilon\otimes\varepsilon^{\prime})=CR(\varepsilon)CR(\varepsilon^{\prime})\)._ Proof.: Let \((E_{k})_{k}\) be linearly independent operation elements for \(\varepsilon\) and let \((F_{l})_{l}\) be linearly independent operation elements for \(\varepsilon^{\prime}\). By lemmas 2.3 and 2.6 we know that \((E_{k}\otimes F_{l})_{k,l}\) are linearly independent operation elements for \(\varepsilon\otimes\varepsilon^{\prime}\). Since the operation elements are linearly independent, the sequences \((E_{k})_{k}\) and \((F_{l})_{l}\) have \(CR(\varepsilon)\) and \(CR(\varepsilon^{\prime})\) operators, respectively. Therefore \((E_{k}\otimes F_{l})_{k,l}\) is a sequence with \(CR(\varepsilon)CR(\varepsilon^{\prime})\) operators. But \((E_{k}\otimes F_{l})_{k,l}\) is linearly independent, so it must have \(CR(\varepsilon\otimes\varepsilon^{\prime})\) operators. Therefore \(CR(\varepsilon\otimes\varepsilon^{\prime})=CR(\varepsilon)CR(\varepsilon^{\prime})\). For later use, we'll show that \((\varepsilon\otimes\varepsilon^{\prime})^{\dagger}=\varepsilon^{\dagger}\otimes\varepsilon^{\prime\dagger}\). **Lemma 2.8**.: _Let \(\varepsilon\) and \(\varepsilon^{\prime}\) be completely positive maps. We have that \((\varepsilon\otimes\varepsilon^{\prime})^{\dagger}=\varepsilon^{\dagger}\otimes\varepsilon^{\prime\dagger}\)._ Proof.: Let us write \(\varepsilon\in CP(X,Y)\) and \(\varepsilon^{\prime}\in CP(X^{\prime},Y^{\prime})\). Both \((\varepsilon\otimes\varepsilon^{\prime})^{\dagger}\) and \(\varepsilon^{\dagger}\otimes\varepsilon^{\prime\dagger}\) are elements of \(CP(Y\otimes Y^{\prime},X\otimes X^{\prime})\). Their domain is \(\mathrm{End}(Y\otimes Y^{\prime})\), which is spanned by the maps \(B\otimes B^{\prime}\), for \(B\in\mathrm{End}(Y)\) and \(B^{\prime}\in\mathrm{End}(Y^{\prime})\). Similarly, the space \(\mathrm{End}(X\otimes X^{\prime})\) is spanned by the maps \(A\otimes A^{\prime}\), for \(A\in\mathrm{End}(X)\) and \(A^{\prime}\in\mathrm{End}(X^{\prime})\). By linearity, we only need to consider such \(A\otimes A^{\prime}\) and \(B\otimes B^{\prime}\). By definition of the dual, \((\varepsilon\otimes\varepsilon^{\prime})^{\dagger}\) is the unique linear map that satisfies \(\langle(\varepsilon\otimes\varepsilon^{\prime})(A\otimes A^{\prime}),B\otimes B^{\prime}\rangle=\langle A\otimes A^{\prime},(\varepsilon\otimes\varepsilon^{\prime})^{\dagger}(B\otimes B^{\prime})\rangle\). We have that \[\langle(\varepsilon\otimes\varepsilon^{\prime})(A\otimes A^{\prime}),B\otimes B^{\prime}\rangle =\langle\varepsilon(A)\otimes\varepsilon^{\prime}(A^{\prime}),B\otimes B^{\prime}\rangle\] \[=\langle\varepsilon(A),B\rangle\langle\varepsilon^{\prime}(A^{\prime}),B^{\prime}\rangle\] \[=\langle A,\varepsilon^{\dagger}(B)\rangle\langle A^{\prime},\varepsilon^{\prime\dagger}(B^{\prime})\rangle\] \[=\langle A\otimes A^{\prime},\varepsilon^{\dagger}(B)\otimes\varepsilon^{\prime\dagger}(B^{\prime})\rangle\] \[=\langle A\otimes A^{\prime},(\varepsilon^{\dagger}\otimes\varepsilon^{\prime\dagger})(B\otimes B^{\prime})\rangle.\] By uniqueness it follows that \((\varepsilon\otimes\varepsilon^{\prime})^{\dagger}=\varepsilon^{\dagger}\otimes\varepsilon^{\prime\dagger}\). 
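The statements of corollary 2.7 and lemma 2.8 are finite-dimensional linear-algebra facts, so they are easy to probe numerically. The following Python/NumPy sketch is only illustrative and is not part of the original text: the dimensions, the randomly generated operation elements, and the helper names (`choi_rank`, `tensor_kraus`, and so on) are our own assumptions. It builds operation elements for the tensor product as in lemma 2.6, checks the Choi-rank product rule, and spot-checks the defining relation of the dual map \(\langle(\varepsilon\otimes\varepsilon^{\prime})(A),B\rangle=\langle A,(\varepsilon\otimes\varepsilon^{\prime})^{\dagger}(B)\rangle\), computing the dual from the adjoint operation elements as in lemmas 2.5 and 2.8.

```python
import numpy as np

rng = np.random.default_rng(0)

def choi_rank(kraus, tol=1e-10):
    # Choi rank = dimension of the linear span of the operation elements.
    return np.linalg.matrix_rank(np.array([E.reshape(-1) for E in kraus]), tol=tol)

def tensor_kraus(kraus1, kraus2):
    # Operation elements of eps (x) eps' are the Kronecker products E_k (x) F_l (lemma 2.6).
    return [np.kron(E, F) for E in kraus1 for F in kraus2]

def apply_cp(kraus, A):        # eps(A) = sum_k E_k A E_k^dagger
    return sum(E @ A @ E.conj().T for E in kraus)

def apply_dual(kraus, B):      # eps^dagger(B) = sum_k E_k^dagger B E_k (lemma 2.5)
    return sum(E.conj().T @ B @ E for E in kraus)

def hs(A, B):                  # Hilbert-Schmidt inner product <A, B> = tr(A^dagger B)
    return np.trace(A.conj().T @ B)

# two generic CP maps: eps: End(C^3) -> End(C^2) and eps': End(C^2) -> End(C^4)
E = [rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3)) for _ in range(2)]
F = [rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2)) for _ in range(3)]
EF = tensor_kraus(E, F)

print(choi_rank(EF) == choi_rank(E) * choi_rank(F))     # corollary 2.7: True

A = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))   # element of End(X (x) X')
B = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))   # element of End(Y (x) Y')
print(np.isclose(hs(apply_cp(EF, A), B), hs(A, apply_dual(EF, B))))  # defining relation of the dual: True
```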
#### 2.3.1 CPT maps A CPT map \(\varepsilon\colon X\to Y\) is an \(\varepsilon\in CP(X,Y)\) which is also trace preserving. Being trace preserving means that \(tr(\varepsilon(A))=trA\), for every \(A\in\mathrm{End}(X)\). This is equivalent to the equation \(\sum_{k}E_{k}^{\dagger}E_{k}=id_{X}\). Lets denote by \(CPT(X,Y)\) the set of CPT maps \(\varepsilon\colon X\to Y\). As the two next propositions shows, \(CPT(X,Y)\) is convex and compact. **Proposition 2.9**.: \(CPT(X,Y)\) _is a convex set._ Proof.: Let \(p_{1},...,p_{n}\) be numbers satisfying \(p_{i}\geq 0\) and \(\sum_{i=1}^{n}p_{i}=1\). Also, let \(\varepsilon_{1}\),...,\(\varepsilon_{n}\) be CPT maps in \(CPT(X,Y)\). Let \((E_{i,k_{i}})_{k_{i}}\) be operation elements for \(\varepsilon_{i}\), so that \(\varepsilon_{i}(A)=\sum_{k_{i}}E_{i,k_{i}}AE_{i,k_{i}}^{\dagger}\). Then the convex combination of \(\varepsilon_{i}\) is given by \((\sum_{i=1}^{n}p_{i}\varepsilon_{i})(A)=\sum_{i=1}^{n}p_{i}\sum_{k_{i}}E_{i,k _{i}}AE_{i,k_{i}}^{\dagger}=\sum_{i=1}^{n}\sum_{k_{i}}\ (\sqrt{p_{i}}E_{i,k_{i}})A\)\((\sqrt{p_{i}}E_{i,k_{i}})^{\dagger}\). This shows that \(\sum_{i=1}^{n}p_{i}\varepsilon_{i}\) is completely positive and has operation elements \((\sqrt{p_{i}}E_{i,k_{i}})_{i,k_{i}}\). Also, it is trace preserving, since \(\sum_{i,k_{i}}(\sqrt{p_{i}}E_{i,k_{i}})^{\dagger}\)\((\sqrt{p_{i}}E_{i,k_{i}})=\sum_{i}p_{i}\sum_{k_{i}}E_{i,k_{i}}^{\dagger}\)\(E_{i,k_{i}}=\sum_{i}p_{i}id_{X}=id_{X}\). For the compacity we use the Choi-Jamiolkowski isomorphism [20]. There is a well known basis dependent version of this isomorphism, but here we use a basis independent one. Given an orthonormal basis \(x_{1}\),..., \(x_{n}\) of \(X\), let \(x^{1}\),...,\(x^{n}\) denote the dual basis for \(X^{*}\). That is, \(x^{i}\in X^{*}\) and \(x^{i}(x_{j})=\delta_{i,j}\). Let \(\Omega=\sum_{i=1}^{n}x^{i}\otimes x_{i}\in X^{*}\otimes X\). The Choi-Jamiolkowski isomorphism is the linear isomorphism \(CJ\colon\operatorname{Hom}(\operatorname{End}(X),\operatorname{End}(Y)) \rightarrow\operatorname{End}(X^{*}\otimes Y)\) given by \(CJ(\varepsilon)=(id_{\operatorname{End}(X^{*})}\otimes\varepsilon)(|\Omega \rangle\langle\Omega|)\) (in Dirac notation). When restricted to \(CPT(X,Y)\), it gives a bijection \(CJ\colon CPT(X,Y)\rightarrow\{A\in\operatorname{End}(X^{*}\otimes Y)\mid A\) is positive, \(tr_{Y}A=id_{X^{*}}\}\). The well known basis dependent version differs by replacing \(x^{i}\) with \(x_{i}\) and \(X^{*}\) with \(X\) in the previous definitions. **Proposition 2.10**.: \(CPT(X,Y)\) _is compact, in the norm topology._ Proof.: By the Choi-Jamiolkowski isomorphism, \(CPT(X,Y)\) is homeomorphic to \(CJ(X,Y)\coloneqq\{A\in\operatorname{End}(X^{*}\otimes Y)\mid A\text{ is positive, }tr_{Y}A=id_{X^{*}}\}\). Since for finite dimensions every norm is equivalent, it suffices to show that \(CJ(X,Y)\) is bounded and closed, with respect to any norm on \(CJ(X,Y)\). We show this for the operator norm. First we show that \(CJ(X,Y)\) is bounded. For any \(A\in CJ(X,Y)\) we have \(tr_{Y}A=id_{X^{*}}\), therefore \(trA=tr_{X^{*}}tr_{Y}A=tr_{X^{*}}id_{X^{*}}=\dim X^{*}=\dim X\). Since \(A\) is also positive, it is diagonalizable and has non negative eigenvalues. Let \(Sp(A)\) be the spectrum of \(A\), that is, its set of eigenvalues. Then \(||A||=max_{\lambda\in Sp(A)}|\lambda|\overset{(*)}{=}max_{\lambda\in Sp(A)} \lambda=max(Sp(A))\). In \((*)\) we've used that \(A\) has non negative eigenvalues. 
Since \(trA\) is the sum of the eigenvalues of \(A\), we have \(\lambda\leq trA=\dim X\), for any \(\lambda\in Sp(A)\). Therefore \(||A||\leq\dim X\). This concludes that \(CJ(X,Y)\) is bounded. Now let us show that \(CJ(X,Y)\) is closed. Let \((A_{n})_{n\in\mathbb{N}}\) be a sequence in \(CJ(X,Y)\) that converges to some operator \(A\in\operatorname{End}(X^{*}\otimes Y)\). We can easily show that \(A\) is positive. Let \(v\in X^{*}\otimes Y\), then \(\langle v,A(v)\rangle=\langle v,\lim_{n}A_{n}(v)\rangle\). Since the inner product is continuous in both of its entries, we have \(\langle v,\lim_{n}A_{n}(v)\rangle=\lim_{n}\langle v,A_{n}(v)\rangle\). Since each \(A_{n}\) is positive, we have \(\langle v,A_{n}(v)\rangle\geq 0\), so \(\lim_{n}\langle v,A_{n}(v)\rangle\geq 0\). Therefore \(\langle v,A(v)\rangle\geq 0\), for any \(v\in X^{*}\otimes Y\), which implies that \(A\) is positive. Also, since \(tr_{Y}\) is linear, it is continuous. Therefore, \(tr_{Y}A=tr_{Y}(\lim_{n}A_{n})=\lim_{n}tr_{Y}A_{n}\). Since \(tr_{Y}A_{n}=id_{X^{*}}\), we have \(\lim_{n}tr_{Y}A_{n}=\lim_{n}id_{X^{*}}=id_{X^{*}}\). Therefore, \(tr_{Y}A=id_{X^{*}}\) and we conclude that \(A\in CJ(X,Y)\). This proves that \(CJ(X,Y)\) is closed. Since \(CJ(X,Y)\) is bounded and closed, it is compact. Using the Choi-Jamiolkowski isomorphism we conclude that \(CPT(X,Y)\) is also compact. Being a compact convex set, by theorem 2.4 we know that \(CPT(X,Y)\) is the convex hull of its extreme points. The following characterization of extremality of a CPT map is given in [20] (theorem 2.31 of [20]): **Theorem 2.11**.: _Let \(\varepsilon\in CPT(X,Y)\) be a CPT map with linearly independent operation elements \((E_{k})_{k}\). Then \(\varepsilon\) is an extreme point of \(CPT(X,Y)\) if, and only if, the family \((E_{k}^{\dagger}E_{k^{\prime}})_{k,k^{\prime}}\) is linearly independent._ Notice that we can always find operation elements for \(\varepsilon\) which are linearly independent. This can be achieved by applying the Choi-Jamiolkowski isomorphism to \(\varepsilon\) and computing an orthonormal basis that diagonalizes \(CJ(\varepsilon)\). Then the \(E_{k}\)'s can be constructed from the eigenvectors with non-zero eigenvalue. In lemma 2.6 we've shown that \(CP\) maps are closed under the tensor product. For completeness, let us show the same result for CPT maps. **Lemma 2.12**.: _If \(\varepsilon\in CPT(X,Y)\) and \(\varepsilon^{\prime}\in CPT(X^{\prime},Y^{\prime})\) then \(\varepsilon\otimes\varepsilon^{\prime}\in CPT(X\otimes X^{\prime},Y\otimes Y^{\prime})\)._ Proof.: By lemma 2.6 we already know that \(\varepsilon\otimes\varepsilon^{\prime}\in CP(X\otimes X^{\prime},Y\otimes Y^{\prime})\). It remains to check that \(\varepsilon\otimes\varepsilon^{\prime}\) is trace preserving. Let us use the same notation as in lemma 2.6. Being trace preserving means that \(\sum_{k,l}(E_{k}\otimes F_{l})^{\dagger}(E_{k}\otimes F_{l})=id_{X\otimes X^{\prime}}\). We prove it as follows: \(\sum_{k,l}(E_{k}\otimes F_{l})^{\dagger}(E_{k}\otimes F_{l})=\sum_{k,l}(E_{k}^{\dagger}\otimes F_{l}^{\dagger})(E_{k}\otimes F_{l})=\sum_{k,l}E_{k}^{\dagger}E_{k}\otimes F_{l}^{\dagger}F_{l}=(\sum_{k}E_{k}^{\dagger}E_{k})\otimes(\sum_{l}F_{l}^{\dagger}F_{l})=id_{X}\otimes id_{X^{\prime}}=id_{X\otimes X^{\prime}}\). #### 2.3.2 UCP maps Unital completely positive (UCP) maps are CP maps \(\varepsilon\in CP(X,Y)\) that preserve the identity, that is, \(\varepsilon(id_{X})=id_{Y}\). We denote the set of UCP maps from \(X\) to \(Y\) by \(UCP(X,Y)\). 
If \(\varepsilon\) has operation elements \((E_{k})_{k}\), being unital is equivalent to the equation \(\sum_{k}E_{k}E_{k}^{\dagger}=id_{Y}\). We can see that this equation is similar to \(\sum_{k}E_{k}^{\dagger}E_{k}=id_{X}\), which is equivalent to being trace preserving. Such similarity can be seen as a consequence of the fact that UCP maps are dual to CPT maps. **Proposition 2.13**.: _A linear map \(\varepsilon\colon\operatorname{End}(X)\to\operatorname{End}(Y)\) is CPT if, and only if, \(\varepsilon^{\dagger}\colon\operatorname{End}(Y)\to\operatorname{End}(X)\) is UCP. In particular, we have an anti-linear bijection \(\dagger\colon CPT(X,Y)\to UCP(Y,X)\)._ Proof.: Assume that \(\varepsilon\) is completely positive with operation elements \((E_{k})_{k}\). By lemma 2.5 we know that \(\varepsilon^{\dagger}\) is also completely positive and has \((E_{k}^{\dagger})_{k}\) as operation elements. Then \(\varepsilon\) is trace preserving if, and only if, \(\sum_{k}E_{k}^{\dagger}E_{k}=id_{X}\), and \(\varepsilon^{\dagger}\) is unital if, and only if, \(\sum_{k}E_{k}^{\dagger}(E_{k}^{\dagger})^{\dagger}=id_{X}\). Both are the same equation, therefore \(\varepsilon\) is trace preserving if, and only if, \(\varepsilon^{\dagger}\) is unital. We can use the duality between \(CPT\) and \(UCP\) maps to transfer results between them. **Proposition 2.14**.: _UCP(X,Y) is compact convex, in the norm topology._ Proof.: By proposition 2.13, we have an anti-linear bijection \(\dagger\colon CPT(X,Y)\to UCP(Y,X)\), for any finite dimensional Hilbert spaces \(X\) and \(Y\). Since it is anti-linear, it is also continuous. Then \(UCP(X,Y)\) is the image of the compact set \(CPT(Y,X)\) by the continuous function \(\dagger\), and therefore \(UCP(X,Y)\) is compact. Also, if \(\varepsilon_{i}\in UCP(X,Y)\) and \((p_{i})_{i}\) is a discrete probability distribution, then \(\sum_{i}p_{i}\varepsilon_{i}=(\sum_{i}p_{i}\varepsilon_{i})^{\dagger\dagger}=(\sum_{i}p_{i}(\varepsilon_{i})^{\dagger})^{\dagger}\). Since \(CPT(Y,X)\) is convex, we know that \(\sum_{i}p_{i}(\varepsilon_{i})^{\dagger}\in CPT(Y,X)\), and therefore \((\sum_{i}p_{i}(\varepsilon_{i})^{\dagger})^{\dagger}\in UCP(X,Y)\). That is, \(\sum_{i}p_{i}\varepsilon_{i}\in UCP(X,Y)\), which implies that \(UCP(X,Y)\) is convex. The anti-linear bijection \(\dagger\colon CPT(X,Y)\to UCP(Y,X)\) establishes a bijection between the extreme points of \(CPT(X,Y)\) and the extreme points of \(UCP(Y,X)\). Using this duality, we obtain a characterization of extreme points for \(UCP(X,Y)\) that is similar to theorem 2.11. **Theorem 2.15**.: _Let \(\varepsilon\in UCP(X,Y)\) be a UCP map with linearly independent operation elements \((E_{k})_{k}\). Then it is an extreme point of \(UCP(X,Y)\) if, and only if, \((E_{k}E_{l}^{\dagger})_{k,l}\) is linearly independent._ Proof.: \(\varepsilon\) is an extreme point of \(UCP(X,Y)\) if, and only if, \(\varepsilon^{\dagger}\) is an extreme point of \(CPT(Y,X)\). By lemma 2.5, \((E_{k}^{\dagger})_{k}\) are operation elements for \(\varepsilon^{\dagger}\). Since \((E_{k})_{k}\) is linearly independent, it follows that \((E_{k}^{\dagger})_{k}\) is also linearly independent. Then, by theorem 2.11, \(\varepsilon^{\dagger}\) is an extreme point of \(CPT(Y,X)\) if, and only if, \((E_{k}^{\dagger\dagger}E_{l}^{\dagger})_{k,l}\) is linearly independent. But \(E_{k}^{\dagger\dagger}=E_{k}\), so \(\varepsilon\) is an extreme point of \(UCP(X,Y)\) if, and only if, \((E_{k}E_{l}^{\dagger})_{k,l}\) is linearly independent. 
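The criteria in theorems 2.11 and 2.15 are rank conditions on finite families of operators, so they can be tested numerically by vectorizing the families and computing a matrix rank. The short Python/NumPy sketch below is only an illustration (the example channels are our own choices, not taken from the text): a unitary qubit channel passes both criteria, while an equal mixture of two unitary channels has linearly independent operation elements yet fails the CPT criterion, so it is not an extreme point of \(CPT(\mathbb{C}^{2},\mathbb{C}^{2})\).

```python
import numpy as np

def independent(ops, tol=1e-10):
    # a finite family of operators is linearly independent iff its vectorizations have full rank
    vecs = np.array([A.reshape(-1) for A in ops])
    return np.linalg.matrix_rank(vecs, tol=tol) == len(ops)

def is_extreme_cpt(kraus):
    # Theorem 2.11: extreme in CPT iff (E_k^dagger E_{k'})_{k,k'} is linearly independent
    return independent([E.conj().T @ Ep for E in kraus for Ep in kraus])

def is_extreme_ucp(kraus):
    # Theorem 2.15: extreme in UCP iff (E_k E_l^dagger)_{k,l} is linearly independent
    return independent([E @ Ep.conj().T for E in kraus for Ep in kraus])

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)      # a unitary channel: one operation element
print(is_extreme_cpt([H]), is_extreme_ucp([H]))            # True True

Z = np.diag([1.0, -1.0])
mix = [np.sqrt(0.5) * np.eye(2), np.sqrt(0.5) * Z]         # equal mixture of two unitary channels
print(independent(mix), is_extreme_cpt(mix))               # True False: not an extreme CPT map
```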
Finally, let us prove that UCP maps are closed under the tensor product. **Lemma 2.16**.: _If \(\varepsilon\in UCP(X,Y)\) and \(\varepsilon^{\prime}\in UCP(X^{\prime},Y^{\prime})\) then \(\varepsilon\otimes\varepsilon^{\prime}\in UCP(X\otimes X^{\prime},Y\otimes Y^{\prime})\)._ Proof.: We can prove this result by using the duality between UCP and CPT maps and lemma 2.12. Using lemma 2.8 we have that \(\varepsilon\otimes\varepsilon^{\prime}=(\varepsilon^{\dagger}\otimes\varepsilon^{\prime\dagger})^{\dagger}\). We know that \(\varepsilon\in UCP(X,Y)\) and \(\varepsilon^{\prime}\in UCP(X^{\prime},Y^{\prime})\), then proposition 2.13 implies that \(\varepsilon^{\dagger}\in CPT(Y,X)\) and \(\varepsilon^{\prime\dagger}\in CPT(Y^{\prime},X^{\prime})\). By lemma 2.12 we know that \(\varepsilon^{\dagger}\otimes\varepsilon^{\prime\dagger}\in CPT(Y\otimes Y^{\prime},X\otimes X^{\prime})\). Using proposition 2.13 again we conclude that \((\varepsilon^{\dagger}\otimes\varepsilon^{\prime\dagger})^{\dagger}\in UCP(X\otimes X^{\prime},Y\otimes Y^{\prime})\). #### 2.3.3 UCPT maps A UCPT map is one that is both unital and trace preserving, that is, \(UCPT(X,Y)=UCP(X,Y)\cap CPT(X,Y)\). In this case, we restrict attention to endomorphisms \((X=Y)\), since \(X\) and \(Y\) necessarily have the same dimension. In fact, let \(\varepsilon\in UCPT(X,Y)\), then \(\varepsilon(id_{X})=id_{Y}\), because it is unital. But it is also trace preserving, so \(tr(\varepsilon(id_{X}))=tr(id_{X})\), so \(tr(id_{Y})=tr(id_{X})\). But \(tr(id_{X})=\dim X\), so \(\dim X=\dim Y\). We'll denote \(UCPT(X,X)\) as \(UCPT(X)\). **Proposition 2.17**.: \(UCPT(X)\) _is compact convex, in the norm topology._ Proof.: Since both \(UCP(X)\) and \(CPT(X)\) are convex sets, their intersection \(UCPT(X)\) is also convex. Since both \(UCP(X)\) and \(CPT(X)\) are bounded sets, \(UCPT(X)\) is also bounded. And since both \(UCP(X)\) and \(CPT(X)\) are closed sets, \(UCPT(X)\) is also closed. Therefore \(UCPT(X)\) is closed and bounded, so it is compact. As \(UCPT(X)\) is compact convex, it is the convex hull of its extreme points. The following characterization of the extreme points of \(UCPT(X)\) was proven in [13]. It can also be found in theorem 4.21 of [21]. **Theorem 2.18**.: _Let \(\varepsilon\in UCPT(X)\) be a UCPT map with linearly independent operation elements \((E_{k})_{k}\). Then it is an extreme point of \(UCPT(X)\) if, and only if, \((E_{k}^{\dagger}E_{l}\oplus E_{l}E_{k}^{\dagger})_{k,l}\) is linearly independent as vectors of \(\operatorname{End}(X)\oplus\operatorname{End}(X)\)._ As done for the other types of CP maps, let us prove that UCPT maps are closed under the tensor product. **Lemma 2.19**.: _If \(\varepsilon\in UCPT(X,Y)\) and \(\varepsilon^{\prime}\in UCPT(X^{\prime},Y^{\prime})\) then \(\varepsilon\otimes\varepsilon^{\prime}\in UCPT(X\otimes X^{\prime},Y\otimes Y^{\prime})\)._ Proof.: By lemmas 2.12 and 2.16 we know that \(\varepsilon\otimes\varepsilon^{\prime}\in CPT(X\otimes X^{\prime},Y\otimes Y^{\prime})\) and \(\varepsilon\otimes\varepsilon^{\prime}\in UCP(X\otimes X^{\prime},Y\otimes Y^{\prime})\). Since \(UCPT(X\otimes X^{\prime},Y\otimes Y^{\prime})=CPT(X\otimes X^{\prime},Y\otimes Y^{\prime})\cap UCP(X\otimes X^{\prime},Y\otimes Y^{\prime})\), this concludes the proof. ## 3 Main results ### Extremality is preserved for CPT maps Now we prove that the tensor product preserves extremality for CPT maps. 
**Theorem 3.1**.: _If \(\varepsilon\) is an extreme point of \(CPT(X,Y)\) and \(\varepsilon^{\prime}\) is an extreme point of \(CPT(X^{\prime},Y^{\prime})\), then \(\varepsilon\otimes\varepsilon^{\prime}\) is an extreme point of \(CPT(X\otimes X^{\prime},Y\otimes Y^{\prime})\)._ Proof.: We always use the Hilbert-Schmidt inner product. Let \((E_{k})_{k}\) be linearly independent operation elements for \(\varepsilon\), and let \((F_{l})_{l}\) be linearly independent operation elements for \(\varepsilon^{\prime}\). By lemma 2.6, \((E_{k}\otimes F_{l})_{k,l}\) are operation elements for \(\varepsilon\otimes\varepsilon^{\prime}\). Also, lemma 2.3 implies that \((E_{k}\otimes F_{l})_{k,l}\) are linearly independent. Then, by theorem 2.11, \(\varepsilon\otimes\varepsilon^{\prime}\) is extreme if, and only if, \(((E_{k}\otimes F_{l})^{\dagger}(E_{k^{\prime}}\otimes F_{l^{\prime}}))_{k,k^{ \prime},l,l^{\prime}}\) is linearly independent (as vectors of \(\operatorname{End}(X\otimes Y)\)). By lemma 2.1, linear independence is equivalent to the invertibility of the Gram matrix. To prove the invertibility of the Gram matrix, we'll show that it is the Kronecker product of two invertible matrices. Let \(G\), \(G^{\prime}\) and \(G^{\prime\prime}\) be the Gram matrices of \((E_{k}^{\dagger}E_{k^{\prime}})_{k,k^{\prime}}\), \((F_{l}^{\dagger}F_{l^{\prime}})_{l,l^{\prime}}\) and \(((E_{k}\otimes F_{l})^{\dagger}(E_{k^{\prime}}\otimes F_{l^{\prime}}))_{k,k^{ \prime},l,l^{\prime}}\), respectively. Its matrix elements are \[G_{(k,k^{\prime}),(k^{\prime\prime},k^{\prime\prime\prime})}=\langle E_{k}^{ \dagger}E_{k^{\prime}},E_{k^{\prime\prime}}^{\dagger}E_{k^{\prime\prime\prime}}\rangle, \tag{6}\] \[G^{\prime}_{(l,l^{\prime}),(l^{\prime\prime},l^{\prime\prime\prime})}=\langle F^{ \dagger}_{l}F_{l^{\prime}},F^{\dagger}_{l^{\prime\prime}}F_{l^{\prime\prime \prime}}\rangle, \tag{7}\] \[G^{\prime\prime}_{(k,k^{\prime},l,l^{\prime}),(k^{\prime\prime},k^{\prime \prime\prime},l^{\prime\prime},l^{\prime\prime\prime})}=\langle(E_{k}\otimes F )^{\dagger}(E_{k^{\prime}}\otimes F_{l^{\prime}}),(E_{k^{\prime\prime}}\otimes F _{l^{\prime\prime}})^{\dagger}(E_{k^{\prime\prime\prime}}\otimes F_{l^{ \prime\prime\prime}})\rangle. \tag{8}\] Since \(\varepsilon\) and \(\varepsilon^{\prime}\) are extreme CPT maps, and since \((E_{k})_{k}\) and \((F_{l})_{l}\) are linearly independent operation elements, by theorem 2.11 we know that \((E^{\dagger}_{k}E_{k^{\prime}})_{k,k^{\prime}}\) and \((F^{\dagger}_{l}F_{l^{\prime}})_{l,l^{\prime}}\) are both linearly independent. By lemma 2.1, it implies that \(G\) and \(G^{\prime}\) are invertible matrices. 
We can rewrite \(G^{\prime\prime}\) in terms of \(G\) and \(G^{\prime}\) as follows: \[G^{\prime\prime}_{(k,k^{\prime},l,l^{\prime}),(k^{\prime\prime},k^{\prime\prime\prime},l^{\prime\prime},l^{\prime\prime\prime})} =\langle(E_{k}\otimes F_{l})^{\dagger}(E_{k^{\prime}}\otimes F_{l^{\prime}}),(E_{k^{\prime\prime}}\otimes F_{l^{\prime\prime}})^{\dagger}(E_{k^{\prime\prime\prime}}\otimes F_{l^{\prime\prime\prime}})\rangle\] \[=\langle(E^{\dagger}_{k}\otimes F^{\dagger}_{l})(E_{k^{\prime}}\otimes F_{l^{\prime}}),(E^{\dagger}_{k^{\prime\prime}}\otimes F^{\dagger}_{l^{\prime\prime}})(E_{k^{\prime\prime\prime}}\otimes F_{l^{\prime\prime\prime}})\rangle\] \[=\langle E^{\dagger}_{k}E_{k^{\prime}}\otimes F^{\dagger}_{l}F_{l^{\prime}},E^{\dagger}_{k^{\prime\prime}}E_{k^{\prime\prime\prime}}\otimes F^{\dagger}_{l^{\prime\prime}}F_{l^{\prime\prime\prime}}\rangle\] \[\stackrel{(*)}{=}\langle E^{\dagger}_{k}E_{k^{\prime}},E^{\dagger}_{k^{\prime\prime}}E_{k^{\prime\prime\prime}}\rangle\langle F^{\dagger}_{l}F_{l^{\prime}},F^{\dagger}_{l^{\prime\prime}}F_{l^{\prime\prime\prime}}\rangle\] \[=G_{(k,k^{\prime}),(k^{\prime\prime},k^{\prime\prime\prime})}G^{\prime}_{(l,l^{\prime}),(l^{\prime\prime},l^{\prime\prime\prime})}.\] In equality (*) we've used equation (1). Given any order of \((k,k^{\prime})\) and \((l,l^{\prime})\), we can give the lexicographic order to \((k,k^{\prime},l,l^{\prime})\). In this way, the identity \(G^{\prime\prime}_{(k,k^{\prime},l,l^{\prime}),(k^{\prime\prime},k^{\prime\prime\prime},l^{\prime\prime},l^{\prime\prime\prime})}=G_{(k,k^{\prime}),(k^{\prime\prime},k^{\prime\prime\prime})}G^{\prime}_{(l,l^{\prime}),(l^{\prime\prime},l^{\prime\prime\prime})}\) says that \(G^{\prime\prime}=G\otimes G^{\prime}\), that is, \(G^{\prime\prime}\) is the Kronecker product of \(G\) and \(G^{\prime}\). Since \(G\) and \(G^{\prime}\) are invertible, then \(G^{\prime\prime}\) is also invertible, with inverse \(G^{\prime\prime-1}=G^{-1}\otimes G^{\prime-1}\). Then lemma 2.1 implies that \(((E_{k}\otimes F_{l})^{\dagger}(E_{k^{\prime}}\otimes F_{l^{\prime}}))_{k,k^{\prime},l,l^{\prime}}\) is linearly independent. Therefore, by theorem 2.11, \(\varepsilon\otimes\varepsilon^{\prime}\) is an extreme CPT map. ### Extremality is preserved for UCP maps The result for UCP maps can be obtained by using the duality between CPT and UCP maps. **Theorem 3.2**.: _If \(\varepsilon\) is an extreme point of \(UCP(X,Y)\) and \(\varepsilon^{\prime}\) is an extreme point of \(UCP(X^{\prime},Y^{\prime})\), then \(\varepsilon\otimes\varepsilon^{\prime}\) is an extreme point of \(UCP(X\otimes X^{\prime},Y\otimes Y^{\prime})\)._ Proof.: Since \(\varepsilon\in UCP(X,Y)\) and \(\varepsilon^{\prime}\in UCP(X^{\prime},Y^{\prime})\), we know that \(\varepsilon^{\dagger}\in CPT(Y,X)\) and \(\varepsilon^{\prime\dagger}\in CPT(Y^{\prime},X^{\prime})\). Also, \(\varepsilon\) and \(\varepsilon^{\prime}\) are extreme points of \(UCP(X,Y)\) and \(UCP(X^{\prime},Y^{\prime})\), so \(\varepsilon^{\dagger}\) and \(\varepsilon^{\prime\dagger}\) are extreme points of \(CPT(Y,X)\) and \(CPT(Y^{\prime},X^{\prime})\). By theorem 3.1, we know that \(\varepsilon^{\dagger}\otimes\varepsilon^{\prime\dagger}\) is an extreme point of \(CPT(Y\otimes Y^{\prime},X\otimes X^{\prime})\). By lemma 2.8 we know that \(\varepsilon^{\dagger}\otimes\varepsilon^{\prime\dagger}=(\varepsilon\otimes\varepsilon^{\prime})^{\dagger}\), so \((\varepsilon\otimes\varepsilon^{\prime})^{\dagger}\) is extreme. 
Using again the duality between CPT and UCP maps we conclude that \(\varepsilon\otimes\varepsilon^{\prime}\) is an extreme point of \(UCP(X\otimes X^{\prime},Y\otimes Y^{\prime})\). ### The case of UCPT maps The tensor product of extreme UCPT maps may not be extreme. But it is always extreme if one of the two maps is over a Hilbert space of dimension 2. This is a consequence of the next proposition and the fact that, for dimension 2, extreme maps are unitary. **Proposition 3.3**.: _Let \(\varepsilon\in UCPT(X)\) and \(\mathcal{U}\in UCPT(Y)\) be a unitary map, that is, \(\mathcal{U}(A)=UAU^{\dagger}\) for some unitary operator \(U\in\operatorname{End}(Y)\). We assume that \(\dim Y>0\). Then \(\varepsilon\) is an extreme point of \(UCPT(X)\) if, and only if, \(\varepsilon\otimes\mathcal{U}\) is an extreme point of \(UCPT(X\otimes Y)\)._ Proof.: Let \((E_{k})_{k}\) be linearly independent operation elements for \(\varepsilon\). \(\mathcal{U}\) already has \((U)\) as a linearly independent family of operation elements, since there is only one operator. By lemmas 2.3 and 2.6, \(\varepsilon\otimes\mathcal{U}\) has linearly independent operation elements \((E_{k}\otimes U)_{k}\). By theorem 2.18, \(\varepsilon\otimes\mathcal{U}\) is an extreme point of \(UCPT(X\otimes Y)\) if, and only if, \(((E_{k}\otimes U)^{\dagger}(E_{l}\otimes U)\oplus(E_{l}\otimes U)(E_{k}\otimes U)^{\dagger})_{k,l}\) is linearly independent. We have that \[(E_{k}\otimes U)^{\dagger}(E_{l}\otimes U)\oplus(E_{l}\otimes U)(E_{k}\otimes U)^{\dagger} =(E_{k}^{\dagger}\otimes U^{\dagger})(E_{l}\otimes U)\oplus(E_{l}\otimes U)(E_{k}^{\dagger}\otimes U^{\dagger})\] \[=(E_{k}^{\dagger}E_{l}\otimes U^{\dagger}U)\oplus(E_{l}E_{k}^{\dagger}\otimes UU^{\dagger})\] \[=(E_{k}^{\dagger}E_{l}\otimes id_{Y})\oplus(E_{l}E_{k}^{\dagger}\otimes id_{Y}).\] By lemma 2.1, we can check whether \(((E_{k}^{\dagger}E_{l}\otimes id_{Y})\oplus(E_{l}E_{k}^{\dagger}\otimes id_{Y}))_{k,l}\) is linearly independent or not by analysing its Gram matrix. Let \(G\) be the Gram matrix of \((E_{k}^{\dagger}E_{l}\oplus E_{l}E_{k}^{\dagger})_{k,l}\) and \(G^{\prime}\) be the Gram matrix of \(((E_{k}^{\dagger}E_{l}\otimes id_{Y})\oplus(E_{l}E_{k}^{\dagger}\otimes id_{Y}))_{k,l}\). 
The matrix elements of \(G\) are \[G_{(k,l),(k^{\prime},l^{\prime})} =\langle E_{k}^{\dagger}E_{l}\oplus E_{l}E_{k}^{\dagger},E_{k^{\prime}}^{\dagger}E_{l^{\prime}}\oplus E_{l^{\prime}}E_{k^{\prime}}^{\dagger}\rangle\] \[=\langle E_{k}^{\dagger}E_{l},E_{k^{\prime}}^{\dagger}E_{l^{\prime}}\rangle+\langle E_{l}E_{k}^{\dagger},E_{l^{\prime}}E_{k^{\prime}}^{\dagger}\rangle.\] The matrix elements of \(G^{\prime}\) are \[G^{\prime}_{(k,l),(k^{\prime},l^{\prime})} =\langle(E_{k}^{\dagger}E_{l}\otimes id_{Y})\oplus(E_{l}E_{k}^{\dagger}\otimes id_{Y}),(E_{k^{\prime}}^{\dagger}E_{l^{\prime}}\otimes id_{Y})\oplus(E_{l^{\prime}}E_{k^{\prime}}^{\dagger}\otimes id_{Y})\rangle\] \[=\langle E_{k}^{\dagger}E_{l}\otimes id_{Y},E_{k^{\prime}}^{\dagger}E_{l^{\prime}}\otimes id_{Y}\rangle+\langle E_{l}E_{k}^{\dagger}\otimes id_{Y},E_{l^{\prime}}E_{k^{\prime}}^{\dagger}\otimes id_{Y}\rangle\] \[=\langle E_{k}^{\dagger}E_{l},E_{k^{\prime}}^{\dagger}E_{l^{\prime}}\rangle\langle id_{Y},id_{Y}\rangle+\langle E_{l}E_{k}^{\dagger},E_{l^{\prime}}E_{k^{\prime}}^{\dagger}\rangle\langle id_{Y},id_{Y}\rangle\] \[=\dim Y\left(\langle E_{k}^{\dagger}E_{l},E_{k^{\prime}}^{\dagger}E_{l^{\prime}}\rangle+\langle E_{l}E_{k}^{\dagger},E_{l^{\prime}}E_{k^{\prime}}^{\dagger}\rangle\right)\] \[=\dim Y\ G_{(k,l),(k^{\prime},l^{\prime})}.\] That is, \(G^{\prime}=\dim Y\ G\). Therefore \(G\) is invertible if, and only if, \(G^{\prime}\) is invertible. By lemma 2.1, this says that \((E_{k}^{\dagger}E_{l}\oplus E_{l}E_{k}^{\dagger})_{k,l}\) is linearly independent if, and only if, \(((E_{k}^{\dagger}E_{l}\otimes id_{Y})\oplus(E_{l}E_{k}^{\dagger}\otimes id_{Y}))_{k,l}\) is linearly independent. Then, by theorem 2.18, \(\varepsilon\) is an extreme point of \(UCPT(X)\) if, and only if, \(\varepsilon\otimes\mathcal{U}\) is an extreme point of \(UCPT(X\otimes Y)\). By theorem 4.23 of [21], for \(\dim X=2\), the extreme points of \(UCPT(X)\) are the unitary maps. Combining this result with proposition 3.3 we obtain the next theorem. **Theorem 3.4**.: _Let \(\varepsilon\) be an extreme point of \(UCPT(X)\) and \(\varepsilon^{\prime}\) be an extreme point of \(UCPT(Y)\). If \(\dim X=2\) or \(\dim Y=2\), then \(\varepsilon\otimes\varepsilon^{\prime}\) is an extreme point of \(UCPT(X\otimes Y)\)._ For higher dimensions the extremality may not be preserved. One reason for this can be seen if we analyze theorem 2.18. Given \(\varepsilon\in UCPT(X)\) with \(CR(\varepsilon)\) linearly independent operation elements \((E_{k})_{k}\), it is an extreme point of the UCPT maps if, and only if, \((E_{k}^{\dagger}E_{l}\oplus E_{l}E_{k}^{\dagger})_{k,l}\) is linearly independent. The sequence \((E_{k}^{\dagger}E_{l}\oplus E_{l}E_{k}^{\dagger})_{k,l}\) has \(CR(\varepsilon)^{2}\) elements. For it to be linearly independent, \(CR(\varepsilon)^{2}\) must be at most the dimension of \(\operatorname{End}(X)\oplus\operatorname{End}(X)\), which is \(2(\dim X)^{2}\). Therefore, if \(\varepsilon\) is extreme then \(CR(\varepsilon)\leq\sqrt{2}\dim X\). Actually, a slightly better upper bound is given in [1], which says that \(CR(\varepsilon)\leq\sqrt{2(\dim X)^{2}-1}\). If we take two extreme UCPT maps of high enough Choi ranks, their tensor product can have a Choi rank that exceeds the previous upper bound, so the tensor product can't be extreme. This is proven in the next theorem. 
**Theorem 3.5**.: _Let \(\varepsilon\) be an extreme point of \(UCPT(X)\) and \(\varepsilon^{\prime}\) be an extreme point of \(UCPT(Y)\). In particular, their Choi ranks satisfy \(CR(\varepsilon)\leq\sqrt{2}\dim X\) and \(CR(\varepsilon^{\prime})\leq\sqrt{2}\dim Y\). If they further satisfy that \(\sqrt[4]{2}\dim X<CR(\varepsilon)\) and \(\sqrt[4]{2}\dim Y<CR(\varepsilon^{\prime})\), then \(\sqrt{2}\dim(X\otimes Y)<CR(\varepsilon\otimes\varepsilon^{\prime})\), so \(\varepsilon\otimes\varepsilon^{\prime}\) isn't an extreme point of \(UCPT(X\otimes Y)\)._ Proof.: By corollary 2.7, we know that \(CR(\varepsilon\otimes\varepsilon^{\prime})=CR(\varepsilon)\)\(CR(\varepsilon^{\prime})\). If \(\sqrt[4]{2}\dim X\)\(<CR(\varepsilon)\) and \(\sqrt[4]{2}\dim Y<CR(\varepsilon^{\prime})\), multiplying both inequalities we get \(\sqrt[4]{2}\dim X\sqrt[4]{2}\dim Y<CR(\varepsilon)CR(\varepsilon^{\prime})\), so \(\sqrt{2}\dim(X\otimes Y)<CR(\varepsilon\otimes\varepsilon^{\prime})\). As seen previously, this implies that \(\varepsilon\otimes\varepsilon^{\prime}\) isn't an extreme point of \(UCPT(X\otimes Y)\). By theorem 3.5, to prove that the tensor product of extreme UCPT maps may not be extreme, it is sufficient to find extreme maps of high enough Choi rank. According to theorems 2.5 and 2.6 of [1], there is an extreme map \(\varepsilon_{3}\in UCPT(\mathbb{C}^{3})\) with \(CR(\varepsilon_{3})=4\) and an extreme map of \(\varepsilon_{4}\in UCPT(\mathbb{C}^{4})\) with \(CR(\varepsilon_{4})=5\). More precisely, the operation elements (in the standard basis) for \(\varepsilon_{3}\) are \[E_{0}=\frac{1}{2}\begin{pmatrix}1&0&0\\ 0&0&0\\ 0&0&0\end{pmatrix}, \tag{9}\] \[E_{1}=\frac{1}{2}\begin{pmatrix}0&0&0\\ 1&0&0\\ 0&\sqrt{2}&0\end{pmatrix}, \tag{10}\] \[E_{2}=\frac{1}{2}\begin{pmatrix}0&\sqrt{2}&0\\ 0&0&\sqrt{3}\\ 0&0&0\end{pmatrix}, \tag{11}\] \[E_{3}=\frac{1}{2}\begin{pmatrix}0&0&1\\ 0&0&0\\ \sqrt{2}&0&0\end{pmatrix}. \tag{12}\] The operation elements for \(\varepsilon_{4}\) are \[F_{0}=\frac{1}{2}\begin{pmatrix}0&0&0&0\\ 0&0&1&0\\ 1&0&0&0\\ 0&0&0&0\end{pmatrix}, \tag{13}\] \[F_{1}=\frac{1}{2}\begin{pmatrix}0&0&0&0\\ 0&0&0&0\\ 0&0&0&\sqrt{2}\\ 0&\sqrt{2}&0&0\end{pmatrix}, \tag{14}\] \[F_{2}=\frac{1}{2}\begin{pmatrix}0&0&\sqrt{3}&0\\ 0&0&0&0\\ 0&0&0&0\\ \sqrt{2}&0&0&0\end{pmatrix}, \tag{15}\] \[F_{3}=\frac{1}{2}\begin{pmatrix}0&1&0&0\\ 0&0&0&\sqrt{2}\\ 0&0&0&0\\ 0&0&0&0\end{pmatrix}, \tag{16}\] \[F_{4}=\frac{1}{2}\begin{pmatrix}0&0&0&0\\ 1&0&0&0\\ 0&1&0&0\\ 0&0&0&0\end{pmatrix}. \tag{17}\] Notice that the above matrices are the complex conjugate of the ones defined in [1]. This occurs because in [1] a UCPT map is written as \(\varepsilon(A)=\sum_{k}E_{k}^{\dagger}AE_{k}\), while in this article the convention is to write \(\varepsilon(A)=\sum_{k}E_{k}AE_{k}^{\dagger}\), which is the same convention of [10, 18]. Both \(\varepsilon_{3}\) and \(\varepsilon_{4}\) have maximal Choi ranks. For both \(n=3\) and \(n=4\) we have \(CR(\varepsilon_{n})>\sqrt[4]{2}\,n\). Then, by theorem 3.5, we know that \(\varepsilon_{n}\otimes\varepsilon_{m}\) is not an extreme UCPT map, for \(n,m\in\{3,4\}\). It remains to show if we can apply theorem 3.5 to obtain counterexamples for even higher dimensions.
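As a concrete check, the operation elements in equations (9)-(12) can be fed to the criteria of section 2 numerically. The Python/NumPy sketch below is purely illustrative and not part of the original text, and the printed expectations assume the matrices above were transcribed correctly: it verifies that \(\varepsilon_{3}\) is unital and trace preserving, tests the extremality criterion of theorem 2.18 by a rank computation, and checks the Choi-rank condition \(\sqrt[4]{2}\dim X<CR(\varepsilon_{3})\) used in theorem 3.5. The same check applies verbatim to \(\varepsilon_{4}\) with the matrices in equations (13)-(17).

```python
import numpy as np

s2, s3 = np.sqrt(2.0), np.sqrt(3.0)
# operation elements of eps_3, equations (9)-(12)
E = [np.array(M) / 2.0 for M in (
    [[1, 0, 0], [0, 0, 0], [0, 0, 0]],
    [[0, 0, 0], [1, 0, 0], [0, s2, 0]],
    [[0, s2, 0], [0, 0, s3], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [s2, 0, 0]],
)]
Id = np.eye(3)

print(np.allclose(sum(A.conj().T @ A for A in E), Id))   # trace preserving: True
print(np.allclose(sum(A @ A.conj().T for A in E), Id))   # unital: True

# theorem 2.18: extreme in UCPT iff (E_k^dagger E_l (+) E_l E_k^dagger)_{k,l} is linearly independent
fam = np.array([np.concatenate(((A.conj().T @ B).reshape(-1),
                                (B @ A.conj().T).reshape(-1)))
                for A in E for B in E])
print(np.linalg.matrix_rank(fam) == len(fam))            # expected True, i.e. eps_3 is extreme

# Choi rank and the hypothesis of theorem 3.5
cr = np.linalg.matrix_rank(np.array([A.reshape(-1) for A in E]))
print(cr, cr > 2.0 ** 0.25 * 3)   # 4 True, so eps_3 (x) eps_3 cannot be an extreme UCPT map
```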
2310.01745
A Volumetric Approach to Monge's Optimal Transport on Surfaces
We propose a volumetric formulation for computing the Optimal Transport problem defined on surfaces in $\mathbb{R}^3$, found in disciplines like optics, computer graphics, and computational methodologies. Instead of directly tackling the original problem on the surface, we define a new Optimal Transport problem on a thin tubular region, $T_{\epsilon}$, adjacent to the surface. This extension offers enhanced flexibility and simplicity for numerical discretization on Cartesian grids. The Optimal Transport mapping and potential function computed on $T_{\epsilon}$ are consistent with the original problem on surfaces. We demonstrate that, with the proposed volumetric approach, it is possible to use simple and straightforward numerical methods to solve Optimal Transport for $\Gamma = \mathbb{S}^2$ and the $2$-torus.
Richard Tsai, Axel G. R. Turnquist
2023-10-03T02:13:24Z
http://arxiv.org/abs/2310.01745v3
# A volumetric approach to Monge's optimal transport on surfaces ###### Abstract. We propose a volumetric formulation for computing the Optimal Transport problem defined on surfaces in \(\mathbb{R}^{3}\), found in disciplines like optics, computer graphics, and computational methodologies. Instead of directly tackling the original problem on the surface, we define a new Optimal Transport problem on a thin tubular region, \(T_{\epsilon}\), adjacent to the surface. This extension offers enhanced flexibility and simplicity for numerical discretization on Cartesian grids. The Optimal Transport mapping and potential function computed on \(T_{\epsilon}\) are consistent with the original problem on surfaces. We demonstrate that, with the proposed volumetric approach, it is possible to use simple and straightforward numerical methods to solve Optimal Transport for \(\Gamma=\mathbb{S}^{2}\). ## 1. Introduction We consider computational Optimal Transport problems on smooth hypersurfaces in \(\mathbb{R}^{3}\), with the metric induced by the Euclidean distance in \(\mathbb{R}^{3}\). Our objective is to derive an Optimal Transport problem in a small neighborhood around the hypersurface, one that can be discretized and computed easily. In recent years, there has been much interest in finding solving Optimal Transport problems on Riemannian manifolds, mostly motivated from applications. These applications can roughly be divided into two categories, the first where the Optimal Transport problem is derived from first principles, and the second where Optimal Transport is used in an _ad hoc_ way as a powerful tool, usually in geometric and statistical analysis. In freeform optics, a typical goal is to solve for the shape of reflectors or lenses that take a source light intensity to a desired target intensity pattern. The Optimal Transport partial differential equation (PDE) arises as a consequence of Snell's law, the optical setup, and conservation of light intensity. These formulations are inverse problems in which the potential function of Optimal Transport is directly related to the shapes of lenses or reflectors that redirect source light intensities to required target intensities, see [40, 29, 30], which include examples involving Optimal Transport PDE and other example whose formulations are Generated Jacobian Equations. In Section 4, we will perform one example computation for the cost function arising in the reflector antenna problem, see [36] and [37], which is perhaps the most well-known example of such freeform optics problems with an Optimal Transport formulation. In the adaptive mesh community, Optimal Transport has been used as a convenient tool for finding a mapping that redistributes mesh node density to a desired target density. The first such methods in the moving mesh methods community were proposed in [38]. It is also used for the more general problem of diffeomorphic density matching on the sphere, where other approaches such as Optimal Information Transport can be used, see [2]. In the statistics community, Optimal Transport has been extensively employed in computing the distance, or interpolating between, probability distributions on manifolds, and also in sampling, since solving the Optimal Transport problem allows one to compute a pushforward mapping for measures. 
In these situations, using Optimal Transport is not the only available computational tool, but has shown to be useful in these communities for its regularity and metric properties, even when source and target probability measures are not smooth or bounded away from zero. Many methods in the last ten or so years have been proposed for computing Optimal Transport related quantities on manifolds, such as the Wasserstein distance. Some of the computations were motivated by applications in computer graphics. In [34], for example, the authors computed the Wasserstein-1 distance on manifolds in order to obtain the geodesic distance on complicated shapes, using a finite-element discretization. In [16], the authors used the Benamou-Brenier formulation of Optimal Transport, a continuous "fluid-mechanics" formulation (as opposed to the static formulation presented here), to compute the Optimal Transport distance, interpolation, mapping, and potential functions on a triangulated approximation of the manifold. However, straightforward implementation using the Brenier-Benamou formulation may suffer from slow convergence. We now briefly review methods for solving Optimal Transport problems on surfaces. One common approach to developing numerical methods for surface problems starts with triangularization of the manifold. Recently, in the paper [41], mean-field games were discretized and solved on manifolds using triangularization, of which the Benamou-Brenier formulation of Optimal Transport (using the squared geodesic cost function) is a subcase. One of the great challenges of applying traditional finite element methods to Optimal Transport PDE is the fact that they are not in divergence form. Thus, the analysis becomes very challenging, but, nevertheless, considerable work has been done for this in the Euclidean case with the Monge-Ampere PDE, see, for example, the work done in [24]. It is also not simple to design higher-order schemes, in contrast to simple finite-differences on a Cartesian grid where it is very simple to design high-order discretizations. There is another general approach to approximating PDE on manifolds, which is done by locally approximating both the manifold and the function to be computed via polynomials using a moving least squares method, originally proposed in [17]. These polynomial approximations are computed in a computational neighborhood. Thus far, these methods have not been applied to solving Optimal Transport PDE on the sphere. The closest point method was originally proposed in [31] (see also [21]), where the solution of certain PDE are extended to be constant via a closest point map in a small neighborhood of the manifold and all derivatives are then computed using finite differences on a Cartesian grid. However, for our purposes, using the closest point operator for the determinant of a Hessian is rather complicated. Furthermore, simply extending the source and target density functions will not necessarily lead to an Optimal Transport problem on the extended domain and extending the cost function via a closest point extension without introducing a penalty in the normal directions will lead to a degenerate PDE. We have decided instead to extend the source and target density functions and compensating using Jacobian term so they remain density functions and extend the cost function with a penalty term. As we will show in Section 3, these choices will naturally lead to a solution which is constant with respect to the closest point map. 
Also, we will find in Section 3 that our re-formulation leads to a natural new Optimal Transport problem in a tubular neighborhood with natural zero-Neumann boundary conditions which may be of independent interest for the Optimal Transport community. In this article, instead of solving directly the Monge problem of Optimal Transport defined on \(\Gamma\), we propose solving the equivalent Optimal Transport problem on a narrowband \(T_{\epsilon}\) around \(\Gamma\) for a special class of probability measures in \(T_{\epsilon}\) that is constructed from probability measures on \(\Gamma\), and with a class of cost functions that is derived from cost functions on \(\Gamma\). Similar to [22], our approach in this paper is to reformulate the variational formulation of the problem on the manifold, which is done in Section 3. It relies on the fact that the extension presented in Section 3 is itself an Optimal Transport problem, and so the usual techniques for the PDE formulation of Optimal Transport are used to formulate the PDE on the tubular neighborhood \(T_{\epsilon}\). We will demonstrate, however, that solving this new Optimal Transport problem in \(T_{\epsilon}\) will not require that we take thickness of the narrowband to zero. This is achieved by carefully setting up the new Optimal Transport problem with a cost function that penalizes mass transport in the normal direction to the manifold \(\Gamma\). Because the method in this manuscript allows for great flexibility in the choice of cost function, it can also be employed to solve the reflector antenna problem, which involves finding the shape of a reflector given a source directional light intensity and a desired target directional light intensity. Some methods, such as those developed for the reflector antenna and rely on a direct discretization of the Optimal Transport PDE, include [27, 28, 6, 9, 39, 3, 30, 13]. However, these methods (with the exception of [13]) have been designed solely for the reflector antenna problem; that is, they are restricted to the cost function \(c(\mathbf{x},\mathbf{y})=-\log\|\mathbf{x}-\mathbf{y}\|\). The PDE method proposed in this manuscript can be contrasted with the wide-stencil monotone scheme, that has convergence guarantees, developed in [14, 12], where discretization of the second-directional derivatives was performed on local tangent planes. The Cartesian grid proposed here is much simpler, which makes the discretization of the derivatives much simpler. The greatest benefits of the current proposed scheme over monotone methods are that a wider variety of difficult computational examples are possible to compute in a short amount of time with accelerated convergence techniques, see Algorithm 1 which shows that the current implementation uses an accelerated single-step Jacobi method. A possible slowdown that might be expected from extending the discretization to a third dimension is counteracted by the efficiency of the discretization (which does not require computing a large number of derivatives in a search radius, as is done in the monotone scheme in [12]) and the good performance of the accelerated solvers for more difficult computational examples that allow for much larger time step sizes in practice in the solvers than are typically used in monotone schemes. It has been shown in [11] how to construct wide-stencil monotone finite-difference discretizations, for regions in \(\mathbb{R}^{3}\), of the Optimal Transport PDE on \(T_{\epsilon}\). 
However, we believe that the convergence guarantees will be outweighed by the computational challenges of requiring a large number of discretization points to resolve \(T_{\epsilon}\), especially for relatively small \(\epsilon\). Also, in [11], it was shown, for technical reasons, that discretization points had to remain a certain distance away from the boundary. We argue that in a region like \(T_{\epsilon}\), where most points are close to the boundary, this is too restrictive a choice. The work in [11] was, however, for more general Optimal Transport problems on regions in \(\mathbb{R}^{3}\), and it may be possible to construct monotone discretizations in \(T_{\epsilon}\) by exploiting the symmetry of our Optimal Transport problem in \(T_{\epsilon}\), due to the fact that the solution is constant in the normal direction, but we defer a detailed discussion of this point to future work. The method proposed in this manuscript is a direct discretization of the full Optimal Transport PDE using a grid that is generated from a Cartesian cube of evenly spaced points, with computational stencils formed from the nearest surrounding points. This can be contrasted with the wide-stencil schemes in [12] and the geometric methods in [38], one of the earliest methods proposed for solving the Optimal Transport problem on the sphere with squared geodesic cost. Although the computational methods in [38] perform well, many properties of the discretizations were informed by trial and error. In much of the applied Optimal Transport literature, the fastest method known for computing an approximation of the Optimal Transport distance is achieved by entropically regularizing the Optimal Transport problem and then using the Sinkhorn algorithm, originally proposed in [5]. If one wishes to compute an approximation of the distance between two probability measures, then Sinkhorn is the current state of the art. However, it is unclear from the transference plan one obtains from the entropically regularized problem how to extract the Optimal Transport mapping and the potential function. Nevertheless, one can entropically regularize our extended Optimal Transport problem on \(T_{\epsilon}\) and run the Sinkhorn algorithm to efficiently compute an approximation of the Wasserstein distance between two probability distributions. For our proposed extension, the Wasserstein distance (between two probability distributions) for the Optimal Transport problem on \(\Gamma\) will be equal to the Wasserstein distance between the extended probability distributions on \(T_{\epsilon}\). Moreover, using the divergence as a stopping criterion defined in [5], after a brief investigation, we found that the Sinkhorn algorithm requires approximately the same number of iterations to reach a given tolerance (of the divergence) for the Optimal Transport problem on \(\Gamma\) as for the Optimal Transport problem on \(T_{\epsilon}\). In Section 2, we review the relevant background for the PDE formulation of Optimal Transport on manifolds. We then introduce the Monge problem of Optimal Transport and characterize the minimizer as a mass-preserving map that arises from a \(c\)-convex potential function. In Section 3, we set up the new Optimal Transport problem carefully and prove how the map of the original Optimal Transport problem on \(\Gamma\) can be extracted from the map of the new Optimal Transport problem on \(T_{\epsilon}\). 
The PDE formulation of the Optimal Transport problem on \(T_{\epsilon}\) also naturally comes with Neumann boundary conditions on \(T_{\epsilon}\). In Section 4, we detail how we construct a discretization of the Optimal Transport PDE on \(T_{\epsilon}\) by using a Cartesian grid, and then show how we solve the resulting system of nonlinear equations. Then, we run sample computations for different cost functions and run some tests where we vary \(\epsilon\) and our discretization parameter \(h\). In Section 5, we recap how the reformulation has allowed us to design a simple discretization of the Optimal Transport PDE.

## 2. Background

We consider probability measures \(\mu\) and \(\nu\) that admit densities, i.e. \(d\mu(\mathbf{x})=f(\mathbf{x})dS(\mathbf{x})\) and \(d\nu(\mathbf{x})=g(\mathbf{x})dS(\mathbf{x})\). Furthermore, we consider maps \(\boldsymbol{\theta}:\Gamma\to\Gamma\) that "transport" the mass of \(\mu\) onto \(\nu\); in other words, \(\nu(B)=\mu(\boldsymbol{\theta}^{-1}(B))\) for all Borel subsets \(B\) of \(\Gamma\). We say that \(\boldsymbol{\theta}\) pushes forward the distribution \(\mu\) onto \(\nu\), and denote this action by \(\boldsymbol{\theta}_{\#}\mu=\nu\). We consider the Monge problem of Optimal Transport on compact, connected, orientable surfaces \(\Gamma\) embedded in \(\mathbb{R}^{3}\), whose metric is the induced metric from the ambient Euclidean space. The Monge problem on \(\Gamma\) is to find a map \(\mathbf{m}\) satisfying \(\mathbf{m}_{\#}\mu=\nu\) that also minimizes the cost functional \[\mathcal{C}(\boldsymbol{\theta}):=\int_{\Gamma}c(\mathbf{x},\boldsymbol{\theta}(\mathbf{x}))f(\mathbf{x})dS(\mathbf{x}), \tag{1}\] for a given cost function \(c:\Gamma\times\Gamma\to\mathbb{R}\). The existence of a minimizer is guaranteed when the probability measures admit densities on \(\Gamma\) and \(c\) is lower-semicontinuous. See [32] for a deeper discussion of the sufficient conditions for the existence of minimizers. For the remainder of the paper, we will only consider smooth density functions bounded away from zero. We define the following space of density functions: \[\Theta_{\Gamma}:=\left\{\rho(x)\in C^{\infty}:\int_{\Gamma}\rho dS=1,\ \rho(x)>0,\forall x\in\Gamma\right\}. \tag{2}\] Note that the space \(\Theta_{\Gamma}\) depends on the underlying set \(\Gamma\). Given \(f,g\in\Theta_{\Gamma}\) and some technical conditions on \(c\), known as the MTW conditions [20], we are in fact able to find a unique smooth mapping \(\mathbf{m}\) that solves Equation (3). Under these conditions, we will write the Optimal Transport problem as \[\mathbf{m}=\operatorname{argmin}_{\boldsymbol{\theta}_{\#}\mu=\nu}\mathcal{C}(\boldsymbol{\theta}). \tag{3}\] We will refer to such an \(\mathbf{m}\) as the solution of the Optimal Transport problem as well as the Optimal Transport mapping. This is an optimization problem subject to a nonlinear constraint. The uniqueness of the minimizer \(\mathbf{m}\) of Equation (3) can be characterized through the problem's potential function being _c-convex_, where \(c\) is the cost function of the problem.

**Definition 1**.: _The \(c\)-transform of a function \(u:\Gamma\to\mathbb{R}\), which is denoted by \(u^{c}\), is defined as:_ \[u^{c}(\mathbf{y})=\sup_{\mathbf{x}\in\Gamma}\left(-c(\mathbf{x},\mathbf{y})-u(\mathbf{x})\right).
\tag{4}\]

**Definition 2**.: _A function \(u\) is \(c\)-convex if at each point \(\mathbf{x}\in\Gamma\), there exists a \(\mathbf{y}\in\Gamma\) and a value \(u^{c}(\mathbf{y})\) such that_ \[\begin{cases}-u^{c}(\mathbf{y})-c(\mathbf{x},\mathbf{y})=u(\mathbf{x}),\\ -u^{c}(\mathbf{y})-c(\mathbf{x}^{\prime},\mathbf{y})\leq u(\mathbf{x}^{\prime}),\ \forall\mathbf{x}^{\prime}\in\Gamma,\end{cases} \tag{5}\] _where \(u^{c}(\mathbf{y})\) is the \(c\)-transform of \(u\)._

Let \(u\in C^{1}(\Gamma;\mathbb{R})\) be \(c\)-convex. We implicitly define a mapping \(\tilde{\mathbf{m}}\) as the solution to the equation: \[\nabla_{\Gamma}u(\mathbf{x})=-\nabla_{\mathbf{x},\Gamma}c(\mathbf{x},\tilde{\mathbf{m}}),\ \forall\mathbf{x}\in\Gamma, \tag{6}\] where the gradients \(\nabla_{\Gamma}\) are taken with respect to the metric on \(\Gamma\), see [18]. If such a mapping \(\tilde{\mathbf{m}}\) satisfies \(\tilde{\mathbf{m}}_{\#}\mu=\nu\), then it is exactly the unique solution of the Optimal Transport problem in Equation (3). The preceding discussion is summarized in Theorem 2.7 from [18]:

**Theorem 3**.: _The Monge problem in Equation (3), with smooth cost function \(c(\mathbf{x},\mathbf{y})\) satisfying the MTW conditions (see [20]) and source and target probability measures \(\mu\) and \(\nu\), respectively, with density functions \(f,g\in\Theta_{\Gamma}\), has a solution which is a mapping \(\mathbf{m}\) if and only if \(\mathbf{m}\) satisfies both \(\mathbf{m}_{\#}\mu=\nu\) and Equation (6) for a \(c\)-convex function \(u\); such a mapping is unique._

The Monge problem of Optimal Transport on \(\Gamma\) has a PDE formulation if the cost function and the source and target densities satisfy some additional conditions. The usual additional assumptions on the cost function are called the Ma-Trudinger-Wang (MTW) conditions (see [20] for the original conditions in subsets of Euclidean space). To guarantee \(C^{\infty}\) smoothness of the Optimal Transport mapping and of the potential function, the source and target density functions must in addition be \(C^{\infty}\) and bounded away from zero. If these conditions are met, then the potential function \(u\) and the Optimal Transport mapping \(\mathbf{m}\) are smooth classical solutions of the following equations: \[\det(D_{\Gamma}^{2}u(\mathbf{x})+D_{\mathbf{x}\mathbf{x},\Gamma}^{2}c(\mathbf{x},\mathbf{y}))=\left|\det D_{\mathbf{x}\mathbf{y},\Gamma}^{2}c(\mathbf{x},\mathbf{y})\right|f(\mathbf{x})/g(\mathbf{m}(\mathbf{x})), \tag{7}\] \[\nabla_{\Gamma}u(\mathbf{x})=-\nabla_{\mathbf{x},\Gamma}c(\mathbf{x},\mathbf{y}), \tag{8}\] for \(\mathbf{y}=\mathbf{m}(\mathbf{x})\), where \(\mathbf{m}(\mathbf{x})\) is the minimizer in Equation (3). Here the derivatives \(D_{\Gamma}\) are taken on the surface with respect to the induced metric on \(\Gamma\). We will refer to (7)–(8) as the Optimal Transport PDE. We point out that the curvature of the manifold \(\Gamma\) can potentially hinder the smoothness of the potential function \(u\) and of the Optimal Transport mapping \(\mathbf{m}\), even when employing the squared geodesic cost function \(c(\mathbf{x},\mathbf{y})=\frac{1}{2}d_{M}(\mathbf{x},\mathbf{y})^{2}\). For more information, we refer the reader to [8] for some concrete examples where the geometry of \(\Gamma\) prevents the MTW conditions from holding even for the squared geodesic cost function.
Nevertheless, for the unit sphere \(\mathbb{S}^{2}\subset\mathbb{R}^{3}\), smooth and strictly positive source and target densities with the squared geodesic cost function do lead to a smooth potential function \(u\), see [19]. In this paper, we solve the PDE (7)–(8) on the unit sphere \(\mathbb{S}^{2}\) numerically. We achieve this by extending the Optimal Transport problem on \(\Gamma\) to a three-dimensional Optimal Transport problem within a tubular neighborhood (narrowband) \(T_{\epsilon}\) of \(\Gamma\) and solving a corresponding Optimal Transport PDE. We perform this extension of the problem because the discretization of the PDE is much simpler in a tubular neighborhood using a Cartesian grid than a discretization in local tangent planes, as was done in [12].

## 3. Volumetric Extension of the Optimal Transport Problem

In this section, we define a special Optimal Transport problem on a tubular neighborhood of \(\Gamma\) which is equivalent, in a suitable sense, to the given Optimal Transport problem defined on a closed smooth surface \(\Gamma\subset\mathbb{R}^{3}\). The Optimal Transport mapping from this extended Optimal Transport problem will be shown to have a natural connection with the Optimal Transport mapping on \(\Gamma\). Working in a tubular neighborhood of the surface allows for flexible meshing and use of standard discretization methods for the differential and integral operators in the extended Euclidean domain. The proposed approach follows the strategy developed in [4] and [22]. We formulate the Optimal Transport mapping for the Monge Optimal Transport problem on \(T_{\epsilon}\) as \[\mathbf{m}_{\epsilon}=\operatorname{argmin}_{\boldsymbol{\xi}_{\#}\mu_{\epsilon}=\nu_{\epsilon}}\int_{T_{\epsilon}}c_{\epsilon}\left(\mathbf{z}^{\prime},\boldsymbol{\xi}(\mathbf{z}^{\prime})\right)d\mu_{\epsilon}(\mathbf{z}^{\prime}), \tag{9}\] for a special class of cost functions \(c_{\epsilon}:T_{\epsilon}\times T_{\epsilon}\mapsto\mathbb{R}^{+}\) and a special class of probability measures \(\mu_{\epsilon}\) and \(\nu_{\epsilon}\) with density functions \(f_{\epsilon}\) and \(g_{\epsilon}\). Given source and target densities \(f,g\in\Theta_{\Gamma}\) and a cost function \(c\) on \(\Gamma\) from the original Optimal Transport problem, we will present a particular way of extending \(f,g\) and \(c\) to \(f_{\epsilon},g_{\epsilon}\in\Theta_{T_{\epsilon}}\) and \(c_{\epsilon}\) in Section 3.2 and Section 3.3, respectively. In Section 3.4, we will show that the Optimal Transport problem in Equation (9) is "equivalent" to Equation (3) in a specific sense which will be made clear in Theorem 4. With the judiciously extended \(f_{\epsilon}\), \(g_{\epsilon}\) and \(c_{\epsilon}\), we will solve numerically (up to a constant) the following PDE for the pair \((u_{\epsilon},\mathbf{m}_{\epsilon})\): \[\det\left(D^{2}u_{\epsilon}(\mathbf{z})+D^{2}_{\mathbf{z}\mathbf{z}}c_{\epsilon}(\mathbf{z},\boldsymbol{\xi})\right)=\left|\det D^{2}_{\mathbf{z}\boldsymbol{\xi}}c_{\epsilon}(\mathbf{z},\boldsymbol{\xi})\right|f_{\epsilon}(\mathbf{z})/g_{\epsilon}(\mathbf{m}_{\epsilon}(\mathbf{z})), \tag{10}\] \[\nabla u_{\epsilon}(\mathbf{z})=-\nabla_{\mathbf{z}}c_{\epsilon}(\mathbf{z},\boldsymbol{\xi}),\text{ for }\boldsymbol{\xi}=\mathbf{m}_{\epsilon}(\mathbf{z}),\ \mathbf{z}\in T_{\epsilon}, \tag{11}\] with the Neumann boundary condition \[\frac{\partial u_{\epsilon}(\mathbf{z})}{\partial\mathbf{n}}=0,\quad\mathbf{z}\in\partial T_{\epsilon}.
\tag{12}\] We remark that all Optimal Transport problems posed on bounded subsets of Euclidean space come with a natural global condition, known as the second boundary value problem; see [35]. The second boundary condition is a global constraint that can be formulated as a global Neumann-type condition (see [10]), which can be shown to reduce to Equation (12) on the boundary.

### The general setup

Let \(\mathcal{C}_{\Gamma}\) denote the set of points in \(\mathbb{R}^{3}\) which are equidistant to at least two distinct points on \(\Gamma\). The reach of \(\Gamma\) is defined as \[\tau_{\Gamma}:=\inf_{\mathbf{x}\in\Gamma,\,\mathbf{z}\in\mathcal{C}_{\Gamma}}||\mathbf{x}-\mathbf{z}||. \tag{13}\] Note that by definition, \(\tau_{\Gamma}\leq 1/\kappa\), where \(\kappa\) is the largest absolute value attained by the eigenvalues of the second fundamental form over all points in \(\Gamma\). Furthermore, if \(\Gamma\) is a \(C^{2}\) surface in \(\mathbb{R}^{3}\), its reach is bounded away from \(0\), see [7]. By definition, the closest point mapping to \(\Gamma\) \[\mathbf{z}^{*}\equiv P_{\Gamma}(\mathbf{z})=\operatorname{argmin}_{\mathbf{x}\in\Gamma}||\mathbf{z}-\mathbf{x}||, \tag{14}\] is well-defined for any point whose (unsigned) distance to \(\Gamma\) is smaller than \(\tau_{\Gamma}\). With a predetermined orientation, we define the signed distance function to \(\Gamma\): \[y=\phi_{\Gamma}(\mathbf{z}):=\operatorname{sgn}(\mathbf{z})\min_{\mathbf{x}\in\Gamma}||\mathbf{z}-\mathbf{x}||, \tag{15}\] where sgn corresponds to the orientation. Typically, one chooses \(\text{sgn}(\mathbf{z})<0\) for \(\mathbf{z}\) in the bounded region enclosed by \(\Gamma\). We will denote the corresponding normal vector at \(\mathbf{x}\in\Gamma\) by \(\hat{\mathbf{n}}_{\mathbf{x}}.\) We notice that \(\phi_{\Gamma}(\mathbf{z})=\hat{\mathbf{n}}_{P_{\Gamma}\mathbf{z}}\cdot(\mathbf{z}-P_{\Gamma}\mathbf{z}).\) We denote the \(y\)-level set of \(\phi_{\Gamma}\) by \(\Gamma_{y}\); i.e., \[\Gamma_{y}=\left\{\mathbf{z}\in T_{\epsilon}:\phi_{\Gamma}(\mathbf{z})=y\right\}.\] Hence, we will work with the tubular neighborhood \(T_{\epsilon}\subset\mathbb{R}^{3}\) of \(\Gamma\): \[T_{\epsilon}:=\left\{\mathbf{z}\in\mathbb{R}^{3}:\|\mathbf{z}-\mathbf{p}\|<\epsilon<\tau_{\Gamma},\mathbf{p}\in\Gamma\right\}. \tag{16}\] See Figure 1 for a schematic depiction of this volumetric extension.

### Extension of Surface Density Functions to \(T_{\epsilon}\)

Let \(f\in\Theta_{\Gamma}\). We can rewrite the integration of \(f\) on \(\Gamma\) as one on \(\Gamma_{y}\) for any \(y\in(-\epsilon,\epsilon)\) as follows: \[\int_{\Gamma}f(\mathbf{x})dS=\int_{\Gamma_{y}}f(P_{\Gamma}\mathbf{z}^{\prime})J(\mathbf{z}^{\prime})dS, \tag{17}\] where \(J(\mathbf{z}^{\prime})=\left[(1-\kappa_{1}y)(1-\kappa_{2}y)\right]^{-1}\), with \(\kappa_{1},\kappa_{2}\) the (appropriately signed) principal curvatures of \(\Gamma\) at \(P_{\Gamma}\mathbf{z}^{\prime}\), accounts for the change of surface measure between \(\Gamma\) and \(\Gamma_{y}\) in the integrations. See e.g. [15]. Furthermore, by the coarea formula, \[1=\int_{\Gamma}f(\mathbf{x})dS=\frac{1}{2\epsilon}\int_{-\epsilon}^{\epsilon}\int_{\Gamma_{y}}f(P_{\Gamma}\mathbf{z}^{\prime})J(\mathbf{z}^{\prime})dSdy=\frac{1}{2\epsilon}\int_{T_{\epsilon}}f(P_{\Gamma}\mathbf{z})J(\mathbf{z})d\mathbf{z}.\] Therefore, our extended Optimal Transport problem is formulated with the class of density functions defined below: \[\Theta_{T_{\epsilon}}:=\left\{\frac{1}{2\epsilon}\rho(P_{\Gamma}\mathbf{z})J(\mathbf{z}):\rho\in\Theta_{\Gamma}\right\}. \tag{18}\] Note that any density in \(\Theta_{T_{\epsilon}}\) is strictly positive and smooth (the Jacobian \(J\) is smooth and strictly positive since we stay within the reach of \(\Gamma\)).
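As a concrete illustration of this construction, the following is a minimal sketch, for the unit sphere, of the closest-point projection, the signed distance (negative inside the sphere, following the convention above), and the density extension of Equation (18), for which \(J=1/r^{2}\) as computed later in Section 3.5; the test surface density and the parameter values are assumptions chosen only for the example.

```python
# Minimal sketch (unit sphere case): closest-point projection, signed
# distance, and the density extension of Equation (18).  The test density
# and parameter values are illustrative assumptions.
import numpy as np

def closest_point(z):
    """P_Gamma(z) = z / ||z|| for Gamma = S^2 (z away from the origin)."""
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

def signed_distance(z):
    """phi_Gamma(z) for the unit sphere (negative inside, positive outside)."""
    return np.linalg.norm(z, axis=-1) - 1.0

def extend_density(f_on_sphere, z, eps):
    """f_eps(z) = f(P_Gamma z) * J(z) / (2 eps), with J = 1/r^2 on the sphere."""
    r = np.linalg.norm(z, axis=-1)
    return f_on_sphere(closest_point(z)) / (2.0 * eps * r ** 2)

# Example surface density: smooth, bounded away from zero, and normalized
# (the linear term integrates to zero over the sphere).
def f_surface(x):
    return (1.0 + 0.5 * x[..., 2]) / (4.0 * np.pi)

eps = 0.2
z = np.array([[0.0, 0.0, 1.1], [0.3, 0.0, -0.9]])   # two points inside T_eps
print(closest_point(z))
print(signed_distance(z))
print(extend_density(f_surface, z, eps))
```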
Figure 1. New Optimal Transport problems \(\text{OT}_{\epsilon}\) will be defined on the tubular neighborhood \(T_{\epsilon}\) of \(\Gamma\), whose solutions yield the solutions of the Optimal Transport problems (OTs) on \(\Gamma\).

### Extension of Cost Function to \(T_{\epsilon}\)

Let \(c\) be the cost function defined on \(\Gamma\). We define the extended cost function for any two points in \(T_{\epsilon}\) by adding a penalty on the difference of their signed distances to \(\Gamma\): \[c_{\epsilon}(\mathbf{z}_{1},\mathbf{z}_{2})=c(P_{\Gamma}\mathbf{z}_{1},P_{\Gamma}\mathbf{z}_{2})+\frac{\sigma}{2}(\phi_{\Gamma}(\mathbf{z}_{1})-\phi_{\Gamma}(\mathbf{z}_{2}))^{2}, \tag{19}\] for \(\sigma>0\). We will see that the particular choice of \(\sigma\) does not affect the analysis.

### Equivalence of the Two Optimal Transport Problems

In this section, we establish that the Optimal Transport problem defined via our extensions of the density functions and the cost function in Sections 3.2 and 3.3 leads to an Optimal Transport mapping \(\mathbf{m}_{\epsilon}\) that moves mass only along each level set of the distance function to \(\Gamma\). Moreover, \(\mathbf{m}_{\epsilon}\) can be used to find \(\mathbf{m}\), the mapping from the Optimal Transport problem on \(\Gamma\).

**Theorem 4**.: _The solution, \(\mathbf{m}_{\epsilon}\), to the new Optimal Transport problem presented in Equation (9), defined with the densities in \(\Theta_{T_{\epsilon}}\) and the cost function in Equation (19), satisfies_ \[P_{\Gamma}\mathbf{m}_{\epsilon}(\mathbf{z})=\mathbf{m}(P_{\Gamma}\mathbf{z}),\ \forall\mathbf{z}\in T_{\epsilon}, \tag{20}\] _and_ \[\phi_{\Gamma}(\mathbf{z})=\phi_{\Gamma}(\mathbf{m}_{\epsilon}(\mathbf{z})),\ \forall\mathbf{z}\in T_{\epsilon}. \tag{21}\]

This theorem implies that the Optimal Transport cost for the Optimal Transport problem on \(\Gamma\) is equal to the Optimal Transport cost for the extended Optimal Transport problem on \(T_{\epsilon}\). Thus, the Wasserstein distance on \(\Gamma\), for example, can be computed via the extended Optimal Transport problem on \(T_{\epsilon}\).

**Corollary 5**.: _We have_ \[\min_{\boldsymbol{\xi}_{\#}\mu_{\epsilon}=\nu_{\epsilon}}\int_{T_{\epsilon}}c_{\epsilon}(\mathbf{z},\boldsymbol{\xi}(\mathbf{z}))f_{\epsilon}(\mathbf{z})d\mathbf{z}=\min_{\boldsymbol{\theta}_{\#}\mu=\nu}\int_{\Gamma}c(\mathbf{x},\boldsymbol{\theta}(\mathbf{x}))f(\mathbf{x})dS(\mathbf{x}).
\tag{22}\]

Proof.: \[\min_{\boldsymbol{\xi}_{\#}\mu_{\epsilon}=\nu_{\epsilon}}\int_{T_{\epsilon}}c_{\epsilon}(\mathbf{z},\boldsymbol{\xi})f_{\epsilon}(\mathbf{z})d\mathbf{z} =\int_{T_{\epsilon}}c_{\epsilon}(\mathbf{z},\mathbf{m}_{\epsilon})f_{\epsilon}(\mathbf{z})d\mathbf{z}\] \[=\int_{T_{\epsilon}}c(P_{\Gamma}\mathbf{z},P_{\Gamma}\mathbf{m}_{\epsilon})f_{\epsilon}(\mathbf{z})d\mathbf{z}\] \[=\frac{1}{2\epsilon}\int_{-\epsilon}^{\epsilon}\int_{\Gamma_{y}}c(P_{\Gamma}\mathbf{z},P_{\Gamma}\mathbf{m}_{\epsilon}(\mathbf{z}))f(P_{\Gamma}\mathbf{z})J(\mathbf{z})dS(\mathbf{z})dy\] \[=\frac{1}{2\epsilon}\int_{-\epsilon}^{\epsilon}\int_{\Gamma}c(\mathbf{x},\mathbf{m}(\mathbf{x}))f(\mathbf{x})dS(\mathbf{x})dy\] \[=\int_{\Gamma}c(\mathbf{x},\mathbf{m}(\mathbf{x}))f(\mathbf{x})dS(\mathbf{x})\] \[=\min_{\boldsymbol{\theta}_{\#}\mu=\nu}\int_{\Gamma}c(\mathbf{x},\boldsymbol{\theta}(\mathbf{x}))f(\mathbf{x})dS(\mathbf{x}).\]

_Mass preservation._ Since \(\epsilon<\tau_{\Gamma}\) and \(\Gamma\) is compact and closed, the projection \(P_{\Gamma}\) is bijective between \(\Gamma\) and \(\Gamma_{y}\) for any \(y\in(-\epsilon,\epsilon).\) We define the inverse map \[P_{\Gamma,\Gamma_{y}}^{-1}(\mathbf{z}):=\mathbf{z}+y\mathbf{n}(\mathbf{z}),\text{ for }\mathbf{z}\in\Gamma, \tag{23}\] where \(\mathbf{n}(\mathbf{z})=\nabla\phi_{\Gamma}(\mathbf{z})\) is the outward normal vector of \(\Gamma\) at \(\mathbf{z}\). This vector remains the same for all \(\mathbf{z}^{\prime}\in T_{\epsilon}\) satisfying \(P_{\Gamma}\mathbf{z}^{\prime}=\mathbf{z}.\) Thus we see that \(P_{\Gamma}\circ P_{\Gamma,\Gamma_{y}}^{-1}(\mathbf{z})=\mathbf{z}\) and \(P_{\Gamma,\Gamma_{y}}^{-1}\circ P_{\Gamma}(\mathbf{z}^{\prime})=\mathbf{z}^{\prime}\) for \(\mathbf{z}^{\prime}\in\Gamma_{y}.\) Therefore, we can associate any mapping \(\boldsymbol{\xi}:\Gamma\mapsto\Gamma\) with the mapping \(P_{\Gamma,\Gamma_{y}}^{-1}\circ\boldsymbol{\xi}\circ P_{\Gamma}:\Gamma_{y}\mapsto\Gamma_{y}\) and vice versa. So we define \[\boldsymbol{\xi}_{\epsilon}(\mathbf{z}):=P_{\Gamma,\Gamma_{y}}^{-1}\circ\boldsymbol{\xi}\circ P_{\Gamma}(\mathbf{z}),\ \phi_{\Gamma}(\mathbf{z})=y,\text{ and }y\in(-\epsilon,\epsilon). \tag{24}\] Equivalently, \[P_{\Gamma}\boldsymbol{\xi}_{\epsilon}(\mathbf{z}^{\prime})=\boldsymbol{\xi}(P_{\Gamma}\mathbf{z}^{\prime}),\ \mathbf{z}^{\prime}\in T_{\epsilon}, \tag{25}\] and \[\phi_{\Gamma}(\boldsymbol{\xi}_{\epsilon}(\mathbf{z}^{\prime}))=\phi_{\Gamma}(\mathbf{z}^{\prime}),\ \mathbf{z}^{\prime}\in T_{\epsilon}. \tag{26}\] In particular, suppose \(\mathbf{m}\) is the solution to the Optimal Transport problem in Equation (3) on \(\Gamma\). Then, we will show that \[\tilde{\mathbf{m}}_{\epsilon}(\mathbf{z}):=\mathbf{m}(P_{\Gamma}\mathbf{z})+\phi_{\Gamma}(\mathbf{z})\,\mathbf{n}(\mathbf{m}(P_{\Gamma}\mathbf{z})) \tag{27}\] is a solution to the extended Optimal Transport problem in Equation (9). The first step is to show that if \(\boldsymbol{\xi}_{\#}\mu=\nu\) with \(\mu=f\,dS\) and \(\nu=g\,dS\), then \(\boldsymbol{\xi}_{\epsilon\#}\mu_{\epsilon}=\nu_{\epsilon}\), with \(\mu_{\epsilon}=f_{\epsilon}\,d\mathbf{x}\) and \(\nu_{\epsilon}=g_{\epsilon}\,d\mathbf{x}\). This is another application of the coarea formula.
Let \(E\subset T_{\epsilon}\). We have \[\int_{E}g_{\epsilon}(\mathbf{x})d\mathbf{x} =\frac{1}{2\epsilon}\int_{-\epsilon}^{\epsilon}\int_{E\cap\Gamma_{y}}g(P_{\Gamma}\mathbf{z}^{\prime})J(\mathbf{z}^{\prime})dS\,dy\] \[=\frac{1}{2\epsilon}\int_{-\epsilon}^{\epsilon}\int_{P_{\Gamma}(E\cap\Gamma_{y})}g(\mathbf{z})dSdy\] \[=\frac{1}{2\epsilon}\int_{-\epsilon}^{\epsilon}\int_{\boldsymbol{\xi}^{-1}(P_{\Gamma}(E\cap\Gamma_{y}))}f(\mathbf{z})dSdy\] \[=\frac{1}{2\epsilon}\int_{-\epsilon}^{\epsilon}\int_{P_{\Gamma,\Gamma_{y}}^{-1}(\boldsymbol{\xi}^{-1}(P_{\Gamma}(E\cap\Gamma_{y})))}f(P_{\Gamma}\mathbf{z})J(\mathbf{z})dSdy\] \[=\frac{1}{2\epsilon}\int_{-\epsilon}^{\epsilon}\int_{\boldsymbol{\xi}_{\epsilon}^{-1}(E\cap\Gamma_{y})}f(P_{\Gamma}\mathbf{z}^{\prime})J(\mathbf{z}^{\prime})dS\,dy=\int_{\boldsymbol{\xi}_{\epsilon}^{-1}(E)}f_{\epsilon}(\mathbf{x})d\mathbf{x},\] where the penultimate equality follows since \(P_{\Gamma,\Gamma_{y}}^{-1}\circ\boldsymbol{\xi}^{-1}\circ P_{\Gamma}\circ P_{\Gamma,\Gamma_{y}}^{-1}\circ\boldsymbol{\xi}\circ P_{\Gamma}=P_{\Gamma,\Gamma_{y}}^{-1}\circ\boldsymbol{\xi}^{-1}\circ\boldsymbol{\xi}\circ P_{\Gamma}=P_{\Gamma,\Gamma_{y}}^{-1}\circ P_{\Gamma}=\text{Id}\), from which we know that \(P_{\Gamma,\Gamma_{y}}^{-1}(\boldsymbol{\xi}^{-1}(P_{\Gamma}(E\cap\Gamma_{y})))=\boldsymbol{\xi}_{\epsilon}^{-1}(E\cap\Gamma_{y})\). Comparing Equation (9) with Equation (19), Equation (17), and Equation (18), we see that the transport cost over all \(\boldsymbol{\xi}_{\epsilon}\) is minimized for \(\boldsymbol{\xi}_{\epsilon}=\tilde{\mathbf{m}}_{\epsilon}\): \[\begin{split}&\int_{T_{\epsilon}}c_{\epsilon}\left(\mathbf{z}^{\prime},\boldsymbol{\xi}_{\epsilon}(\mathbf{z}^{\prime})\right)f_{\epsilon}(\mathbf{z}^{\prime})d\mathbf{z}^{\prime}\\ &=\frac{1}{2\epsilon}\int_{-\epsilon}^{\epsilon}\int_{\Gamma_{y}}c(P_{\Gamma}\mathbf{z}^{\prime},P_{\Gamma}\boldsymbol{\xi}_{\epsilon}(\mathbf{z}^{\prime}))f(P_{\Gamma}\mathbf{z}^{\prime})J(\mathbf{z}^{\prime})dSdy\\ &=\int_{\Gamma}c(\mathbf{z},\boldsymbol{\xi}(\mathbf{z}))f(\mathbf{z})dS\\ &\geq\int_{\Gamma}c(\mathbf{z},\mathbf{m}(\mathbf{z}))f(\mathbf{z})dS=\int_{T_{\epsilon}}c_{\epsilon}\left(\mathbf{z}^{\prime},\tilde{\mathbf{m}}_{\epsilon}(\mathbf{z}^{\prime})\right)f_{\epsilon}(\mathbf{z}^{\prime})d\mathbf{z}^{\prime},\end{split} \tag{28}\] Following the above construction, we see that Equation (20) holds for \(\tilde{\mathbf{m}}_{\epsilon}\). Now consider \(\mathbf{z}_{y}\in\Gamma_{y}\). By construction, \(\tilde{\mathbf{m}}_{\epsilon}(\mathbf{z}_{y})\in\Gamma_{y}\), implying Equation (21): \(\tilde{\mathbf{m}}_{\epsilon}\) does not move mass in the normal direction of \(\Gamma\). For our extended problem, since the source and target densities are supported on \(T_{\epsilon}\subset\mathbb{R}^{3}\), the optimal mapping \(\mathbf{m}_{\epsilon}\) and potential function \(u_{\epsilon}\) satisfy: \[\nabla u_{\epsilon}(\mathbf{z})=-\nabla_{\mathbf{z}}c_{\epsilon}(\mathbf{z},\boldsymbol{\xi}), \tag{29}\] for \(\boldsymbol{\xi}=\mathbf{m}_{\epsilon}(\mathbf{z})\), where we emphasize that \(\nabla\) is the Euclidean gradient and \(u_{\epsilon}\) is \(c_{\epsilon}\)-convex.
The relation defined in Equation (29), applied to \(\tilde{\mathbf{m}}_{\epsilon}\) and separated into a component on the surfaces that are equidistant to \(\Gamma\), \[(I-\mathbf{n}\otimes\mathbf{n})\nabla\tilde{u}_{\epsilon}(\mathbf{z})=-(I-\mathbf{n}\otimes\mathbf{n})\nabla_{\mathbf{z}}c\left(P_{\Gamma}\mathbf{z},\mathbf{m}\right), \tag{30}\] and a component in the normal direction of the surface, \[\nabla\tilde{u}_{\epsilon}(\mathbf{z})\cdot\mathbf{n}=\sigma\left(\phi_{\Gamma}(\tilde{\mathbf{m}}_{\epsilon}(\mathbf{z}))-\phi_{\Gamma}(\mathbf{z})\right). \tag{31}\] Here \(\mathbf{n}\) is the normal of \(\Gamma\) at \(P_{\Gamma}\mathbf{z}\). Notice that for \(\mathbf{z}\in\Gamma\), Equation (30) is Equation (6). This means that the restriction of \(\tilde{u}_{\epsilon}\) to \(\Gamma\) solves Equation (6). Using Equation (21) in Equation (31), we obtain \[\frac{\partial\tilde{u}_{\epsilon}}{\partial\mathbf{n}}(\mathbf{z})=0,\ \mathbf{z}\in T_{\epsilon}. \tag{32}\] Then, by Equation (30), \[u_{\epsilon}(\mathbf{z})=u(P_{\Gamma}\mathbf{z})+C,\] for some constant \(C\), and \(P_{\Gamma}\mathbf{m}_{\epsilon}(\mathbf{z})=\mathbf{m}(P_{\Gamma}\mathbf{z})\), where \((u,\mathbf{m})\) is the solution pair from the Optimal Transport problem on \(\Gamma\). Now, in order to finish the proof of Theorem 4, it remains to show that \(\tilde{u}_{\epsilon}\) is \(c_{\epsilon}\)-convex. Once this is shown, Theorem 3 implies that there is a unique solution of the Optimal Transport problem in Equation (9), and we will have shown that, in fact, \(\tilde{\mathbf{m}}_{\epsilon}=\mathbf{m}_{\epsilon}\), satisfying \(\phi_{\Gamma}(\mathbf{m}_{\epsilon}(\mathbf{z}))=\phi_{\Gamma}(\mathbf{z})\) and \(P_{\Gamma}\mathbf{m}_{\epsilon}(\mathbf{z})=\mathbf{m}(P_{\Gamma}\mathbf{z})\).

_\(c_{\epsilon}\)-Convexity of the Potential Function._

**Lemma 6**.: \(\tilde{u}_{\epsilon}(\mathbf{z})=u(P_{\Gamma}\mathbf{z})\) _is \(c_{\epsilon}\)-convex._

Proof.: Recall the definition of \(c\)-convexity from Definition 2. The function \(u\) is \(c\)-convex since it is the potential function for the Optimal Transport problem on \(\Gamma\). This means that for all \(\mathbf{x}\in\Gamma\), there exist a point \(\mathbf{\tilde{x}}\in\Gamma\) and a value \(u^{c}(\mathbf{\tilde{x}})\) of the \(c\)-transform of \(u\) such that: \[-u^{c}(\mathbf{\tilde{x}})-c(\mathbf{x},\mathbf{\tilde{x}})=u(\mathbf{x}), \tag{33}\] \[-u^{c}(\mathbf{\tilde{x}})-c(\mathbf{x}^{\prime},\mathbf{\tilde{x}})\leq u(\mathbf{x}^{\prime}),\ \forall\mathbf{x}^{\prime}\in\Gamma. \tag{34}\] Now, fix \(\mathbf{x}\in\Gamma\) and fix a \(\mathbf{z}\in T_{\epsilon}\) such that \(P_{\Gamma}\mathbf{z}=\mathbf{x}\). From Equation (33), given \(\mathbf{x}\in\Gamma\), we have an \(\mathbf{\tilde{x}}\in\Gamma\) such that Equations (33)–(34) hold. Choose \(\mathbf{\tilde{z}}\) such that \(P_{\Gamma}\mathbf{\tilde{z}}=\mathbf{\tilde{x}}\) and \(\phi_{\Gamma}\mathbf{\tilde{z}}=\phi_{\Gamma}\mathbf{z}\).
By the definition of the \(c\)-transform in \(T_{\epsilon}\): \[\tilde{u}_{\epsilon}^{c_{\epsilon}}(\mathbf{\tilde{z}})=\sup_{\boldsymbol{\xi}}\left(-c_{\epsilon}(\boldsymbol{\xi},\mathbf{\tilde{z}})-\tilde{u}_{\epsilon}(\boldsymbol{\xi})\right) \tag{35}\] \[=\sup_{P_{\Gamma}\boldsymbol{\xi},\phi_{\Gamma}\boldsymbol{\xi}}\left(-c(P_{\Gamma}\boldsymbol{\xi},\mathbf{\tilde{x}})-\frac{\sigma}{2}(\phi_{\Gamma}\boldsymbol{\xi}-\phi_{\Gamma}\mathbf{\tilde{z}})^{2}-u(P_{\Gamma}\boldsymbol{\xi})\right), \tag{36}\] we can immediately take the supremum over \(\phi_{\Gamma}\boldsymbol{\xi}\) by choosing \(\phi_{\Gamma}\boldsymbol{\xi}=\phi_{\Gamma}\mathbf{\tilde{z}}\), since the other terms do not depend on \(\phi_{\Gamma}\boldsymbol{\xi}\). Thus, \[\tilde{u}_{\epsilon}^{c_{\epsilon}}(\mathbf{\tilde{z}})=\sup_{P_{\Gamma}\boldsymbol{\xi}}\left(-c(P_{\Gamma}\boldsymbol{\xi},\mathbf{\tilde{x}})-u(P_{\Gamma}\boldsymbol{\xi})\right)=u^{c}(\mathbf{\tilde{x}}). \tag{37}\] Thus, we get: \[-\tilde{u}_{\epsilon}^{c_{\epsilon}}(\mathbf{\tilde{z}})-c_{\epsilon}(\mathbf{z},\mathbf{\tilde{z}})=-u^{c}(\mathbf{\tilde{x}})-c(P_{\Gamma}\mathbf{z},\mathbf{\tilde{x}})=u(P_{\Gamma}\mathbf{z})=\tilde{u}_{\epsilon}(\mathbf{z}), \tag{38}\] by Equation (33). Now let \(\mathbf{z}^{\prime}\in T_{\epsilon}\) be arbitrary. Then, \[-\tilde{u}_{\epsilon}^{c_{\epsilon}}(\mathbf{\tilde{z}})-c_{\epsilon}(\mathbf{z}^{\prime},\mathbf{\tilde{z}})=-u^{c}(\mathbf{\tilde{x}})-\frac{\sigma}{2}(\phi_{\Gamma}\mathbf{z}^{\prime}-\phi_{\Gamma}\mathbf{\tilde{z}})^{2}-c(P_{\Gamma}\mathbf{z}^{\prime},P_{\Gamma}\mathbf{\tilde{z}})\leq\\ -u^{c}(\mathbf{\tilde{x}})-c(P_{\Gamma}\mathbf{z}^{\prime},P_{\Gamma}\mathbf{\tilde{z}})\leq u(P_{\Gamma}\mathbf{z}^{\prime})=\tilde{u}_{\epsilon}(\mathbf{z}^{\prime}),\ \forall\mathbf{z}^{\prime}\in T_{\epsilon}, \tag{39}\] by Equation (34). Thus, \(\tilde{u}_{\epsilon}\) is \(c_{\epsilon}\)-convex.

### Example: Optimal Transport on the Unit Sphere

We have just discussed the general procedure for defining a new Optimal Transport problem on \(T_{\epsilon}\), which uses coordinate-free notation. Here, we narrow our scope to show, specifically, how some quantities discussed earlier in this section can be computed for the case of the sphere \(\Gamma=\mathbb{S}^{2}\) with a specific coordinate system. We also compute various quantities for two specific cost functions, the squared geodesic cost function \(c(\mathbf{x},\mathbf{y})=\frac{1}{2}d_{\mathbb{S}^{2}}(\mathbf{x},\mathbf{y})^{2}\) and the logarithmic cost function \(c(\mathbf{x},\mathbf{y})=-\log(1-\mathbf{x}\cdot\mathbf{y})\) arising in the reflector antenna problem. This allows us to derive concrete formulas, which we can then use in our discretizations in Section 4, where computations are performed on the unit sphere for both cost functions. We show how the Optimal Transport problem in Equation (3) on the unit sphere \(\mathbb{S}^{2}\subset\mathbb{R}^{3}\) is extended. We will use the usual spherical polar coordinates \((\phi,\theta,r)\) in our calculation: \(\mathbf{z}=(r\cos\phi\sin\theta,r\sin\phi\sin\theta,r\cos\theta)\). \(\Gamma\) consists of the set of points whose \(r\) coordinate is \(1\).
For any point other than the origin, \(P_{\Gamma}\mathbf{z}=\mathbf{z}/||\mathbf{z}||\), \(\mathbf{z}=(\phi,\theta,r)\), \(\phi_{\Gamma}(\mathbf{z})=r-1\), \(J=1/r^{2}\), \(dS=\sin\theta d\phi d\theta\), and the surface gradient is \(\nabla_{\mathbb{S}^{2}}u=\frac{1}{\sin\theta}\frac{\partial u(\phi,\theta)}{\partial\phi}\hat{\phi}+\frac{\partial u(\phi,\theta)}{\partial\theta}\hat{\theta}\), where \(\hat{\phi}=(-\sin\phi,\cos\phi,0)\) and \(\hat{\theta}=(\cos\phi\cos\theta,\sin\phi\cos\theta,-\sin\theta)\). The densities on \(T_{\epsilon}\) are defined as: \[f_{\epsilon}(\phi,\theta,r)=\frac{f(\phi,\theta)}{2\epsilon r^{2}}, \tag{40}\] \[g_{\epsilon}(\phi,\theta,r)=\frac{g(\phi,\theta)}{2\epsilon r^{2}}. \tag{41}\] Now we define the cost function \(c_{\epsilon}\) in \(T_{\epsilon}\) as follows for \(\mathbf{z}=(\phi_{1},\theta_{1},r_{1})\) and \(\boldsymbol{\xi}=(\phi_{2},\theta_{2},r_{2})\): \[c_{\epsilon}(\mathbf{z},\boldsymbol{\xi})=\frac{\sigma}{2}(r_{1}-r_{2})^{2}+c\left(\frac{\mathbf{z}}{r_{1}},\frac{\boldsymbol{\xi}}{r_{2}}\right). \tag{42}\] For the squared geodesic cost function on the sphere \(c(\mathbf{x},\mathbf{y})=\frac{1}{2}\arccos(\mathbf{x}\cdot\mathbf{y})^{2}\), we have explicitly: \[c_{\epsilon}(\mathbf{z},\boldsymbol{\xi})=\frac{\sigma}{2}(r_{1}-r_{2})^{2}+\frac{1}{2}\arccos\left(\frac{\mathbf{z}}{r_{1}}\cdot\frac{\boldsymbol{\xi}}{r_{2}}\right)^{2}, \tag{43}\] and for the logarithmic cost \(c(\mathbf{x},\mathbf{y})=-\log(1-\mathbf{x}\cdot\mathbf{y})\), we have: \[c_{\epsilon}(\mathbf{z},\boldsymbol{\xi})=\frac{\sigma}{2}(r_{1}-r_{2})^{2}-\log\left(1-\frac{\mathbf{z}}{r_{1}}\cdot\frac{\boldsymbol{\xi}}{r_{2}}\right). \tag{44}\] Denoting by \(\mathbf{m}=(m_{\phi},m_{\theta},1)\) the \((\phi,\theta,r)\) coordinates of the solution of the original Optimal Transport problem on \(\Gamma=\mathbb{S}^{2}\) in Equation (3), we have shown in Theorem 4 that \(\mathbf{m}_{\epsilon}\) has the following form in spherical coordinates: \[\mathbf{m}_{\epsilon}(\phi,\theta,r)=\left(m_{\phi},m_{\theta},r\right). \tag{45}\] Here we remind the reader of the explicit form of the Optimal Transport mapping from Equation (3) for the squared geodesic cost function \(c(\mathbf{x},\mathbf{y})=\frac{1}{2}d_{\mathbb{S}^{2}}(\mathbf{x},\mathbf{y})^{2}\). It is obtained by solving Equation (6) and is related to the potential function via the following equation: \[\mathbf{m}(\mathbf{x})=\exp_{\mathbf{x}}\left(\nabla_{\mathbb{S}^{2}}u(\mathbf{x})\right), \tag{46}\] which is a particular case of a well-known result due to McCann [23]. The notation in Equation (46) means that to get to the point \(\mathbf{m}(\mathbf{x})\in\mathbb{S}^{2}\) from \(\mathbf{x}\), we follow the geodesic on \(\mathbb{S}^{2}\) (on the great circle through \(\mathbf{x}\)) starting from \(\mathbf{x}\) in the direction \(\nabla_{\mathbb{S}^{2}}u(\mathbf{x})\) for a distance of \(\|\nabla_{\mathbb{S}^{2}}u(\mathbf{x})\|\). Alternatively, we can write: \[\mathbf{m}(\mathbf{x})=\exp_{\mathbf{x}}\left(\nabla_{\mathbb{S}^{2}}u(\mathbf{x})\right)=\mathbf{x}\cos(\|\nabla_{\mathbb{S}^{2}}u(\mathbf{x})\|)+\frac{\nabla_{\mathbb{S}^{2}}u(\mathbf{x})}{\|\nabla_{\mathbb{S}^{2}}u(\mathbf{x})\|}\sin(\|\nabla_{\mathbb{S}^{2}}u(\mathbf{x})\|).
\tag{47}\] For the logarithmic cost function appearing in the reflector antenna problem, \(c(\mathbf{x},\mathbf{y})=-\log(1-\mathbf{x}\cdot\mathbf{y})\), we get the mapping [13]: \[\mathbf{m}(\mathbf{x})=\mathbf{x}\frac{\left\|\nabla_{\mathbb{S}^{2}}u(\mathbf{x})\right\|^{2}-1}{\left\|\nabla_{\mathbb{S}^{2}}u(\mathbf{x})\right\|^{2}+1}-\nabla_{\mathbb{S}^{2}}u(\mathbf{x})\frac{2}{\left\|\nabla_{\mathbb{S}^{2}}u(\mathbf{x})\right\|^{2}+1}. \tag{48}\] Following the computational methods developed in [12] and [13], we can also explicitly derive the formulas for the quantity in Equation (7) for the squared geodesic cost function \(c(\mathbf{x},\mathbf{y})=\frac{1}{2}d_{\mathbb{S}^{2}}(\mathbf{x},\mathbf{y})^{2}\): \[\left|\det D^{2}_{\mathbf{x}\mathbf{y},\mathbb{S}^{2}}\left(\frac{1}{2}d_{\mathbb{S}^{2}}(\mathbf{x},\mathbf{y})^{2}\right)\right|=\frac{\left\|\nabla_{\mathbb{S}^{2}}u(\mathbf{x})\right\|}{\sin\left(\left\|\nabla_{\mathbb{S}^{2}}u(\mathbf{x})\right\|\right)}, \tag{49}\] for \(\mathbf{y}=\mathbf{m}(\mathbf{x})\) given in Equation (47). For the logarithmic cost \(c(\mathbf{x},\mathbf{y})=-\log(1-\mathbf{x}\cdot\mathbf{y})\): \[\left|\det D^{2}_{\mathbf{x}\mathbf{y},\mathbb{S}^{2}}\left(-\log(1-\mathbf{x}\cdot\mathbf{y})\right)\right|=\frac{\left(\left\|\nabla_{\mathbb{S}^{2}}u(\mathbf{x})\right\|^{2}+1\right)^{2}}{4}, \tag{50}\] for \(\mathbf{y}=\mathbf{m}(\mathbf{x})\) given in Equation (48). Therefore, for both cost functions, the term \(\left|\det D^{2}_{\mathbf{z}\boldsymbol{\xi}}c_{\epsilon}(\mathbf{z},\boldsymbol{\xi})\right|\) for \(\boldsymbol{\xi}=\mathbf{m}_{\epsilon}(\mathbf{z})\) from Equation (10) can be computed explicitly via the methods developed in [12] and via Equation (49) and Equation (50). The result for the cost function \(c_{\epsilon}(\mathbf{z}_{1},\mathbf{z}_{2})=c(P_{\Gamma}\mathbf{z}_{1},P_{\Gamma}\mathbf{z}_{2})+\frac{\sigma}{2}(\phi_{\Gamma}(\mathbf{z}_{1})-\phi_{\Gamma}(\mathbf{z}_{2}))^{2}\) with \(c(\mathbf{x},\mathbf{y})=\frac{1}{2}d_{\mathbb{S}^{2}}(\mathbf{x},\mathbf{y})^{2}\) is: \[\left|\det D^{2}_{\mathbf{z}\boldsymbol{\xi}}c_{\epsilon}(\mathbf{z},\boldsymbol{\xi})\right|=\sigma\frac{\left\|\nabla u_{\epsilon}(\mathbf{z})-\left(\nabla u_{\epsilon}(\mathbf{z})\cdot\frac{\mathbf{z}}{\left\|\mathbf{z}\right\|}\right)\frac{\mathbf{z}}{\left\|\mathbf{z}\right\|}\right\|}{\left\|\mathbf{z}\right\|^{2}\sin\left(\left\|\nabla u_{\epsilon}(\mathbf{z})-\left(\nabla u_{\epsilon}(\mathbf{z})\cdot\frac{\mathbf{z}}{\left\|\mathbf{z}\right\|}\right)\frac{\mathbf{z}}{\left\|\mathbf{z}\right\|}\right\|\right)}, \tag{51}\] for \(\boldsymbol{\xi}=\mathbf{m}_{\epsilon}(\mathbf{z})\). For the logarithmic cost \(c(\mathbf{x},\mathbf{y})=-\log(1-\mathbf{x}\cdot\mathbf{y})\), we get: \[\left|\det D^{2}_{\mathbf{z}\boldsymbol{\xi}}c_{\epsilon}(\mathbf{z},\boldsymbol{\xi})\right|=\frac{\sigma\left(\left\|\nabla u_{\epsilon}(\mathbf{z})-\left(\nabla u_{\epsilon}(\mathbf{z})\cdot\frac{\mathbf{z}}{\left\|\mathbf{z}\right\|}\right)\frac{\mathbf{z}}{\left\|\mathbf{z}\right\|}\right\|^{2}+1\right)^{2}}{4\left\|\mathbf{z}\right\|^{2}}, \tag{52}\] for \(\boldsymbol{\xi}=\mathbf{m}_{\epsilon}(\mathbf{z})\).

## 4. Computational Examples

The key benefit of reformulating the Optimal Transport problem on \(T_{\epsilon}\) is that it allows for a wide selection of numerical discretizations of \(T_{\epsilon}\) and of the PDE (10) defined on it. In this section, we discuss a simple method for solving the PDE (10) and demonstrate some computational results using the method.
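Before describing the scheme, we note that the mappings in Equations (47) and (48), which reappear in discrete form below, are straightforward to evaluate numerically once a tangential gradient is available. The following is a small sketch of such an evaluation; the test point and the test gradient are illustrative assumptions.

```python
# Minimal sketch: evaluating the mappings (47) and (48) from a tangential
# gradient at a point on the unit sphere.  The test point and gradient
# below are illustrative assumptions.
import numpy as np

def map_geodesic(x, grad_u):
    """Equation (47): exponential map of the tangential gradient."""
    t = np.linalg.norm(grad_u)
    if t < 1e-14:                      # no movement for a vanishing gradient
        return x.copy()
    return x * np.cos(t) + (grad_u / t) * np.sin(t)

def map_reflector(x, grad_u):
    """Equation (48): mapping for the logarithmic (reflector antenna) cost."""
    t2 = np.dot(grad_u, grad_u)
    return x * (t2 - 1.0) / (t2 + 1.0) - grad_u * 2.0 / (t2 + 1.0)

x = np.array([0.0, 0.0, 1.0])                    # north pole
grad_u = np.array([0.3, 0.0, 0.0])               # tangential at x (assumption)
for m in (map_geodesic(x, grad_u), map_reflector(x, grad_u)):
    print(m, np.linalg.norm(m))                  # both results lie on S^2
```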
Mainly for the sake of convenience, we use simple finite differences on uniform Cartesian grids (although we emphasize that it is possible to use other types of discretizations), and demonstrate the computational results for the sphere \(\Gamma=\mathbb{S}^{2}\). For most examples we use the squared geodesic cost function \(c(\mathbf{x},\mathbf{y})=\frac{1}{2}d_{\mathbb{S}^{2}}(\mathbf{x},\mathbf{y})^{2}\). We will also demonstrate the results of one computation using the cost function \(c(\mathbf{x},\mathbf{y})=-\log(1-\mathbf{x}\cdot\mathbf{y})\) arising in the reflector antenna problem, see [36, 37]. We perform this computation in order to show that our formulation in Section 3 works for a variety of cost functions on the sphere. Given that we are using finite-difference discretizations on a Cartesian grid, it is also straightforward to design higher-order discretizations than those which we present here.

### Description of Scheme

In this section, since our discretization is constructed on a Cartesian grid, we use the notation \(x,y,z\) to denote the usual Cartesian coordinates \((x,y,z)\in\mathbb{R}^{3}\). In order to discretize Equation (10), we first generate a cube of \(N_{c}\) points placed on an evenly spaced Cartesian grid \(\mathcal{G}^{h}_{c}\subset\mathbb{R}^{3}\), where the discretization parameter \(h\) satisfies \(h=\min_{i,j}\left\|\mathbf{x}_{i}-\mathbf{x}_{j}\right\|\) for \(j\neq i\), \(\mathbf{x}_{i},\mathbf{x}_{j}\in\mathcal{G}^{h}_{c}\). The computational grid \(\mathcal{G}^{h}\) of \(N\) points is then generated by taking the intersection of the cube grid with the tubular neighborhood \(T_{\epsilon}\) of the surface \(\Gamma=\mathbb{S}^{2}\), i.e. \(\mathcal{G}^{h}=\mathcal{G}^{h}_{c}\cap T_{\epsilon}\). We designate interior points \(\mathbf{x}_{i}=(x,y,z)\) as those surrounded by computational points in the computational grid. These neighboring points will be used in the computation of the first and second discrete derivatives in Equation (10), and thus interior points are those where these derivatives can be computed.

**Definition 7**.: _A point \(\mathbf{x}_{i}\in\mathcal{G}^{h}\) will be called an interior point iff \(\mathbf{x}_{i}+(\eta_{1}h,\eta_{2}h,\eta_{3}h)\in\mathcal{G}^{h}\) for all \(\eta_{1},\eta_{2},\eta_{3}\in\{-1,0,1\}\) with \(\eta_{1}\eta_{2}\eta_{3}=0\)._

Boundary points, consequently, are those which are not interior points. The interior points are the points at which we will be able to fully discretize the PDE operator, and the boundary points are the points at which we will apply the boundary condition in Equation (12). Since the solution \(u_{\epsilon}\) of Equation (10) is _a priori_ constant in the normal direction, we elect to enforce the condition \(u(\mathbf{x}_{i})=u(\mathbf{x}_{i}/\left\|\mathbf{x}_{i}\right\|)\) for any boundary point \(\mathbf{x}_{i}\). We denote the set of interior points by \(\mathcal{I}^{h}\subset\mathcal{G}^{h}\) and the set of boundary points by \(\mathcal{B}^{h}\subset\mathcal{G}^{h}\). An example of the computational grid, as well as the boundary points and interior points, is shown in Figure 2 for a grid with \(40694\) points. For any function \(f:T_{\epsilon}\to\mathbb{R}\), we use the standard centered-difference discretizations \(\mathcal{D}^{h}_{1}f(\mathbf{x}_{i})\), \(\mathcal{D}^{h}_{2}f(\mathbf{x}_{i})\), and \(\mathcal{D}^{h}_{3}f(\mathbf{x}_{i})\) for the first-order derivatives \(f_{x},f_{y},f_{z}\) at a point \(\mathbf{x}_{i}\), respectively.
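Returning for a moment to the grid itself (the difference stencils announced above are defined next), the following is a minimal sketch of the grid construction and of the interior-point test of Definition 7; the grid extent and the values of \(\epsilon\) and \(h\) are illustrative assumptions.

```python
# Minimal sketch: Cartesian grid in T_eps around the unit sphere and the
# interior/boundary classification of Definition 7.  Grid extent, eps and
# h are illustrative assumptions.
import numpy as np

eps, h = 0.2, 0.1
axis = np.arange(-1.0 - eps, 1.0 + eps + h / 2, h)
X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")
cube = np.column_stack([X.ravel(), Y.ravel(), Z.ravel()])

r = np.linalg.norm(cube, axis=1)
grid = cube[np.abs(r - 1.0) < eps]                 # G^h = cube grid ∩ T_eps
grid_set = {tuple(np.round(p / h).astype(int)) for p in grid}

# Offsets (eta1, eta2, eta3) with entries in {-1, 0, 1} and eta1*eta2*eta3 = 0.
offsets = [(a, b, c) for a in (-1, 0, 1) for b in (-1, 0, 1) for c in (-1, 0, 1)
           if a * b * c == 0 and (a, b, c) != (0, 0, 0)]

def is_interior(p):
    key = np.round(p / h).astype(int)
    return all(tuple(key + np.array(o)) in grid_set for o in offsets)

interior = np.array([is_interior(p) for p in grid])
print(len(grid), "grid points,", interior.sum(), "interior,",
      (~interior).sum(), "boundary")
```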
Letting \(\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3}\) denote \((1,0,0),(0,1,0)\), and \((0,0,1)\), respectively, we compute our first-order derivatives as follows: \[\mathcal{D}^{h}_{j}f(\mathbf{x}_{i})=\frac{f(\mathbf{x}_{i}+h\mathbf{e}_{j})-f(\mathbf{x}_{i}-h\mathbf{e}_{j})}{2h},\] and use the following discretization for the second-order derivatives: \[\mathcal{D}^{h}_{jj}f(\mathbf{x}_{i})=\frac{f(\mathbf{x}_{i}+h\mathbf{e}_{j})-2f(\mathbf{x}_{i})+f(\mathbf{x}_{i}-h\mathbf{e}_{j})}{h^{2}},\] and the following discretization for mixed second-order derivatives: \[\mathcal{D}_{jk}^{h}f(\mathbf{x}_{i})=\frac{f(\mathbf{x}_{i}+h(\mathbf{e}_{j}+\mathbf{e}_{k}))+f(\mathbf{x}_{i}-h(\mathbf{e}_{j}+\mathbf{e}_{k}))}{4h^{2}}-\\ \frac{f(\mathbf{x}_{i}+h(-\mathbf{e}_{j}+\mathbf{e}_{k}))+f(\mathbf{x}_{i}+h(\mathbf{e}_{j}-\mathbf{e}_{k}))}{4h^{2}}.\]

Figure 2. (a) An example of \(40694\) Cartesian grid nodes in \(T_{\epsilon}\), with \(\epsilon=0.2\) and \(h=0.05\). (b) A cross section showing the interior computational points in red.

These second-order derivatives are used in the computation of terms of the type \(\det D^{2}F(\mathbf{x}_{i})\). Expanding out the determinant of the Hessian, we get \[\det D^{2}F(\mathbf{x}_{i})=F_{xx}\left(F_{yy}F_{zz}-F_{yz}^{2}\right)-F_{xy}\left(F_{xy}F_{zz}-F_{xz}F_{yz}\right)+F_{xz}\left(F_{xy}F_{yz}-F_{xz}F_{yy}\right). \tag{53}\] We now proceed to show how we discretize the PDE (10). Given the potential function \(u_{\epsilon}\), we first show how to compute the mapping \(\mathbf{m}_{\epsilon}\). We compute the gradient via the equation \[\nabla^{h}u_{\epsilon}(\mathbf{x}_{i})=\mathcal{D}_{1}^{h}u_{\epsilon}(\mathbf{x}_{i})\mathbf{e}_{1}+\mathcal{D}_{2}^{h}u_{\epsilon}(\mathbf{x}_{i})\mathbf{e}_{2}+\mathcal{D}_{3}^{h}u_{\epsilon}(\mathbf{x}_{i})\mathbf{e}_{3}. \tag{54}\] Using this, denote the unit normal vector \(\hat{n}(\mathbf{x}_{i})=\mathbf{x}_{i}/\left\|\mathbf{x}_{i}\right\|\). Then, we compute the projection of the gradient onto the normal direction via the equation \[\mathcal{D}_{\hat{n}}^{h}u_{\epsilon}(\mathbf{x}_{i})=(\nabla^{h}u_{\epsilon}(\mathbf{x}_{i})\cdot\hat{n})\hat{n}, \tag{55}\] and the projection of the gradient onto the local tangent plane, scaled by the radius, via \[\mathcal{D}_{\hat{n}^{\perp}}^{h}u_{\epsilon}(\mathbf{x}_{i})=\left\|\mathbf{x}_{i}\right\|\left(\nabla^{h}u_{\epsilon}(\mathbf{x}_{i})-\mathcal{D}_{\hat{n}}^{h}u_{\epsilon}(\mathbf{x}_{i})\right). \tag{56}\] The mapping is computed from the projection of the gradient onto the local tangent plane via Equation (30).
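The centered differences above are simple to implement on the integer index set of the grid. The following is a minimal sketch of such an implementation, in which the grid spacing and the test function are illustrative assumptions.

```python
# Minimal sketch of the centered differences used above, acting on a grid
# function stored as a dictionary keyed by integer grid indices.  The test
# function is an illustrative assumption.
import numpy as np

h = 0.1
E = [np.array(e) for e in ((1, 0, 0), (0, 1, 0), (0, 0, 1))]

def D1(u, k, j):
    """First derivative in direction e_j at integer index k."""
    return (u[tuple(k + E[j])] - u[tuple(k - E[j])]) / (2 * h)

def D2(u, k, j):
    """Pure second derivative in direction e_j."""
    return (u[tuple(k + E[j])] - 2 * u[tuple(k)] + u[tuple(k - E[j])]) / h**2

def D2m(u, k, j, l):
    """Mixed second derivative in directions e_j, e_l (4-point stencil)."""
    return (u[tuple(k + E[j] + E[l])] + u[tuple(k - E[j] - E[l])]
            - u[tuple(k + E[j] - E[l])] - u[tuple(k - E[j] + E[l])]) / (4 * h**2)

# Populate a small grid with u(x, y, z) = x*y + z^2 and test the stencils.
u = {}
for i in range(-2, 3):
    for j in range(-2, 3):
        for l in range(-2, 3):
            x, y, z = i * h, j * h, l * h
            u[(i, j, l)] = x * y + z**2

k = np.array([0, 0, 1])
print(D1(u, k, 2), D2(u, k, 2), D2m(u, k, 0, 1))   # approx. 2z, 2, 1
```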
Therefore, from Equation (47), we see that for the squared geodesic cost \(c(\mathbf{x},\mathbf{y})=\frac{1}{2}d_{\mathbb{S}^{2}}(\mathbf{x},\mathbf{y}) ^{2}\) we compute the mapping as follows: \[\mathbf{m}_{\epsilon}^{h}(\mathbf{x}_{i})=\\ \left(\mathbf{x}_{i}\cos\left(\left\|\mathcal{D}_{\hat{n}^{\perp }}^{h}u_{\epsilon}(\mathbf{x}_{i})\right\|\right)+\frac{\mathcal{D}_{\hat{n}^ {\perp}}^{h}u_{\epsilon}(\mathbf{x}_{i})}{\left\|\mathcal{D}_{\hat{n}^{\perp} }^{h}u_{\epsilon}(\mathbf{x}_{i})\right\|}\sin\left(\left\|\mathcal{D}_{\hat{n }^{\perp}}^{h}u_{\epsilon}(\mathbf{x}_{i})\right\|\right)\right)\left(\left\| \mathbf{x}_{i}\right\|+\frac{\nabla^{h}u_{\epsilon}(\mathbf{x}_{i})\cdot\hat{n }}{\sigma}\right), \tag{57}\] and from Equation (48), we compute the mapping for the logarithmic cost \(c(\mathbf{x},\mathbf{y})=-\log(1-\mathbf{x}\cdot\mathbf{y})\) as follows: \[\mathbf{m}_{\epsilon}^{h}(\mathbf{x}_{i})=\left(\mathbf{x}_{i}\frac{\left\| \mathcal{D}_{\hat{n}^{\perp}}^{h}u_{\epsilon}(\mathbf{x}_{i})\right\|^{2}-1}{ \left\|\mathcal{D}_{\hat{n}^{\perp}}^{h}u_{\epsilon}(\mathbf{x}_{i})\right\| ^{2}+1}-\frac{2\mathcal{D}_{\hat{n}^{\perp}}^{h}u_{\epsilon}(\mathbf{x}_{i})}{ \left\|\mathcal{D}_{\hat{n}^{\perp}}^{h}u_{\epsilon}(\mathbf{x}_{i})\right\| ^{2}+1}\right)\left(\left\|\mathbf{x}_{i}\right\|+\frac{\nabla^{h}u_{\epsilon} (\mathbf{x}_{i})\cdot\hat{n}}{\sigma}\right). \tag{58}\] In Equation (10), we also have the term \(\left|\det D_{\mathbf{x}\xi}^{2}c_{\epsilon}(\mathbf{z},\mathbf{\xi})\right|\) for \(\mathbf{\xi}=\mathbf{m}_{\epsilon}(\mathbf{z})\), which can be discretized via Equation (54), Equation (55) and either Equation (51) for the squared geodesic cost or Equation (52) for the logarithmic cost. In this way, we can fully discretize the PDE (10) for all interior points by defining the operator \(F^{h}(u^{h}(\mathbf{x}_{i}))\) using Equation (59). For each interior point \(\mathbf{x}_{i}\), first compute \(\mathbf{m}_{\epsilon}^{h}(\mathbf{x}_{i})\). Then, for a point \(\mathbf{C}\in T_{\epsilon}\), define \(U_{\mathbf{C}}(\mathbf{z}):=u_{\epsilon}(\mathbf{z})+c_{\epsilon}(\mathbf{z}, \mathbf{C})\). 
We then define \(F^{h}\) for the squared geodesic cost \(c(\mathbf{x},\mathbf{y})=\frac{1}{2}d_{\mathbb{S}^{2}}(\mathbf{x},\mathbf{y})^{2}\) as follows: \[F^{h}(u_{\epsilon}(\mathbf{x}_{i}))=\mathcal{D}_{11}^{h}U_{\mathbf{m}_{\epsilon}^{h}(\mathbf{x}_{i})}(\mathbf{x}_{i})\left(\mathcal{D}_{22}^{h}U_{\mathbf{m}_{\epsilon}^{h}(\mathbf{x}_{i})}(\mathbf{x}_{i})\mathcal{D}_{33}^{h}U_{\mathbf{m}_{\epsilon}^{h}(\mathbf{x}_{i})}(\mathbf{x}_{i})-(\mathcal{D}_{23}^{h}U_{\mathbf{m}_{\epsilon}^{h}(\mathbf{x}_{i})}(\mathbf{x}_{i}))^{2}\right)-\mathcal{D}_{12}^{h}U_{\mathbf{m}_{\epsilon}^{h}(\mathbf{x}_{i})}(\mathbf{x}_{i})\left(\mathcal{D}_{12}^{h}U_{\mathbf{m}_{\epsilon}^{h}(\mathbf{x}_{i})}(\mathbf{x}_{i})\mathcal{D}_{33}^{h}U_{\mathbf{m}_{\epsilon}^{h}(\mathbf{x}_{i})}(\mathbf{x}_{i})-\mathcal{D}_{13}^{h}U_{\mathbf{m}_{\epsilon}^{h}(\mathbf{x}_{i})}(\mathbf{x}_{i})\mathcal{D}_{23}^{h}U_{\mathbf{m}_{\epsilon}^{h}(\mathbf{x}_{i})}(\mathbf{x}_{i})\right)+\] \[\mathcal{D}_{13}^{h}U_{\mathbf{m}_{\epsilon}^{h}(\mathbf{x}_{i})}\left(\mathcal{D}_{12}^{h}U_{\mathbf{m}_{\epsilon}^{h}(\mathbf{x}_{i})}(\mathbf{x}_{i})\mathcal{D}_{23}^{h}U_{\mathbf{m}_{\epsilon}^{h}(\mathbf{x}_{i})}(\mathbf{x}_{i})-\mathcal{D}_{13}^{h}U_{\mathbf{m}_{\epsilon}^{h}(\mathbf{x}_{i})}(\mathbf{x}_{i})\mathcal{D}_{22}^{h}U_{\mathbf{m}_{\epsilon}^{h}(\mathbf{x}_{i})}(\mathbf{x}_{i})\right)-\] \[\sigma\frac{\left\|\nabla^{h}u_{\epsilon}(\mathbf{x}_{i})-\mathcal{D}_{\hat{n}}^{h}u_{\epsilon}(\mathbf{x}_{i})\right\|f(\mathbf{x}_{i})}{\left\|\mathbf{x}_{i}\right\|^{2}\sin\left(\left\|\nabla^{h}u_{\epsilon}(\mathbf{x}_{i})-\mathcal{D}_{\hat{n}}^{h}u_{\epsilon}(\mathbf{x}_{i})\right\|\right)g(\mathbf{m}_{\epsilon}^{h}(\mathbf{x}_{i}))}, \tag{59}\] and we get a similar expression for the discretization of the PDE with the logarithmic cost \(c(\mathbf{x},\mathbf{y})=-\log(1-\mathbf{x}\cdot\mathbf{y})\). We then solve the following system of equations: \[F^{h}(u(\mathbf{x}_{i}))=0,\qquad\mathbf{x}_{i}\in\mathcal{I}^{h},\] \[u(\mathbf{x}_{i})=u(\mathbf{x}_{i}/\left\|\mathbf{x}_{i}\right\|),\qquad\mathbf{x}_{i}\in\mathcal{B}^{h}. \tag{60}\] The boundary conditions are approximated by interpolation. For each boundary computational point \(\mathbf{x}_{i}\in\mathcal{B}^{h}\), we compute the point \(\boldsymbol{\xi}_{i}=\mathbf{x}_{i}/\left\|\mathbf{x}_{i}\right\|\). Denote its Euclidean coordinates by \((\xi_{i}^{x},\xi_{i}^{y},\xi_{i}^{z})\). We assume that \(\epsilon\) and \(h\) are chosen so that there is a grid point \(\mathbf{p}_{i}\in\mathcal{G}^{h}\), with Euclidean coordinates \((p_{i}^{x},p_{i}^{y},p_{i}^{z})\), such that \(\mathbf{p}_{i}=\operatorname{argmin}_{p_{i}^{x}\geq\xi_{i}^{x},\;p_{i}^{y}\geq\xi_{i}^{y},\;p_{i}^{z}\geq\xi_{i}^{z}}\|\boldsymbol{\xi}_{i}-\mathbf{p}_{i}\|\). Then, we define the list of points \[R_{i}=\{\mathbf{p}_{i},\mathbf{p}_{i}-h\mathbf{e}_{1},\mathbf{p}_{i}-h\mathbf{e}_{2},\mathbf{p}_{i}-h\mathbf{e}_{3},\mathbf{p}_{i}-h(\mathbf{e}_{1}+\mathbf{e}_{2}),\mathbf{p}_{i}-h(\mathbf{e}_{1}+\mathbf{e}_{3}),\\ \mathbf{p}_{i}-h(\mathbf{e}_{2}+\mathbf{e}_{3}),\mathbf{p}_{i}-h(\mathbf{e}_{1}+\mathbf{e}_{2}+\mathbf{e}_{3})\}.\] Assuming the tube width \(\epsilon\) is large enough relative to \(h\), the points in \(R_{i}\) are in \(\mathcal{G}^{h}\) and form the vertices of a small cube about \(\boldsymbol{\xi}_{i}\). In our discretization, we will then first update the interior points and then use the values of the new interior points to update the boundary points.
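The boundary update just described (made precise via the trilinear interpolation discussed next) can be sketched as follows. For brevity the sketch stores the grid function on the full Cartesian cube and uses SciPy's RegularGridInterpolator; both choices are simplifying assumptions for illustration rather than the implementation used in this paper, where only values on \(\mathcal{G}^{h}\) and the eight points of \(R_{i}\) are needed.

```python
# Minimal sketch of the boundary update: project a boundary node to the
# unit sphere and evaluate the grid function there by trilinear
# interpolation.  The grid function U below is a placeholder assumption.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

eps, h = 0.2, 0.1
axis = np.arange(-1.0 - eps, 1.0 + eps + h / 2, h)

# A grid function defined on the full cube for simplicity of the sketch.
X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")
U = X**2 + Y + Z                                   # placeholder values
interp = RegularGridInterpolator((axis, axis, axis), U, method="linear")

def boundary_update(x_b):
    """Value assigned to a boundary node: the interpolated value at x_b/||x_b||."""
    xi = x_b / np.linalg.norm(x_b)
    return float(interp(xi.reshape(1, 3))[0])

x_b = np.array([0.9, 0.5, 0.4])                    # a node near the outer boundary
print(boundary_update(x_b))
```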
To get the value of a function defined for all \(\mathbf{x}_{i}\in\mathcal{G}^{h}\) at \(\boldsymbol{\xi}_{i}\), we use trilinear interpolation based on the values of the grid function at all points in \(R_{i}\). We choose to solve the discrete system (60) via Algorithm 1. Given \(T_{\epsilon}\) and the Cartesian discretization, designate \(I\) to be a list of the indices of the interior points and \(B\) to be a list of the indices of the boundary points. This algorithm involves a Jacobi-style update and can thus be trivially parallelized. Moreover, the iterations can be regarded as an accelerated gradient descent method in the style of Nesterov [25]. It can be compared with the accelerated method proposed in [33], where the choice of \(\gamma(n)=n/(n+n_{0})\), for \(n_{0}\geq 10\), is advocated. In particular, we set \(\gamma(n)=(n+1)/(n+4)\), a choice informed by the Nesterov gradient descent method. Alternatively, one can easily implement a batch-style Gauss-Seidel type update scheme; Algorithm 2 offers such an example. Let \(m(j,n)=I_{((nB+j-1)\bmod|I|)+1}\), where \(|I|\) is the cardinality of the list of interior points \(I\). That is, \(m(j,n)\) returns an index in the list \(I\); these are the indices updated in batches of size \(B\) in Algorithm 2 in order to reduce the time taken in recomputing the full grid function \(F^{h}(u_{n}^{h}(\mathbf{x}_{i}))\) at every point \(\mathbf{x}_{i}\). In every iteration we thus update a set of indices of size \(B\). The batches could be chosen randomly or computed in order (as is done in Algorithm 2). Notice that if the batch size is \(B=|I|\), then this algorithm reduces to Algorithm 1 with \(\gamma(n)=1\), and if \(B=1\), this is a full Gauss-Seidel-type algorithm, but the computation time is greatly increased. In practice, our choice of acceleration in Algorithm 1 works much better than no acceleration (setting \(\gamma(n)=1\)), and in general requires fewer iterations to achieve a given tolerance level than the Gauss-Seidel method shown in Algorithm 2. In both algorithms, the iterations terminate when the residual reaches a value below a desired tolerance. Denote the largest index reached in either algorithm by \(K\). As can be seen from Equation (10), the solution \(u_{\epsilon}\) is unique up to a constant, since the PDE depends only on the derivatives of \(u_{\epsilon}\). To settle the constant, once the iterations in either Algorithm 1 or Algorithm 2 terminate, we find the minimum value of \(u_{K}^{h}\) and define \(u_{\epsilon}\) as \(u_{K}^{h}\) minus this minimum value. Thus, the output of the computations is a grid function \(u_{\epsilon}\) whose minimum value is zero.
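As a complement to the pseudocode in Algorithm 1 below, the following is a compact sketch of the same accelerated Jacobi-style update applied to a generic residual operator; the toy quadratic residual, the step size, and the tolerance are placeholder assumptions, and the boundary-interpolation step of the actual scheme is omitted.

```python
# Compact sketch of the accelerated Jacobi-style update of Algorithm 1,
# applied to a generic residual operator F acting on a vector of unknowns.
# The toy quadratic residual, step size and tolerance are placeholder
# assumptions; in the actual scheme F is the discretized Optimal Transport
# operator and boundary values are refreshed after every interior update.
import numpy as np

def solve_accelerated(F, u0, dt, tol, max_iter=20000):
    u_prev = u0.copy()
    u = u0 + dt * F(u0)                              # first plain update
    for n in range(1, max_iter):
        if np.max(np.abs(F(u))) <= tol:
            break
        gamma = (n + 1) / (n + 4)                    # Nesterov-style weight
        u_ext = u + gamma * (u - u_prev)             # extrapolation step
        u_prev, u = u, u_ext + dt * F(u_ext)         # Jacobi-style update
    return u    # the actual scheme also subtracts min(u) to fix the constant

# Toy residual F(u) = b - A u with A symmetric positive definite (assumption).
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 50))
A = A @ A.T / 50.0 + np.eye(50)
b = rng.normal(size=50)
u = solve_accelerated(lambda v: b - A @ v, np.ones(50), dt=0.15, tol=1e-8)
print("max residual:", np.max(np.abs(b - A @ u)))
```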
```
1: Given \(u_{0}^{h}(\mathbf{x}_{i})\) for all \(\mathbf{x}_{i}\in\mathcal{G}^{h}\), \(\Delta t>0\), \(\mathrm{tol}>0\)
2: Set \(u_{1}^{h}(\mathbf{x}_{i})=u_{0}^{h}(\mathbf{x}_{i})+\Delta tF^{h}\left(u_{0}^{h}(\mathbf{x}_{i})\right)\) for \(\mathbf{x}_{i}\in\mathcal{I}^{h}\)
3: Find the values \(u_{1}^{h}(\mathbf{x}_{i}/\left\|\mathbf{x}_{i}\right\|)\) via interpolation of the data in the list \(\left\{u_{1}^{h}(\mathbf{x}_{i})\right\},\mathbf{x}_{i}\in\mathcal{I}^{h}\), at the points \(\mathbf{x}_{i}/\left\|\mathbf{x}_{i}\right\|\) for all \(\mathbf{x}_{i}\in\mathcal{B}^{h}\)
4: Set \(u_{1}^{h}(\mathbf{x}_{i})=u_{1}^{h}(\mathbf{x}_{i}/\left\|\mathbf{x}_{i}\right\|)\) for \(\mathbf{x}_{i}\in\mathcal{B}^{h}\)
5: Compute \(F^{h}(u_{1}^{h}(\mathbf{x}_{i}))\) for all \(\mathbf{x}_{i}\in\mathcal{G}^{h}\)
6: Set \(n=1\)
7: while \(\max_{i\in I}\left|F^{h}\left(u_{n}^{h}(\mathbf{x}_{i})\right)\right|>\mathrm{tol}\) do
8:     Set \(u_{n,E}^{h}(\mathbf{x}_{i})=u_{n}^{h}(\mathbf{x}_{i})+\gamma(n)(u_{n}^{h}(\mathbf{x}_{i})-u_{n-1}^{h}(\mathbf{x}_{i}))\) for all \(\mathbf{x}_{i}\in\mathcal{G}^{h}\)
9:     Compute \(F^{h}(u_{n,E}^{h}(\mathbf{x}_{i}))\) for all \(\mathbf{x}_{i}\in\mathcal{I}^{h}\)
10:    Set \(u_{n+1}^{h}(\mathbf{x}_{i})=u_{n,E}^{h}(\mathbf{x}_{i})+\Delta tF^{h}(u_{n,E}^{h}(\mathbf{x}_{i}))\) for all \(\mathbf{x}_{i}\in\mathcal{I}^{h}\)
11:    Find the values \(u_{n+1}^{h}(\mathbf{x}_{i}/\left\|\mathbf{x}_{i}\right\|)\) via interpolation of the data in the list \(\left\{u_{n+1}^{h}(\mathbf{x}_{i})\right\},\mathbf{x}_{i}\in\mathcal{I}^{h}\), at the points \(\mathbf{x}_{i}/\left\|\mathbf{x}_{i}\right\|\) for all \(\mathbf{x}_{i}\in\mathcal{B}^{h}\)
12:    Set \(u_{n+1}^{h}(\mathbf{x}_{i})=u_{n+1}^{h}(\mathbf{x}_{i}/\left\|\mathbf{x}_{i}\right\|)\) for \(\mathbf{x}_{i}\in\mathcal{B}^{h}\)
13:    Compute \(F^{h}(u_{n+1}^{h}(\mathbf{x}_{i}))\) for all \(\mathbf{x}_{i}\in\mathcal{G}^{h}\)
14:    Set \(n\) to \(n+1\)
15: end while
16: Compute \(C=\min_{i\in\mathcal{G}^{h}}u_{K}^{h}(\mathbf{x}_{i})\)
17: Define \(u_{\epsilon}(\mathbf{x}_{i})=u_{K}^{h}(\mathbf{x}_{i})-C\)
```
**Algorithm 1** Jacobi-Type Iteration Update with Acceleration

**Remark**.: _Monotone finite difference schemes are used for discretizing fully nonlinear elliptic PDEs in order to obtain convergence guarantees for the discrete solutions, even in cases where the solution \(u_{\epsilon}\) is possibly only continuous. There is a long line of work on the subject of monotone finite-difference discretizations of fully nonlinear second-order elliptic PDE; see [1] for the theory showing uniform convergence of viscosity solutions of a monotone discretization of certain elliptic PDE, the paper [26] for how the theory allows for the construction of wide-stencil schemes, the paper [14] for a convergence framework for building such monotone discretizations on local tangent planes of the sphere, and [12] for an explicit construction of such a discretization. While it seems possible to construct a monotone scheme for the extended OT PDE (10), we defer such development to a future project._

### Computational Results

All computations in this section were performed using Matlab R2021b on a 2017 MacBook Pro, with a 2.3 GHz Dual-Core Intel Core i5 and 16 GB of 2133 MHz LPDDR3 memory. In all of the computations, we used Algorithm 1, initialized with the constant function \(u_{0}^{h}=1\), set \(\sigma=1\) (unless otherwise indicated), and chose \(\gamma(n)=(n+1)/(n+4)\).
We performed all computations in this section on the unit sphere \(\mathbb{S}^{2}\), using the squared geodesic cost function \(c(\mathbf{x},\mathbf{y})=\frac{1}{2}d_{\mathbb{S}^{2}}(\mathbf{x},\mathbf{y})^{2}\) for most computations. However, in order to show that the computational methods developed in this manuscript apply to more than just the squared geodesic cost, in Example 2 we will see the results of a computation that uses the logarithmic cost function \(c(\mathbf{x},\mathbf{y})=-\log(1-\mathbf{x}\cdot\mathbf{y})\) arising in the reflector antenna problem. It should be emphasized that even though we have chosen to show the results of many of our computations by visualizing them on the unit sphere, all computations were performed by discretizing the extended Optimal Transport problem on \(T_{\epsilon}\) as outlined in Section 4.1.

```
1: Given \(u_{0}^{h}(\mathbf{x}_{i})\) for all \(\mathbf{x}_{i}\in\mathcal{G}^{h}\), \(\Delta t>0\), \(\mathrm{tol}>0\), batch size \(B\)
2: Set \(n=0\)
3: while \(\max_{i\in I}\left|F^{h}\left(u_{n}^{h}(\mathbf{x}_{i})\right)\right|>\mathrm{tol}\) do
4:     Set \(u_{n+1}^{h}(\mathbf{x}_{i})=u_{n}^{h}(\mathbf{x}_{i})\) for all \(\mathbf{x}_{i}\in\mathcal{I}^{h}\)
5:     for \(j=1,\ldots,B\) do
6:         Set \(u_{n+1}^{h}(\mathbf{x}_{m(j,n)})=u_{n}^{h}(\mathbf{x}_{m(j,n)})+\Delta tF^{h}\left(u_{n}^{h}(\mathbf{x}_{m(j,n)})\right)\)
7:     end for
8:     Find the values \(u_{n+1}^{h}(\mathbf{x}_{i}/\left\|\mathbf{x}_{i}\right\|)\) via interpolation of the data in the list \(\left\{u_{n+1}^{h}(\mathbf{x}_{i})\right\},\mathbf{x}_{i}\in\mathcal{I}^{h}\), at the points \(\mathbf{x}_{i}/\left\|\mathbf{x}_{i}\right\|\) for all \(\mathbf{x}_{i}\in\mathcal{B}^{h}\)
9:     Set \(u_{n+1}^{h}(\mathbf{x}_{i})=u_{n+1}^{h}(\mathbf{x}_{i}/\left\|\mathbf{x}_{i}\right\|)\) for \(\mathbf{x}_{i}\in\mathcal{B}^{h}\)
10:    Compute \(F^{h}(u_{n+1}^{h}(\mathbf{x}_{i}))\) for all \(\mathbf{x}_{i}\in\mathcal{G}^{h}\)
11:    Set \(n\) to \(n+1\)
12: end while
13: Compute \(C=\min_{i\in\mathcal{G}^{h}}u_{K}^{h}(\mathbf{x}_{i})\)
14: Define \(u_{\epsilon}(\mathbf{x}_{i})=u_{K}^{h}(\mathbf{x}_{i})-C\)
```
**Algorithm 2** Gauss-Seidel-Type Batch Iteration Update

_Example 1: North Pole to South Pole._ For this example, we have chosen \(\epsilon=0.2\) and \(h=0.1\) and performed the computation on a grid of 5038 points. We performed the computation for the squared geodesic cost function \(c(\mathbf{x},\mathbf{y})=\frac{1}{2}d_{\mathbb{S}^{2}}(\mathbf{x},\mathbf{y})^{2}\) and for the following source and target density functions in Cartesian coordinates \((x,y,z)\): \[f(x,y,z)=(1-\epsilon_{3})\frac{1}{2\alpha_{5}}e^{-4(\arccos z-\frac{1}{10})^{2}}+\frac{\epsilon_{3}}{4\pi},\] \[g(x,y,z)=(1-\epsilon_{4})\frac{1}{2\alpha_{6}}e^{-3(\arccos x-\pi+\frac{3}{10})^{2}}+\frac{\epsilon_{4}}{4\pi},\] where \(\epsilon_{3}=0.4\), \(\epsilon_{4}=0.3\), \(\alpha_{5}=1.042\), and \(\alpha_{6}=2.089\), and whose extended densities are then derived by using Equation (18). Figure 3 shows the source and target densities and the computed potential function with a quiver plot showing the direction of the gradient of the potential function, overlaid on top of a world map outline that allows the reader to more easily visualize the location of the mass density concentrations. It should be clear from the formulation of the Optimal Transport problem in Equation (3) that in order to preserve mass the source mass located at the north pole needs to move to the target mass located at the south pole.
In order to achieve this, Equation (46) shows us that the potential function must have a gradient near the mass concentration at the north pole that points due south (towards the target mass distribution). Correspondingly, the direction of the mapping should point due south as well. This is precisely what we observe in Figure 3.

_Example 2: Peanut Reflector._ For this example, we perform a computation with the logarithmic cost \(c(\mathbf{x},\mathbf{y})=-\log(1-\mathbf{x}\cdot\mathbf{y})\) arising in the reflector antenna problem. In the reflector antenna problem, we use the potential function \(u_{\epsilon}\) to construct the shape of a reflector \(\rho\) via the expression: \[\rho=\mathbf{z}e^{-u_{\epsilon}(\mathbf{z})},\quad\mathbf{z}\in\mathbb{S}^{2}. \tag{61}\] Note that the reflector is a sphere only when \(u_{\epsilon}\) is constant, which occurs only when \(f=g\). In the reflector antenna problem, light originates at the origin (the center of the sphere) with a given directional light intensity pattern \(f(\mathbf{x})\). Light then travels from the origin, reflects off a reflector with shape given in Equation (61), and then travels in a direction \(\mathbf{m}(\mathbf{x})\) to yield a far-field intensity pattern \(g(\mathbf{m}(\mathbf{x}))\). Conservation of light intensity, the laws of reflection, and a change of variables allow one to solve for the reflector shape via the function \(u\) in Equation (61) by solving for \(u\) in the Optimal Transport PDE in Equations (7) and (8) with the logarithmic cost \(c(\mathbf{x},\mathbf{y})=-\log(1-\mathbf{x}\cdot\mathbf{y})\). Strictly speaking, a reflector antenna should only be a hemisphere, since the intention is to redirect directional light intensity from the origin (inside the reflector) to the far field outside the reflector. In order to do this, light must escape, and so the reflector cannot entirely envelop the origin. Temporarily putting physical realities aside, we compute the shape of the reflector for light intensity functions \(f\) and \(g\) with support equal to \(\mathbb{S}^{2}\), demonstrating that the computation can be done for the entire sphere for the cost function \(c(\mathbf{x},\mathbf{y})=-\log(1-\mathbf{x}\cdot\mathbf{y})\). We use a source density function which resembles the headlights of a car projected onto a sphere and a constant target density. The densities and the resulting "peanut-shaped" reflector are shown in Figure 4. This computation was inspired by the work in [30] and was also done in [13].

_Example 3: Non-Lipschitz Target Density._ For this example, we have chosen \(\epsilon=0.2\) and \(h=0.1\) and performed the computation on a grid of 5038 points. We perform the computation for the squared geodesic cost function \(c(\mathbf{x},\mathbf{y})=\frac{1}{2}d_{\mathbb{S}^{2}}(\mathbf{x},\mathbf{y})^{2}\) and for the following source and target density functions in Cartesian coordinates \((x,y,z)\): \[f(x,y,z)=(1-\epsilon_{5})\frac{1}{\alpha_{7}}e^{-5(\arccos x)^{2}}+\frac{\epsilon_{5}}{4\pi},\] \[g(x,y,z)=\begin{cases}(1-\epsilon_{6})/2\pi,&\text{if }z\geq 0,\\ \epsilon_{6}/4\pi,&\text{otherwise},\end{cases}\] where \(\epsilon_{5}=0.3\), \(\epsilon_{6}=0.1\), and \(\alpha_{7}=0.607788\), and the extended density functions are given by Equation (18). The target mass density in this example is discontinuous and nearly zero over half the sphere. This is an important and difficult test example since the target density, while bounded away from zero, is not Lipschitz because it is discontinuous.
Figure 3. Source density in Figure 3(a), target density in Figure 3(b), and the potential function with the direction of the gradient (red arrows) shown in Figure 3(c), with an outline of the land masses overlaid to enhance visualization.
Figure 4. Source density in Figure 4(a), target density in Figure 4(b), and the resulting reflector shape. Note that the reflector is not a sphere, but rather the shape given in Equation (61).
Figure 5. Source density shown in Figure 5(a), target density in Figure 5(b), and the potential function with its gradient (red arrows) showing mass moving from the middle of the Pacific Ocean to the northern hemisphere in Figure 5(c).
Figure 5 shows the source and target densities and the resulting quiver plot of the direction of the gradient of the potential function. We have source mass concentrated in the middle of the Pacific Ocean being transported to a constant mass density covering the northern hemisphere (note that the maximum values shown in Figure 5 may be the same shade of yellow, but the actual maximum value of the source density is much higher than that of the target density). Equation (46) tells us that the gradient of the potential function around where the source mass is concentrated (the middle of the Pacific Ocean) should be pointing to the northern hemisphere, so the mass gets appropriately spread out. This is, in fact, what we see from Figure 5. _Example 4: Constant Solution._ In the case that the source density function equals the target density function, \(f=g\), the solution to the Optimal Transport problem on the sphere is given by the potential function \(u=\text{constant}\), or \(m(x)=x\), for any appropriate cost function. We test our scheme for the squared geodesic cost function \(c(\mathbf{x},\mathbf{y})=\frac{1}{2}d_{\mathbb{S}^{2}}(\mathbf{x},\mathbf{y})^{2}\). Note, again, that all computations are performed on \(T_{\epsilon}\), and thus, for this example we give the formulas for \(f_{\epsilon}\) and \(g_{\epsilon}\): \[f_{\epsilon}(x,y,z)=g_{\epsilon}(x,y,z)=\frac{x^{2}+y^{2}+z^{2}}{8\epsilon\pi}. \tag{62}\] We emphasize that these densities are not constant in \(T_{\epsilon}\); however, they are derived from the constant densities \(f=g=1/4\pi\) defined on the unit sphere. Even though these densities are not constant, the computed potential function is very close to being constant. For \(\epsilon=0.2\) and \(h=0.1\), we have computed a solution \(u\) which satisfies \(|\max_{i}u(x_{i})-\min_{i}u(x_{i})|=0.000563\); see Figure 6.
Figure 6. A vertical cross section of the densities \(f_{\epsilon},g_{\epsilon}\) in Figure 6(a) and the resulting solution \(u\), which is approximately constant, in Figure 6(b).
#### 4.2.1. Studies with Varying \(\sigma\), \(\epsilon\), and \(h\)
Fixing \(\sigma=1\), we demonstrate that changing the width of the tubular neighborhood does not change the computed solution. The computation is performed using the source and target densities \[f(x,y,z) =(1-\epsilon_{2})\left(\frac{1}{2\alpha_{1}}e^{-4(\arccos z-\frac{1}{2})^{2}}+\frac{1}{2\alpha_{2}}e^{-4(\arccos(y)-\frac{5}{2})^{2}}\right)+\frac{\epsilon_{2}}{4\pi},\] \[g(x,y,z) =(1-\epsilon_{2})\left(\frac{1}{2\alpha_{3}}e^{-4(\arccos x-\pi+\frac{9}{16})^{2}}+\frac{1}{2\alpha_{4}}e^{-4(\arccos(z)-\frac{7}{16})^{2}}\right)+\frac{\epsilon_{2}}{4\pi},\] where \(\epsilon_{2}=0.2\), \(\alpha_{1}=2.57656\), \(\alpha_{2}=3.15727\), \(\alpha_{3}=4.10094\), \(\alpha_{4}=3.38728\), and whose extended densities are given by Equation (18).
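For reference, these two-bump densities are straightforward to evaluate at Cartesian points on the sphere; a minimal sketch with the constants exactly as stated above (the extension to \(T_{\epsilon}\) via Equation (18) is omitted):

```python
import numpy as np

EPS2, A1, A2, A3, A4 = 0.2, 2.57656, 3.15727, 4.10094, 3.38728

def f_density(x, y, z):
    # mixture of two angular Gaussian bumps plus a uniform term (formula above)
    bumps = (np.exp(-4.0 * (np.arccos(z) - 0.5) ** 2) / (2.0 * A1)
             + np.exp(-4.0 * (np.arccos(y) - 2.5) ** 2) / (2.0 * A2))
    return (1.0 - EPS2) * bumps + EPS2 / (4.0 * np.pi)

def g_density(x, y, z):
    bumps = (np.exp(-4.0 * (np.arccos(x) - np.pi + 9.0 / 16.0) ** 2) / (2.0 * A3)
             + np.exp(-4.0 * (np.arccos(z) - 7.0 / 16.0) ** 2) / (2.0 * A4))
    return (1.0 - EPS2) * bumps + EPS2 / (4.0 * np.pi)
```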
In Figure 7, we show the potential functions \(u\) by varying \(\epsilon\) and \(h\). The difference in the two solutions has an approximate computed \(L^{\infty}\) norm of \(0.0059\). The error is computed by interpolating both solutions onto the unit sphere in order to directly compare the values. The grid with \(\epsilon=0.2\) and \(h=0.1\) has \(5038\) points and the grid with \(\epsilon=0.1\) and \(h=0.05\) has \(20068\) points. In our second study, we examine the convergence of the residual when we change \(\epsilon\) and \(h\). Setting \(\sigma=1\), we compare the convergence rate of the residual to tolerance \(0.1\) for a test using \(\epsilon=0.2\) and \(h=0.1\) and another test using \(\epsilon=0.1\) and \(h=0.05\) and the result is shown in Figure 8. The number of iterations to the same tolerance are very similar, showing that Algorithm 1 is not sensitive, in terms of number of iterations, to changes in \(h\) and \(\epsilon\), as desired. Naturally, the test using \(\epsilon=0.1\) and \(h=0.05\) has more computational points, so each iteration takes longer. In our third study, we examine what happens when we vary \(\sigma\), the penalty parameter. The results seem to indicate that above a certain threshold, the choice of \(\sigma\) does not change the computed solution. The following table shows the maximum absolute difference between the potential function computed with different values for \(\sigma\) for \(\epsilon=0.2\) and \(h=0.1\). Values on the diagonal of the table above are either redundant or uninformative and thus are omitted. #### 4.2.2. Effect of \(\epsilon\) and \(h\) on Condition Number In our first study of varying \(\epsilon\) and \(h\), we approximately fix the number of interior points \(N_{I}\) and vary the value of \(\epsilon\) and \(h\). With a fixed time-step \(\Delta t=0.000025\), we find that the number of iterations \(k\) required to reach a residual tolerance of \(1\) is approximately constant. \begin{tabular}{||c|c|c||c||} \hline \(N_{I}\) & \(\epsilon\) & \(h\) & \(k\) \\ \hline \hline 10552 & 0.1 & 0.046 & 2335 \\ \hline 10424 & 0.15 & 0.057 & 2556 \\ \hline 10516 & 0.2 & 0.065 & 2588 \\ \hline 10698 & 0.25 & 0.072 & 2598 \\ \hline 10320 & 0.3 & 0.079 & 2615 \\ \hline \end{tabular} In our second study, we fix \(h\) and change \(\epsilon\), while using the same step size \(\Delta t=0.000025\) and record the number of iterations \(k\) required to reach a residual tolerance of \(1\). Again, we observe that the number of iterations does not vary much until \(N_{I}\) becomes too small. Figure 8. The convergence of the residual for \(\epsilon=0.2\) and \(h=0.1\) shown in black and \(\epsilon=0.1\) and \(h=0.05\) shown in red for Algorithm 1. ## 5. Conclusion In this article, we have extended the Optimal Transport problem on a compact 2D surface \(\Gamma\subset\mathbb{R}^{3}\) onto a thin tubular neighborhood \(T_{\epsilon}\) with width \(\epsilon\). We showed how one can then compute the Optimal Transport mapping \(\mathbf{m}\) for the Optimal Transport problem on \(\Gamma\) by solving instead for the Optimal Transport mapping \(\mathbf{m}_{\epsilon}\) which is the solution to the extended Optimal Transport problem on \(T_{\epsilon}\). The key is to extend the density functions and cost function in an appropriate way. The primary benefit of this extension is that the PDE formulation of the Optimal Transport problem on \(T_{\epsilon}\) has only Euclidean derivatives. This allows us the flexibility to design a discretization that uses a Cartesian grid. 
We have discretized the extended PDE formulation of the Optimal Transport problem on \(T_{\epsilon}\) and shown its ease of implementation and success with various computational examples on the sphere, some of which are very challenging with other currently available methods. ## Acknowledgment Tsai's research is supported partially by National Science Foundation Grants DMS-2110895 and DMS-2208504.
2305.14840
Predicting Token Impact Towards Efficient Vision Transformer
Token filtering to reduce irrelevant tokens prior to self-attention is a straightforward way to enable efficient vision Transformer. This is the first work to view token filtering from a feature selection perspective, where we weigh the importance of a token according to how much it can change the loss once masked. If the loss changes greatly after masking a token of interest, it means that such a token has a significant impact on the final decision and is thus relevant. Otherwise, the token is less important for the final decision, so it can be filtered out. After applying the token filtering module generalized from the whole training data, the token number fed to the self-attention module can be obviously reduced in the inference phase, leading to much fewer computations in all the subsequent self-attention layers. The token filter can be realized using a very simple network, where we utilize multi-layer perceptron. Except for the uniqueness of performing token filtering only once from the very beginning prior to self-attention, the other core feature making our method different from the other token filters lies in the predictability of token impact from a feature selection point of view. The experiments show that the proposed method provides an efficient way to approach a light weighted model after optimized with a backbone by means of fine tune, which is easy to be deployed in comparison with the existing methods based on training from scratch.
Hong Wang, Su Yang, Xiaoke Huang, Weishan Zhang
2023-05-24T07:44:16Z
http://arxiv.org/abs/2305.14840v1
# Predicting Token Impact Towards Efficient Vision Transformer ###### Abstract Token filtering to reduce irrelevant tokens prior to self-attention is a straightforward way to enable efficient vision Transformer. This is the first work to view token filtering from a feature selection perspective, where we weigh the importance of a token according to how much it can change the loss once masked. If the loss changes greatly after masking a token of interest, it means that such a token has a significant impact on the final decision and is thus relevant. Otherwise, the token is less important for the final decision, so it can be filtered out. After applying the token filtering module generalized from the whole training data, the token number fed to the self-attention module can be obviously reduced in the inference phase, leading to much fewer computations in all the subsequent self-attention layers. The token filter can be realized using a very simple network, where we utilize multi-layer perceptron. Except for the uniqueness of performing token filtering only once from the very beginning prior to self-attention, the other core feature making our method different from the other token filters lies in the predictability of token impact from a feature selection point of view. The experiments show that the proposed method provides an efficient way to approach a light weighted model after optimized with a backbone by means of fine tune, which is easy to be deployed in comparison with the existing methods based on training from scratch. ## 1 Introduction Transformer as an emerging model for natural language processing [31] has attracted much attention in computer vision. So far, a couple of vision Transformers have been proposed and made tremendous success in promising superior performance in a variety of applications compared with convolution neural network based deep learning frameworks [7, 29, 3, 1, 18, 32, 13, 23, 34]. At the same time, a major problem arises: The heavy computational load prevents such models from being applied to edge computing-based applications. Therefore, a recent trend has been shifted to develop light weighted models of vision Transformer (ViT). It is known that self-attention is the major bottleneck to incur dense computations in a Transformer as it requires permutation to couple tokens. Accordingly, the recent efforts were devoted to the following trials: (1) Enforce the self-attention to be confined in a neighborhood around each token such that fewer tokens will be involved in updating each token. The methods falling in this category include Swin Transformer [23], Pale Transformer [33], HaloNet [30], and CSWin Transformer [6]. These methods are based on such an assumption that tokens spatially far away are not semantically correlated, but this does not always hold true. Moreover, since the neighborhood to confine self-attention is predefined, not machine learning based, it may sometimes not be coherent to practice. (2) Another solution aims to modify the self-attention operations internally [36, 4, 2, 15]. By changing the computing order in self-attention while incorporating the combination of multiple heads into the self-attention, the complexity of Hydra Attention [2] could be made relatively low provided no nonlinear component is contained in the self-attention module, which is a strong constraint to prevent such a solution from being applied broadly. 
(3) On account of the \(O(N^{2}d)\) complexity of self-attention, where \(N\) is the token number and \(d\) the feature dimension, a straightforward way is to reduce the number of tokens fed to self-attention instead of the effort to modify self-attention itself. One methodology is to group similar tokens into clusters via unsupervised learning and let each cluster act as a higher-level abstractive representation to take part in the self-attention [39, 20]. Here, the difficulty lies in the quality control of clustering, which may lead to not semantically meaningful representations, and thus affect the final decision negatively. The other kind of solution aims to reduce the token number by applying tokens filter explicitly or based on certain heuristics. In [26], a couple of token filters realized using multi-layer perceptron (MLP) are incorporated into some middle layers of ViT as gating functions, which are trained end-to-end with the backbone [29, 14], such that the tokens resulting from one self-attention layer can be selectively forwarded to the subsequent self-attention layers. In [37], an early stop criterion based on the accumulated token value at the first dimension is proposed. In [21], token importance is assumed to be its attentive weight correlated to class token. However, the complex coupling layer by layer brings in uncertainty to the attentive weights in terms of correlating to class token, so gradual token filtering has to be applied while the less attentive tokens are also preserved to aid further testing. In sum, these token filtering methods miss to address the following issue: They are based on heuristics [37, 21] or enclosed in the end-to-end training with backbone [26], so the rationality of discarding some tokens selectively is not straightforward. In other words, due to the heuristic and less explainable nature of these methods, they are unable to foresee the impact of a token on the final decision explicitly. Therefore it is impossible for them to filter out all irrelevant tokens from the very beginning and token filtering has to be done gradually in a layer-wise manner, which results in unpredictable token filtering on the fly, not favored by parallel computing. This study aims to solve the aforementioned problem by proposing a ranking method to measure how relevant a token is in regard to the final decision. Based on such a measure, then, we proceed to train a binary classifier as a token filter with learnable parameters generalized from the whole training corpus, such that we can filter out irrelevant tokens from the very beginning prior to self-attention. For this sake, we propose a measure referred to as delta loss (DL) to evaluate how much the loss changes once masking the token of interest, where the naive Transformer can act as the agent to score the difference of loss caused by with or without a token of interest. The mechanism is similar to a wrapper in the sense of classical feature selection [17]. Then, we label the tokens resulting in big DL values as positive instances since masking them will have a significant impact on the final decision. Further, we train a MLP based binary classifier using the labeled tokens based on their DL values. Finally, we apply such a token filter on each token, prior to all the subsequent Transformer blocks, and fine tune the whole pipeline end-to-end. As a result, the irrelevant tokens can be discarded from the very beginning, which is a one-pass process in contrast to reducing token numbers gradually [26, 37, 21]. 
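To make this scoring idea concrete before the formal treatment in Section 3, the sketch below measures, for one image, how the cross-entropy loss changes when each token is zeroed out; `backbone` is a placeholder for a frozen classifier that accepts a sequence of embedded tokens, and the procedure actually used over the training corpus is the one formalized later in Algorithm 1.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def delta_loss_scores(backbone, tokens, label):
    """tokens: (N, d) embedded patch tokens of one image; label: int class index.
    Returns DL_i = L - L_i, the change in loss caused by masking token i."""
    target = torch.tensor([label])
    base_loss = F.cross_entropy(backbone(tokens.unsqueeze(0)), target)
    scores = torch.zeros(tokens.size(0))
    for i in range(tokens.size(0)):
        masked = tokens.clone()
        masked[i] = 0.0                                  # mask (zero out) the i-th token
        loss_i = F.cross_entropy(backbone(masked.unsqueeze(0)), target)
        scores[i] = base_loss - loss_i                   # delta loss for token i
    return scores
```

Thresholding these scores yields the pseudo labels that are later used to train the binary token filter.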
The contribution of this work is as follows: (1) In the context of light weighted ViT, it is the first time that token filtering is proposed from a feature selection point of view to rank the relevance of each token in regard to the final decision. Hence, whether a token makes sense for the final decision becomes predictable from the very beginning, which can prevent irrelevant tokens from taking part in self-attention to the best extent. As a one-pass filter deployed at the very beginning prior to self-attention, it can lead to higher efficiency with even fewer token dropout compared with gradual token dropout throughout the pipeline. (2) We propose a new metric referred to as delta loss to weigh the importance of each token in terms of affecting the final decision and then force the token classifier to optimize its performance on the pseudo labels quantized from the DL values. (3) The only change compared to the original ViT is applying a MLP as the pre-filter for binary classification, which is fine turned with backbone, so the deployment is quite simple compared with the state-of-the-art (SOTA) methods, which rely on training from scratch. (4) The experiments show that the proposed method promises SOTA performance in terms of both precision and efficiency in an overall sense. ## 2 Related works **Vision Transformer.** Transformer is initially applied in natural language processing (NLP) [31]. ViT [7] is the first work extending Transformer to computer vision by using no-overlapping image patches for image classification such that no convolution operation is needed. It shows comparable performance to convolution neural networks (CNN) on large-scale datasets. To perform well on various vision tasks, however, ViT and some of its following variants [3, 18, 1] require large-scale data and long training time for pre-training. DeiT [29] improved the training configuration by using a novel distillation method and proposed a Transformer architecture that can be trained only with ImageNet1K [5], whose performance is even better than ViT. **Efficient Transformer.** Although Transformer has recently led to great success in computer vision, it suffers from dense computations arising from self-attention, which is also the major mechanism to grant the promising performance in various down-streaming applications. Therefore, recent efforts are focused on proposing various methods to reduce the self-attention caused by dense computations. Provided there are \(N\) tokens of \(d\) dimension corresponding with the image patches, the self-attention to correlate every couple from the permutation of the \(N\) tokens will result in \(O(N^{2}d)\) complexity in a simple updating round. For deploying Transformer on edge devices, a variety of simplified models have been proposed, aiming to reduce parameters and operations, for example, parameter pruning [12, 28], low-rank factorization [38], and knowledge distillation [24, 35]. Yet, these strategies for acceleration are limited in that they still rely on CNN, which deviates from the original design of Transformer, that is, facilitating deep learning with a new working mechanism other than CNN. One way for rendering light weighted vision Transformer is to simplify the layers of Transformer [40, 25, 8], but its benefit is limited since the major complexity arises from self-attention, not layer stack. So, some other efforts are focused on altering the internal operations of Transformer to make self-attention more efficient [36, 4, 2]. 
As for Hydra Attention [2], the computing order insider self-attention is reorganized while the combination of multiple heads is incorporated into self-attention to reduce the complexity. Nevertheless, it is workable only when no nonlinear component such as SoftMax is applied in self-attention, which limits its applications. Some other methods try to alleviate the computations of self-attention by reducing the number of tokens. One way is to enforce the computation of self-attention to be conducted in a predefined local region, for instance, Swin Transformer [23], Pale Transformer [33], HaloNet [30], and CSWin Transformer [6]. These methods are based on the assumption that image patches located far from each other are not semantically relevant, but this only partially holds true. Besides, since determining the local context does not rely on machine learning, it cannot be adaptive to various real scenarios end-to-end. Another solution is grouping similar tokens together to obtain more abstractive sparse token representations from clustering. The self-attention confined to such highly abstractive representations can thus be made efficient. TCFormer [39] fuses the tokens in the same cluster into a new one utilizing a weighted average, and the tokens involved in self-attention can then be reduced layer by layer. When tackling high-resolution images, Liang et al. [20] leverage clustering in the first few layers to reduce the number of tokens and reconstruct them in the last few layers. Thus, the dense computations on self-attention can be avoided in the middle layers. The limit for the clustering based methods is: They simply merge similar tokens but ignore the quality control of token clustering in case some clusters might be spanned by less homogeneous tokens. Since the aforementioned approaches suffer from hard quality control or lack of machine learning, this gives rise to another methodology, which aims to filter out tokens gradually throughout the pipeline of ViT. Dynamic ViT [26] incorporates a couple of learnable neural networks to the middle layers of ViT as the gating structure to make tokens gradually sparser throughout a relatively long course. A-ViT [37] calculates the accumulated halting probability of each token by using the feature values resulting from each Transformer layer, which gradually reduces the number of tokens without adding any additional modules, but could result in suddenly halted computing on a token, in general, not favored when scheduling parallel computing. E-ViT [21] assumes that top-k attentive weights correspond with relevant tokens but it still preserves irrelevant tokens throughout the whole pipeline to undergo a gradual token dropout procedure. The reason is: Token impact cannot be related to final decision in an explicit way due to the complex inter-layer coupling between tokens when back tracing each token's correlation to class token. Besides, every trail of the hyper parameter k in preserving selectively the top-k attentive tokens will lead to a new-round training from scratch. A common limit of the aforementioned approaches is: All such works rely on the running results of the backbone for token filtering, as it is impossible for them to foresee the token-caused effect on the final decision from the very beginning. In view of such a limit, we propose a new method from a feature selection point of view to conduct token filtering from the very beginning prior to self-attention to filter out truly irrelevant tokens. 
**Feature selection.** In the literature on deep learning, Le et al. [19] proposed a feature selector by adding a sparse one-to-one linear layer. It directly uses network weight as the feature weight, so it is sensitive to noise. Roy et al. [27] used the activation potential as a measure for feature selection at each single input dimension but is limited to specific DNNs. Since then, the interest has been turned to the data with a specific structure, which relies more on the progress of traditional data feature selection methods [10, 22]. AFS [11] proposes to transform feature weight generation into a mode that can be solved by using an attention mechanism. Takumi et al. [16] proposed a method that harnesses feature partition in SoftMax loss function for effectively learning the discriminative features. However, these methods are focused on reducing feature map or selecting channels of CNN rather than Transformer. We are the first to use the delta loss value as an indicator for identifying relevant Transformer tokens from a wrapper-based feature selection point of view [17] by testing their impact on the final decision once masked. ## 3 Dl-ViT We propose a metric referred to as delta loss to weigh how vital a token is. In detail, we mask a token at first and then compute its impact on the loss, say, the change of cross entropy with/without such token for the final decision. If masking a token leads to big DL, it means that such a token does affect the final decision much, which should be preserved to take part in the subsequent self-attentions. Vice versa, if the loss does not change much with/without a token, such a token should be discarded due to its less importance to the decision. Correspondingly, a plausible trick arising from the aforementioned scheme is: The Transformer itself can act as the agent to score the importance of each token via DL without any further machine learning required in this phase. By using the DL scores to label the tokens in the training corpus as positive or negative, we can then train a binary classifier to check whether the tokens of an input image should be preserved to take part in the subsequent self-attentions, where the classifier is implemented using MLP. Finally, we preset the simple MLP module prior to the backbone Transformer, and fine tune the whole pipeline, where preset a token filter as such is the only change in the architecture. In the following, we describe the two phases of the delta loss based efficient vision Transformer (DL-ViT): Evaluating token importance with delta loss to train the token filter and then fine tuning the entire network after incorporating the token filter. After an image passes through the embedding layer of the vision Transformer, the non-overlapping image patches are encoded into tokens denoted as: \[X=\left\{x_{i}\in\mathcal{R}^{d}|i=1,2\dots,N\right\}, \tag{1}\] where \(N\) is the total number of tokens and \(d\) the embedding dimension. After masking the \(i\)-th token \(x_{i}\in\mathcal{R}^{d}\), we get the tokens in the following form: \[X_{i}=\left\{x_{1},\dots x_{i-1},\varnothing,x_{i+1},\dots,x_{N}\right\}, \tag{2}\] where \(\varnothing\) means replacing the \(i\)-th token with zeros (masking). Then, we feed \(X\) and \(X_{i}\) to the Transformer, respectively, to obtain the corresponding prediction results: \[\hat{y}=Transformer(X), \tag{3}\] \[\hat{y}_{i}=Transformer(X_{i}). 
\tag{4}\] Based on the previous prediction results, we calculate the cross-entropy loss of either case in reference to the ground truth y as follows: \[\mathcal{L}=CrossEntropy(\hat{y},y), \tag{5}\] \[\mathcal{L}_{i}=CrossEntropy(\hat{y}_{i},y). \tag{6}\] It is known that the value of loss measures how close the prediction result approaches the ground truth, where a lower value corresponds with closer to the ground truth. Let: \[\Delta\mathcal{L}_{i}=\mathcal{L}-\mathcal{L}_{i}. \tag{7}\] Obviously, if the delta loss defined in Eq. (7) is positive, it means that masking the \(i\)-th token makes the decision closer to the ground truth since masking as such causes a lower cross-entropy value in contrast to the original case. In such a case, discarding the token should not affect but benefit the decision of ViT, and a bigger delta loss corresponds with a better change on the decision. So, we quantize the delta loss measure to mark whether the current token should be discarded or not, formulated as: \[label(x_{i})=\begin{cases}0,&\mathcal{L}-\mathcal{L}_{i}\leq\rho\\ 1,&\mathcal{L}-\mathcal{L}_{i}>\rho\end{cases} \tag{8}\] where 0 means leaving the token out, 1 preserving the token, and \(\rho\) the only hyperparameter to control the significance of the pseudo labeling. After labeling all the tokens in the training corpus, we can then proceed to learn the generalizable law to distinguish positive token examples from negative ones in a population sense, which leads to a binary classifier realized using MLP for token filtering, acting to determine whether each token should be preserved to the next phase of the pipeline or not. So far, there is still a critical problem to be tackled, that is, some similar tokens may lead to contradicting results in terms of delta loss. This is quite common when two images share some similar patches locally but are quite different in an overall sense. Such semantically ambiguous local patches impose difficulty on token filter training, so we attach the profile featuring the whole image to each token as context, namely, global feature, to solve this problem. That is, we not only use the tokens with original embedding but also apply adaptive average pooling (AAP) over all tokens of an image to obtain the global feature of the image, acting as the context to make each token distinguishable from the Figure 1: The overall training process: The two branches of the vision Transformer are in fact the same one, whose parameters are fixed during training. The delta loss and \(\rho\) refer to Eq. (7) and Eq. (8). others. Thus, the overall descriptor for each token becomes: \[x_{i}^{\prime}=[x_{i},x_{global}]. \tag{9}\] \[x_{global}=AAP(X)=\frac{1}{N}\Sigma_{k=1}^{N}x_{k}. \tag{10}\] Consequently, \(x_{i}^{\prime}\) instead of \(x\) is fed to the token selection module for training and inference: \[p_{i}=Sigmoid(MLP(x_{i}^{\prime})). \tag{11}\] During training, we first fix all parameters of the pre-trained backbone Transformer for token labeling, and then, train the MLP only. Here, we use binary-cross-entropy loss to train the network: \[\mathcal{L}_{MLP}=BinaryCrossEntropy(p_{i},label(x_{i})), \tag{12}\] where \(p_{i}\) is the prediction from MLP, and \(label(x_{i})\) the pseudo label calculated from Eq. (8). Fig. 1 depicts how to train the token selection module with delta loss. Algorithm 1 and Algorithm 2 describe respectively how to label tokens with naive DeiT [29] and how to train the selection module. 
``` 0:X = \(\left\{x_{i}\in\mathcal{R}^{d}|i=1,2,\ldots,N\right\}\), and the corresponding ground truth \(y\). 0:Label = \(\{label(x_{i})|i=1,2,\ldots,N\}\). 1:Label = \(\varnothing\) 2:Set \(\rho\) to control the significance of pseudo labeling. 3:\(\hat{y}=Transformer(X)\) 4:\(\mathcal{L}=CrossEntropy(\hat{y},y)\) 5:for \(i=1,2,\ldots,N\) do 6:\(X_{i}=\{x_{1},\ldots x_{i-1},\varnothing,x_{i+1},\ldots,x_{N}\}\) 7:\(\hat{y}_{i}=Transformer(X_{i})\) 8:\(\mathcal{L}_{i}=CrossEntropy(\hat{y}_{i},y)\) 9:if \(\mathcal{L}-\mathcal{L}_{i}\leq\rho\) then 10:\(label(x_{i})=0\) 11:else 12:\(label(x_{i})=1\) 13:endif 14:Label = Label \(\cup label(x_{i})\) 15:endfor 16:return Label ``` **Algorithm 1** Token labeling with naive DeiT [29]
As shown in Fig. 2, before entering the Transformer, all tokens must go through the token selection module that will output the decision of keeping or discarding the token. During fine tuning, we train both the token selection module and the DeiT end-to-end, based on the cross-entropy loss: \[\mathcal{L}_{finetune}=CrossEntropy(\hat{y},y), \tag{13}\] where \(y\) is the ground truth and \(\hat{y}\) the output of the whole network. During fine tuning, in order to make it easy to parallelize the computation, we do not delete tokens directly but replace them with zeros to prevent them from affecting subsequent operations. Such a token masking strategy makes the computational cost of the training iterations similar to that of the original vision Transformer. During inference, we throw the masked tokens out of the subsequent calculations in order to examine the actual acceleration resulting from the token selection mechanism.
Figure 2: The overall fine tuning and inference process of the proposed approach. All the tokens enter the selection module in turn to decide whether they should be passed to the subsequent pipeline of the Transformer according to the predicted probability, after which the number of preserved tokens remains unchanged in the rest of the pipeline.
## 4 Experiments **Data:** We evaluate our method for image classification on the 1000-class ImageNet1K ILSVRC 2012 dataset [5], and all images have a resolution of \(224\times 224\). **Experimental setting:** Following the baselines [37, 26, 21], we use the data-efficient vision Transformer (DeiT) [29] as the backbone, and following its training principles, we only use the ImageNet1K dataset for training. We use \(16\times 16\) patch resolution and SGD optimization. The MLP is composed of \(3\) layers with ReLU for the first two layers and Sigmoid for the last layer as the activation, and the numbers of neurons are set to \(384\), \(100\), and \(1\) for each layer, respectively. When training the MLP, we use the pre-trained model of DeiT to compute the loss value, and the learning rate is fixed to \(1\times 10^{-2}\). When fine tuning the whole network, the learning rate is \(1\times 10^{-3}\) and is reduced by a factor of 10 every 40 epochs. For regularization, we set the weight decay of the optimizer to \(1\times 10^{-4}\) in both MLP training and fine tuning. Starting from publicly available pre-trained checkpoints and the pre-trained token selection module, we fine tune the DL-ViT-T/S variant models evolved from DeiT-T/S [29] for 100 epochs, where T/S refers to the 3-head/6-head, 192-dimension/384-dimension implementation on 12 layers, respectively. We use 2 NVIDIA 3090 GPUs for training.
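A minimal sketch of the token-selection module with the layer sizes just stated (384, 100, 1 neurons; ReLU, ReLU, Sigmoid), assuming PyTorch: the input for each token is its embedding concatenated with the adaptive-average-pooled global feature of Eqs. (9)-(10), and `embed_dim=192` corresponds to DeiT-T. This is an illustrative stand-in, not the authors' released code, and the thresholding/masking logic around it is omitted.

```python
import torch
import torch.nn as nn

class TokenFilter(nn.Module):
    """Token-selection MLP: ReLU on the first two layers, Sigmoid on the last,
    with 384/100/1 neurons as described in the experimental setting."""
    def __init__(self, embed_dim=192):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * embed_dim, 384), nn.ReLU(),
            nn.Linear(384, 100), nn.ReLU(),
            nn.Linear(100, 1), nn.Sigmoid(),
        )

    def forward(self, tokens):
        # tokens: (B, N, d). Global feature = average over tokens (Eq. (10)),
        # attached to every token as context (Eq. (9)).
        global_feat = tokens.mean(dim=1, keepdim=True).expand_as(tokens)
        x = torch.cat([tokens, global_feat], dim=-1)
        return self.mlp(x).squeeze(-1)   # keep-probability p_i per token (Eq. (11))
```

During fine tuning, tokens judged irrelevant are replaced with zeros (as described above) so that computation stays parallelizable; at inference the masked tokens are removed from the subsequent calculations.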
### Intuitive insight from statistics **Distribution of DL values:** In order to examine the rationality of our method intuitively, we visualize the distribution of all DL values in Fig. 3. We find that small DL values around \(0\) dominate the majority of the distribution, which reveals the fact that only a part of the image patches are significantly relevant to classification. So a considerable number of tokens with smaller DL values can be discarded. Fig. 4 depicts the average DL value of the tokens at each patch resulting from DeiT-T [29] on the training set of ImageNet1K. We find that most semantically important patches are at the center of an image, while the rest tend to fall into the outliers that are eliminated more frequently by our method. **Qualitative analysis.** Fig. 5 visualizes the masks on the ImageNet1K validation set resulting from DL-ViT, where the dark portions, appearing mostly along the edges of an image, are the less contributive patches for classification, namely, the outliers favored by the token filter for elimination. Sometimes, the token selection module eliminates not only the background of the image, but also the confusing portion that may cause classification errors. For example, the third image in the last row of Fig. 6 prefers eliminating the patches unrelated to the dog.
Figure 3: Distribution of all the DL values on ImageNet1K.
Figure 4: Average DL of every image patch obtained from the training set.
Figure 5: The average of masks predicted by our token selection module on the ImageNet1K validation set.
### Comparison to baselines We compare our method with the baselines in Table 1 in terms of efficiency and precision, where we set \(\rho\) to 0.002 and 0.001 for DL-ViT-T and DL-ViT-S, respectively. At the cost of sacrificing only \(0.3\%\) and \(0.2\%\) accuracy compared with that of the backbone, we cut down \(46\%\) and \(15\%\) of the FLOPs of DeiT-T and DeiT-S, respectively. Moreover, our method performs best among the baselines in making DeiT-T more efficient and more precise, where the Floating-point Operations (FLOPs) metric is measured by FlopCountAnalysis [9]. As E-ViT is an exception that only reports a comparison on ViT-S, we follow A-ViT [37] and Dynamic-ViT [26] to report the performance on both ViT-S and ViT-T. Regarding ViT-S, no method performs best on all metrics, where E-ViT runs faster but is inferior to DL-ViT on top-1 precision. In addition to achieving the highest top-1 precision on both benchmarks, DL-ViT promises state-of-the-art (SOTA) performance in an overall sense when both benchmarks are taken into account. Since E-ViT does not compare with all the baselines on ViT-T, beyond raw performance we also compare it with DL-ViT in a methodological sense to allow a more comprehensive insight: (1) We evaluate token importance via delta loss while E-ViT leverages top-k attention weights as token importance; (2) We filter out irrelevant tokens from the very beginning but E-ViT does this gradually and preserves both important and less important tokens in the whole pipeline. That is, E-ViT cannot foresee the impact of each token on the final decision at the beginning but DL-ViT can. (3) E-ViT modifies the self-attention, and the whole pipeline has to be changed wherever attention is applied, so it has to be trained from scratch, which is too expensive compared with the fine tuning adopted in our framework. As we change nothing in ViT, the pseudo labeling is performed by using the naive ViT without any training. Besides, the MLP is a two-class classifier, whose training is not tough.
In this sense, the change on the architecture is minor. (4) In DL-ViT, \(\rho\) controls the significance of pseudo labeling, where the heuristics to choose its value lies in the statistics of DL values as shown in Fig. 3. For E-ViT, determining k is not easy in that there is no explicit heuristic to foresee its impact on the overall performance, and every trail will lead to a new-round computation-intensive training from scratch. Besides, the layer-varying token importance accounts for why layer-wise token dropout has to be done gradually. In Fig. 8 and Fig. 8, we compare DL-ViT with the baselines under different settings. It is obvious that our model can achieve a good trade-off between efficiency and precision. In addition to FLOPs, we also evaluate the image throughput of our model on a single NVIDIA RTX 3090 GPU with batch size fixed to 64, and for GPU warming up, 512 forward passes are conducted. The experiment demonstrates that our DL-ViT can accelerate the inference by \(15\%\sim 41\%\). ### Ablation study In our method, the full configuration of a solution is subject to the following factors: The backbone for token importance evaluation, the threshold \(\rho\) to control the annotation on DL values, MLP, and the local/global feature applied to it. As shown in Table 2, a high threshold value of \(\rho\) can filter out more tokens, resulting in higher efficiency, but a too high one will cause degradation in precision. So, there is a compromise to determine the value of \(\rho\), where we let \(\rho=0.002\) for DL-ViT-T. Note that when \(\rho=0.002\), the accuracy of using only local features is even higher than that of DeiT-T [29], at the cost of sacrificing FLOPs. Yet, we incorporate global feature as our primary solution due to its promising overall performance. Note that both cases lead to varying accuracy when \(\rho\) changes from 0.001 to 0.003, but the FLOPs with global feature remain stably low. Table 3 shows that the proposed model degrades in terms of precision if replacing the pre-training of MLP with the random initialization, and the backbone based on random token filtering also leads to inferior performance. This indicates that our token filtering scheme does contribute to making the DeiT-T efficient while preserving its precision to the best extent. ## 5 Conclusions We develop an efficient vision transformer with token impact prediction such that token filtering can be deployed at the very beginning prior to self-attention, where the backbone Transformer is used as an agent/wrapper to rank the impact in terms of the difference of loss caused by masking a token of interest. It is the first time to develop a light-weighted model from a feature selection point of view with explicit insight into token's relevance to the decision. A MLP for token filtering is the only added module, which acts as a two-class classifier with minor change on the overall architecture, and its training is not tough. The present solution is a one-pass filter. In the future, we will investigate into the relevance of tokens at middle layers to the final decision to further improve the efficiency.
2310.10890
Variation of holonomy for projective structures and an application to drilling hyperbolic 3-manifolds
We bound the derivative of complex length of a geodesic under variation of the projective structure on a closed surface in terms of the norm of the Schwarzian in a neighborhood of the geodesic. One application is to cone-manifold deformations of acylindrical hyperbolic 3-manifolds.
Martin Bridgeman, Kenneth Bromberg
2023-10-16T23:47:12Z
http://arxiv.org/abs/2310.10890v2
Variation of holonomy for projective structures and an application to drilling hyperbolic 3-manifolds ###### Abstract We bound the derivative of complex length of a geodesic under variation of the projective structure on a closed surface in terms of the norm of the Schwarzian in a neighborhood of the geodesic. One application is to cone-manifold deformations of acylindrical hyperbolic 3-manifolds. _Dedicated to a great mathematician and friend on the occasion_ _of his sixtieth birthday: Francois Labourie._ ## 1 Introduction Let \(X\) be a Riemann surface or, equivalently, a hyperbolic surface and \(\gamma\) a closed geodesic on \(X\). A projective structure \(\Sigma\) on \(X\) determines a holonomy representation of \(\pi_{1}(X)\). If the holonomy of \(\gamma\) is not parabolic then \(\mathscr{L}_{\gamma}(\Sigma)\) is the _complex length_ of \(\gamma\) which is a complex number whose imaginary part is well defined modulo \(2\pi\). We let \(P(X)\) be the space of projective structures on \(X\) and \(P_{\gamma}(X)\) the subspace where the holonomy of \(\gamma\) is not parabolic or the identity. Then \(\mathscr{L}_{\gamma}\) is a smooth function on \(P_{\gamma}(X)\). The goal of this note is to gain quantitative control of the derivative of \(\mathscr{L}_{\gamma}\). We begin by describing a formula for the derivative. We first identify the universal cover \(\tilde{X}\) with the upper half plane \(\mathbb{U}\) normalized such that the imaginary axis is a lift of \(\gamma\). The projective structure \(\Sigma\) determines a developing map \(f\colon\mathbb{U}\to\widehat{\mathbb{C}}\) which we semi-normalize so that \(f(e^{\ell}z)=e^{\mathscr{L}_{\gamma}(\Sigma)}f(z)\) where \(\ell\) is the length of \(\gamma\) in \(X\). We define the holomorphic vector \(\mathbf{n}=z\frac{\partial}{\partial z}\) on \(\widehat{\mathbb{C}}\). Then the pull back \(f^{*}\mathbf{n}\) is a holomorphic vector field on \(\mathbb{U}\) that descends to a vector field on the annular cover \(X_{\gamma}\to X\) associated to \(\gamma\). We also denote this vector field on \(X_{\gamma}\) as \(f^{*}\mathbf{n}\). Note that \(f\) is only determined up to post-composition with an element of \(\mathsf{PSL}_{2}(\mathbb{C})\) that fixes \(0\) and \(\infty\). Since \(\mathfrak{n}\) is invariant under any such element we have that \(f^{*}\mathfrak{n}\) is well defined. The tangent space \(T_{\Sigma}P(X)\) is canonically identified with \(Q(X)\), the space of holomorphic quadratic differentials on \(X\). The pairing of a holomorphic vector field with a holomorphic quadratic differential \(\phi\) is a holomorphic \(1\)-form so \(f^{*}\mathfrak{n}\cdot\phi\) is a holomorphic (and hence closed) \(1\)-form on \(X_{\gamma}\). Our formula for the derivative of \(\mathscr{L}_{\gamma}\) is **Theorem 1.1**: _Given \(\Sigma\in P_{\gamma}(X)\) with normalized developing map \(f\) and \(\phi\in Q(X)\cong T_{\Sigma}(P_{\gamma}(X))\) we have_ \[d\mathscr{L}_{\gamma}(\phi)=-\int_{\gamma}\!f^{*}\mathfrak{n}\cdot\phi.\] In the special case when \(\Sigma\) is Fuchsian then \(\mathfrak{n}=f^{*}\mathfrak{n}\) and, as we will see below, the integral can be computed explicitly. In general, to estimate the the integral we need quantitive control over the difference between \(\mathfrak{n}\) and \(f^{*}\mathfrak{n}\). This is the content of our next result. Before stating our estimates we define some norms. If \(\Phi\) is a quadratic differential on \(X\) then \(|\Phi|\) is an area form so its ratio with the hyperbolic area form is a non-negative function. 
We let \(\|\Phi(\mathsf{z})\|\) be this function. It is a natural pointwise norm of \(\Phi\) and we let \(\|\Phi\|_{p}\) be the corresponding \(L^{p}\)-norms with respect to the hyperbolic area form. We also let \(\|\cdot\|\) be the hyperbolic length of vector fields. For all norms we write \(\|\cdot\|_{\gamma}\) to represent the sup norm over the curve \(\gamma\). With these definitions we can now state our estimate. **Theorem 1.2**: _Let \(\Phi\) be a quadratic differential on \(\mathbb{U}\) and assume that an \(r\)-neighborhood (in hyperbolic metric) of the geodesic \(\gamma\) given by the imaginary axis we have \(\|\Phi(z)\|\leq K\) with \(r<1/2\) and \(K/r<1/4\). Then there exists a locally univalent map \(f\colon\mathbb{U}\to\widehat{\mathbb{C}}\) such that_ * \(\Phi=Sf\)_;_ * \(f(\text{it})\to 0\) _as_ \(t\to 0\) _and_ \(f(\text{it})\to\infty\) _as_ \(t\to\infty\)_;_ * \(\|f^{*}\mathfrak{n}-\mathfrak{n}\|_{\gamma}\leq\frac{9K}{r}\) _and_ \(\|f_{*}\mathfrak{n}-\mathfrak{n}\|_{f^{\circ}\gamma}\leq\frac{9K}{r}\)__ _where \(\|\cdot\|_{\gamma}\) is the supremum of the hyperbolic length of the vector field along the curve \(\gamma\) in \(\mathbb{U}\) and \(\|\cdot\|_{f^{\circ}\gamma}\) is the supremum of the Euclidean length of the vector field along the curve \(f\circ\gamma\) in \(\mathbb{C}^{*}\)._ The proof Theorem 1.2 is more complicated then one might expect. It involves the use of _Epstein surfaces_ and estimates in \(3\)-dimensional hyperbolic space. An elementary consequence of Theorem 1.2 is the following approximation for \(d\mathscr{L}_{\gamma}\). **Theorem 1.3**: _Let \(\Sigma\) be a projective structure and \(\gamma\) a closed geodesic of length \(\ell\) such that \(\|\Phi\|\leq K\) in the \(r\)-neighborhood of \(\gamma\) with \(r\leq 1/2\) and \(K/r\leq 1/4\). Then_ \[\left|d\mathscr{L}_{\gamma}(\phi)+\int_{\gamma}\!\mathfrak{n}\cdot\phi\right| \leq\frac{9K\ell}{r}\|\phi\|_{\gamma}.\] **Proof:** We have by Theorems 1.1 and 1.2 \[\left|d\mathscr{L}\gamma(\phi)+\int_{\gamma}\mathbf{n}\cdot\phi\right| \leq \int_{\gamma}\left|\left(f^{*}\mathbf{n}-\mathbf{n}\right)\cdot \phi\right|\] \[\leq \left\|f^{*}\mathbf{n}-\mathbf{n}\right\|_{\gamma}\cdot\left\|\phi \right\|_{\gamma}\int_{\gamma}\left\|dz\right\|_{\mathbb{H}^{2}}\] \[= \frac{9K\ell}{r}\left\|\phi\right\|_{\gamma}\] where \(\left\|dz\right\|_{\mathbb{H}^{2}}\) is hyperbolic line element. \(\Box\) **Application to deformations of hyperbolic manifolds** Let \(N\) be an acylindrical hyperbolizable \(3\)-manifold with boundary \(S=\partial N\). Then for any (noded) conformal structure \(Y\) in the Weil-Petersson completion \(\overline{\mathscr{T}(S)}\) of Teichmuller space there is a unique geometrically finite hyperbolic structure \(M_{Y}\) on \(N\) with conformal boundary \(Y\). The hyperbolic structure \(M_{Y}\) determines a projective structure on \(Y\) with Schwarzian quadratic differential denoted \(\Phi_{Y}\). One question that naturally arises is if the \(L^{2}\)-norm of \(\Phi_{Y}\) is small, does this imply that the \(L^{\infty}\)-norm is also small and therefore the manifold \(M_{Y}\) has almost geodesic convex core boundary. This is not the case as the \(L^{2}\)-norm may be small but there are short curves where the \(L^{\infty}\)-norm is large. In [BBB] we analysed this problem. 
We showed that for \(Y\in\mathscr{T}(S)\) if the \(L^{2}\)-norm \(\left\|\Phi_{Y}\right\|_{2}\) is sufficiently small then there is a _nearby_\(\hat{Y}\in\overline{\mathscr{T}(S)}\) (in the Weil-Petersson metric) such that the \(L^{\infty}\)-norm \(\|\Phi_{\hat{Y}}\|_{\infty}\) is _small_. This \(\hat{Y}\) will generally be a noded surface pinched along curves where the \(L^{\infty}\)-norm in \(Y\) was not small. The bounds in [BBB] for _nearby_ and _small_ were linear in \(\left\|\Phi_{Y}\right\|_{2}^{\frac{1}{2n(S)+3}}\) where \(n(S)\) is the maximum number of disjoint geodesics, in particular for \(S\) closed connected of genus \(g\) then \(n(S)=3g-3\). Our application is to use our variation bound in Theorem 1.3 to significantly improve this estimate and replace the power \(\frac{1}{2n(S)+3}\) with the constant power \(\frac{1}{2}\). Thus the bounds become independent of the topology. Namely we prove: **Theorem 1.4**: _There exists \(K,C>0\) such that the following holds; Let \(N\) be an acylindrical hyperbolizable manifold with boundary \(S=\partial N\) and \(Y\in\mathscr{T}(S)\) with \(\|\Phi_{Y}\|_{2}\leq K\). Then there exists a \(\hat{Y}\in\overline{\mathscr{T}(S)}\) with_ 1. \(d_{\mathrm{WP}}(Y,\hat{Y})\leq C\sqrt{\|\Phi_{Y}\|_{2}}\); 2. \(\|\Phi_{\hat{Y}}\|_{\infty}\leq C\sqrt{\|\Phi_{Y}\|_{2}}\). Our proof also holds for _relative acylindrical_\(3\)-manifolds as in [BBB] but for simplicity we will restrict to the acylindrical case. ## 2 Variation of holonomy ### Preliminaries We consider \(X\) a Riemann surface structure on a surface \(S\) and \(P(X)\) the space of projective structures on \(X\). Associated to \(\Sigma\in P(X)\) is the holonomy representation \(\rho\in\operatorname{Hom}(\pi_{1}(S),\operatorname{PSL}_{2}(\mathbb{C}))\) (unique up to conjugacy). The space \(P(X)\) can be parametrized by \(Q(X)\) the space of holomorphic quadratic differentials by taking the Schwarzian derivative of its developing map \(f\colon\mathbb{U}\to\widehat{\mathbb{C}}\). As \(Q(X)\) is a vector space, it is its own tangent space, and a variation of \(\Sigma\) is given by a quadratic differential \(\phi\in Q(X)\). Throughout the paper we will use \(\phi\) to denote deformations of projective structures and \(\Phi\) (or \(\Phi_{t}\)) to denote the Schwarzian of projective structures. The advantage of the Schwarzian \(\Phi\) is that is uniquely determined by \(\Sigma\) while the holonomy representation \(\rho\) and developing map \(f\) are not. Given a smooth \(1\)-parameter family of projective structures \(\Sigma_{t}\) we get a smooth family \(\Phi_{t}\in Q(X)\) of Schwarzians. While the developing maps \(f_{t}\colon\mathbb{U}\to\widehat{\mathbb{C}}\) are not uniquely determined we can choose them to vary smoothly and it will be convenient to do this after making some normalizations. We will be interested in the _complex length_ of an element \(\gamma\in\pi_{1}(S)\) that represents a closed geodesic for the hyperbolic structure on \(X\). After fixing \(\gamma\), we identify \(\mathbb{U}\) with \(\tilde{X}\) so that the deck action of \(\gamma\) on \(\mathbb{U}\) is given by \(\gamma(z)=e^{\ell}z\) where \(\ell\) is the length of the geodesic representative of \(\gamma\). Under the covering map \(\mathbb{U}\to X\) the imaginary axis is taken to this geodesic. Our second normalization is to choose the developing map \(f\) and corresponding holonomy representation \(\rho\) such that \(\rho(\gamma)(z)=e^{\mathscr{L}}z\). 
Then \(\mathscr{L}\) is the complex length \(\mathscr{L}_{\gamma}(\Sigma)\). Note that this is only well defined modulo \(2\pi i\). Extending these normalizations to the \(1\)-parameter family \(\Sigma_{t}\) we get a smooth family of developing maps \(f_{t}\) and holonomy representations with \(\rho_{t}(\gamma)(z)=e^{\mathscr{L}_{t}}z\). Note that while \(\mathscr{L}\) is only well defined modulo \(2\pi i\), after this choice is made there is a unique choice of \(\mathscr{L}_{t}\) that makes the path continuous. In particular the time zero derivative \(\dot{\mathscr{L}}\) of \(\mathscr{L}_{t}\) is well defined. We also let \(v\) be the vector field on \(\mathbb{U}\) such that \(f_{*}v(z)\) is the tangent vector of the path \(f_{t}(z)\) at \(t=0\). The vector field \(v\) is not equivariant under the deck action but a standard computation gives \[v-\beta_{*}v=f^{*}\dot{\rho}(\beta)\] for all \(\beta\in\pi_{1}(S)\) where \(\dot{\rho}(\beta)\in\mathfrak{sl}_{2}\mathbb{C}\) is the time zero derivative of \(\rho_{t}(\beta)\). As \(\rho_{t}(\gamma)(z)=e^{\mathscr{L}_{t}}z\) we have that \(\dot{\rho}(\gamma)=\dot{\mathscr{L}}z\frac{\partial}{\partial z}\) so \[v-\gamma_{*}v=f^{*}\left(\dot{\mathscr{L}}z\frac{\partial}{\partial z}\right). \tag{2.1}\] If \(U\) is an open neighborhood in \(\mathbb{U}\) where \(f_{t}\) is injective then \((U,f_{t})\) is a projective chart for \(\Sigma_{t}\). In the chart \((U,f)\) the vector field \(v\) is represented as \(g\frac{\partial}{\partial z}\) where \(g\) is a holomorphic function. In this chart the _Schwarzian derivative_ of \(v\) is \(g_{zzz}dz^{2}\). This is a quadratic differential and a computation gives that it is \(\phi\), the time zero derivative of the path \(\Phi_{t}\). #### Model deformations We are now ready to prove Theorem 1.1. Here's the outline: * We first construct a model deformation \(\phi_{\lambda}\) on the projective structure on the annulus \(X_{\gamma}\). * The model deformation will be very explicit so that we can directly calculate the line integral of \(f^{*}\mathbf{n}\cdot\phi_{\lambda}\) over \(\gamma\). * We then find a specific \(\lambda\) so that the 1-form \(f^{*}\mathbf{n}\cdot\phi-f^{*}\mathbf{n}\cdot\phi_{\lambda}\) is exact. Then the line integrals of both 1-forms will be equal, so our previous calculation will give the theorem. A minor issue with this outline is that the model deformation will only be defined on a sub-annulus of \(X_{\gamma}\). We begin with this difficulty. The set \(f^{-1}(\{0,\infty\})\) will be discrete in \(\mathbb{U}\) and \(\gamma\)-invariant so it will descend to a discrete set in \(X_{\gamma}\). Let \(\gamma^{\prime}\) be a smooth curve, homotopic to \(\gamma\), that misses this set and let \(A\) be an annular neighborhood of \(\gamma^{\prime}\) that is also disjoint from this set. Then \(\widetilde{A}\), the pre-image of \(A\) in \(\mathbb{U}\), will also be disjoint from \(f^{-1}(\{0,\infty\})\). As \(\widetilde{A}\) is simply connected we can choose a well defined function \(\log f\) on \(\widetilde{A}\) and define the vector field \(v_{\lambda}\) on \(\widetilde{A}\) by \(f_{*}v_{\lambda}(z)=\lambda f(z)\log f(z)\frac{\partial}{\partial z}\). We let \(\phi_{\lambda}\) be the Schwarzian of \(v_{\lambda}\) and, differentiating \(\lambda z\log z\) three times, we see that \(\phi_{\lambda}=f^{*}\left(-\frac{\lambda}{z^{2}}dz^{2}\right)\). Then \(\phi_{\lambda}\) is our model deformation. Note that the choice of \(\log\) defines \(\mathscr{L}\) as a complex number rather than just a number modulo \(2\pi i\).
We will use this in the rest of the proof. Next we compute the line integral of \(f^{*}\mathbf{n}\cdot\phi_{\lambda}\) over \(\gamma^{\prime}\): **Lemma 2.1**: \[\int_{\gamma^{\prime}}f^{*}\mathbf{n}\cdot\phi_{\lambda}=-\lambda\mathscr{L}\] **Proof:** Let \(\sigma\colon[0,1]\to\mathbb{U}\) be a smooth path that projects to \(\gamma^{\prime}\) in \(X_{\gamma}\). In particular \(\sigma(1)=\gamma(\sigma(0))\) and \(f(\sigma(1))=f(\gamma(\sigma(0)))=e^{\mathscr{L}}f(\sigma(0))\). Then \[\int_{\gamma^{\prime}}f^{*}\mathbf{n}\cdot\phi_{\lambda} = \int_{\sigma}f^{*}\left(z\frac{\partial}{\partial z}\cdot-\frac{\lambda}{z^{2}}dz^{2}\right)\] \[= -\lambda\int_{f\circ\sigma}\frac{dz}{z}\] \[= -\lambda(\log f(\sigma(1))-\log f(\sigma(0)))\] \[= -\lambda\mathscr{L}.\] \(\Box\) Next we give a criterion for the form \(f^{*}\mathbf{n}\cdot\phi\) to be exact. **Lemma 2.2**: _Let \(v\) be a \(\gamma\)-invariant holomorphic vector field on \(\widetilde{A}\) with Schwarzian \(\phi\). Then \(f^{*}{\bf n}\cdot\phi\) is exact on \(A\)._ **Proof:** The vector field \(v\) is a section of the holomorphic tangent bundle \(T_{\mathbb{C}}X_{\gamma}\) so \(v_{z}\) is a function, \(v_{zz}\) is a 1-form, and \(v_{zzz}=\phi\) is a quadratic differential. Therefore \(f^{*}{\bf n}\cdot v_{zz}-v_{z}\) is a holomorphic function on \(X_{\gamma}\) and we'll show that it is a primitive of \(f^{*}{\bf n}\cdot\phi\). We can do this calculation in a chart. Namely choose an open neighborhood \(U\) in \(\mathbb{U}\) such that \(f\) is injective on \(U\). Then as above \((U,f)\) is a chart for the projective structure and in this chart \(v\) has the form \(g\frac{\partial}{\partial z}\) where \(g\) is a holomorphic function on \(f(U)\) and \(\phi\) is \(g_{zzz}dz^{2}\). It follows that in this chart \(v_{z}\) is \(g_{z}\), \(v_{zz}\) is \(g_{zz}dz\) and \(\phi=v_{zzz}\) is \(g_{zzz}dz^{2}\). As the derivative of \(zg_{zz}-g_{z}\) is \(zg_{zzz}\) this shows that \(f^{*}{\bf n}\cdot v_{zz}-v_{z}\) is a primitive for \(f^{*}{\bf n}\cdot\phi\) as claimed. \(\Box\) **Proof of Theorem 1.1:** Since \(f^{*}{\bf n}\cdot\phi\) is holomorphic it is closed and we have \[\int_{\gamma}f^{*}{\bf n}\cdot\phi=\int_{\gamma^{\prime}}f^{*}{\bf n}\cdot\phi.\] Next we calculate to see that \[v_{\lambda}-\gamma_{*}v_{\lambda}=f^{*}\left(\lambda\mathscr{L}z\frac{\partial}{\partial z}\right)\] so if \(\lambda=\dot{\mathscr{L}}/\mathscr{L}\) then by (2.1) we have that \(v-v_{\dot{\mathscr{L}}/\mathscr{L}}\) is \(\gamma\)-invariant. Therefore by Lemma 2.2 the 1-form \(f^{*}{\bf n}\cdot(\phi-\phi_{\dot{\mathscr{L}}/\mathscr{L}})\) is exact and \[\int_{\gamma^{\prime}}f^{*}{\bf n}\cdot\phi=\int_{\gamma^{\prime}}f^{*}{\bf n}\cdot\phi_{\dot{\mathscr{L}}/\mathscr{L}}=-\dot{\mathscr{L}}\] where the last equality comes from Lemma 2.1. Combining the first and last equality gives the theorem. \(\Box\) We conclude this section with a simple formula for the line integral when the projective structure is Fuchsian. For this calculation, rather than representing the annulus as a quotient of the upper half plane \(\mathbb{U}\), it is convenient to represent it as an explicit subset of \(\mathbb{C}\). Namely, let \[A_{\ell}=\left\{z\in\mathbb{C}\ |\ e^{\frac{-\pi^{2}}{\ell}}<|z|<e^{\frac{\pi^{2}}{\ell}}\right\}.\] This annulus is conformally equivalent to \(X_{\gamma}\) when \(\ell=\ell_{\gamma}(X)\) and the circle \(|z|=1\) is the closed geodesic of length \(\ell\) in \(A_{\ell}\).
For this representation of the annulus the vector field \({\bf n}\) is written as \({\bf n}=\frac{2\pi i}{\ell}z\frac{\partial}{\partial z}\) and we observe that \({\bf n}\) extends to a holomorphic vector field on all of \(\widehat{\mathbb{C}}\) that is \(0\) at \(z=0\) and \(z=\infty\). To decompose the quadratic differential \(\phi\) on \(A_{\ell}\) we let \(D^{+}\) and \(D^{-}\) be the disks in \(\widehat{\mathbb{C}}\) with \(|z|>e^{\frac{-\pi^{2}}{\ell}}\) and \(|z|<e^{\frac{\pi^{2}}{\ell}}\), respectively. We recall that any holomorphic function \(\psi\) on \(A_{\ell}\) can be written as \(\psi_{+}+\psi_{0}+\psi_{-}\) where \(\psi_{+}\) and \(\psi_{-}\) extend to holomorphic functions on \(D^{+}\) and \(D^{-}\) that are zero at \(z=\infty\) and \(z=0\), respectively, and \(\psi_{0}\) is constant. Then \(\phi\) can be written as \(\phi=\frac{\psi}{z^{2}}dz^{2}\) so the decomposition of \(\psi\) gives \[\phi=\phi_{+}+\phi_{0}+\phi_{-} \tag{2.2}\] where \(\phi_{+}\) and \(\phi_{-}\) extend to holomorphic quadratic differentials on \(D^{+}\) and \(D^{-}\) with simple poles at \(z=\infty\) and \(z=0\), respectively, and \(\phi_{0}\) is a constant multiple of \(\frac{dz^{2}}{z^{2}}\).

**Lemma 2.3**: _Let \(\phi\) be a holomorphic quadratic differential on \(X_{\gamma}\). Then_ \[\int_{\gamma}{\bf n}\cdot\phi=\int_{\gamma}{\bf n}\cdot\phi_{0}\] _and_ \[\left|\int_{\gamma}{\bf n}\cdot\phi\right|=\ell\|\phi_{0}\|_{\infty}\]

**Proof:** In the annulus \(A_{\ell}\) the line integral along \(\gamma\) is the line integral over the circle \(|z|=1\). Note that \({\bf n}\cdot\phi_{\pm}\) extends to a 1-form on \(D^{\pm}\) so \[\int_{|z|=1}{\bf n}\cdot\phi_{\pm}=0\] since by Cauchy's Theorem the line integral of a holomorphic 1-form over a closed curve in a simply connected region is zero. Therefore \[\int_{|z|=1}{\bf n}\cdot\phi=\int_{|z|=1}{\bf n}\cdot\phi_{+}+{\bf n}\cdot\phi_{0}+{\bf n}\cdot\phi_{-}=\int_{|z|=1}{\bf n}\cdot\phi_{0}.\] We have the covering map \({\mathbb{U}}\to A_{\ell}\) given by \(z\to z^{\frac{2\pi i}{\ell}}\). Thus pushing forward the hyperbolic metric on \({\mathbb{U}}\) we have that the hyperbolic metric \(g_{A_{\ell}}\) on \(A_{\ell}\) is \[g_{A_{\ell}}=\left(\frac{\ell}{2\pi|z|\cos\left(\frac{\ell}{2\pi}\log|z|\right)}\right)^{2}|dz|^{2}.\] As \(\phi_{0}=\frac{\psi_{0}}{z^{2}}dz^{2}\) we therefore have \[\|\phi_{0}\|_{\infty}=\sup_{z}\frac{|\phi_{0}(z)|}{g_{A_{\ell}}(z)}=\frac{4\pi^{2}|\psi_{0}|}{\ell^{2}}.\] Thus \[\int_{|z|=1}{\bf n}\cdot\phi_{0}=\frac{2\pi i\psi_{0}}{\ell}\int_{|z|=1}\frac{dz}{z}=\frac{-4\pi^{2}\psi_{0}}{\ell}\] and \[\left|\int_{|z|=1}{\bf n}\cdot\phi_{0}\right|=\ell\|\phi_{0}\|_{\infty}.\] \(\Box\)

## 3 Derivative bounds on univalent maps

Next we prove Theorem 1.2. The proof here is considerably more involved. We begin by reducing it to a more explicit statement.

**Theorem 3.1**: _Let \(\Phi\) be a quadratic differential on \(\mathbb{U}\) and assume that on an \(r\)-neighborhood of the imaginary axis (in the hyperbolic metric) we have \(\|\Phi(z)\|\leq K\) with \(r<1/2\) and \(K/r<1/4\)._
_Then there exists a locally univalent map \(f\colon\mathbb{U}\to\widehat{\mathbb{C}}\) such that_

* \(Sf=\Phi\);
* \(f(it)\to 0\) _as_ \(t\to 0\) _and_ \(f(it)\to\infty\) _as_ \(t\to\infty\);
* \(f(i)=i\), \(|f^{\prime}(i)-1|\leq\frac{9K}{r}\) _and_ \(|\frac{1}{f^{\prime}(i)}-1|\leq\frac{9K}{r}\).

We now see how this implies Theorem 1.2:

**Proof of Theorem 1.2 assuming Theorem 3.1:** Given \(ie^{t}\in\mathbb{U}\) with \(t\in\mathbb{R}\) we let \(\Phi^{\prime}\) be the pull back of \(\Phi\) by the isometry \(\gamma_{t}(z)=e^{t}z\). As \(\gamma_{t}\) preserves the imaginary axis we still have that \(\|\Phi^{\prime}(z)\|\leq K\) for \(z\) in an \(r\)-neighborhood of the axis. Therefore by Theorem 3.1 we have a locally univalent map \(f^{t}\colon\mathbb{U}\to\widehat{\mathbb{C}}\) satisfying the 3 bullets. We now let \(f=f^{0}\). Since \(Sf^{t}=\Phi^{\prime}\) we have that \(S(f^{t}\circ\gamma_{-t})=\Phi\). Since \(f\) and \(f^{t}\circ\gamma_{-t}\) have the same Schwarzian they differ by post-composition with an element of \(\mathsf{PSL}_{2}(\mathbb{C})\). Since these two maps have the same behavior as \(it\to 0\) and \(it\to\infty\) this element of \(\mathsf{PSL}_{2}(\mathbb{C})\) must be of the form \(z\mapsto e^{\lambda}z\). We note that \(\mathfrak{n}\) is invariant under these maps (which include the maps \(\gamma_{t}\)). In particular this implies that \[\left|\frac{1}{(f^{t})^{\prime}(i)}i-i\right|=\|(f^{t})^{*}(\mathfrak{n}(i))-\mathfrak{n}(i)\|_{g_{\mathbb{H}^{2}}}=\|f^{*}(\mathfrak{n}(f(ie^{t})))-\mathfrak{n}(ie^{t})\|_{g_{\mathbb{H}^{2}}}\] so the inequality (3) gives \[\|f^{*}(\mathfrak{n}(f(ie^{t})))-\mathfrak{n}(ie^{t})\|_{g_{\mathbb{H}^{2}}}\leq\frac{9K}{r}.\] Similarly if \(g_{\text{euc}}\) is the Euclidean metric on \(\mathbb{C}^{*}\) then \[|(f^{t})^{\prime}(i)i-i|=|(f^{t})_{*}(\mathfrak{n}(i))-\mathfrak{n}(i)|=\left\|f_{*}(\mathfrak{n}(ie^{t}))-\mathfrak{n}(f(ie^{t}))\right\|_{g_{\text{euc}}}\] which, by the third bullet of Theorem 3.1, is again bounded by \(\frac{9K}{r}\). \(\Box\)

Here is a brief outline of the proof of Theorem 3.1:

* Given the quadratic differential \(\Phi\) on \(\mathbb{U}\) there is an immersion \(f_{0}\colon\mathbb{U}\to\mathbb{H}^{3}\) such that the composition of \(f_{0}\) with the _hyperbolic Gauss map_ is a map \(f\colon\mathbb{U}\to\widehat{\mathbb{C}}\) with \(Sf=\Phi\).
* The surface \(f_{0}\colon\mathbb{U}\to\mathbb{H}^{3}\) is the _Epstein surface_ for \(\Phi\). In [Eps], Epstein gives formulas for the metric and shape operator of this surface in terms of the hyperbolic metric on \(\mathbb{U}\) and \(\Phi\).
* We will use Epstein's formulas to show that the curve \(t\mapsto f_{0}(ie^{t})\) is nearly unit speed and has small curvature. This will imply that \(f_{0}(it)\) limits to distinct points as \(t\to 0\) and \(t\to\infty\). We then normalize so that these limiting points are \(0\) and \(\infty\).
* The proof is then completed by a calculation of the hyperbolic Gauss map using the Minkowski model for hyperbolic space.

Before starting the proof of Theorem 3.1 we review the necessary facts about _Epstein surfaces_. These surfaces are defined for any conformal metric on \(\mathbb{U}\) and a holomorphic quadratic differential \(\Phi\). Here we will restrict to the hyperbolic metric.
The _projective second fundamental form_ for the hyperbolic metric \(g_{\mathbb{H}^{2}}\) is \[\mathrm{II}=\Phi+\bar{\Phi}+g_{\mathbb{H}^{2}}.\] The _projective shape operator_ is given by the formula \[g_{\mathbb{H}^{2}}(\hat{B}v,w)=\mathrm{II}(v,w).\] We then define a _dual pair_ by \[g=\frac{1}{4}\left(\mathrm{id}+\hat{B}\right)^{*}g_{\mathbb{H}^{2}}\qquad B= \left(\mathrm{id}+\hat{B}\right)^{-1}\left(\mathrm{id}-\hat{B}\right). \tag{3.3}\] By inverting it follows also that \[g_{\mathbb{H}^{2}}=\left(\mathrm{id}+B\right)^{*}g\qquad\hat{B}=\left( \mathrm{id}+B\right)^{-1}\left(\mathrm{id}-B\right). \tag{3.4}\] We will also use two maps on the unit tangent bundle \(T^{1}\mathbb{H}^{3}\). First we have the projection \(\pi\colon T^{1}\mathbb{H}^{3}\to\mathbb{H}^{3}\). We also have the hyperbolic Gauss map \(\mathfrak{g}\colon T^{1}\mathbb{H}^{3}\to\widehat{\mathbb{C}}\) which takes each unit tangent vector to the limit of the geodesic ray starting from the vector. **Theorem 3.2** (Epstein, [Eps]): _Given any holomorphic quadratic differential \(\Phi\) on \(\mathbb{U}\) there exists a smooth map \(\hat{f}\colon\,\mathbb{U}\to T^{1}\mathbb{H}^{3}\) with_ 1. _Where_ \(B\) _is non-singular the map_ \(f_{0}=\pi\circ\hat{f}\) _is smooth,_ \(g=f_{0}^{*}g_{\mathbb{H}^{3}}\) _and_ \(B\) _is the shape operator for the immersed surface._ 2. _The map_ \(f=\mathfrak{g}\circ\hat{f}\) _is locally univalent with_ \(S\!f=\Phi\)_._ 3. _The eigenvalues of_ \(\hat{B}\) _are_ \(1\pm 2\|\Phi(z)\|\)_._ ### Geodesic curvature The path \(\gamma(t)=i\!e^{t}\) is a unit speed parameterization of the imaginary axis. We will now begin the computation of the curvature of \(\alpha=f_{0}\circ\gamma\). We begin with a few preliminaries. Let \(\hat{B}_{0}\) be the traceless part of \(\hat{B}\). Then \(\mathrm{II}_{0}(X,Y)=g_{\mathbb{H}^{2}}(\hat{B}_{0}X,Y)\) is the traceless part of \(\mathrm{II}\). We then have \(\hat{B}=\mathrm{id}+\hat{B}_{0}\) and \(\Pi_{0}=\Phi+\tilde{\Phi}\). We also need to relate the Riemannian connection \(\hat{\nabla}\) for \(g_{\mathbb{H}^{2}}\) and the Riemannian connection \(\nabla\) for \(g\). A simple calculation (see [KS, Lemma 5.2]) gives \[(\mathrm{id}+\hat{B})\nabla_{X}Y=\hat{\nabla}_{X}((\mathrm{id}+\hat{B})Y).\] **Lemma 3.3**: _Define the holomorphic function \(h\) by_ \[h=\Phi(\mathbf{n},\mathbf{n}).\] _Then_ \[\left(\mathrm{id}+\hat{B}\right)\nabla_{\hat{\gamma}}\dot{\gamma}=4\mathop{ \mathrm{Re}}\nolimits dh(\dot{\gamma})\mathbf{\bar{n}}\quad\text{and}\quad \|\nabla_{\hat{\gamma}}\dot{\gamma}\|_{g}=|dh(\dot{\gamma})|.\] **Proof:** We work in the complexified tangent bundle of \(\mathbb{U}\). We want to compute \(\nabla_{\hat{\gamma}}\dot{\gamma}\) where \(\dot{\gamma}\) is the tangent vector to the path \(\gamma\). As \(\left(\mathrm{id}+\hat{B}\right)\nabla=\hat{\nabla}\left(\mathrm{id}+\hat{B}\right)\) then \[\left(\mathrm{id}+\hat{B}\right)\nabla_{\hat{\gamma}}\dot{\gamma} = \hat{\nabla}_{\hat{\gamma}}\left(\mathrm{id}+\hat{B}\right)\dot{\gamma}\] \[= 2\hat{\nabla}_{\gamma}\dot{\gamma}+\hat{\nabla}_{\hat{\gamma}} \hat{B}_{0}\dot{\gamma}\] \[= \hat{\nabla}_{\hat{\gamma}}\hat{B}_{0}\dot{\gamma}\] since \(\hat{\nabla}_{\hat{\gamma}}\dot{\gamma}=0\) as \(\gamma\) is a geodesic in \(g_{\mathbb{H}^{2}}\). To compute this term we note \(\dot{\gamma}=\mathbf{n}+\bar{\mathbf{n}}\). 
Therefore, since \(\Pi_{0}\) is symmetric we have \[g_{\mathbb{H}^{2}}\left(\hat{B}_{0}\dot{\gamma},\mathbf{n}\right) = \Pi_{0}(\mathbf{n}+\bar{\mathbf{n}},\mathbf{n})\] \[= \Phi(\mathbf{n},\mathbf{n})=h.\] Using the compatibility of the metric with the Riemannian connection and that \(\mathbf{n}\) is parallel along \(\gamma\) we have \[dh(\dot{\gamma})=g_{\mathbb{H}^{2}}\left(\hat{\nabla}_{\gamma}\left(\hat{B}_{ 0}\dot{\gamma}\right),\mathbf{n}\right).\] A similar calculation gives \[d\bar{h}(\dot{\gamma})=g_{\mathbb{H}^{2}}\left(\hat{\nabla}_{\hat{\gamma}} \left(\hat{B}_{0}\dot{\gamma}\right),\bar{\mathbf{n}}\right).\] Let \[v=2dh(\dot{\gamma})\bar{\mathbf{n}}+2d\bar{h}(\dot{\gamma})\mathbf{n}.\] Note that we are extending \(g_{\mathbb{H}^{2}}\) to be \(\mathbb{C}\)-linear on \(T\mathbb{U}\otimes\mathbb{C}\) and therefore \[g_{\mathbb{H}^{2}}(\mathbf{n},\mathbf{n})=g_{\mathbb{H}^{2}}(\bar{\mathbf{n}},\bar{\mathbf{n}})=0\quad\text{and}\quad g_{\mathbb{H}^{2}}(\mathbf{n},\bar{ \mathbf{n}})=1/2.\] It follows that \[g_{\mathbb{H}^{2}}(v,\mathbf{n})=dh(\dot{\gamma})\quad\text{and}\quad g_{ \mathbb{H}^{2}}(v,\bar{\mathbf{n}})=d\bar{h}(\dot{\gamma}).\] As \(\mathbf{n}\) and \(\mathbf{\bar{n}}\) span the tangent space this implies \[\left(\mathrm{id}+\hat{B}\right)\nabla_{\hat{\gamma}}\dot{\gamma}=\hat{ \nabla}_{\hat{\gamma}}\hat{B}_{0}(\dot{\gamma})=v.\] Since \(\|v\|_{g_{\mathbb{H}^{2}}}=2|dh|\) this gives \[\|\nabla_{\hat{\gamma}}\hat{\gamma}\|_{g}=\frac{1}{2}\|\left(\mathrm{id}+\hat{B} \right)\nabla_{\hat{\gamma}}\hat{\gamma}\|_{g_{\mathbb{H}^{2}}}=\frac{1}{2}\| \hat{\nabla}_{\hat{\gamma}}\hat{B}_{0}\hat{\gamma}\|_{g_{\mathbb{H}^{2}}}=|dh( \hat{\gamma})|.\] \(\Box\) Next we use the Cauchy integral formula to bound \(dh\). **Lemma 3.4**: _Assume that \(r\leq 1/2\) and \(K<1\). If \(\|\Phi(z)\|\leq K\) for \(z\) in the \(r\)-neighborhood of \(\gamma\) then the geodesic curvature \(\kappa_{\gamma}\) of \(\gamma\) in the metric \(g\) satisfies_ \[\kappa_{\gamma}\leq\frac{5K}{4r(1-K)^{2}}.\] **Proof:** We have \[\kappa_{\gamma}=\frac{\|(\nabla_{\hat{\gamma}}\hat{\gamma})^{\bot}\|_{g}}{\| \hat{\gamma}\|_{g}^{2}}\leq\frac{\|\nabla_{\hat{\gamma}}\hat{\gamma}\|_{g}}{ \|\hat{\gamma}\|_{g}^{2}}\] where \((\nabla_{\hat{\gamma}}\hat{\gamma})^{\bot}\) is the component of \(\nabla_{\hat{\gamma}}\hat{\gamma}\) perpendicular to \(\hat{\gamma}\). To bound \(\|\nabla_{\hat{\gamma}}\hat{\gamma}\|_{g}\) we will work in the disk model for \(\mathbb{H}^{2}\) with \(\gamma\) the geodesic following the real axis and we will bound the curvature at the origin. With this normalization we have \(\mathbf{n}=\frac{1-z^{2}}{2}\frac{\partial}{dz}\) and therefore \[h(z)=\frac{\Phi(z)(1-z^{2})^{2}}{4}.\] (Here we are not distinguishing between \(\Phi\) as quadratic differential and \(\Phi\) as holomorphic function.) At zero, \(\hat{\gamma}=\frac{1}{2}\partial_{x}\) so by Lemma 3.3 we have \[\|\nabla_{\hat{\gamma}}\hat{\gamma}\|_{g}=|dh(\hat{\gamma})|=|h_{x}(0)|/2=|h_{ z}(0)|/2\] where the last equality uses that \(h\) is holomorphic. We now bound \(|h_{z}(0)|\) using the Cauchy integral formula. Given that \(\|\Phi(z)\|\leq K\) in the \(r\)-neighborhood of the origin (in the hyperbolic metric) we have \[|h(z)|\leq\frac{1}{4}|1-z^{2}|^{2}|\Phi(z)|\leq\frac{|1-z^{2}|^{2}}{(1-|z|^{2 })^{2}}K\] when \(|z|\leq R=\tanh(r/2)\). 
Therefore \[|h_{z}(0)| = \frac{1}{2\pi}\left|\int_{|z|=R}\frac{h(z)}{z^{2}}dz\right|\] \[\leq \frac{KR}{2\pi R^{2}(1-R^{2})^{2}}\int_{0}^{2\pi}\left|1-R^{2}e^{ 2i\theta}\right|^{2}d\theta\] \[= \frac{K}{2\pi R(1-R^{2})^{2}}\int_{0}^{2\pi}\left(1-2R^{2}\cos(2 \theta)+R^{4}\right)d\theta\] \[= \frac{K(1+R^{4})}{R(1-R^{2})^{2}}.\] If \(r\leq 1/2\) then, since \(\tanh(r/2)\leq r/2\), we have \(R\leq 1/4\). As the derivative of \(\tanh^{-1}R\) is \(1/(1-R^{2})\) when \(R\leq 1/4\) we have \(\tanh^{-1}R\leq 16R/15\) and \(\tanh(r/2)\geq 15r/32\). Combining this with our estimate on \(|h_{z}(0)|\) we have \[|h_{z}(0)|\leq\frac{32}{15r}\cdot\frac{\left(1+(1/4)^{4}\right)K}{\left(1-(1/ 4)^{2}\right)^{2}}\leq\frac{5K}{2r}\] and \[\|\nabla_{\gamma}\dot{\gamma}\|_{g}\leq\frac{5K}{4r}.\] We now obtain a lower bound on \(\|\dot{\gamma}\|_{g}\). For this we recall that by Theorem 3.2 the eigenvalues of \(\hat{B}\) at \(z\in\mathbb{H}^{2}\) are \(1\pm 2\|\Phi(z)\|\). If \(z\in\gamma\) then \(\|\Phi(z)\|\leq K\) giving \[\|\dot{\gamma}\|_{g}=\frac{1}{2}\|\left(\operatorname{id}+\hat{B}\right)\dot {\gamma}\|_{g_{\mathbb{H}^{2}}}\geq 1-K.\] Therefore \[\kappa_{\gamma}\leq\frac{\|\nabla_{\gamma}\dot{\gamma}\|_{g}}{\|\dot{\gamma} \|_{g}^{2}}<\frac{5K}{4r(1-K)^{2}}.\] \(\Box\) We can now bound the curvature of \(\alpha\) in \(\mathbb{H}^{3}\). **Lemma 3.5**: _Assume that \(r\leq 1/2\) and \(K<1\). Then if \(\|Sf(z)\|\leq K\) for all \(z\) in the \(r\)-neighborhood of \(\gamma\) then the geodesic curvature \(\kappa_{\alpha}\) of \(\alpha=f_{0}\circ\gamma\) in \(\mathbb{H}^{3}\) satisfies_ \[\kappa_{\alpha}\leq\frac{3K}{2r(1-K)^{2}}.\] **Proof:** To bound the geodesic curvature of the curve \(\alpha=f_{0}\circ\gamma\) we again need to bound the covariant derivative \(\bar{\nabla}_{\alpha}\dot{\alpha}\). This will have a component tangent to the immersed surface which is \((f_{0})_{*}\nabla_{\gamma}\gamma\) plus an orthogonal component whose length is \(\|B(\dot{\gamma})\|_{g}\). This last norm is bounded by the product of the maximal eigenvalue of \(B\) and \(\|\dot{\gamma}\|_{g}\). As \(B=(\operatorname{id}+\hat{B})^{-1}(\operatorname{id}-\hat{B})\) then eigenvalues of \(B\) at a point \(f(z)\) in the immersed surface are \(-\frac{\|\Phi(z)\|}{\|\Phi(z)\|\pm 1}\). If \(z\) is in \(\gamma\) we have \(\|\Phi(z)\|\leq K\) so the maximum eigenvalue is bounded above by \(K/(1-K)\). It follows that \[\kappa_{\alpha}\leq\sqrt{\kappa_{\gamma}^{2}+\left(\frac{K}{1-K}\right)^{2}} \leq\frac{K}{r(1-K)^{2}}\sqrt{(5/4)^{2}+(1/2)^{2}}\leq\frac{3K}{2r(1-K)^{2}}.\] \(\Box\) ### Curves with geodesic curvature \(\kappa_{g}<1\) It is well know that curves in \(\mathbb{H}^{n}\) with geodesic curvature \(\leq\kappa<1\) are quasi-geodesics. In particular, a bi-infinite path with curvature \(\leq\kappa\) will limit to distinct endpoints. We need a quantatative version of this statement. **Lemma 3.6**: _Let \(\mathbf{\alpha}\colon\mathbb{R}\to\mathbb{H}^{3}\) be smooth, bi-infinite curve with curvature at most \(\kappa<1\). Normalize \(\mathbf{\alpha}\) in the upper half space model of \(\mathbb{H}^{3}\) so that \(\mathbf{\alpha}(0)=(0,0,1)\) and \(\mathbf{\alpha}^{\prime}(0)=(0,0,\lambda)\) with \(\lambda>0\). Then there are distinct points \(z_{-},z_{+}\in\widehat{\mathbb{C}}\) with \(\lim\limits_{t\to\pm\infty}\mathbf{\alpha}(t)=z_{\pm}\) with_ \[|z_{\pm}|^{\mp 1}\leq\frac{1}{\kappa}\left(1-\sqrt{1-\kappa^{2}}\right)\leq \kappa.\] _Let \(P_{t}\) be the hyperbolic plane orthogonal to \(\mathbf{\alpha}\) at \(\mathbf{\alpha}(t)\). 
Then \(\partial P_{t}\to z_{\pm}\) as \(t\to\pm\infty\)._

Figure 1: In this 2-dimensional figure the union of the \(H_{\delta}\) is the shaded region. In \(\mathbb{H}^{3}\) one rotates this region about the vertical axis. The relationship between the angle \(\theta\) and \(\kappa\) is given by \(\kappa=\sin(\theta)\). One then computes that \(\mathscr{K}_{-}\) is a disk of radius \(\frac{1}{\kappa}\left(1-\sqrt{1-\kappa^{2}}\right)\).

**Proof:** Let \(H_{\delta}\) be a convex region in \(\mathbb{H}^{3}\) bounded by a plane of constant curvature \(\delta<1\). If \(\mathbf{\alpha}^{\prime}(0)\) is tangent to \(\partial H_{\delta}\) and \(\delta>\kappa\) then in a neighborhood of zero the only intersection of \(\mathbf{\alpha}\) with \(H_{\delta}\) will be at \(\mathbf{\alpha}(0)\). Now suppose there is some \(t_{0}\) with, say, \(t_{0}>0\) such that \(\mathbf{\alpha}(t_{0})\in H_{\delta}\). Then by compactness of the interval \([0,t_{0}]\) there is a minimum \(\varepsilon>0\) such that \(\mathbf{\alpha}\) restricted to \([0,t_{0}]\) is contained in the \(\varepsilon\)-neighborhood of \(H_{\delta}\). This will be a convex region \(H_{\delta^{\prime}}\) bounded by a plane of constant curvature \(\delta^{\prime}>\delta\) and \(\mathbf{\alpha}\) will be tangent to the boundary of this region. However, all of \(\mathbf{\alpha}\) restricted to \([0,t_{0}]\) will be contained in \(H_{\delta^{\prime}}\), contradicting that at the point of tangency the intersection of \(\mathbf{\alpha}\) and \(H_{\delta^{\prime}}\) will be an isolated point. This implies that \(\mathbf{\alpha}(0)\) is the unique point where \(\mathbf{\alpha}\) intersects \(H_{\delta}\) and \(\mathbf{\alpha}\) is disjoint from the interior of \(H_{\delta}\). Now take the union of the interior of all regions \(H_{\delta}\) with \(\delta>\kappa\) that are tangent to \(\mathbf{\alpha}^{\prime}(0)\) and let \(\mathscr{K}\) be its complement. Then the image of \(\mathbf{\alpha}\) will be contained in \(\mathscr{K}\). The intersection of the closure of \(\mathscr{K}\) with \(\widehat{\mathbb{C}}=\partial\mathbb{H}^{3}\) will be two regions \(\mathscr{K}_{-}\) and \(\mathscr{K}_{+}\) with the accumulation set of \(\mathbf{\alpha}(t)\) as \(t\to\pm\infty\) contained in \(\mathscr{K}_{\pm}\). Then \(\mathscr{K}_{-}\) is the disk \(|z|\leq\frac{1}{\kappa}\left(1-\sqrt{1-\kappa^{2}}\right)\) (see Figure 1) and by symmetry \(\mathscr{K}_{+}\) is the region with \(1/|z|\leq\frac{1}{\kappa}\left(1-\sqrt{1-\kappa^{2}}\right)\) (including \(\infty\in\widehat{\mathbb{C}}\)). Let \(z_{-}\) be a point in the accumulation set of \(\mathbf{\alpha}(t)\) as \(t\to-\infty\) and let \(b_{-}\) be a Busemann function based at \(z_{-}\). We then observe that the angle between \(-\mathbf{\alpha}^{\prime}(t)\) and the gradient \(\nabla b_{-}\) is \(\leq\theta\), where \(\theta\) is the angle in Figure 1. It is enough to do this calculation when \(t=0\). We then let \(\mathfrak{h}\) be the horosphere based at \(z_{-}\) that goes through \(\alpha(0)\). Then (along \(\mathfrak{h}\)) the gradient \(\nabla b_{-}\) is the inward pointing normal vector field to \(\mathfrak{h}\). The angle will be greatest when \(z_{-}\) is in \(\partial\mathscr{K}_{-}\) and a direct computation gives that the angle in this case is \(\theta\). The bound on the angle implies that \(b_{-}(\alpha(t))\to\infty\) as \(t\to-\infty\) and therefore \(\alpha(t)\to z_{-}\) as \(t\to-\infty\).
A similar argument shows that \(\alpha(t)\to z_{+}\) as \(t\to\infty\) for some \(z_{+}\in\mathscr{K}_{+}\). Let \(\beta\) be the geodesic in \(\mathbb{H}^{3}\) with endpoints \(z_{-}\) and \(z_{+}\). Let \(z\) be a point in \(\partial P_{t}\) and let \(\mathfrak{h}_{z}\) be the horosphere based at \(z\) that goes through \(\alpha(t)\) (and is therefore tangent to \(\alpha^{\prime}(t)\)). Let \(\mathfrak{h}_{z,\beta}\) be the horosphere based at \(z\) that is tangent to \(\beta\). We claim that the hyperbolic distance between these two horospheres is \(\leq\tanh^{-1}(\kappa)\). For this we can assume \(t=0\) and let \(\mathscr{H}\) be the convex hull of the regions \(\mathscr{K}_{\pm}\). Then \(P_{0}\cap\mathscr{H}\) is a disk \(D\) of radius \(\tanh^{-1}(\kappa)\) and \(\beta\) will be contained in \(\mathscr{H}\) and intersect \(D\). Now let \(\mathfrak{h}_{\pm}\) be the two horospheres based at \(z\) that are tangent to \(\mathscr{H}\). The distance between \(\mathfrak{h}_{z}\) and each of the \(\mathfrak{h}_{\pm}\) will be \(\tanh^{-1}(\kappa)\) and \(\mathfrak{h}_{z,\beta}\) will lie between the \(\mathfrak{h}_{\pm}\). This implies the distance bound. We now adjust the picture so that \(z_{-}=0\) and \(z_{+}=\infty\) in the upper half space model for \(\mathbb{H}^{3}\). Assume that \(t_{i}\to-\infty\) and let \(z_{i}\) be points in \(\partial P_{t_{i}}\). We'll show that \(z_{i}\to 0\). Assume not. Then, after passing to a subsequence we can assume that \(|z_{i}|\) is bounded below away from zero. Then for each \(i\) we apply the isometry \(z\mapsto z/|z_{i}|\) to the points \(\alpha(t_{i})\) and the horospheres \(\mathfrak{h}_{z_{i}}\) and \(\mathfrak{h}_{z_{i},\beta}\) to get new objects \(\bar{\alpha}(t_{i})\), \(\bar{\mathfrak{h}}_{z_{i}}\) and \(\bar{\mathfrak{h}}_{z_{i},\beta}\). This isometry fixes \(\beta\) and as \(|z_{i}|\) is bounded below we still have that \(\bar{\alpha}(t_{i})\to 0\in\widehat{\mathbb{C}}\). Then the horospheres \(\bar{\mathfrak{h}}_{z_{i},\beta}\) will have Euclidean radius \(1\) while the Euclidean radius of the \(\bar{\mathfrak{h}}_{z_{i}}\) will go to infinity, contradicting that the distance between these horospheres is bounded by \(\tanh^{-1}(\kappa)\). Therefore \(|z_{i}|\to 0\), proving the last claim. \(\Box\)

### Gauss Map

If \(S\) is an immersed surface in \(\mathbb{H}^{3}\) let \[n\colon S\to T^{1}\mathbb{H}^{3}\] be the lift to the unit tangent bundle. The Gauss map for \(S\) is \[\mathfrak{g}_{S}\colon S\to\widehat{\mathbb{C}}\] with \(\mathfrak{g}_{S}=\mathfrak{g}\circ n\). The following lemma gives the derivative of \(\mathfrak{g}_{S}\).

**Lemma 3.7**: _Let \(S\) be an oriented, immersed surface in \(\mathbb{H}^{3}\) with \(\hat{S}\) its lift to \(T^{1}\mathbb{H}^{3}\). Normalize so that \(p=(0,1)\in S\) and the lift of this point to \(\hat{S}\) is the vertical vector \(\partial_{t}\) at \(p\). Then there is a natural identification \(G\colon T_{p}S\to T_{\mathfrak{g}(p)}\partial\mathbb{H}^{3}\) of tangent planes under which_ \[(\mathfrak{g}_{S})_{*}(p)=G\circ(\mathrm{id}+B)\] _where \(B\) is the shape operator of \(S\)._

The proof is a straightforward calculation that is simplest to do in the Minkowski model for \(\mathbb{H}^{n}\).
Let \(\langle,\rangle\) be the inner product on \(\mathbb{R}^{n,1}\). Then \[\mathbb{H}^{n}=\{x\in\mathbb{R}^{n,1}\ |\ \langle x,x\rangle=-1\}\] with tangent space \[T_{x}\mathbb{H}^{n}=\{v\in\mathbb{R}^{n,1}\ |\ \langle x,v\rangle=0\}.\] Then the restriction of \(\langle,\rangle\) to each \(T_{x}\mathbb{H}^{n}\) is positive definite and gives a metric of constant sectional curvature equal to \(-1\). This is a model for hyperbolic space. In this model the sphere at infinity for \(\mathbb{H}^{n}\) is the projectivized light cone which we identify with the sphere unit sphere in the plane \(x_{n+1}=1\). In this model there is a very simple formula for the Gauss map. **Lemma 3.8** (Bryant, [Bry]): _For \(x\in\mathbb{H}^{n}\) and \(v\in T_{x}^{1}\mathbb{H}^{n}\) the hyperbolic Gauss map is given by_ \[\mathfrak{g}(x,v)=\frac{x+v}{\langle x+v,(0,\ldots,0,1)\rangle}.\] **Proof:** Note that this formula is clear when \(x=(0,\ldots,0,1)\). Then general case the follows from equivariance. \(\Box\) The formula for the Riemannian connection on \(\mathbb{H}^{n}\) is also very simple. The Minkowski connection \(\hat{\nabla}\) on \(\mathbb{R}^{3,1}\) is flat with \(\hat{\nabla}_{X}Y=X(Y)\). Thus if \(\nabla\) is the Riemannian connection on \(\mathbb{H}^{3}\) then by compatibility we have \[X(Y)=\nabla_{X}Y+\langle X(Y),N\rangle N.\] where \(N\) is the normal to \(\mathbb{H}^{3}\) in \(\mathbb{R}^{3,1}\). We note that \(N:\mathbb{H}^{3}\to\mathbb{R}^{3,1}\) is given by \(N(x)=x\). From this formula for \(\nabla\) we can also calculate the shape operator \(B\). We have \[BX=\nabla_{X}(n)=X(n)-\langle X(n),N\rangle N.\] As \(N(p)=p\) then \(X(N)=X\) giving \[\langle X(n),N\rangle=-\langle n,X(N)\rangle=-\langle n,X\rangle=0\] so \[BX=X(n).\] Note that tangent spaces in \(\mathbb{R}^{3,1}\cong\mathbb{R}^{4}\) are canonically identified and if \(p=(0,0,0,1)\) then the linear map \(G\colon T_{p}S\to T_{\mathfrak{g}(p)}\partial\mathbb{H}^{3}\) (from Lemma 3.7) is the identity map. Therefore, we need to show that \[(\mathfrak{g}_{S})_{*}(p)=\operatorname{id}+B.\] With these preliminaries done we can now prove the lemma. **Proof of Lemma 3.7:** By Lemma 3.8 we have \[\mathfrak{g}_{S}(x)=\frac{x+n(x)}{\langle x+n(x),(0,0,0,1)\rangle}.\] Note that for \(v\in T_{p}S\) the derivative of \(x\mapsto x+n(x)\) in the direction of \(v\) is \(v+Bv\) by the calculation above. As \(\langle v+Bv,(0,0,0,1)\rangle=0\) this implies that at \(p\) the derivative of the denominator above is zero which gives that \[(\mathfrak{g}_{S})_{*}(p)v=v+Bv=(\mathrm{id}+B)v\] as claimed. \(\Box\) Using the above, we prove the following bound on the derivative of \(f\). **Lemma 3.9**: _Let \(f:\mathbb{U}\to\hat{\mathbb{C}}\) be a locally univalent map with Epstein maps \(f_{0}\colon\mathbb{U}\to\mathbb{H}^{3}\) and \(\hat{f}_{0}\colon\mathbb{U}\to T^{1}\mathbb{H}^{3}\) normalized as follows:_ 1. \(f_{0}(i)=(0,1)\)_;_ 2. \(\hat{f}_{0}(i)=\partial_{y}\)_;_ 3. \((f_{0})_{*}(i)\partial_{y}=\lambda\partial_{t}\) _with_ \(\lambda=\|\partial_{y}\|_{g}\)_._ _Then_ \[|f^{\prime}(i)|=1\qquad\text{and}\qquad|f^{\prime}(i)-1|\leq 2\|\Phi(i)\|\] _where \(\Phi=Sf\) is the Schwarzian._ **Proof:** Let \(P\subset T_{(0,1)}\mathbb{H}^{3}\) be the image of \(T_{i}\mathbb{U}\) under the map \((f_{0})_{*}(i)\). 
We then have the following sequence of isometries: \[(T_{i}\mathbb{U},g_{\mathbb{H}^{2}})\smash{\mathop{\longrightarrow}\limits^{ \mathrm{id}+B}}(T_{i}\mathbb{U},g)\smash{\mathop{\longrightarrow}\limits^{(f_ {0})_{*}(i)}}(P,g_{\mathbb{H}^{3}})\smash{\mathop{\longrightarrow}\limits^{G }}(T_{i}\mathbb{U},g_{\mathbb{H}^{2}})\] and by Lemma 3.7 this composition is the map \(f_{*}(i)\). In particular \(f_{*}(i)\) is an isometry of \(T_{i}(U)\) to itself so \(|f^{\prime}(i)|=1\). As all the above maps are isometries we can do the computation in any of the metrics. For convenience we will do it in the \(g\)-metric. Note that \(|f^{\prime}(i)-1|\) is the distance between the vectors \(f^{\prime}(i)\partial_{x}\) and \(\partial_{x}\) in \(T_{i}\mathbb{U}\) or, since \(f\) is conformal, the distance between \(f^{\prime}(i)\partial_{y}\) and \(\partial_{y}\) with the hyperbolic metric. In the \(g\)-metric this is the distance between \((\mathrm{id}+B)\partial_{y}\) and \(\frac{\partial_{y}}{\lambda}\) (by our normalization (3)). That is \[|f^{\prime}(i)-1| = \left\|(\mathrm{id}+B)\partial_{y}-\frac{\partial_{y}}{\lambda} \right\|_{g}\] \[= \left\|(\lambda-1)\frac{\partial_{y}}{\lambda}+B\partial_{y} \right\|_{g}\] \[\leq |\lambda-1|\cdot\left\|\frac{\partial_{y}}{\lambda}\right\|_{g}+ \|B\partial_{y}\|_{g}\] \[= |\lambda-1|+\|B\partial_{y}\|_{g}.\] As \((\operatorname{id}+B)^{-1}=\frac{1}{2}(\operatorname{id}+\hat{B})\) and by Theorem 3.2 the eigenvalues of \(\hat{B}\) are \(1\pm 2\|Sf(i)\|\) we have \[\lambda=\|\partial_{y}\|_{g}=\frac{1}{2}\|(\operatorname{id}+\hat{B})\partial_{ y}\|_{g_{\mathbb{H}^{2}}}\leq 1+\|Sf(i)\|.\] Therefore \[|\lambda-1|\leq\|Sf(i)\|.\] Also \((\operatorname{id}+\hat{B})B=(\operatorname{id}-\hat{B})\) giving \[\|B\partial_{y}\|_{g}=\frac{1}{2}\|(\operatorname{id}+\hat{B})B\partial_{y}\| _{g_{\mathbb{H}^{2}}}=\frac{1}{2}\|(\operatorname{id}-\hat{B})\partial_{y}\|_ {g_{\mathbb{H}^{2}}}=\|Sf(i)\|.\] Combining these bounds, we obtain the result. \(\Box\) ### Proof of Theorem 3.1: Let \(f^{1}\colon\mathbb{U}\to\widehat{\mathbb{C}}\) be the map given by Lemma 3.9 and \(f^{1}_{0}\colon U\to\widehat{\mathbb{C}}\) the associated Epstein map. By our assumptions \(r<1/2\) and \(K/r<1/4\) so \(K<1/8\). Then Lemma 3.5 gives that \[\kappa_{\alpha}<\frac{2K}{r}<1\] where \(\kappa_{\alpha}\) is the curvature of \(\alpha=f^{1}_{0}\circ\gamma\) in \(\mathbb{H}^{3}\). The normalization (3) in Lemma 3.9 and the bound on \(\kappa_{\alpha}\) allows us to apply Lemma 3.6. Therefore we have \(z_{\pm}\in\widehat{\mathbb{C}}\) such that \(\lim_{t\to\pm\infty}f^{1}_{0}(it)=z_{\pm}\) and if \(P_{t}\) are the planes perpendicular to \(\alpha\) at \(\alpha(t)\) then \(\lim_{t\to\pm\infty}P_{t}=z_{\pm}\). Thus as \(f^{1}(it)\in\partial P_{t}\cap\widehat{\mathbb{C}}\) then \(\lim_{t\to\pm\infty}f^{1}(it)=z_{\pm}\). We let \(f=m\circ f^{1}\) where \[m(z)=i\cdot\frac{i-z^{+}}{i-z^{-}}\cdot\frac{z-z^{-}}{z-z^{+}}.\] Then \(m(z^{-})=0,m(z^{+})=\infty\) and \(m(i)=i\). Therefore \(f\) has the desired normalization and we are left to bound the derivative at \(i\) which will follow from Lemma 3.9 if we can bound the derivative of \(m\) at \(i\). 
Computing we have \[|m^{\prime}(i)-1|=\left|\frac{z^{-}}{i-z^{-}}-\frac{i/z^{+}}{i/z^{+}-1}\right| \leq\frac{2\kappa_{\alpha}}{1-\kappa_{\alpha}}.\] For the reciprocal we have \[\left|\frac{1}{m^{\prime}(i)}-1\right|=\left|\frac{(i-z_{-})^{2}}{i/z_{+}(z_{ -}/z_{+}-1)}-\frac{z_{-}}{i}\right|\leq\frac{\kappa_{\alpha}(1+\kappa_{\alpha })^{2}}{1-\kappa_{\alpha}^{2}}+\kappa_{\alpha}=\frac{2\kappa_{\alpha}}{1- \kappa_{\alpha}}.\] As \(\kappa_{\alpha}<2K/r\) and by our assumption that \(K/r<1/4\) then \(\kappa_{\alpha}<1/2\). Combining these we get that \[\left|m^{\prime}(i)^{\pm 1}-1\right|<\frac{2\kappa_{\alpha}}{1-\kappa_{\alpha}}< \frac{8K}{r}.\] By Lemma 3.9 we have \(|(f^{1})^{\prime}(i)|=1\) and \(|(f^{1})^{\prime}(i)-1|<2K\) giving \[\left|f^{\prime}(i)^{\pm 1}-1\right| = \left|((f^{1})^{\prime}(i)m^{\prime}(i))^{\pm 1}-1\right|\] \[< \left|(f^{1})^{\prime}(i)\right|^{\pm 1}\cdot\left|m^{\prime}(i)^{ \pm 1}-1\right|+\left|(f^{1})^{\prime}(i)^{\mp 1}-1\right|\] \[< \frac{8K}{r}+2K<\frac{9K}{r}.\] \(\Box\) ## 4 Application to hyperbolic three-manifolds We now prove our application to deformations of hyperbolic 3-manifolds given in Theorem 1.4 which we now restate. **Theorem 1.4:**_There exists \(K,C>0\) such that the following holds; Let \(N\) be an acylindrical hyperbolizable manifold with boundary \(S=\partial N\) and \(Y\in\mathscr{T}(S)\) with \(\|\Phi_{Y}\|_{2}\leq K\). Then there exists a \(\hat{Y}\in\overline{\mathscr{T}(S)}\) with_ 1. \(d_{\mathrm{WP}}(Y,\hat{Y})\leq C\sqrt{\|\Phi_{Y}\|_{2}}\); 2. \(\|\Phi_{\hat{Y}}\|_{\infty}\leq C\sqrt{\|\Phi_{Y}\|_{2}}\). The above result actually holds for _relative acylindrical 3-manifolds_ as in [BBB]. For simplicity we are restricting to the acylindrical setting here. At most points the proof is the same as in [BBB]. However, at one key point we will improve the estimate. We will restrict to proving that improved estimate and allow the reader to refer to [BBB] for the remainder of the proof. Let \(\mathscr{C}\) be a collection of essential simple closed curves on \(S\) and \(\hat{N}\) the 3-manifold obtained from removing the curves in \(\mathscr{C}\) from the level surface \(S\times\{1/2\}\) in a collar neighborhood \(S\times[0,1]\) of \(N\). Given a \(Y\in\mathscr{T}(S)\) there is a unique hyperbolic structure \(\hat{M}_{Y}\) on \(\hat{N}\) with conformal boundary \(Y\). Again there is a projective structure on \(Y\) with Schwarzian \(\Phi_{Y}\). Key to our bounds is choosing \(\mathscr{C}\) such that in the complement of the standard collars of \(\mathscr{C}\) we have bounds on the pointwise norm of \(\hat{\Phi}_{Y}\). It is at this stage that we improve our estimate. For a complete hyperbolic surface \(X\) let \(X^{<\varepsilon}\) be the set of points of injectivity radius \(<\varepsilon\). Recall that by the collar lemma ([Bus]) there is an \(\varepsilon_{2}>0\) such that \(X^{<\varepsilon_{2}}\) is a collection of standard collars about simple closed geodesics of length \(<2\varepsilon_{2}\) and cusps. Explicitly we choose \(\varepsilon_{2}=\sinh^{-1}(1)\), the Margulis constant. If \(\gamma\) is closed geodesic of length \(<2\varepsilon_{2}\) we let \(U_{\gamma}\) be this collar and for a collection of curves \(\mathscr{C}\) we let \(U_{\mathscr{C}}\) be the union of these standard collars. We can also choose \(\overline{\varepsilon}_{2}>0\) so that for any point in \(X^{<\overline{\varepsilon}_{2}}\) the disk of radius \(\mathrm{inj}_{X}(z)\) is contained in \(X^{<\varepsilon_{2}}\). 
Again, by elementary calculation, we can choose \(\overline{\varepsilon}_{2}=\sinh^{-1}(1/\sqrt{3})\). We denote by \(\hat{U}_{\gamma}\) and \(\hat{U}_{\mathscr{C}}\) the corresponding sub-annuli of \(U_{\gamma}\) and \(U_{\mathscr{C}}\). We note that if \(\ell_{\gamma}(X)<2\overline{\varepsilon}_{2}\) then \(d(\hat{U}_{\gamma},X-U_{\gamma})\geq\overline{\varepsilon}_{2}\). We can now state our improved estimate:

**Theorem 4.1**: _There exist constants \(C_{0},C_{1}\) such that if \(\|\Phi_{Y}\|_{2}\leq C_{0}\) then there exists a collection of curves \(\mathscr{C}\) such that \(\ell_{\mathscr{C}}(Y)\leq 2\|\Phi_{Y}\|_{2}\) and_ \[\big\|\hat{\Phi}_{Y}(z)\big\|\leq C_{1}\sqrt{\|\Phi_{Y}\|_{2}}\] _for \(z\in Y-\hat{U}_{\mathscr{C}}\)._

The proof of Theorem 1.4 follows by combining Theorem 4.1 and [BBB, Theorem 3.5]. We now briefly outline this and refer the reader to [BBB] for further details.

**Proof of Theorem 1.4 using Theorem 4.1:** In [BBB, Theorem 3.5] we prove that we can find curves \(\mathscr{C}\) such that

1. \(d_{\text{WP}}(Y,\hat{Y})\leq C\sqrt{\ell_{\mathscr{C}}(Y)}\);
2. \(\|\Phi_{\hat{Y}}\|_{\infty}\leq C\sqrt{\ell_{\mathscr{C}}(Y)}\).

In [BBB] we were only able to find \(\mathscr{C}\) such that \(\ell_{c}(Y)\leq\|\Phi_{Y}\|_{2}^{\frac{2}{2\pi(S)+3}}\) for all \(c\in\mathscr{C}\) which gave the genus dependent bound \(\ell_{\mathscr{C}}(Y)\leq n(S)\|\Phi_{Y}\|_{2}^{\frac{2}{2\pi(S)+3}}\) on the total length of \(\mathscr{C}\). Substituting this bound gave us our original result in [BBB]. To obtain the new bounds in Theorem 1.4 we apply [BBB, Theorem 3.5] with the improved bound \(\ell_{\mathscr{C}}(Y)\leq 2\|\Phi_{Y}\|_{2}\) from Theorem 4.1. \(\Box\)

### \(L^{2}\) and \(L^{\infty}\)-norms for quadratic differentials

In order to prove our estimate, we will first need some results relating the \(L^{2}\) and \(L^{\infty}\)-norms for holomorphic quadratic differentials. We have the following which bounds the pointwise norm in terms of the \(L^{2}\)-norm of \(\Phi\).

**Lemma 4.2**: _Given \(\Phi\in Q(X)\) we have:_

* **(Teo, [Teo])** _If_ \(z\in X^{\geq\varepsilon}\) _then_ \[\|\Phi(z)\|\leq C(\varepsilon)\|\Phi\|_{2}\] _where_ \(C(x)=\left(\frac{4\pi}{3}\left(1-\operatorname{sech}^{6}(x/2)\right)\right)^{-1/2}\)_._
* **(Bridgeman-Wu, [BW])** _If_ \(z\in\hat{U}_{\gamma}\) _where_ \(\gamma\) _is a closed geodesic of length_ \(<2\overline{\varepsilon}_{2}\) _then_ \[\|\Phi(z)\|\leq\frac{\|\Phi|_{U_{\gamma}}\|_{2}}{\sqrt{\operatorname{inj}_{X}(z)}}.\]

**Proof:** The first statement is the main result of [Teo]. The second statement doesn't appear explicitly in [BW] but is a simple consequence of it. By [BW, Proposition 3.3, part 4] we have for \(z\in\hat{U}_{\gamma}\) \[\|\Phi(z)\|\leq G(\operatorname{inj}_{X}(z))\;\|\Phi|_{U_{\gamma}}\|_{2}\] for some explicit function \(G\). Then by direct computation in [BW, Equation 3.13] we prove that \(G(t)\leq 1/\sqrt{t}\) for \(t\leq\epsilon_{2}\). The result then follows. \(\Box\)

For a sufficiently short closed geodesic \(\gamma\) on \(X\) we can lift \(\Phi\in Q(X)\) to the annular cover \(X_{\gamma}\). The covering map is injective on \(U_{\gamma}\) in \(X_{\gamma}\) so on \(U_{\gamma}\) we have our annular decomposition given in (2.2) \[\Phi=\Phi_{-}^{\gamma}+\Phi_{0}^{\gamma}+\Phi_{+}^{\gamma}.\] We will need the following \(L^{\infty}\)-bound in the thin part. In [Wol, Lemma 11] Wolpert proved a bound which is qualitatively the same but we will need the following quantitative version which we derive independently.

**Lemma 4.3**: _Let \(z\in\hat{U}_{\gamma}\).
Then_ \[\|\Phi(z)\|\leq\|\Phi_{0}^{\gamma}\|_{\infty}+2C(\overline{\epsilon}_{2})\; \|\Phi|_{U_{\gamma}}\|_{2}\] **Proof:** This is again essentially also contained in [BW]. We have on \(\hat{U}_{\gamma}\) the splitting of \(\Phi\) into \[\Phi=\Phi_{0}^{\gamma}+\Phi_{+}^{\gamma}+\Phi_{-}^{\gamma}.\] Therefore \[\|\Phi(z)\|\leq\|\Phi_{0}^{\gamma}(z)\|+\|\Phi_{+}^{\gamma}(z)\|+\|\Phi_{-}^{ \gamma}(z)\|.\] We need to bound the norms \(\|\Phi_{0}^{\gamma}(z)\|\), \(\|\Phi_{+}^{\gamma}(z)\|\), and \(\|\Phi_{-}^{\gamma}(z)\|\). By definition \(\|\Phi_{0}^{\gamma}(z)\|\leq\|\Phi_{0}^{\gamma}\|_{\infty}\). By [BW, Proposition 3.3, part 3] \[\|\Phi_{\pm}^{\gamma}(z)\|\leq C(\overline{\epsilon}_{2})\;\|\Phi|_{U_{\gamma }}\|_{2}.\] Thus it follows that \[\|\Phi(z)\|\leq\|\Phi_{0}^{\gamma}\|_{\infty}+2C(\overline{\epsilon}_{2})\; \|\Phi|_{U_{\gamma}}\|_{2}\] \(\Box\) Our next estimate is a statement about holomorphic quadratic differentials that we will use to choose the curves \(\mathscr{C}\). **Lemma 4.4**: _Let \(\Phi\in Q(X)\) with \(\|\Phi\|_{2}<1/2\). Let \(\mathscr{C}\) be the collection of simple curves \(\gamma\) such that \(L_{\gamma}(X)\leq 2\overline{\epsilon}_{2}\) and \(\|\Phi|_{\hat{U}_{c}}\|_{\infty}\geq\sqrt{\|\Phi\|_{2}}\). Then_ \[L_{\mathscr{C}}(X)\leq 2\|\Phi\|_{2}.\] _Furthermore for \(z\in X-\hat{U}_{\mathscr{C}}\) then \(\|\Phi(z)\|\leq\sqrt{\|\Phi\|_{2}}\)._ **Proof:** For each \(\gamma\in\mathscr{C}\) choose \(z_{\gamma}\in\hat{U}_{\gamma}\) with \(\|\Phi(z_{\gamma})\|\geq\sqrt{\|\Phi\|_{2}}\). Squaring the second bound from Lemma 4.2 gives the bound \[\|\Phi\|_{2}\leq\|\Phi(z_{\gamma})\|^{2}\leq\frac{\left\|\Phi|_{U_{\gamma}} \right\|_{2}^{2}}{\mbox{inj}_{X}(z_{\gamma})}.\] Using that \(\operatorname{inj}_{X}(z_{\gamma})\geq\ell_{\gamma}(X)/2\) and summing gives \[\frac{\|\Phi\|_{2}}{2}\sum_{\gamma\in\mathscr{C}}\ell_{\gamma}(X)\leq\sum_{ \gamma\in\mathscr{C}}\|\Phi\|_{U_{\gamma}}\|_{2}^{2}\leq\|\Phi\|_{2}^{2}\] which rearranges to give \[\ell_{\mathscr{C}}(X)=\sum_{\gamma\in\mathscr{C}}\ell_{\gamma}(X)\leq 2\|\Phi\|_ {2}.\] By the first bound in Lemma 4.2 for \(z\in X^{\geq\overline{\epsilon}_{2}}\) we have as \(C(\overline{\epsilon}_{2})\leq 1.1\) \[\|\phi(z)\|\leq C(\overline{\epsilon}_{2})\|\Phi\|_{2}\leq\sqrt{\|\Phi\|_{2}}\] where the second inequality uses that \(\|\Phi\|_{2}\leq 1/2\). If \(z\in X^{<\overline{\epsilon}_{2}}\) but \(z\not\in\hat{U}_{\gamma}\) for some \(\gamma\in\mathscr{C}\) we have \[\|\Phi(z)\|\leq\sqrt{\|\Phi\|_{2}}\] be the definition of \(\mathscr{C}\). These two inequalities give the second bound. \(\Box\) ### Drilling The proof of Theorem 4.1 uses the deformation theory of hyperbolic cone-manifolds developed by Hodgson and Kerkhoff in [HK] and generalized by the second author in [Brm, Bro]. In particular if \(\mathscr{C}\) is sufficiently short in \(M_{Y}\) (or in \(Y\)) then there will be a one-parameter family of hyperbolic cone-manifolds \(M_{t}\) where the complement of the singular locus is homeomorphic \(\hat{N}\), the conformal boundary of \(M_{t}\) is \(Y\) and the cone angle is \(t\). The \(M_{t}\) will also determine projective structures on \(Y\) with Schwarzian quadratic differentials \(\Phi_{t}\) and derivative \(\phi_{t}=\dot{\Phi}_{t}\). Then when \(t=2\pi\) we have \(M_{Y}=M_{2\pi}\), \(\Phi_{Y}=\Phi_{2\pi}\) and when \(t=0\) we have \(\dot{M}_{Y}=M_{0}\). 
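In particular (with the natural identification \(\hat{\Phi}_{Y}=\Phi_{0}\) for the drilled structure), since \(\phi_{t}=\dot{\Phi}_{t}\), the projective structures at the two ends of the family are related by \[\Phi_{Y}-\hat{\Phi}_{Y}=\Phi_{2\pi}-\Phi_{0}=\int_{0}^{2\pi}\phi_{t}\,dt,\] and it is this identity, applied pointwise, that is integrated in the proof of Theorem 4.1 below.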
Motivated by the Lemma 4.4 above, we will fix \(\mathscr{C}\) to be the collection of closed geodesics \(\gamma\) on \(Y\) with \(\ell_{\gamma}(Y)\leq 2\overline{\epsilon}_{2}\) and \(\|\,\Phi_{Y}|_{\partial_{\gamma}}\|_{\infty}\geq\sqrt{\|\Phi_{Y}\|_{2}}\). Then by the above \(\ell_{\mathscr{C}}(Y)\leq 2\|\Phi_{Y}\|_{2}\). We first will need to apply the following theorem. **Theorem 4.5** (Bridgeman-Bromberg, [BB]): _There exists an \(L_{0}>0\) and \(c_{\rm drill}>0\) such that the following holds. Let \(M\) be a conformally compact hyperbolic 3-manifold and \(\mathscr{C}\) a collection of simple closed geodesics in \(M\) each of length \(\leq L_{0}\). Let \(M_{t}\) be the unique hyperbolic cone-manifold with cone angle \(t\in(0,2\pi]\) about \(\mathscr{C}\) and \(\Sigma_{t}\) the projective structure on the boundary. If \(\Phi_{t}\) is the Schwarzian of the uniformization map for \(\Sigma_{t}\) and \(\phi_{t}=\dot{\Phi}_{t}\) then_ \[\|\phi_{t}\|_{2}\leq c_{\rm drill}\sqrt{L_{\mathscr{C}}(M)}.\] By Bers inequality, \(L_{\mathscr{C}}(M)\leq 2\ell_{\mathscr{C}}(Y)\leq 4\|\Phi_{Y}\|_{2}\). Thus we obtain **Theorem 4.6**: _If \(N\) is an acylindrical hyperbolic manifold with \(\|\Phi_{Y}\|_{2}<L_{0}/4\) then there exists a family of curves \(\mathscr{C}\) such that \(\ell_{\mathscr{C}}(Y)\leq 2\|\Phi_{Y}\|_{2}\) and_ \[\|\phi_{t}\|_{2}\leq 2c_{drill}\sqrt{\|\Phi_{Y}\|_{2}}\] _for \(t\in(0,2\pi]\)._ We now promote the \(L^{2}\)-bound to a pointwise bound in the complement of the collars of \(\mathscr{C}\). We first need the following lemma. **Lemma 4.7**: _There exists a universal constants \(D>0\) such that if \(\|\Phi_{Y}\|_{2}<L_{0}/4\) and \(\|\Phi_{t}(z)\|\leq 1/36\) for all \(z\in Y-\hat{U}_{\mathscr{C}}\) then for \(z\in Y-\hat{U}_{\mathscr{C}}\) we have_ \[\|\phi_{t}(z)\|\leq D\sqrt{\|\Phi_{Y}\|_{2}}.\] **Proof:** We have the \(L^{2}\)-bound \(\|\phi_{t}\|_{2}\leq 2c_{drill}\sqrt{\|\Phi_{Y}\|_{2}}\). We need to convert this to a pointwise bound. For \(z\in Y^{\geq\overline{\epsilon}_{2}}\) we have \[\|\phi_{t}(z)\|\leq C(\overline{\epsilon}_{2})\|\phi_{t}\|_{2}\leq 2c_{drill} C(\overline{\epsilon}_{2})\sqrt{\|\Phi_{Y}\|_{2}}.\] Now assume \(\ell_{Y}(Y)\leq 2\overline{\epsilon}_{2}\) but \(\gamma\not\in\mathscr{C}\). For \(z\in\hat{U}_{\gamma}\) by Lemma 4.3 we have \[\|\phi_{t}(z)\|\leq\left\|(\phi_{t})_{0}^{\gamma}\right\|_{\infty}+2C( \overline{\epsilon}_{2})\|\phi_{t}\|_{2}\leq\left\|(\phi_{t})_{0}^{\gamma} \right\|_{\infty}+4c_{drill}C(\overline{\epsilon}_{2})\sqrt{\|\Phi_{Y}\|_{2}}.\] We let \(D_{0}=4c_{drill}C(\overline{\epsilon}_{2})\). Since the above holds for all \(z\in\hat{U}_{\gamma}\) we also have \[\|\phi_{t}\|_{\gamma}\leq\left\|(\phi_{t})_{0}^{\gamma}\right\|_{\infty}+D_{0 }\sqrt{\|\Phi_{Y}\|_{2}}. \tag{4.5}\] To finish the proof we need to bound \(\left\|(\phi_{t})_{0}^{\gamma}\right\|_{\infty}\) by a constant multiple of \(\sqrt{\|\Phi_{Y}\|_{2}}\). By definition, for any geodesic \(\beta\) with \(\ell_{\beta}(Y)\leq 2\overline{\epsilon}_{2}\) then \(d(\hat{U}_{\beta},Y-U_{\beta})\geq\overline{\epsilon}_{2}\). Therefore \(d(\gamma,\hat{U}_{\mathscr{C}})\geq 2\overline{\epsilon}_{2}>1/2\) and the \(1/2\) neighborhood of \(\gamma\) is in \(Y-\hat{U}_{\mathscr{C}}\). 
By our assumptions \(\|\Phi_{t}(z)\|\leq 1/36\) for \(z\in Y-\hat{U}_{\mathscr{C}}\) and therefore by Theorem 1.3 we have \[\left|\frac{\dot{\mathscr{L}}_{\gamma}(t)}{\ell_{\gamma}(Y)}+\frac{1}{\ell_{\gamma}(Y)}\int_{\gamma}\mathbf{n}\cdot\phi_{t}\right|\leq\frac{9\cdot(1/36)}{1/2}\|\phi_{t}\|_{\gamma}=\frac{1}{2}\|\phi_{t}\|_{\gamma}.\] Applying Lemma 2.3 to the integral and rearranging and then applying (4.5) gives \[\left|\frac{\dot{\mathscr{L}}_{\gamma}(t)}{\ell_{\gamma}(Y)}\right|\geq\left\|(\phi_{t})_{0}^{\gamma}\right\|_{\infty}-\frac{1}{2}\|\phi_{t}\|_{\gamma}\geq\frac{1}{2}\left\|(\phi_{t})_{0}^{\gamma}\right\|_{\infty}-\frac{D_{0}}{2}\sqrt{\|\Phi_{Y}\|_{2}}. \tag{4.6}\] Next we bound the ratio on the left. In [Bro], the second author analysed the change in length of geodesics under cone deformations. We now apply a number of results from [Bro]. By [Bro, Equation 4.6] we have \[|\dot{\mathscr{L}}_{\gamma}(t)|\leq 4L_{\mathscr{C}}(t)L_{\gamma}(t). \tag{4.7}\] We need to bound the two terms on the right. By [Bro, Proposition 4.1] we have that \(L_{\mathscr{C}}(t)\leq L_{\mathscr{C}}(2\pi)\) for \(t\in(0,2\pi]\). When \(t=2\pi\) the manifold is nonsingular so we can apply the Bers inequality to see that \(L_{\mathscr{C}}(2\pi)\leq 2\ell_{\mathscr{C}}(Y)\). Since \(|\dot{L}_{\gamma}(t)|\leq|\dot{\mathscr{L}}_{\gamma}(t)|\) we can combine (4.7) with the bound on \(L_{\mathscr{C}}(t)\) to get \[\left|\frac{\dot{L}_{\gamma}(t)}{L_{\gamma}(t)}\right|\leq\left|\frac{\dot{\mathscr{L}}_{\gamma}(t)}{L_{\gamma}(t)}\right|\leq 8\ell_{\mathscr{C}}(Y)\] which integrates to give \[L_{\gamma}(t)\leq L_{\gamma}(2\pi)e^{8\pi\ell_{\mathscr{C}}(Y)}\leq 2\ell_{\gamma}(Y)e^{16\pi\|\Phi_{Y}\|_{2}}.\] Here the second inequality comes from applying the Bers inequality to \(\gamma\) in \(M=M_{2\pi}\) and the bound \(\ell_{\mathscr{C}}(Y)\leq 2\|\Phi_{Y}\|_{2}\). As we have assumed a universal bound \(\|\Phi_{Y}\|_{2}\leq L_{0}/4\) these two inequalities give \[\left|\frac{\dot{\mathscr{L}}_{\gamma}(t)}{\ell_{\gamma}(Y)}\right|\leq 16e^{16\pi\|\Phi_{Y}\|_{2}}\ell_{\mathscr{C}}(Y)\leq D_{1}\|\Phi_{Y}\|_{2} \tag{4.8}\] where \(D_{1}=16e^{4\pi L_{0}}\) is a universal constant. Combining (4.6) and (4.8) gives \[\left\|(\phi_{t})_{0}^{\gamma}\right\|_{\infty}\leq(D_{0}+2D_{1})\sqrt{\|\Phi_{Y}\|_{2}}.\] Finally by equation (4.5) we have for \(z\in\hat{U}_{\gamma}\) \[\|\phi_{t}(z)\|\leq D\sqrt{\|\Phi_{Y}\|_{2}}\] where \(D=2D_{0}+2D_{1}=32e^{4\pi L_{0}}+8c_{drill}C(\overline{\epsilon}_{2})\). \(\Box\)

Finally we can now prove Theorem 4.1 which we restate.

**Theorem 4.1** _There exist constants \(C_{0},C_{1}\) such that if \(\|\Phi_{Y}\|_{2}\leq C_{0}\) then there exists a collection of curves \(\mathscr{C}\) such that \(\ell_{\mathscr{C}}(Y)\leq 2\|\Phi_{Y}\|_{2}\) and_ \[\left\|\hat{\Phi}_{Y}(z)\right\|\leq C_{1}\sqrt{\|\Phi_{Y}\|_{2}}\] _for \(z\in Y-\hat{U}_{\mathscr{C}}\)._

**Proof:** We let \(C_{0}=\min\{(1/64)^{2},1/(128\pi D)^{2},L_{0}/4\}\) and \(C_{1}=2\pi D+1\). If \(\|\Phi_{Y}\|_{2}<C_{0}\) then by Lemma 4.4 for \(z\in Y-\hat{U}_{\mathscr{C}}\) we have \[\|\Phi_{Y}(z)\|\leq\sqrt{\|\Phi_{Y}\|_{2}}\leq\frac{1}{64}.\] We now show that \(\|\Phi_{t}(z)\|\leq 1/36\) for \(z\in Y-\hat{U}_{\mathscr{C}}\) and \(t\in(0,2\pi]\). If not, then by continuity there is a \(t_{0}>0\) such that \(\|\Phi_{t_{0}}(z)\|=1/36\) for some \(z\in Y-\hat{U}_{\mathscr{C}}\) and \(\|\Phi_{t}(z)\|<1/36\) for \(t\in(t_{0},2\pi]\).
For \(t\geq t_{0}\), as \(\|\Phi_{Y}\|_{2}<L_{0}/4\), Lemma 4.7 above gives that if \(z\in Y-\hat{U}_{\mathscr{C}}\) then \(\|\phi_{t}(z)\|\leq D\sqrt{\|\Phi_{Y}\|_{2}}\). Therefore integrating we have \[\frac{1}{64}\leq\|\Phi_{t_{0}}(z)-\Phi_{Y}(z)\|\leq\int_{t_{0}}^{2\pi}\|\phi_{t}(z)\|dt<2\pi D\sqrt{\|\Phi_{Y}\|_{2}}\leq\frac{2\pi D}{128\pi D}\leq\frac{1}{64}.\] This gives our contradiction. Thus we can apply Lemma 4.7 and integrate to get \[\|\hat{\Phi}_{Y}(z)-\Phi_{Y}(z)\|\leq\int_{0}^{2\pi}\|\phi_{t}(z)\|dt\leq 2\pi D\sqrt{\|\Phi_{Y}\|_{2}}.\] Thus \[\|\hat{\Phi}_{Y}(z)\|\leq\|\Phi_{Y}(z)\|+2\pi D\sqrt{\|\Phi_{Y}\|_{2}}\leq(2\pi D+1)\sqrt{\|\Phi_{Y}\|_{2}}.\] \(\Box\)
2302.01554
Emergence of cosmic space with Barrow entropy, in non-equilibrium thermodynamic conditions
Recently, Barrow accounted for quantum gravitational effects on the black hole surface. The conventional area-entropy relation is thereby modified to $S=(A/A_{0})^{1+\Delta/2}$, where the exponent $\Delta$, in the range $0\le\Delta\le1$, quantifies the amount of quantum gravitational deformation of the black hole surface. In the recent literature, this horizon entropy has been extended to the cosmological context. Following this, we consider an n+1 dimensional non-flat universe bounded by an apparent horizon with an appropriate temperature and with the Barrow entropy as the associated horizon entropy. We derive the modified form of the law of emergence from both equilibrium and non-equilibrium thermodynamic principles, and then study the entropy maximization condition that follows from the modified law of emergence. Comparing the results suggests that, in order to maintain energy-momentum conservation, a universe with the Barrow entropy as its horizon entropy should exhibit non-equilibrium behaviour with an additional entropy production. However, the additional entropy production rate decreases over time, so the system eventually approaches equilibrium. Because of this, the constraint relation for entropy maximization looks similar for both the equilibrium and non-equilibrium approaches.
Nandhida Krishnan. P, Titus K Mathew
2023-02-03T05:23:53Z
http://arxiv.org/abs/2302.01554v1
# Emergence of cosmic space with Barrow entropy, in non-equilibrium thermodynamic conditions ###### Abstract Recently, Barrow accounts for the quantum gravitational effects to the black hole surface. Thus the conventional area-entropy relation has modified, \(S=(A/A_{0})^{1+\Delta/2}\), with an exponent \(\Delta\), ranges \(0\leq\Delta\leq 1\), quantifies the amount of quantum gravitational deformation effect to the black hole surface. In recent literature, this horizon entropy has been extended to the cosmological context. Following this, we consider an n+1 dimensional non-flat universe with an apparent horizon as the boundary with appropriate temperature and associated entropy is Barrow entropy. We derived the modified form of the law of emergence from the equilibrium and non-equilibrium thermodynamic principles. Later studied the entropy maximization condition due to the modified law of emergence. On distinguishing the obtained result, it speculates that in order to hold the energy-momentum conservation, the universe with Barrow entropy as the horizon entropy should have non-equilibrium behaviour with an additional entropy production. However, the additional entropy production rate decreases over time, so the system eventually approaches equilibrium. Because of this, the constraint relation for entropy maximization looks similar for both equilibrium and non-equilibrium approaches. ## 1 Introduction The theory of black hole mechanics reveals the remarkable connection between gravity and thermodynamics. Bekenstein has conjectured that black hole exhibits entropy, which is proportional to the area of their horizon [1]. Meantime, adopting a semi-classical approach, Hawking has shown that black holes can be endowed with a temperature proportional to their surface gravity and behave like a thermal object, which emits radiation [2, 3]. The thermodynamic properties of the horizon of highly gravitating objects like black holes have become much interest [4, 5, 6, 7, 8]. However, the thermodynamic properties of the horizon are not a mere unique property of black hole spacetime but are more general. Davies and Unruh found that a uniformly accelerating observer in the Minkowski spacetime can attribute a temperature \(\mathcal{T}=a/2\pi\), to the resulting horizon, where \(a\) is the acceleration of the observer [9, 10]. Later Jacobson [11] derived the Einstein's field equation from Clausius relation \(\delta Q=\mathcal{T}\delta S\) together with the area-entropy relation for local Rindler observer. Here \(\mathcal{T}\) is the Unruh temperature [10], and \(\delta Q\) is energy flux crossing the horizon, perceived by an accelerated observer just inside the horizon. Subsequently, Padmanabhan showed that it is possible to express the Einstein field equation as the thermodynamics relation \(TdS=dE+PdV\) for a spherically symmetric horizon, where \(P\) as the radial pressure and \(T\) is the dynamical temperature of the horizon[12, 13]. Later Paranjape et al. extended this result to Gauss-Bonnet and more general Lancos-Lovelock gravity theories [14]. Inspired by this, Gibbons and Hawking [15] argued that there is a vivid possibility for extending this analogy to the cosmological context for a local Rindler horizon. Such that, the cosmological event horizon in de Sitter space can be associated with a temperature, \(1/2\pi r\), where \(r\) is the radius of the cosmological horizon and entropy as given by the Bekenstein formula, \(S=A/(4G)\). 
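As a concrete instance of the preceding statement (a standard special case, recorded here for orientation): for de Sitter space with Hubble constant \(H\) the horizon radius is \(r=1/H\), so the associated temperature and entropy are \[\mathcal{T}=\frac{1}{2\pi r}=\frac{H}{2\pi},\qquad S=\frac{A}{4G}=\frac{4\pi r^{2}}{4G}=\frac{\pi}{GH^{2}},\] in 3+1 dimensions and in the natural units used below.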
In the same spirit, Cai and Kim[16] derived the Friedmann equations from the Clausius relation by assuming Bekenstein entropy for the horizon and temperature as \(\mathcal{T}=1/(2\pi\tilde{r}_{{}_{A}})\), in the context of Einstein's gravity. Here \(\tilde{r}_{{}_{A}}\) is the radius of the apparent horizon [17]. This has been extended to the Gauss-Bonnet and Lovelock gravity theories using the respective entropy relations. Later, the Friedmann equation has been obtained from the unified first law \(dE=TdS+WdV\) proposed by Hayward [7, 18, 19], in the context of Einstein, Gauss-Bonnet and Lovelock gravity theories [20]. Here \(E\) is the Misner-Sharp energy inside the apparent horizon of volume \(V\), the temperature is given by \(T=\kappa/(2\pi)(\kappa\) is the surface gravity)[21], and \(W=(\rho-P)/2\), is the work density, in which \(\rho\) and \(P\) are the total density and pressure of the matter inside the horizon, respectively. All the above works have indicates the intriguing connection between gravity and thermodynamics, which motivates more exciting speculations of the quantum nature of gravity, that one can interpret gravity as an emerging phenomenon with metric, curvature, etc., as the macroscopic variables which are being emerged from some underlying degrees of freedom. By trailing this notion, Verlinde [22] interpreted gravity as an entropic force caused by the information change associated with the positions of material bodies in the universe. In this regard, the author successfully derived Newton's gravitational equation. Meantime, Padmanabhan arrived at the same result, through a different root, by employing the equipartition law of energy for the horizon degrees of freedom along with the thermodynamic relation connecting entropy S and active gravitational mass E, \(S=E/2T\)[23]. Here E is the Komar energy, the source of gravitational acceleration, the temperature T determined from the surface gravity at the horizon, and the pressure P is provided by the source of Einstein's equations[24]. Following the emergent gravity paradigm, Padmanabhan speculated that spacetime itself could be of emerging nature. However, it is challenging to consider time being emerged from some underlying degrees of freedom. However, this difficulty will be erased in cosmology due to the existence of a cosmic time for all inertial observers. This, in turn, motivates to propose that the cosmic space can be considered as emerging as cosmic time progresses [25]. This notion leads to the proposal of the law of emergence, which explains that the expansion of the universe is driven by the difference between the degrees of freedom, \(N_{sur}-N_{bulk}\), that is, \[\frac{dV}{dt}=L_{p}^{2}\left(N_{surf}-N_{bulk}\right). \tag{1}\] Here \(V\) is the volume of the apparent horizon of the flat universe, \(L_{p}\) is the Planck length, in natural units \(\hbar=1=C\) and hence \(L_{p}^{2}=G\). \(N_{sur}\) and \(N_{bulk}\) are the degrees of freedom on the horizon and that within the bulk enclosed by the horizon, respectively. According to this, the universe is evolving towards a state that satisfies the holographic equipartition condition, \(N_{sur}=N_{bulk}\), corresponds to the convention de Sitter epoch. From this law, Padmanabhan derived the Friedmann equations of a 3+1 dimensional flat universe in the context of Einstein's gravity. 
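To illustrate the content of equation (1), here is a minimal sketch of how it reproduces the standard dynamics in the simplest setting, a flat 3+1 dimensional universe, assuming the usual identifications \(N_{sur}=A/L_{p}^{2}\), \(T=H/2\pi\), \(V=\frac{4\pi}{3H^{3}}\), and \(N_{bulk}=-2(\rho+3P)V/T\) for an accelerating epoch: \[\frac{dV}{dt}=-\frac{4\pi\dot{H}}{H^{4}},\qquad N_{sur}=\frac{4\pi}{L_{p}^{2}H^{2}},\qquad L_{p}^{2}N_{bulk}=-\frac{16\pi^{2}}{3}\frac{L_{p}^{2}(\rho+3P)}{H^{4}},\] so that equation (1) rearranges to \[\dot{H}+H^{2}=\frac{\ddot{a}}{a}=-\frac{4\pi L_{p}^{2}}{3}\left(\rho+3P\right),\] the acceleration equation of Einstein's gravity.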
Cai has considered this law for an n+1 dimensional flat universe in Einstein, Gauss-Bonnet, and more general Lovelock gravity theories, by properly modifying the surface degrees of freedom and the volume increase [26]. Later, The law of emergence has been extended for an n+1 dimensional non-flat universe with apparent horizon \(\tilde{r}_{{}_{A}}\) as the boundary [27], \[\alpha\frac{dV}{dt}=L_{p}^{n-1}\frac{\tilde{r}_{{}_{A}}}{H^{-1}}\left(N_{surf }-N_{bulk}\right), \tag{2}\] where \(\alpha=\frac{n-1}{2(n-2)}\), \(V=\Omega_{n}\tilde{r}_{{}_{A}}^{n}\) is the volume with \(\Omega_{n}\) is the volume of unit n-sphere, and they derived the Friedmann equation for the universe with any spatial curvature. Here \(\tilde{r}_{{}_{A}}\) is the radius of the apparent horizon, and \(H\) is the Hubble parameter. The result has been extended to Gauss-Bonnet and Lovelock theories of gravity in the context of an n+1 dimensional universe. More generalizations for the law of emergence in different approaches are discussed in [28, 29, 30, 31, 32]. In exploring the connection with thermodynamics, the expansion law proposed in [26, 27], has been derived from the unified first law of thermodynamics for Einstein, Gauss-Bonnet, and Lovelock gravity theories [33]. Later, the motivation for choosing the volume as the areal volume in formulating the law of emergence has been explained and ratified from the thermodynamic point of view, is discussed in [34]. Recently, Pavon and Radicella assert that the entropy of the universe, with Hubble horizon as the boundary, evolves like that of an ordinary macroscopic system, approaching a state of maximum entropy [35]. Along this line, the authors [36] showed that the holographic equipartition is equivalent to a state of maximum entropy and the law of emergence describes a universe that evolves towards an end state, the de Sitter epoch of maximum entropy. The same authors extended this result to an n+1 dimensional Einstein, Gauss-Bonnet, and Lovelock gravity theories [37, 38] as well. In the above works, the cosmological evolution in Einstein's gravity was studied using the Bekenstein entropy for the apparent horizon. However, recently, Barrow argued that quantum-gravitational effects might deform the geometry of the black hole horizon, such that it can have an intricate fractal structure[39]. Accordingly, the area-entropy relation for the horizon gets modified as, \[S=\left(\frac{A}{A_{0}}\right)^{1+\Delta/2}, \tag{3}\] which is later referred to as Barrow entropy. Here \(A\) and \(A_{0}\) are the area of the black hole horizon and Planck area, respectively, and \(\Delta\), known as Barrow exponent, which quantifies the amount of quantum gravitational deformation effect on the black hole surface. The exponent constraint in the range \(0\leq\Delta\leq 1.\) For \(\Delta=0\), the Barrow entropy relation reduces to the Bekenstein entropy, and the most intricate fractal structure corresponds to the maximum value of \(\Delta\). Although the expression for Barrow entropy resembles the non-extensive Tsallis entropy relation, the underlying principles in formulating these two are entirely different [40, 41]. By defining the Barrow holographic dark energy, the cosmological implications of Barrow entropy have been studied in reference [42]. Since then, the cosmological aspects using Barrow entropy have become a potential area of research. 
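A limiting check on equation (3) may help fix conventions (here we assume, as is common in the cosmological literature, that the constant \(A_{0}\) is taken to be \(4G\) in the natural units used in this paper): \[S\Big|_{\Delta=0}=\frac{A}{A_{0}}=\frac{A}{4G},\] the Bekenstein entropy, while for any \(\Delta>0\) the entropy \(S\propto A^{1+\Delta/2}\) grows faster than the horizon area, with the strongest deformation at \(\Delta=1\).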
Later, using Barrow holographic dark energy models with different length scales as the infrared cut-off, the Barrow exponent \(\Delta\) has been constrained by confronting the models with observational data [43, 44, 45]. The generalized second law of thermodynamics has been validated for the Barrow entropy associated with the horizon [46]. Motivated by the connection between gravity and thermodynamics, the author of [47] obtained the modified Friedmann equations from the thermodynamic relation \(\mathcal{T}dS=-d\tilde{E}\) by using Barrow entropy instead of the usual Bekenstein-Hawking one. Meanwhile, Sheykhi also obtained the dynamical equations from the unified first law of thermodynamics \(dE=TdS+WdV\) and studied the entropy evolution in the model [48]. Recent studies on Barrow entropy and its applications in the cosmological framework can be found in [49, 50, 51, 52, 53, 54, 55]. All of these show that Barrow entropy can replace the Bekenstein entropy in describing the evolution of the universe. As mentioned above, an alternative description of cosmology is provided by the law of emergence. In the present paper, our aim is to concentrate on this aspect, using Barrow entropy for the horizon instead of Bekenstein entropy. The first attempt along this direction is given in [48], where the author proposed a specific form of the law of emergence by assuming an effective area \(\hat{A}=A^{1+\Delta/2}\) for a horizon whose entropy is the Barrow entropy; the modified Friedmann equation in 3+1 dimensions derived from it is identical to the Friedmann equation obtained from thermodynamic principles [48]. In the present work, we aim to derive the law of emergence from the laws of thermodynamics by taking the entropy of the horizon to be the Barrow entropy. We obtain the law of emergence for an n+1 dimensional universe from the unified first law and from the Clausius relation, and reduce the result to the corresponding 3+1 dimensional Einstein gravity by setting \(n=3.\) In contrast to reference [48], we first restructure the Barrow entropy into a convenient form, \(S=\frac{A_{eff}}{4G}\), which is similar in form to the Bekenstein entropy. Owing to this, the law obtained in the present work takes a relatively simple form. Recently, a turn occurred in the thermodynamic analysis when one uses non-standard entropies like the Barrow entropy. The authors of [56], while trying to derive the field equations from the equilibrium Clausius relation corresponding to Barrow entropy, found that in order to satisfy energy-momentum conservation, the equilibrium Clausius relation must be replaced by an entropy balance relation containing an additional entropy change, which renders the thermodynamics of the system non-equilibrium. In fact, it was pointed out earlier by Eling et al. [57] that any curvature correction to the entropy demands a non-equilibrium thermodynamic treatment. Therefore, the Clausius relation is modified with an additional entropy production term \(dS_{p}\), so that \(\delta Q/\mathcal{T}=dS+dS_{p}\). This additional entropy production term is determined by imposing energy-momentum conservation, \(\nabla^{\mu}T_{\mu\nu}=0\). Following this non-equilibrium thermodynamic analysis in the context of Barrow entropy, the authors of [56] studied the \(\sigma_{8}\) tension and showed that the problem could be resolved to a certain extent. A similar analysis with Tsallis-modified gravity was carried out in [58], and a slight alleviation of the \(\sigma_{8}\) tension was claimed.
Following Eling et al., many works have appeared in which a non-equilibrium thermodynamic approach is adopted for gravity theories that contain higher-order curvature corrections. The authors of [59] have shown that the Friedmann equation corresponding to \(f(R)\) gravity can be formulated from a non-equilibrium thermodynamic relation of the form \(dE=TdS+WdV+TdS_{p}\). Likewise, such non-equilibrium thermodynamic perspectives have been considered for scalar-tensor and \(f(T)\) gravity theories [60, 61, 62] in order to obtain the corresponding Friedmann equations. Similarly, the authors of [63] obtained the law of emergence for a Kodama observer from non-equilibrium thermodynamic principles, in \(f(R)\) and scalar-tensor gravity theories. While deriving the field equations when the entropy associated with the horizon is of the Barrow form, Asghari and Sheykhi have shown that higher-order curvature corrections arise, which in turn necessitates a non-equilibrium thermodynamic treatment. Motivated by these works, we obtain the law of emergence from the modified first law of thermodynamics, which takes account of the additional entropy production that arises when one uses Barrow entropy for the horizon. In this process, we find that the law of emergence in the non-equilibrium context has a relatively simple form in terms of the areal volume, in contrast to its form in the corresponding equilibrium perspective, where an effective volume is needed to describe the emergence of cosmic space. Further, we show that the law of emergence in this context with Barrow entropy obeys the generalized second law of thermodynamics and satisfies the entropy maximization condition, so that the entropy is bounded at the end stage of evolution. The paper is organized as follows. In Section 2, the law of emergence is derived from the equilibrium thermodynamic relations for an n+1 dimensional non-flat universe. Section 3 addresses the necessity of the non-equilibrium treatment for non-standard entropies like the Barrow entropy; there, we restructure the form of the entropy and derive the law of emergence from the non-equilibrium description of the unified first law and the Clausius relation. In Section 4, we check the entropy maximization condition directly from the Barrow entropy relation and from the modified law of emergence obtained in both the equilibrium and non-equilibrium perspectives. A comparative analysis of the resulting laws and of the entropy maximization constraint relations is presented in the last section, where we also conclude the results. Throughout the paper, we use natural units \(k_{B}=c=\hbar=1\) for the sake of brevity. ## 2 Law of emergence from the equilibrium thermodynamic principle In this section, we obtain the law of emergence for an n+1 dimensional non-flat universe from the unified first law of thermodynamics, with Barrow entropy for the horizon. Let us consider a homogeneous and isotropic n+1 dimensional Friedmann-Lemaître-Robertson-Walker (FLRW) universe with background metric given by, \[dS^{2}=h_{\mu\nu}dx^{\mu}dx^{\nu}+\tilde{r}^{2}d\Omega^{2}, \tag{4}\] where \(\tilde{r}=a(t)r\), with \(a(t)\) the scale factor at time \(t\); \(\mu\) and \(\nu\) take the values 0 and 1 such that \((x^{0},x^{1})=(t,r)\), and \(h_{\mu\nu}=diag[-1,a^{2}/(1-kr^{2})]\) is the two dimensional metric with the spatial curvature parameter \(k\), which can take the values \([-1,0,+1]\) corresponding to an open, flat and closed universe, respectively.
Here \((r,\theta,\phi)\) are the co-moving coordinates. The apparent horizon of the universe satisfies the relation \(h^{\mu\nu}\partial_{\mu}\tilde{r}\partial_{\nu}\tilde{r}=0\), which in turn gives the expression for the radius of the apparent horizon as [18, 19], \[\tilde{r}_{{}_{A}}=\frac{1}{\sqrt{H^{2}+\frac{k}{a^{2}}}}, \tag{5}\] and its time rate is, \[\dot{\tilde{r}}_{{}_{A}}=-H\tilde{r}_{{}_{A}}^{3}\left(\dot{H}-\frac{k}{a^{2}}\right). \tag{6}\] Here \(H=\dot{a}/a\) is the Hubble parameter, and the overdot represents the derivative with respect to cosmic time. For a flat universe, the apparent horizon becomes the Hubble horizon with radius \(\tilde{r}_{{}_{A}}=1/H\). The temperature associated with the apparent horizon is \(T=\kappa/2\pi\) [20], where the surface gravity, \(\kappa=\frac{1}{2\sqrt{-h}}\partial_{a}(\sqrt{-h}h^{ab}\partial_{b}\tilde{r})\), takes the form [16, 64], \[\kappa=-\frac{1}{\tilde{r}_{{}_{A}}}\left(1-\frac{\dot{\tilde{r}}_{{}_{A}}}{2H\tilde{r}_{{}_{A}}}\right). \tag{7}\] Thus the horizon temperature becomes, \[T=-\frac{1}{2\pi\tilde{r}_{{}_{A}}}\left(1-\frac{\dot{\tilde{r}}_{{}_{A}}}{2H\tilde{r}_{{}_{A}}}\right). \tag{8}\] In the case of an expanding universe, \(\frac{\dot{\tilde{r}}_{{}_{A}}}{2H\tilde{r}_{{}_{A}}}<1\) [63], and hence the temperature reduces to, \[\mathcal{T}=-\frac{1}{2\pi\tilde{r}_{{}_{A}}}. \tag{9}\] Hayward obtained the unified first law of equilibrium thermodynamics for a black hole horizon as [7, 18], \[dE=A\psi+WdV, \tag{10}\] which can be applied to a cosmological apparent horizon [65, 66] as well. Here, \(E=\rho V\) is the Misner-Sharp energy contained within the apparent horizon of volume \(V=\Omega_{n}\tilde{r}_{{}_{A}}^{n}\), with \(\Omega_{n}=\frac{\pi^{n/2}}{\Gamma(\frac{n}{2}+1)}\) the volume of an n-sphere with unit radius, and \(\rho\) is the density of the cosmic fluid enclosed by the horizon. The scalar invariant \(W\) is the work density, defined by the relation \(W=-\frac{1}{2}T^{ab}h_{ab}\), which for a universe with a perfect fluid assumes the form, \[W=\frac{1}{2}(\rho-P), \tag{11}\] where \(P\) is the pressure of the fluid in the universe. The vector invariant \(\psi\) is interpreted as the energy flux density, which for a universe with a perfect fluid can be expressed as [67], \[\psi=\psi^{t}+\psi^{\tilde{r}_{{}_{A}}}=-(\rho+P)H\tilde{r}_{{}_{A}}dt+\frac{1}{2}(\rho+P)d\tilde{r}_{{}_{A}}, \tag{12}\] where \(\psi^{t}\) is the flux crossing the horizon per unit area during the infinitesimal interval of time \(dt\), during which the horizon is stationary, and \(\psi^{\tilde{r}_{{}_{A}}}\) is the flux density crossing the dynamic horizon. The heat energy \(\delta Q\) flowing through the horizon in an infinitesimal interval of time \(dt\), during which the horizon is considered to be stationary, is given by the Clausius relation, \(\delta Q=\mathcal{T}dS\), where \(\mathcal{T}\) is the temperature of the stationary horizon with fixed radius and \(dS\) is the entropy change. The corresponding energy change \(-d\tilde{E}\) inside the horizon satisfies \(\delta Q=-d\tilde{E}\). This energy change is, in turn, related to the flux density as \(d\tilde{E}=A\psi^{t}.\) Hence we have, \[-d\tilde{E}=\mathcal{T}dS=-A\psi^{t}. \tag{13}\] Taking account of these, the unified first law in equation (10) can then be rewritten as, \[dE=\frac{1}{2\pi\tilde{r}_{{}_{A}}}dS+A\psi^{\tilde{r}_{{}_{A}}}+WdV. 
\tag{14}\] The above equation can be multiplied throughout by \(\left(1-\dot{\tilde{r}}_{{}_{A}}/2H\tilde{r}_{{}_{A}}\right)\) and, using the conservation equation, it becomes, \[dE=-TdS+\frac{n\Omega_{n}\tilde{r}_{{}_{A}}^{n-1}\dot{\tilde{r}}_{{}_{A}}dt}{2}(\rho+P). \tag{15}\] Here \(T\) is the dynamic temperature, and the above equation can be solved for \(dS\) as, \[dS=-2\pi\Omega_{n}\tilde{r}_{{}_{A}}^{n+1}d\rho. \tag{16}\] This equation gives the change in entropy of the horizon in relation to the size of the horizon and the change in the density of the cosmic fluid within the horizon. We now assume Barrow entropy for the horizon, as given in equation (3), and obtain the change of entropy \(dS\) on the left-hand side of the above equation. Accordingly, we obtain, \[dS=\left(\frac{n\Omega_{n}}{A_{0}}\right)^{1+\Delta/2}(n-1)(1+\Delta/2)\,\tilde{r}_{{}_{A}}^{(n-1)(1+\Delta/2)-1}d\tilde{r}_{{}_{A}}. \tag{17}\] Substituting this into equation (16), along with \(d\rho\) from the conservation equation, \(\dot{\rho}+nH(\rho+P)=0\), we finally get, after a suitable integration, \[\tilde{r}_{{}_{A}}^{\frac{(n-1)\Delta-4}{2}}=\frac{8\pi G}{n(n-1)}\left(\frac{A_{0}}{n\Omega_{n}}\right)^{\Delta/2}\frac{(4-(n-1)\Delta)}{2+\Delta}\rho. \tag{18}\] Here we have set the integration constant to zero. It may be noted that the above equation, in combination with equation (5), is equivalent to the Friedmann equation [48]. However, our aim is to obtain the law of emergence. Multiplying the above equation by \(a^{2}\), differentiating with respect to cosmic time \(t\), and simplifying the result using the continuity equation, we arrive at, \[\tilde{r}_{{}_{A}}^{(n-1)(1+\Delta/2)}\,\dot{\tilde{r}}_{{}_{A}}=\left( \frac{2}{4-(n-1)\Delta}\right)\times\\ H\tilde{r}_{{}_{A}}\left[2\tilde{r}_{{}_{A}}^{(n-1)(1+\Delta/2)}+ \frac{2\pi\Omega_{n}}{(n-1)(2+\Delta)}\left(\frac{A_{0}}{n\Omega_{n}}\right)^ {1+\Delta/2}\tilde{r}_{{}_{A}}^{(n+1)}\left(4-(n-1)\Delta\right)((n-2)\rho+nP )\right]. \tag{19}\] To express the above equation in a form equivalent to the law of emergence, let us now restructure the Barrow entropy relation in equation (3) into the form \(S=\frac{A_{eff}}{4G}\), similar to the standard Bekenstein entropy. Here the effective area \(A_{{}_{eff}}\) assumes the form, \[A_{{}_{eff}}=\frac{(n\Omega_{n})^{1+\Delta/2}}{A_{0}^{\Delta/2}}\tilde{r}_{{}_{A}}^{(n-1)(1+\Delta/2)}. \tag{20}\] Following this, the corresponding effective volume change can be obtained as [26], \[\frac{dV_{{}_{eff}}}{dt}=\frac{\tilde{r}_{{}_{A}}}{n-1}\frac{dA_{{}_{eff}}}{dt}=\left(\frac{2+\Delta}{2}\right)\frac{(n\Omega_{n})^{1+\Delta/2}}{A_{0}^{\Delta/2}}\tilde{r}_{{}_{A}}^{(n-1)(1+\Delta/2)}\dot{\tilde{r}}_{{}_{A}}. \tag{21}\] In view of this, multiplying equation (19) throughout by the factor \(\left(\frac{2+\Delta}{2}\right)\frac{(n\Omega_{n})^{1+\Delta/2}}{A_{0}^{\Delta/2}}\), the left hand side of the resulting equation becomes the time rate of the effective volume. Taking account of this, we can recast equation (19) as, \[\alpha\frac{dV_{{}_{eff}}}{dt}=\frac{G\tilde{r}_{{}_{A}}}{H^{-1}}\left[\frac{ 2+\Delta}{4-(n-1)\Delta}\frac{n-1}{n-2}\frac{(n\Omega_{n})^{1+\Delta/2}}{GA_{ 0}^{\Delta/2}}\tilde{r}_{{}_{A}}^{(n-1)(1+\Delta/2)}+\frac{4\pi\Omega_{n}}{n -2}\tilde{r}_{{}_{A}}^{(n+1)}\left((n-2)\rho+nP\right)\right]. 
\tag{22}\] The above equation is the law of emergence with Barrow entropy as the horizon entropy, for an n+1 dimensional non-flat universe, where the surface degrees of freedom \(N_{surf}\) and the bulk degrees of freedom \(N_{bulk}\) are as follows, \[N_{surf}=\frac{2+\Delta}{4-(n-1)\Delta}\frac{n-1}{n-2}\frac{(n\Omega_{n})^{1+ \Delta/2}}{GA_{0}^{\Delta/2}}\tilde{r}_{{}_{A}}^{(n-1)(1+\Delta/2)},\quad N_{ bulk}=-\frac{4\pi\Omega_{n}}{n-2}\tilde{r}_{{}_{A}}^{(n+1)}\left((n-2)\rho+nP \right). \tag{23}\] Thus equation (22) acquires the conventional form of the law of emergence, \[\alpha\frac{dV_{{}_{eff}}}{dt}=\frac{G\tilde{r}_{{}_{A}}}{H^{-1}}\left(N_{surf }-N_{bulk}\right). \tag{24}\] We will now show that the same form as given in the above equation can also be derived from the alternative thermodynamic principle, the Clausius relation, which connects the energy flux through the horizon with the corresponding entropy change. Let us consider the Clausius relation as given in equation (13). With the appropriate horizon temperature (9), along with \(\psi^{t}\) from the flux density equation (12), this equation can be rewritten as, \[\frac{1}{2\pi\tilde{r}_{{}_{A}}}dS=\Omega_{n}\tilde{r}_{{}_{A}}^{n}nH(\rho+P). \tag{25}\] On using the horizon entropy change from equation (17), the above equation becomes, \[\tilde{r}_{{}_{A}}^{(n-1)(1+\Delta/2)-(n+2)}d\tilde{r}_{{}_{A}}=-\frac{4\pi \Omega_{n}}{(n-1)(2+\Delta)}\left(\frac{A_{0}}{n\Omega_{n}}\right)^{1+\Delta/2}d\rho, \tag{26}\] and on suitable integration, we get, \[\tilde{r}_{{}_{A}}^{\frac{(n-1)\Delta-4}{2}}=\frac{2\pi\Omega_{n}}{(n-1)(2+ \Delta)}\left(\frac{A_{0}}{n\Omega_{n}}\right)^{1+\Delta/2}\left(4-(n-1)\Delta \right)\rho. \tag{27}\] Further simplification, using the same algebra as for equation (19), leads to, \[\alpha\frac{dV_{{}_{eff}}}{dt}=\frac{G\tilde{r}_{{}_{A}}}{H^{-1}}\left[\frac{2 +\Delta}{4-(n-1)\Delta}\frac{n-1}{n-2}\frac{(n\Omega_{n})^{1+\Delta/2}}{G A_{0}^{\Delta/2}}\tilde{r}_{{}_{A}}^{(n-1)(1+\Delta/2)}+\frac{4\pi\Omega _{n}}{n-2}\tilde{r}_{{}_{A}}^{(n+1)}\left((n-2)\rho+nP\right)\right]. \tag{28}\] With the suitable forms of \(N_{bulk}\) and \(N_{surf}\), this becomes identical in form to equation (24). It may be noted that we have obtained the law of emergence in terms of the time rate of an effective volume (see equation (24)) instead of the areal volume; this reduces to the conventional areal volume when \(\Delta=0\), and the resulting law then takes a form identical to that proposed by Sheykhi [27] for an n+1 dimensional non-flat universe. Further, for \(n=3\) the law in turn reduces to the original form proposed by Padmanabhan for a 3+1 dimensional flat universe. ## 3 Law of emergence from the non-equilibrium thermodynamic principle In the previous section we obtained the law of emergence, taking the Barrow entropy for the horizon, from an equilibrium thermodynamic point of view. To facilitate this, we rearranged the Barrow entropy relation into a form similar to the Bekenstein entropy by defining an effective area \(A_{eff}\). Assuming the validity of the conventional conservation law, we obtained the law of emergence in terms of an effective volume \(V_{eff}\), using the first law of thermodynamics. However, when one adopts a non-Bekenstein type of entropy for the horizon, issues arise regarding the validity of the equilibrium thermodynamic approach.
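As a quick consistency check, the following sketch evaluates equation (27) in the Bekenstein limit. The identification \(A_{0}=4G\), which is implied by writing \(S=A_{eff}/4G\) with the effective area (20), is stated explicitly here as an assumption.

```latex
% Setting \Delta = 0, n = 3, \Omega_3 = 4\pi/3 and A_0 = 4G in (27):
\[
\tilde{r}_{A}^{-2}
 = \frac{2\pi\Omega_{3}}{2\cdot 2}\,\frac{A_{0}}{3\Omega_{3}}\cdot 4\,\rho
 = \frac{8\pi G}{3}\,\rho
\quad\Longrightarrow\quad
H^{2}+\frac{k}{a^{2}}=\frac{8\pi G}{3}\rho ,
\]
% i.e. the standard Friedmann equation, using (5) for the apparent horizon radius.
```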
Eling, Guedens and Jacobson [57] have shown that, when one tries to derive the field equations with a non-Bekenstein type entropy for the horizon, the equilibrium Clausius relation is no longer valid, because energy-momentum conservation, \(\nabla^{\mu}T_{\mu\nu}=0\), does not hold in such cases. They resolve this issue by replacing the conventional Clausius relation with an entropy balance relation [57, 68, 56], \[\delta S=\frac{\delta Q}{\mathcal{T}}+dS_{p}, \tag{29}\] where \(dS_{p}\) is the corresponding additional entropy produced in the system due to the non-equilibrium thermodynamic process. This procedure was adopted by Asghari and Sheykhi [56] for a universe with Barrow entropy for the horizon, and they derived the corresponding modified Einstein field equation (see equation 24 in [56]), given below in a rearranged form, \[G_{\mu\nu}=R_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu}=\frac{8\pi A_{0}^{1+\Delta/2}}{4 A^{\Delta/2}}\left[T_{\mu\nu}-\frac{\Delta}{(2+\Delta)}T_{\mu\nu}+\frac{A_{0}^{ -(1+\Delta/2)}}{2\pi}\left(\nabla_{\mu}\nabla_{\nu}A^{\Delta/2}-\Box A^{ \Delta/2}g_{\mu\nu}\right)\right]. \tag{30}\] The above equation can be conveniently expressed in a more suitable form as, \[G_{\mu\nu}=R_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu}=8\pi G_{{}_{eff}}T_{\mu\nu}^{{}^{ (eff)}}, \tag{31}\] where \(G_{eff}\) is the effective gravitational coupling strength, \[G_{{}_{eff}}=\frac{A_{0}^{1+\Delta/2}}{4A^{\Delta/2}}, \tag{32}\] and \(T_{\mu\nu}^{{}^{(eff)}}\) is the effective energy-momentum tensor, defined as the entire term inside the square bracket on the right hand side of equation (30). The tensor \(T_{\mu\nu}^{{}^{(eff)}}\) comprises the energy density and pressure of all the physical matter present in the universe, as well as the curvature part that arises due to the modified gravity. Assuming that all components of the effective energy-momentum tensor are of perfect fluid type, it can be expressed in a metric independent form as, \[T_{\nu}^{\mu(eff)}=Diag[-\rho_{{}_{eff}},P_{{}_{eff}},P_{{}_{eff}},P_{{}_{eff}}], \tag{33}\] where \(\rho_{{}_{eff}}\) and \(P_{{}_{eff}}\) are the effective energy density and the corresponding pressure. Owing to the Bianchi identities, the covariant derivative of equation (31) yields the generalized energy-momentum conservation, \(\nabla_{\mu}G_{\nu}^{\mu}=0=8\pi\nabla_{\mu}\left(G_{{}_{eff}}T_{\nu}^{\mu(eff)}\right)\). This implies the generalized continuity equation [67], \[\dot{\rho}_{{}_{eff}}+nH(\rho_{{}_{eff}}+P_{{}_{eff}})=-\frac{\dot{G}_{{}_{eff}} }{G_{{}_{eff}}}\rho_{{}_{eff}}. \tag{34}\] The right hand side of the above equation, \(-\frac{\dot{G}_{{}_{eff}}}{G_{{}_{eff}}}\rho_{{}_{eff}}\), has the dimension of an effective energy density flow, \(\dot{\rho}_{{}_{eff}}\), which balances the energy flow due to the non-equilibrium situation. The corresponding energy can then be defined as, \[\mathcal{E}=-\Omega_{n}\tilde{r}_{{}_{A}}^{n}\frac{\dot{G}_{{}_{eff}}}{G_{{}_{eff}}}\rho_{{}_{eff}}dt, \tag{35}\] which can be interpreted as the energy dissipated in the system due to the non-equilibrium thermal evolution. Having the field equation and the energy-momentum tensor, one can obtain the Friedmann equations; for an n+1 dimensional universe with curvature parameter \(k\), these equations take the form, \[\left(\dot{H}-\frac{k}{a^{2}}\right)=\frac{-8\pi G_{{}_{eff}}}{(n-1)}(\rho_{{}_{eff}}+P_{{}_{eff}})\quad;\quad\left(H^{2}+\frac{k}{a^{2}}\right)=\frac{16\pi G_{{}_{eff}}}{n(n-1)}\rho_{{}_{eff}}. 
\tag{36}\] In reference [67], the above equations are obtained from the principles of thermodynamics, the Clausius relation and the unified first law, by assuming an effective entropy, \[\tilde{S}=\frac{A}{4G_{{}_{eff}}}, \tag{37}\] with an effective coupling strength \(G_{{}_{eff}}\). Such a definition of the entropy is feasible because of the form of the field equation (31), written in terms of \(G_{{}_{eff}}\) as given in equation (32). We adopt this definition of the entropy for deriving the law of emergence from the non-equilibrium modified unified first law, and also from the entropy balance relation, for an n+1 dimensional universe with Barrow entropy for the horizon. ### From the corrected unified first law due to non-equilibrium behaviour In this section we derive the law of emergence from the unified first law of thermodynamics, which takes account of the energy dissipation due to the additional entropy production. Let us consider the law with an extra term, \(\mathcal{E}\), which arises from the energy dissipation of the non-equilibrium situation, \[dE_{{}_{eff}}=A\psi_{{}_{eff}}+W_{{}_{eff}}dV+\mathcal{E}, \tag{38}\] where \(E_{{}_{eff}}=\rho_{{}_{eff}}V\), with \(\rho_{{}_{eff}}\) the effective energy density and \(V=\Omega_{n}\tilde{r}_{{}_{A}}^{n}\) the volume, \(\Omega_{n}\) being the volume of an n-sphere with unit radius, so that \(dV=n\Omega_{n}\tilde{r}_{{}_{A}}^{n-1}d\tilde{r}_{{}_{A}}\). The effective flux energy density \(\psi_{{}_{eff}}\) can be defined in terms of the effective energy density and the corresponding pressure as, \[\psi_{{}_{eff}}=\psi_{{}_{eff}}^{t}+\psi_{{}_{eff}}^{\tilde{r}_{{}_{A}}}=-(\rho_{{}_{eff}}+P_{{}_{eff}})H\tilde{r}_{{}_{A}}dt+\frac{1}{2}(\rho_{{}_{eff}}+P_{{}_{eff}})d\tilde{r}_{{}_{A}}, \tag{39}\] and the effective work energy density is \(W_{{}_{eff}}=(\rho_{{}_{eff}}-P_{{}_{eff}})/2\). Here \(\psi_{{}_{eff}}^{t}\) is the effective flux crossing the horizon per unit area during the infinitesimal interval of time \(dt\), during which the horizon is stationary, and \(\psi_{{}_{eff}}^{\tilde{r}_{{}_{A}}}\) is the effective flux density crossing the dynamic horizon. The energy dissipation within the system causes an additional entropy production, and thus the Clausius relation will be modified as [67] \[\mathcal{T}d\tilde{S}+\mathcal{T}d\tilde{S}_{p}=-d\tilde{E}_{{}_{eff}}. \tag{40}\] Here \[d\tilde{E}_{{}_{eff}}=A\psi_{{}_{eff}}^{t}+\mathcal{E} \tag{41}\] is the flux energy across the apparent horizon during the interval of time \(dt\) [67]. We can then write \[\mathcal{T}d\tilde{S}+\mathcal{T}d\tilde{S}_{p}=-(A\psi_{{}_{eff}}^{t}+\mathcal{E}). \tag{42}\] The unified first law in (38) can then be rewritten as, \[dE_{{}_{eff}}=-\mathcal{T}d\tilde{S}-\mathcal{T}d\tilde{S}_{p}+A\psi_{{}_{eff}}^{\tilde{r}_{{}_{A}}}+W_{{}_{eff}}dV. \tag{43}\] Let us substitute the work energy density \(W_{{}_{eff}}\) and \(\psi_{{}_{eff}}^{\tilde{r}_{{}_{A}}}\) into the above equation and multiply throughout by the factor \(\left(1-\dot{\tilde{r}}_{{}_{A}}/2H\tilde{r}_{{}_{A}}\right)\); the resulting equation (43) can then be simplified by using the continuity equation, yielding an alternative form of the unified first law in the non-equilibrium case, \[dE_{{}_{eff}}=-T(d\tilde{S}+d\tilde{S}_{p})+(\rho_{{}_{eff}}-P_{{}_{eff}})\frac{n\Omega_{n}\tilde{r}_{{}_{A}}^{n-1}\dot{\tilde{r}}_{{}_{A}}}{2}dt+\mathcal{E}\frac{\dot{\tilde{r}}_{{}_{A}}}{2H\tilde{r}_{{}_{A}}}. \tag{44}\] Here \(T\) is the dynamical temperature of the horizon as given in equation (8).
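Before proceeding, a brief aside on the effective coupling may be helpful; the sketch below evaluates equation (32) in the familiar 3+1 dimensional case. The normalization \(A_{0}=4G\) assumed here is the same identification used when the Barrow entropy is written as \(S=A_{eff}/4G\), and it is stated as an assumption rather than taken from the equations above.

```latex
% For n = 3 the horizon area is A = 4\pi \tilde{r}_A^2, so (32) gives
\[
G_{eff}=\frac{A_{0}^{1+\Delta/2}}{4A^{\Delta/2}}
       =\frac{A_{0}}{4}\left(\frac{A_{0}}{A}\right)^{\Delta/2}
       =G\left(\frac{4G}{4\pi\tilde{r}_{A}^{2}}\right)^{\Delta/2}
       \qquad(\text{assuming } A_{0}=4G).
\]
% For \Delta = 0 this is the constant Newton coupling G; for \Delta > 0
% the effective coupling slowly decreases as the apparent horizon grows.
```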
Substituting for \(\mathcal{E}\) and after suitable rearrangements, we can express the entropy production term \(Td\tilde{S}_{p}\), due to the non-equilibrium behaviour, as, \[Td\tilde{S}_{p}=-Td\tilde{S}-\frac{n\Omega_{n}\tilde{r}_{{}_{A}}^{n-1}d\tilde{r}_{{}_{A}}}{2}\left[(\rho_{{}_{eff}}+P_{{}_{eff}})+\frac{\rho_{{}_{eff}}}{nH}\frac{\dot{G}_{{}_{eff}}}{G_{{}_{eff}}}\right]+\Omega_{n}\tilde{r}_{{}_{A}}^{n}\left[nH(\rho_{{}_{eff}}+P_{{}_{eff}})+\frac{\dot{G}_{{}_{eff}}}{G_{{}_{eff}}}\rho_{{}_{eff}}\right]dt. \tag{45}\] We now substitute \(d\tilde{S}\) in the above equation directly from the restructured Barrow entropy (37), which, after feeding back the horizon area \(A=n\Omega_{n}\tilde{r}_{A}^{n-1}\), reads, \[d\tilde{S}=\frac{n\Omega_{n}}{4}\left[\frac{(n-1)}{G_{eff}}\tilde{r}_{A}^{n-2} \dot{\tilde{r}}_{A}\,dt-\tilde{r}_{A}^{n-1}\frac{\dot{G}_{eff}}{G_{eff}^{2}}dt \right]. \tag{46}\] Hence we have, \[Td\tilde{S}_{p}=\left(1-\frac{\dot{\tilde{r}}_{A}}{2H\tilde{r}_{A}}\right)\frac{ n(n-1)\Omega_{n}\tilde{r}_{A}^{n-3}}{4\pi G_{eff}}\left[\dot{\tilde{r}}_{A}+ \frac{\tilde{r}_{A}}{4}\left(1-\frac{2}{n-1}\right)\frac{\dot{G}_{eff}}{G_{eff }}\right]dt. \tag{47}\] On substituting the temperature \(T\) from equation (8), we get the rate of additional entropy production, \[\frac{d\tilde{S}_{p}}{dt}=-\frac{n(n-1)\Omega_{n}\tilde{r}_{A}^{n-2}}{2G_{eff }}\left[\dot{\tilde{r}}_{A}+\frac{\tilde{r}_{A}}{4}\left(1-\frac{2}{n-1} \right)\frac{\dot{G}_{eff}}{G_{eff}}\right]. \tag{48}\] For \(n=3\), the entropy production rate becomes, \[\frac{d\tilde{S}_{p}}{dt}=-\frac{4\pi\tilde{r}_{A}\dot{\tilde{r}}_{A}}{G_{eff }}. \tag{49}\] From the Friedmann equations (36), together with the conservation relation (34), the time rate of the radius of the apparent horizon is always positive, i.e., \[\dot{\tilde{r}}_{A}=\frac{n}{2}H\tilde{r}_{A}(1+\omega_{{}_{eff}})\geq 0, \tag{50}\] where \(\omega_{{}_{eff}}=P_{{}_{eff}}/\rho_{{}_{eff}}\) is the effective equation of state parameter; for an accelerating universe which evolves toward an end de Sitter epoch, it satisfies \(\omega_{{}_{eff}}\geq-1.\) Hence, the additional entropy produced by the non-equilibrium process behaves as a decreasing function of time. Now, combining the unified first law (44) with equation (47), we get, \[\rho_{{}_{eff}}dV+Vd\rho_{{}_{eff}} = \frac{1}{2\pi\tilde{r}_{A}}\left(1-\frac{\dot{\tilde{r}}_{A}}{2H \tilde{r}_{A}}\right)\left\{\frac{n(n-1)\Omega_{n}\tilde{r}_{A}^{n-2}d\tilde{r }_{A}}{4G_{{}_{eff}}}-\frac{n\Omega_{n}\tilde{r}_{A}^{n-1}}{4}\frac{\dot{G}_{ {}_{eff}}}{G_{eff}^{2}}\right.\\ \left.-\,\frac{n(n-1)\Omega_{n}\tilde{r}_{A}^{n-2}}{2G_{eff}} \left[\dot{\tilde{r}}_{A}+\frac{\tilde{r}_{A}}{4}\left(1-\frac{2}{n-1}\right) \frac{\dot{G}_{{}_{eff}}}{G_{eff}}\right]\right\}dt+\frac{(\rho_{{}_{eff}}-P_{ {}_{eff}})}{2}dV-\Omega_{n}\tilde{r}_{A}^{n}\frac{\dot{G}_{{}_{eff}}}{G_{eff}}\rho_{{}_{eff}}\frac{\dot{\tilde{r}}_{A}}{2H\tilde{r}_{A}}dt.\] This equation can be simplified to the form, \[\frac{d\tilde{r}_{A}}{\tilde{r}_{A}^{3}}=\frac{8\pi G_{{}_{eff}}}{n-1}H(\rho_ {{}_{eff}}+P_{{}_{eff}})dt. \tag{52}\] Further, using the continuity equation, \(\rho_{{}_{eff}}+P_{{}_{eff}}=\frac{1}{nH}\left[-\frac{\dot{G}_{{}_{eff}}}{G_{ eff}}\rho_{{}_{eff}}-\dot{\rho}_{{}_{eff}}\right]\), the above equation can be written as, \[-\frac{d\tilde{r}_{A}}{\tilde{r}_{A}^{3}}=\frac{8\pi}{n(n-1)}\frac{d}{dt} \left(\rho_{{}_{eff}}G_{{}_{eff}}\right), \tag{53}\] which on integration gives, \[\frac{1}{\tilde{r}_{A}^{2}}=\frac{16\pi}{n(n-1)}\rho_{{}_{eff}}G_{{}_{eff}}, \tag{54}\] where we have neglected the integration constant.
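For orientation, the following sketch simply reads off the familiar 3+1 dimensional form of equation (54); it is the \(n=3\) evaluation of the result above and introduces nothing new.

```latex
% Setting n = 3 in (54) and using (5), \tilde{r}_A^{-2} = H^2 + k/a^2:
\[
\frac{1}{\tilde{r}_{A}^{2}}=\frac{16\pi}{3\cdot 2}\,\rho_{eff}\,G_{eff}
\quad\Longrightarrow\quad
H^{2}+\frac{k}{a^{2}}=\frac{8\pi G_{eff}}{3}\,\rho_{eff},
\]
% which is the n = 3 case of the second relation quoted in (36),
% with G replaced by the effective coupling G_eff.
```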
Multiplying equation (54) by \(a^{2}\) and differentiating with respect to time \(t\), we get \[2a\dot{a}\left(\frac{1}{\tilde{r}_{A}^{2}}-\frac{\dot{\tilde{r}}_{A}}{H\tilde{r }_{A}^{3}}\right)=\frac{16\pi}{n(n-1)}\left[2a\dot{a}G_{{}_{eff}}\rho_{{}_{eff}} +a^{2}\dot{G}_{{}_{eff}}\rho_{{}_{eff}}+a^{2}G_{{}_{eff}}\dot{\rho}_{{}_{eff}} \right]. \tag{55}\] On multiplying by \(H\tilde{r}_{A}^{3}/2a\dot{a}\) and employing the relation \(\dot{G}_{{}_{eff}}\rho_{{}_{eff}}=-G_{{}_{eff}}\dot{\rho}_{{}_{eff}}-nHG_{{}_{ eff}}(\rho_{{}_{eff}}+P_{{}_{eff}})\), we get \[-\dot{\tilde{r}}_{A}=-H\tilde{r}_{A}-\frac{16\pi}{n(n-1)}\frac{n}{2}H\tilde{r}_{A}^{3}G_ {{}_{eff}}(\rho_{{}_{eff}}+P_{{}_{eff}})+\frac{16\pi}{n(n-1)}H\tilde{r}_{A}^{3}G_ {{}_{eff}}\rho_{{}_{eff}}. \tag{56}\] This equation can be suitably rearranged as an expression for the rate of change of the volume of the horizon, by multiplying it throughout by \(n\Omega_{n}\tilde{r}_{A}^{n-1}.\) Thus we arrive at, \[\alpha\frac{dV}{dt}=\frac{G_{{}_{eff}}\tilde{r}_{A}}{H^{-1}}\left[\frac{n-1}{2(n -2)}\frac{n\Omega_{n}\tilde{r}_{A}^{n-1}}{G_{{}_{eff}}}+\frac{4\pi\Omega_{n} \tilde{r}_{A}^{n+1}}{n-2}\left((n-2)\,\rho_{{}_{eff}}+nP_{{}_{eff}}\right) \right], \tag{57}\] with \(\alpha=\frac{n-1}{2(n-2)}\). On identifying the number of degrees of freedom corresponding to the surface, \(N_{surf}\), and that of the bulk, \(N_{bulk}\), as, \[N_{surf}=\frac{n-1}{2(n-2)}\frac{n\Omega_{n}\tilde{r}_{{}_{A}}^{n-1}}{G_{{}_{ eff}}}\ \ \ \ ;\ \ \ \ N_{bulk}=-\frac{4\pi\Omega_{n}\tilde{r}_{{}_{A}}^{n+1}}{(n-2)}\left((n-2) \,\rho_{{}_{eff}}+nP_{{}_{eff}}\right), \tag{58}\] respectively, the law of emergence takes the form, \[\alpha\frac{dV}{dt}=\frac{G_{{}_{eff}}\tilde{r}_{{}_{A}}}{H^{-1}}\left[N_{surf }-N_{bulk}\right]. \tag{59}\] For \(n=3\), \(\alpha\) becomes unity, and the corresponding number of surface degrees of freedom is \(N_{surf}=4\pi\tilde{r}_{{}_{A}}^{2}/G_{{}_{eff}}\), while that of the bulk is \(N_{bulk}=-(16\pi^{2}\tilde{r}_{{}_{A}}^{4}/3)(\rho_{{}_{eff}}+3P_{{}_{eff}})\). ### From the Entropy balance relation due to the non-equilibrium behaviour In this section we derive the modified form of the law of emergence from the Clausius relation in the non-equilibrium perspective. Let us consider the Clausius relation, now termed the entropy balance relation, which arises from the non-equilibrium behaviour in (40); after substituting the energy flux change through the momentarily stationary apparent horizon given in equation (41), it can be expressed as, \[\mathcal{T}d\tilde{S}+\mathcal{T}d\tilde{S}_{p}=-(A\psi_{{}_{eff}}^{t}+ \mathcal{E}). \tag{60}\] By substituting the Barrow entropy change (46), \(\psi_{{}_{eff}}^{t}\) from equation (39), and the energy dissipation from equation (35), we get the time rate of the additional entropy produced due to the non-equilibrium behaviour, \[\frac{d\tilde{S}_{p}}{dt}=-\frac{n(n-1)\Omega_{n}\tilde{r}_{{}_{A}}^{n-2}}{2G _{{}_{eff}}}\left[\dot{\tilde{r}}_{{}_{A}}+\frac{\tilde{r}_{{}_{A}}}{4}\left( 1-\frac{2}{n-1}\right)\frac{\dot{G}_{{}_{eff}}}{G_{{}_{eff}}}\right]. \tag{61}\] It may be noted that the above equation coincides with the additional entropy production rate obtained in the previous section (see equation (48)) using the unified first law.
The Clausius relation can now be written as, \[\frac{-n(n-1)\Omega_{n}}{8\pi G_{{}_{eff}}}\tilde{r}_{{}_{A}}^{n-3 }\dot{\tilde{r}}_{{}_{A}}dt+\frac{n(n-1)\Omega_{n}}{16\pi}\tilde{r}_{{}_{A}}^ {n-2}\frac{\dot{G}_{{}_{eff}}}{G_{{}_{eff}}^{2}}dt+\frac{n(n-1)\Omega_{n}}{4 \pi G_{{}_{eff}}}\tilde{r}_{{}_{A}}^{n-3}d\tilde{r}_{{}_{A}}\\ =n\Omega_{n}\tilde{r}_{{}_{A}}^{n-1}(\rho_{{}_{eff}}+P_{{}_{eff}})H\tilde{r }_{{}_{A}}dt+\Omega_{n}\tilde{r}_{{}_{A}}^{n}\frac{\dot{G}_{{}_{eff}}}{G_{{}_{ eff}}}\rho_{{}_{eff}}dt, \tag{62}\] which reduces to the simple form, \[\frac{\dot{\tilde{r}}_{{}_{A}}}{H\tilde{r}_{{}_{A}}^{3}}dt=-\frac{8\pi G_{{}_ {eff}}}{n-1}\left[\rho_{{}_{eff}}+P_{{}_{eff}}\right]dt. \tag{63}\] Substituting from the continuity equation, \(\rho_{{}_{eff}}+P_{{}_{eff}}=\frac{1}{nH}\left[-\frac{\dot{G}_{{}_{eff}}}{G_{{}_{eff} }}\rho_{{}_{eff}}-\dot{\rho}_{{}_{eff}}\right]\), and integrating the resulting equation (neglecting the integration constant), we get, \[-\frac{d\tilde{r}_{{}_{A}}}{\tilde{r}_{{}_{A}}^{3}}=\frac{8\pi}{n(n-1)}\frac{d} {dt}\left(\rho_{{}_{eff}}G_{{}_{eff}}\right), \tag{64}\] which is the same as the corresponding equation obtained in the previous section. Hence, following the same steps as in the previous section, we get the same law of emergence as in equation (59), with the degrees of freedom of the same form as given in equation (58). It is to be noted that, even though the overall forms of the laws of emergence in the equilibrium and non-equilibrium perspectives, given in equations (24) and (59) respectively, are similar, there exist notable differences between them. The volume whose rate of change appears on the left hand side of the law turns out to be the effective volume in the case of the equilibrium evolution, while it is the simple areal volume of the apparent horizon in the non-equilibrium case. This is because the restructured form of the Barrow entropy in the non-equilibrium scenario resembles the conventional Bekenstein entropy relation with the ordinary horizon area. Moreover, there are substantial changes in the expressions for the degrees of freedom between the equilibrium and non-equilibrium situations; see equations (23) and (58). The surface degrees of freedom obtained in the non-equilibrium approach appear in a more convenient form. An additional difference is in the gravitational constant, which appears as the effective coupling \(G_{{}_{eff}}\), defined in equation (32), in the non-equilibrium thermodynamic situation. Analysing these results, and requiring energy-momentum conservation to hold, we can argue that a universe with Barrow entropy as the horizon entropy preferably behaves as a non-equilibrium thermodynamic system with some additional entropy production. ## 4 Maximization of horizon entropy In the cosmological context, based on the Hubble expansion history, the universe behaves as an ordinary macroscopic system that evolves towards a maximum entropy state, such that it satisfies [35, 69, 70, 71], \[\dot{S}\geq 0\ \ \text{always};\qquad\ddot{S}<0\ \ \text{at least during the later stages of evolution},\] where the overdot represents the derivative with respect to cosmic time. The first inequality expresses the increase in entropy of the universe as it expands; that is the generalized second law of thermodynamics. The second one implies that the entropy function satisfies the convexity condition, so that the entropy will be maximized at the end stage of the evolution.
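To make the two conditions above concrete, here is a simple toy illustration, not drawn from the paper, of an entropy history that satisfies both requirements.

```latex
% Toy example: S(t) = S_max (1 - e^{-t/\tau}) with constants S_max, \tau > 0.
\[
\dot{S}(t)=\frac{S_{max}}{\tau}\,e^{-t/\tau}\;\geq\;0
\quad\text{(generalized second law)},
\qquad
\ddot{S}(t)=-\frac{S_{max}}{\tau^{2}}\,e^{-t/\tau}\;<\;0
\quad\text{(convexity)},
\]
% so S(t) increases monotonically and saturates at the bounded value S_max.
```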
The total entropy comprises the entropy of all the cosmic components present inside the horizon together with that of the horizon. Since the horizon entropy is much larger than that of the cosmic components inside, the total entropy can be approximately taken to be equal to the entropy of the horizon [72]. Hence, here we restrict ourselves to analysing the evolution of the horizon entropy. We will first obtain the maximization condition directly from the Barrow entropy of the horizon. We will then obtain the same constraint relation from the modified law of emergence derived in the context of both equilibrium and non-equilibrium thermodynamics. For this we adopt the approach described in references [36, 37, 38]. Now, taking the Barrow entropy for the horizon (3), the time rate of the entropy (given in equation (17)) can be expressed as, \[\dot{S}=\left(\frac{n\Omega_{n}}{A_{0}}\right)^{1+\Delta/2}(n-1)(1+\Delta/2)\,\tilde{r}_{{}_{A}}^{(n-1)(1+\Delta/2)-1}\dot{\tilde{r}}_{{}_{A}}. \tag{65}\] Since \(\dot{\tilde{r}}_{A}\geq 0\) always (refer to equation (50)), the constraint relation \(\dot{S}\geq 0\) holds during the entire evolution of the universe. Thus, the generalized second law of thermodynamics is satisfied. To check the validity of the convexity condition, the second derivative of the entropy can be obtained as, \[\ddot{S}=\frac{n\Omega_{n}}{4G_{{}_{eff}}}(n-1)(1+\Delta/2)\tilde{r}_{{}_{A}}^{ n-3}\left[((n-2)+(n-1)\Delta/2)\,\dot{\tilde{r}}_{{}_{A}}^{2}+\tilde{r}_{{}_{A}} \ddot{\tilde{r}}_{{}_{A}}\right]. \tag{66}\] For the condition \(\ddot{S}<0\) to be valid, the quantity inside the square bracket on the right-hand side of the above equation must be less than zero, at least at the end stage of evolution. From equation (50), the second derivative of the radius of the apparent horizon takes the form, \[\ddot{\tilde{r}}_{{}_{A}}=\frac{n}{2}\tilde{r}_{{}_{A}}\left[\frac{n}{2}H^{2} (1+\omega_{{}_{eff}})^{2}+\dot{H}(1+\omega_{{}_{eff}})+H\dot{\omega}_{{}_{eff} }\right]. \tag{67}\] Here the effective equation of state \(\omega_{{}_{eff}}\) decreases as the universe expands. In the asymptotic limit, the universe evolves towards a de Sitter stage, for which \(\omega_{{}_{eff}}\rightarrow-1\), and hence the terms containing \(1+\omega_{{}_{eff}}\) go to zero. Since \(\omega_{{}_{eff}}\) is a decreasing function of cosmic time, \(\dot{\omega}_{{}_{eff}}\) will always be negative. This implies that, at the end stage of the evolution, \(\ddot{\tilde{r}}_{{}_{A}}<0\). This, in turn, implies that the convexity condition is satisfied if, \[\left|((n-2)+(n-1)\Delta/2)\,\dot{\tilde{r}}_{{}_{A}}^{2}\right|<\left|\tilde{ r}_{{}_{A}}\ddot{\tilde{r}}_{{}_{A}}\right|, \tag{68}\] holds, at least at the end stage of evolution. Since \(\omega_{{}_{eff}}\rightarrow-1\) asymptotically at late times, \(\dot{\tilde{r}}_{{}_{A}}\) on the left side of the inequality goes to zero (refer to equation (50)). Hence the above relation holds asymptotically, which implies that the convexity condition is satisfied. Thus, the horizon entropy will be maximized at least at the end stage of evolution. For 3+1 dimensions, the above inequality reduces to, \[|(1+\Delta)\dot{\tilde{r}}_{{}_{A}}^{2}|<|\tilde{r}_{{}_{A}}\ddot{\tilde{r}}_ {{}_{A}}|. \tag{69}\] ### Maximization condition from the law of emergence in equilibrium approach In this section, we obtain the constraint relation from the modified law of emergence in the context of equilibrium thermodynamics.
We consider the generalized form of the modified law of emergence derived from equilibrium thermodynamics, given in equation (24), and substitute for the left hand side of that equation, \(\frac{dV_{eff}}{dt}=\frac{4G\tilde{r}_{{}_{A}}}{n-1}\dot{S}.\) Here we have used equation (21), in which \(\dot{\tilde{r}}_{{}_{A}}\) is written in terms of \(\dot{S}\) (refer to equation (17)). Hence we get, \[\dot{S}=\frac{n-2}{2}H\left[N_{surf}-N_{bulk}\right], \tag{70}\] where \(N_{surf}\) and \(N_{bulk}\) are given in equation (23). In general, for an expanding universe, the rate of change of the cosmic volume is always positive, so the holographic discrepancy satisfies \(\left[N_{surf}-N_{bulk}\right]\geq 0\) always. Substituting the degrees of freedom corresponding to the surface and the bulk from equation (23) into the above expression, \(\dot{S}\) becomes, \[\dot{S}=\frac{(n-1)(2+\Delta)}{2}\left(\frac{n\Omega_{n}}{A_{0}}\right)^{(1+ \Delta/2)}\tilde{r}_{{}_{A}}^{(n-1)\Delta/2+n-2}\dot{\tilde{r}}_{{}_{A}}. \tag{71}\] This is the same as equation (65) in the previous section. Since \(\dot{\tilde{r}}_{{}_{A}}\geq 0\) for an expanding universe, the horizon entropy will never decrease, and hence the generalized second law of thermodynamics is satisfied. To check the maximization condition, we take the second derivative of equation (70), \[\ddot{S}=\frac{n-2}{2}\dot{H}(N_{surf}-N_{bulk})+\frac{n-2}{2}H\frac{d}{dt}(N_{ surf}-N_{bulk}). \tag{72}\] By substituting \(N_{surf}\) and \(N_{bulk}\), \[\ddot{S}=\frac{(n-1)(2+\Delta)}{2}\left(\frac{n\Omega_{n}}{A_{0}}\right)^{(1+ \Delta/2)}\tilde{r}_{{}_{A}}^{(n-1)(1+\Delta/2)-2}\left[((n-2)+(n-1)\Delta/2) \,\dot{\tilde{r}}_{{}_{A}}^{2}+\tilde{r}_{{}_{A}}\ddot{\tilde{r}}_{{}_{A}} \right]. \tag{73}\] It is to be noted that in the asymptotic limit \(\ddot{\tilde{r}}_{{}_{A}}<0\) (refer to equation (67)); hence the last product term inside the square bracket on the right hand side will be negative asymptotically. The condition for the validity of the entropy maximization during the last stage of the universe is therefore, \[\left|((n-2)+(n-1)\Delta/2)\,\dot{\tilde{r}}_{{}_{A}}^{2}\right|<\left|\tilde{ r}_{{}_{A}}\ddot{\tilde{r}}_{{}_{A}}\right|, \tag{74}\] which is exactly the constraint relation that we obtained directly from the Barrow entropy relation (refer to Section 4). ### Maximization condition from the law of emergence in non-equilibrium approach Here we obtain the constraint relation for entropy maximization from the modified form of the law of emergence due to the non-equilibrium behaviour. We adopt the same procedure as in the last section. Let us consider the generalized form of the law of emergence obtained in the non-equilibrium scenario, equation (59), in which \(dV/dt=n\Omega_{n}\tilde{r}_{A}^{n-1}\dot{\tilde{r}}_{A}\), with \(V=\Omega_{n}\tilde{r}_{A}^{n}\) the areal volume (compare equation (46)). We then have, \[\dot{\tilde{S}}=\frac{(n-2)(2+\Delta)}{4}H\left[N_{surf}-N_{bulk}\right]. \tag{75}\] By substituting \(N_{surf}\) and \(N_{bulk}\) from equation (58), along with the Friedmann equations (36), we then get, \[\dot{\tilde{S}}=\frac{n\Omega_{n}}{4G_{{}_{eff}}}(n-1)(1+\Delta/2)\tilde{r}_{{ }_{A}}^{n-2}\dot{\tilde{r}}_{{}_{A}}. \tag{76}\] The above equation satisfies the condition \(\dot{\tilde{S}}\geq 0\), since \(\dot{\tilde{r}}_{{}_{A}}\geq 0\) always, which implies that the generalized second law of thermodynamics is satisfied.
Now, in order to check the consistency of the entropy maximization condition, the second derivative of equation (75) is obtained, \[\ddot{\tilde{S}}=\frac{(n-2)(2+\Delta)}{4}\left[\dot{H}\left(N_{surf}-N_{bulk }\right)+H\frac{d}{dt}\left(N_{surf}-N_{bulk}\right)\right]. \tag{77}\] On substituting the degrees of freedom, we get, \[\ddot{\tilde{S}}=\frac{n\Omega_{n}}{4G_{{}_{eff}}}(n-1)(1+\Delta/2)\tilde{r}_{{ }_{A}}^{n-3}\left[((n-2)+(n-1)\Delta/2)\,\dot{\tilde{r}}_{{}_{A}}^{2}+\tilde{ r}_{{}_{A}}\ddot{\tilde{r}}_{{}_{A}}\right]. \tag{78}\] For the entropy to be maximized, \(\ddot{\tilde{S}}<0\) in the long run, for which, as argued in the last subsection, \(\ddot{\tilde{r}}_{{}_{A}}<0\) at least in the far future, and which in turn implies that, \[\left|((n-2)+(n-1)\Delta/2)\,\dot{\tilde{r}}_{{}_{A}}^{2}\right|<\left|\tilde{ r}_{{}_{A}}\ddot{\tilde{r}}_{{}_{A}}\right|, \tag{79}\] should be satisfied. This is exactly the constraint relation obtained in the equilibrium description. Since we have \(\ddot{\tilde{r}}_{{}_{A}}<0\) and \(\dot{\tilde{r}}_{{}_{A}}\to 0\) at least at late times (refer to equations (50) and (67)), the negativity of \(\ddot{\tilde{S}}\) is evident at least at the later stages of evolution, and the entropy maximization condition is ensured. It is to be noted from the previous discussion that the constraint relations for entropy maximization obtained from the equilibrium perspective, from the non-equilibrium perspective, and directly from the Barrow entropy relation are the same. The reason for this identical nature of the constraints, especially between the equilibrium and non-equilibrium situations, can be understood as follows. The rate of production of additional entropy is negative (refer to equations (48) and (61)), and hence the generation of additional entropy decreases as the universe evolves, so that the universe finally approaches a state of equilibrium. ## 5 Discussion and Conclusion The profound connection between gravity and thermodynamics has led to the exciting speculation that gravity could be an emergent phenomenon. Following this, Padmanabhan put forward the notion that the expansion of the universe can be considered as the emergence of cosmic space with cosmic time. Accordingly, Padmanabhan proposed the law of emergence, which states that cosmic expansion is driven by the holographic discrepancy. Later, the law of emergence was derived from the fundamental principles of thermodynamics, and it was shown to imply the maximization of entropy at the end stage of the evolution. In these works the entropy of the apparent horizon is assumed to be of the Bekenstein form. In the present work we have first derived the modified law of emergence for an (n+1) dimensional universe under equilibrium conditions, from thermodynamic principles (the first law of thermodynamics and also the Clausius relation), by assuming Barrow entropy for the horizon. The derived law differs in its exact form from the corresponding form proposed (not derived) in a recent paper [73]. For deriving the law, we have rearranged the Barrow entropy into a more suitable form, \(S=A_{eff}/4G\). In the appropriate limit, \(\Delta=0\) and \(n=3\), the resulting law reduces to the conventional law in (3+1) dimensions, in which the left-hand side contains the time rate of the areal volume within the apparent horizon.
We have analysed the maximisation of the horizon entropy and obtained the constraint relation both directly from the Barrow entropy relation and from the modified law under the equilibrium condition; the two are found to be the same, as expected. We have also analysed the generalised second law of thermodynamics and found that it is satisfied throughout the evolution of the universe. Despite this analysis under equilibrium conditions, it is to be noted that, while deriving the Einstein field equations from the Clausius relation with Barrow entropy for the horizon, it is found that the equilibrium Clausius relation is not valid, owing to the generation of additional entropy during the evolution. This demands a non-equilibrium treatment, and hence the equilibrium Clausius relation is replaced with an entropy balance relation, which incorporates the additional entropy. The corresponding modified field equation, obtained by restructuring the Barrow entropy as \(S=A/4G_{{}_{eff}}\), analogous to the Bekenstein formula, leads to an effective energy-momentum tensor and an effective gravitational coupling strength. Following this, we obtained the modified form of the law of emergence for an n+1 dimensional non-flat universe, with the time rate of the areal volume on the left-hand side, and the effective coupling parameter together with the holographic discrepancy on the right hand side. Comparing the resulting law of emergence in the non-equilibrium perspective with that obtained in the equilibrium perspective, the following may be noted. Even though the overall appearance of the general forms is alike, there are several differences. In the law derived from non-equilibrium thermodynamics, the surface degrees of freedom naturally arise in the conventional form proposed by Sheykhi for an n+1 dimensional non-flat universe, and reduce to the standard form proposed by Padmanabhan for a 3+1 dimensional flat universe. We checked the consistency of the modified law of emergence derived from the non-equilibrium description of the thermodynamic principles and found that the constraint relation corresponding to the maximisation condition is the same as the constraint obtained in the equilibrium approach, as well as the one extracted directly from the entropy relation. The reason for this similarity can be understood by analysing the form of the additional entropy production, which drives the system out of equilibrium. That is, the additional entropy production rate decreases over time and, in the asymptotic limit, approaches zero. This guarantees that the universe approaches equilibrium behaviour at the end stage of the evolution. In summary, treating the universe with Barrow entropy for the horizon makes it behave as a thermodynamic system that is out of equilibrium, with an additional entropy production, but it evolves towards an equilibrium scenario at the end stage of evolution by maximising the entropy. ## Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. ## Data availability No data was used for the research described in the article. ## Acknowledgment We are thankful to Dheepika M, Vishnu A Pai and Hassan Basari V T for their valuable suggestions and discussions. Nandhida Krishnan P is thankful to CUSAT and the Government of Kerala for financial assistance.
2303.15608
Overcoming Probabilistic Faults in Disoriented Linear Search
We consider search by mobile agents for a hidden, idle target, placed on the infinite line. Feasible solutions are agent trajectories in which all agents reach the target sooner or later. A special feature of our problem is that the agents are $p$-faulty, meaning that every attempt to change direction is an independent Bernoulli trial with known probability $p$, where $p$ is the probability that a turn fails. We are looking for agent trajectories that minimize the worst-case expected termination time, relative to competitive analysis. First, we study linear search with one deterministic $p$-faulty agent, i.e., with no access to random oracles, $p\in (0,1/2)$. For this problem, we provide trajectories that leverage the probabilistic faults into an algorithmic advantage. Our strongest result pertains to a search algorithm (deterministic, aside from the adversarial probabilistic faults) which, as $p\to 0$, has optimal performance $4.59112+\epsilon$, up to the additive term $\epsilon$ that can be arbitrarily small. Additionally, it has performance less than $9$ for $p\leq 0.390388$. When $p\to 1/2$, our algorithm has performance $\Theta(1/(1-2p))$, which we also show is optimal up to a constant factor. Second, we consider linear search with two $p$-faulty agents, $p\in (0,1/2)$, for which we provide three algorithms of different advantages, all with a bounded competitive ratio even as $p\rightarrow 1/2$. Indeed, for this problem, we show how the agents can simulate the trajectory of any $0$-faulty agent (deterministic or randomized), independently of the underlying communication model. As a result, searching with two agents allows for a solution with a competitive ratio of $9+\epsilon$, or a competitive ratio of $4.59112+\epsilon$. Our final contribution is a novel algorithm for searching with two $p$-faulty agents that achieves a competitive ratio $3+4\sqrt{p(1-p)}$.
Konstantinos Georgiou, Nikos Giachoudis, Evangelos Kranakis
2023-03-27T21:32:45Z
http://arxiv.org/abs/2303.15608v1
# Overcoming Probabilistic Faults in Disoriented Linear Search + ###### Abstract We consider search by mobile agents for a hidden, idle target, placed on the infinite line. Feasible solutions are agent trajectories in which all agents reach the target sooner or later. A special feature of our problem is that the agents are \(p\)-faulty, meaning that every attempt to change direction is an independent Bernoulli trial with known probability \(p\), where \(p\) is the probability that a turn fails. We are looking for agent trajectories that minimize the worst-case expected termination time, relative to the distance of the hidden target to the origin (competitive analysis). Hence, searching with one \(0\)-faulty agent is the celebrated linear search (cow-path) problem that admits optimal \(9\) and \(4.59112\) competitive ratios, with deterministic and randomized algorithms, respectively. First, we study linear search with one deterministic \(p\)-faulty agent, i.e., with no access to random oracles, \(p\in(0,1/2)\). For this problem, we provide trajectories that leverage the probabilistic faults into an algorithmic advantage. Our strongest result pertains to a search algorithm (deterministic, aside from the adversarial probabilistic faults) which, as \(p\to 0\), has optimal performance \(4.59112+\epsilon\), up to the additive term \(\epsilon\) that can be arbitrarily small. Additionally, it has performance less than \(9\) for \(p\leq 0.390388\). When \(p\to 1/2\), our algorithm has performance \(\Theta(1/(1-2p))\), which we also show is optimal up to a constant factor. Second, we consider linear search with two \(p\)-faulty agents, \(p\in(0,1/2)\), for which we provide three algorithms of different advantages, all with a bounded competitive ratio even as \(p\to 1/2\). Indeed, for this problem, we show how the agents can simulate the trajectory of any \(0\)-faulty agent (deterministic or randomized), independently of the underlying communication model. As a result, searching with two agents allows for a solution with a competitive ratio of \(9+\epsilon\) (which we show can be achieved with arbitrarily high concentration) or a competitive ratio of \(4.59112+\epsilon\). Our final contribution is a novel algorithm for searching with two \(p\)-faulty agents that achieves a competitive ratio \(3+4\sqrt{p(1-p)}\), with arbitrarily high concentration. Keywords:Linear Search Probabilistic Faults Mobile Agents ## 1 Introduction Linear search refers to the problem of searching for a point target which has been placed at an unknown location on the real line. The searcher is a mobile agent that can move with maximum speed \(1\) and is starting the search at the origin of the real line. The goal is to find the target in minimum time. This search problem provides a paradigm for understanding the limits of exploring the real line and has significant applications in mathematics and theoretical computer science. In the present paper we are interested in linear search under a faulty agent which is disoriented in that when it attempts to change direction not only it may fail to do so but also cannot recognize that the direction of movement has changed. More precisely, for some \(0\leq p\leq 1\), a successful turn occurs with probability \(1-p\) but the agent will not be able to recognize this until it has visited an anchor, a known, preassigned point, placed on the real line. 
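For intuition about the cost of faulty turns, the following is a small illustrative calculation, not taken from the paper: since each turn attempt is an independent Bernoulli trial that succeeds with probability \(1-p\), the number of attempts needed for one successful turn is geometrically distributed.

```latex
% Number of attempts N until the first successful turn: N ~ Geometric(1-p).
\[
\Pr[N=k]=p^{\,k-1}(1-p),\qquad
\mathbb{E}[N]=\frac{1}{1-p}.
\]
% E.g. for p = 1/3 an agent needs 1.5 attempts on average,
% while as p -> 1/2 the expected number of attempts approaches 2.
```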
Despite this faulty behaviour of the agent it is rather surprising that it is possible to design algorithms which outperform the well-known zig-zag algorithm whose competitive ratio is \(9\). ### Related Work Search by a single agent on the real line was initiated independently by Bellman [9] and Beck [6, 7, 8] almost 50 years ago; the authors prove the well known result that a single searcher whose max speed is 1 cannot find a hidden target placed at an initial distance \(d\) from the searcher in time less than \(9d\). These papers gave rise to numerous variants of linear search. Baeza-Yates et. al. [3, 4] study search problems by agents in other environments, e.g. in the plane or starting at the origin of \(w\) concurrent rays (also known as the "Lost Cow" problem). Group search was initiated in [11] where evacuation (a problem similar to search but one minimizing the time it takes for the last agent to reach the target) by multiple agents that can communicate face-to-face was studied. An extension to the problem, where one tries to minimize the weighted average of the evacuation times was studied in [17]. There is extensive literature on this topic and [12] provides a brief survey of more recent topics on search. Linear search with multiple agents some of which may be faulty, Crash or Byzantine, was initiated in the work of [15] and [13], respectively. In this theme, one uses the power of communication in order to overcome the presence of faults. For three agents one of which is Byzantine, [21] shows that the proportional schedule presented in [15] can be analyzed to achieve an upper bound of \(8.653055\). Recently, [14] gives a new class of algorithms for \(n\) agents when the number of Byzantine faulty among them is near majority, and the best known upper bound of \(7.437011\) on an infinite line for three agents one of which is Byzantine. The present paper focuses on probabilistic search. The work of Bellman [9] and Beck [6, 7, 8], also mentioned above, has probabilistic focus. In addition numerous themes on probabilistic models of linear search can be found in the book [2] of search games and rendezvous, as well as in [1, 20]. Search which takes into account the agent's turning cost is the focus of [2][Section 8.4] as well as the paper [16]. Search with uncertain detection is studied in [2][Section 8.6]. According to this model the searcher is not sure to find the target when reaching it; instead it is assumed that the probability the searcher will find it on its \(k\)-th visit is \(p_{k}\), where \(\sum_{k\geq 0}p_{k}=1\). A particular case of this is search with geometric detection probability [2][Section 8.6.2] in which the probability of finding the target in the \(k\)-th visit is \((1-p)^{k-1}p\). [19] investigates searching for a one-dimensional random walker and [5] is concerned with rendezvous search when marks are left at the starting points. In another result pertaining to different kind of probabilistic faults,[10] studies the problem on the half-line (or 1-ray), where detecting the target exhibits faults, i.e. every visitation is an independent Bernoulli trial with a known probability of success \(p\). Back to searching the infinite line, a randomized algorithm with competitive ratio \(4.59112\) for the cow path problem can be found in [18] and is also shown to be optimal. In a strong sense, the results in this work are direct extensions of the optimal solutions for deterministic search in [3] and for randomized search in [18]. 
To the best of our knowledge the linear search problem considered in our paper has never been investigated before. We formally define our problem in Section 2. Then in Section 3 we elaborate further on the relevance of our results to [3] and [18]. ## 2 Model & Problem Definition (\(p\)-Pdls) We introduce and study the so-called Probabilistically Disoriented Linear Search problem (or \(p\)-PDLS, for short), associated with some probability \(p\). We generalize the well studied linear search problem (also known as cow-path) where the searcher's trajectory decisions exhibit probabilistic faults. The value \(p\) will quantify a notion of probabilistic failure (disorientation). In \(p\)-PDLS, an agent (searcher) can move at unit speed on an infinite line, where any change of direction does not incur extra cost. On the line there are two points, _distinguishable_ from any other point. Those points are the _origin_, i.e. the agent's starting location, and the _target_, which is what the agent is searching for and which can be detected when the agent walks over it. The agents have a faulty behaviour. If the agent tries to change direction (even after stopping), then with known probability \(p\) the agent will fail and she will still move towards the same direction. Consequent attempts to change direction are independent Bernoulli trials with probability of success \(1-p\). Moreover the agent is _oblivious_ to the result of each Bernoulli trial, i.e. the agent is not aware if it manages to change direction. We think of this probabilistic behaviour as a co-routine of the agent that fails to be executed with probability \(p\). An agent which satisfies this property for a given \(p\) is called \(p\)-faulty. Moreover we assume, for the sake of simplicity, that at the very beginning the \(p\)-faulty agent starts moving to a specific direction, without fault. The agent's faulty behaviour is compensated by that it can utilize the origin and the target to recover its perception of orientation. Indeed, suppose that the agent passes over the origin and after time \(1\) it decides to change direction. In additional time \(1\), the agent has either reached the origin, in which case it realizes it turned successfully, or it does not see the origin, in which case it detects that it failed to turn. We elaborate more on this idea later. A solution to \(p\)-PDLS is given by the agent's trajectory (or agents' trajectories), i.e. instructions for the agent(s) to turn at specific times which may depend on previous observations (e.g. visitations of the origin and when they occurred). A _feasible trajectory_ is a trajectory in which every point on the infinite line is visited (sooner or later) with probability \(1\) (hence a \(1\)-faulty agent admits no feasible trajectory). For a point \(x\in\mathbb{R}\) (target) on the line, we define the termination cost of a feasible trajectory in terms of competitive analysis. Indeed, if \(E(x)\) denotes the expected time that target \(x\) is reached by the last agent (so by the only agent, if searching with one agent), where the expectation is over the probabilistic faults or even over algorithmic randomized choices, then the termination cost for target (input) \(x\) is defined as \(E(x)/|x|\). The competitive ratio of the feasible trajectory is defined then as \(\limsup_{x}E(x)/|x|\). 
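To make the turn-and-anchor mechanics concrete, the following is a minimal simulation sketch of a single turn attempt next to the origin (our own illustration, not code from the paper; the one-time-unit distance and function name are assumptions): the agent attempts a turn, walks for one more time unit, and infers the outcome of the Bernoulli trial from whether it is back at the origin.

```python
import random

def attempt_turn_near_origin(p: float, rng: random.Random) -> bool:
    """One turn attempt of a p-faulty agent that is one time unit past the origin.
    The turn fails with probability p; the agent only learns the outcome after
    walking for one more time unit and checking whether the origin (the anchor)
    has been reached again."""
    position = 1.0                       # one time unit past the origin
    turned = rng.random() >= p           # the turn succeeds with probability 1 - p
    position += -1.0 if turned else 1.0  # keep walking in the (possibly unchanged) direction
    back_at_origin = abs(position) < 1e-12
    return back_at_origin                # True exactly when the turn actually succeeded

rng = random.Random(1)
samples = [attempt_turn_near_origin(0.3, rng) for _ in range(100_000)]
print(sum(samples) / len(samples))       # ~0.7, i.e. 1 - p
```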
For the sake of simplicity, our definition deviates from the standard definition of the competitive ratio for linear search in which the performance is defined as the supremum over \(x\) with absolute value bounded by a constant \(d\), usually \(d=1\). However it can be easily seen that the two measures differ by at most \(\epsilon\), for any \(\epsilon>0\) using a standard re-scaling trick (see for example [17]) that shows why the value of \(d\) is not important, rather what is important is that \(d\) is known to the algorithm. _Specifications when searching with two agents:_ When searching with two \(p\)-faulty agents, the value of \(p\) is common to both, as well as the probabilistic faults they exhibit are assumed to be independent Bernoulli trials. The search by two \(p\)-faulty agents can be done either in the wireless or the face-to-face model. In the former model, we assume that agents are able to exchange messages instantaneously, whereas in the face-to-face model messages can be exchanged only when the agents are co-located. In either communication model, we assume that the two agents can detect that (and when) they meet. As a result, we naturally assume that upon a meeting, a \(p\)-faulty agent can also act as a _distinguished point_ (same as the origin), hence helping the other agent to turn. Later, we will call an agent who facilitates the turn _Follower_, and the agent who performs the turn _Leader_. As long as the agents have a way to resolve the two roles (which will be built in to our algorithms), we also assume that the Leader moving in any direction can "pick up" another faulty agent she meets so that the two continue moving in that direction. This means that two agents meeting at a point can continue moving to the direction of the leader (with probability \(1\)) even if the non-leader is idle. This property is motivated by that, effectively, the leader does not change direction, and hence there is no risk to make a mistake. Finally we note that two \(p\)-faulty agents can move at speed at most \(1\), independently of each other, as well as any of them can slow down or even stay put, complying still with the faulty turn specifications. ### Notes on Algorithmic and Adversarial Randomness The algorithms (feasible trajectories) that we consider are either deterministic or randomized, independently of the randomness induced by the faultiness. In particular, the efficiency measure is defined in the same way, where any expectations are calculated over the underlying probability space (induced by the combination of probabilistic faults and the possible randomized algorithmic choices). Moreover, if the algorithm is randomized then an additional random mechanism is used that is independent of the faulty behaviour. For example, a randomized agent (algorithm) can choose a number between \(0\) and \(1\) uniformly at random. A \(p\)-faulty agent that has access to a random oracle will be called _randomized_, and _deterministic_ otherwise (but both exhibit probabilistically failed turns). It follows by our definitions that \(0\)-PDLS with one deterministic agent is the celebrated linear search problem (cow-path) which admits a provably optimal trajectory of competitive ratio \(9\)[3]. In the other extreme, \(1\)-PDLS does not admit a feasible solution, since the agent moves indefinitely along one direction. In a similar spirit we show next in Lemma 1 that the problem is meaningful only when \(p<1/2\), see Section A for proof. 
**Lemma 1**.: _No trajectory for \(p\)-PDLS has bounded competitive ratio when \(p\geq 1/2\)._ It is essential to note that in our model, the probabilistic faulty turns of an agent hinder the control of the trajectory, but they also introduce uncertainty about the algorithmic strategy for the adversary. As a result, the probabilistic movement of the agent (as long as \(p>0\)), even though it is not controlled by the algorithm, can be interpreted as an algorithmic choice that is set in stone. Therefore, the negative result of [18] implies the following lower bound for our problem. Corollary 1.: _For any \(p\in(0,1/2)\), no solution for \(p\)-\(\mathrm{PDLS}\) with one agent (deterministic or randomized) has competitive ratio lower than 4.59112._ ## 3 Contributions' Outline and Some Preliminary Results Our main results in this work pertain to upper bounds for the competitive ratio that (one or two) faulty agents can achieve for \(p\)-\(\mathrm{PDLS}\). ### Results Outline for Searching with One Faulty Agent We start our exposition with search algorithms for one \(p\)-faulty agent. In Section 4 we analyze the performance of a deterministic search algorithm, whose performance is summarized in Theorem 2 on page 4. The section serves as a warm-up for the calculations we need for our main result when searching with one randomized \(p\)-faulty agent. Indeed, in Section 5 we present a _randomized_ algorithm whose performance is summarized in Theorem 3 on page 4. The reader can see the involved formulas in the formal statements of the theorems, so here we summarize our results graphically in Figure 1.
Figure 1: Graphical summary of the positive results pertaining to searching with one \(p\)-faulty agent, \(p\in(0,1/2)\).
Some important observations are in order. _Comments on the expansion factors:_ We emphasize that both our algorithms are adaptations of the standard zig-zag algorithms of [3] and [18], which are optimal for the deterministic and randomized model, respectively, when searching with a \(0\)-faulty agent. The zig-zag algorithms are parameterized by the so-called _expansion factor_ \(g\) that quantifies the rate by which the searched space is increased in each iteration in which the searcher changes direction. In our case, however, we are searching with \(p\)-faulty agents, where \(p>0\), and as a result, the algorithmic choice for how the searched space expands cannot be fully controlled (since turns are subject to probabilistic faults). This is the reason that, in our case, the analyses of these algorithms are highly technical, which is also one of our contributions. On a relevant note, the optimal expansion factor for the optimal deterministic \(0\)-faulty agent is \(2\), while the expansion factors \(g=g(p)\) we use for the deterministic \(p\)-faulty agent, \(p\in(0,1/2)\), are decreasing in \(p\). As \(p\to 0\) we use expansion factor \(1+\sqrt{2}\), and the expansion factor drops to \(2\) for all \(p\geq 0.146447\). When it comes to our randomized algorithm, the chosen expansion factor is again decreasing in \(p\), starting from the same choice as for the optimal randomized \(0\)-faulty agent of [18], and being equal to \(2\) for all \(p\geq 0.241516\).3 Footnote 3: For simplicity, we only give numerical bounds on \(p\). All mentioned bounds of \(p\) in this section have explicit algebraic representations that will be discussed later. The expansion factors are depicted in Figure 1(b). 
_Comments on the established competitive ratios:_ By the proof of Lemma 1 it follows that as \(p\to 1/2\), the optimal competitive ratio for \(p\text{-}\mathrm{PDLS}\) is of order \(\Omega(1/(1-2p))\). Hence for the sake of better exposition, we depict in Figure 1(a) the established competitive ratios scaled by \(1/2-p\). Moreover, the results are optimal up to a constant factor when \(p\to 1/2\). It is also interesting to note that for small enough values of \(p\), the established competitive ratios are better than the celebrated optimal competitive ratio \(9\) for \(0\text{-}\mathrm{PDLS}\). This is because our algorithms leverage the probabilistic faults to their advantage, making the adversarial choices weaker. Indeed, algebraic calculations show that the competitive ratios of the deterministic and the randomized algorithms are less than \(9\) when \(p\leq 0.390388\) and when \(p\leq 0.436185\), respectively. Moreover, when searching with one \(p\)-faulty agent and \(p\to 0\), the competitive ratio of our deterministic algorithm tends to \(6.82843<9\) and of our randomized algorithm to the provably optimal competitive ratio of \(\frac{1}{W\left(\frac{1}{e}\right)}+1\approx 4.59112\), where \(W(\cdot)\) is the Lambert W-Function. It is important to also note that for deterministic algorithms only, there is no continuity at \(p=0\), since when the agent exhibits no faults, the adversary has certainty over the chosen trajectory and hence is strictly more powerful. Values of \(p\) close to \(1/2\) give rise to interesting observations too. Indeed, it is easy to see in Figure 1(a) that the derived competitive ratios are of order \(\Theta\left(1/(1-2p)\right)\). More interestingly, the difference of the established competitive ratios of the deterministic and the randomized algorithms is \(\Theta\left(1/(1-2p)\right)\) too, when \(p\to 1/2\). Hence the improvement when utilizing controlled (algorithmic) randomness is significant. We ask the following critical question: _"Does access to a random oracle provide an advantage for \(p\text{-}\mathrm{PDLS}\) with one agent?"_ Somewhat surprisingly, we answer the question in the _negative_! More specifically, we show, for all \(p\in(0,1/2)\) and using the probabilistic faults to our advantage, how a deterministic \(p\)-faulty agent (deterministic algorithm) can simulate a randomized \(p\)-faulty agent (randomized algorithm). Hence our improved upper bound of Theorem 3, which is achieved by a randomized algorithm, can actually be simulated by a deterministic \(p\)-faulty agent. The proof of this claim relies on the fact that our randomized algorithm assumes access to a randomized oracle that samples only from uniform distributions a finite number of times (in fact only \(2\)). The main idea is that a deterministic agent can stay arbitrarily close to the origin, sampling arbitrarily many random bits using its faulty turns, allowing her to simulate queries to a random oracle. The details of the proof of the next theorem appear in Section B. Theorem 1: _For any \(p\in(0,1/2)\), let \(c\) be the competitive ratio achieved by a randomized faulty agent for \(p\text{-}\mathrm{PDLS}\), having access (finitely many times) to an oracle sampling from the uniform distribution. 
Then for every \(\varepsilon>0\), there is a deterministic \(p\)-faulty agent for the same problem with competitive ratio at most \(c+\varepsilon\)._ ### Results Outline for Searching with Two Faulty Agents We conclude our contributions in Section 6 where we study \(p\text{-}\mathrm{PDLS}\) with two agents. First we show how two (deterministic) \(p\)-faulty agents (independently of the underlying communication model) can simulate the trajectory of any (one) \(0\)-faulty agent. As an immediate corollary, we derive in Theorem 4 a method for finding the target with two \(p\)-faulty agents that has competitive ratio \(9+\varepsilon\), for every \(\varepsilon>0\). Most importantly, as long as \(p>0\), the result holds not only in expectation, but with arbitrarily large concentration. Motivated by similar ideas, we show in Theorem 5 how two deterministic \(p\)-faulty agents can simulate the celebrated optimal randomized algorithm for one agent for \(0\text{-}\mathrm{PDLS}\), achieving competitive ratio arbitrarily close to \(4.59112\). The result holds again regardless of the communication model. However, in this case we cannot guarantee a similar concentration property as before. Finally, we study the problem of searching with two _wireless_ \(p\)-faulty agents. Here we are able to show in Theorem 6 how both agents can reach any target with competitive ratio \(3+4\sqrt{p(1-p)}\), in expectation. The performance is increasing in \(p<1/2\), ranging from \(3\) (the optimal competitive ratio when searching with two wireless \(0\)-faulty agents) to \(5\). However, as before we can control the concentration of the performance, making it arbitrarily close to \(3+4\sqrt{p(1-p)}\) with arbitrary confidence. Hence, this gives an advantage over Theorem 5 for small values of \(p\), i.e. when the derived competitive ratio is smaller than \(4.59112\). In this direction, we show that when \(p>0.197063\) the competitive ratio exceeds \(4.59112\), and hence each of the results described above is powerful in its own right. ## 4 Searching with One Deterministic \(p\)-Faulty Agent We start by describing a deterministic algorithm for searching with a \(p\)-faulty agent, where \(p\in(0,1/2)\). Our main result in this section reads as follows. Theorem 2: \(p\)_-\(\mathrm{PDLS}\) _with one agent admits a deterministic algorithm with competitive ratio equal to \(2(\sqrt{2}+2)\) when \(0<p\leq\frac{1}{4}(2-\sqrt{2})\), and equal to \(6-4p+\frac{1}{1-2p}\) when \(\frac{1}{4}(2-\sqrt{2})<p<1/2\)._ We emphasize that having \(p>0\) will be essential in our algorithm. This is because the probabilistic faults introduce uncertainty for the adversary. For this reason, it is interesting but not surprising that we can in fact have competitive ratio \(2(\sqrt{2}+2)\approx 6.82843<9\) for all \(p<\frac{1}{8}(\sqrt{17}-1)\approx 0.390388\). First we give a verbose description of our algorithm, which takes as input \(p\in(0,1)\) and chooses a parameter \(g=g(p)\) that will be the intended expansion rate of the searched space. In each iteration of the algorithm, the agent will be passing over the origin with the intention to expand up to \(g^{i}\) in a certain direction. When distance \(g^{i}\) is covered, the agent attempts to return to the origin. After additional time \(g^{i}\) the agent knows if the origin is reached, in which case she expands in the opposite direction with intended expansion \(g^{i+1}\). 
If not, the agent knows she did not manage to turn, and proceeds up to point \(g^{i+1}\) (this is why we require that \(g\geq 2\)). Then, she makes another attempt to turn. This continues up to the event that the agent succeeds in turning at \(g^{j}\), for some \(j>i\), and then the agent attempts to expand in the opposite direction with intended expansion length \(g^{j+1}\). The algorithm starts by searching in an arbitrary direction, say right, with intended expansion \(g^{0}\). Later on, it will become clear that the termination time of the algorithm converges only if \(g<1/p\). In order to simplify the exposition (and avoid repetitions), we first introduce the subroutine Algorithm 1, which is the baseline of both algorithms we present for searching with one agent. It is followed by Algorithm 2, our first search algorithm, which simply runs Algorithm 1 with initial values \(i=0\) and initial direction \(d=1\). ``` 0:\(g,p,i,d\) 1:repeat 2:repeat 3: Move towards point \(dg^{i}\) with speed 1 4:until The target is found \(\lor\) \(g^{i}\) time has passed 5:while The origin is not found \(\wedge\) the target is not found do 6: Set \(d\leftarrow-d\) {This changes the direction but it can fail with probability \(p\)} 7: Set \(i\gets i+1\) 8:repeat 9: Move towards point \(dg^{i}\) with speed 1 10:until Either the target is found or \(g^{i}-g^{i-1}\) time has passed or the origin is found 11:endwhile 12:until The target is found ``` **Algorithm 1** Baseline Search As we explain momentarily, Theorem 2 follows directly from the following technical lemma, whose proof appears in Section C. Lemma 2: _Fix \(p\in(0,1/2)\) and \(g\in[2,1/p)\). If the target is placed in \((g^{t},g^{t+1}]\), then the competitive ratio of Algorithm 2 is at most_ \[\frac{(1-p)g}{(1-gp)}\left(\frac{1+(1-2p)g}{g-1}+\left(1-2p\right)^{t+1}\right)+1.\] Taking the limit when \(t\to\infty\) of the expression of Lemma 2 shows that the competitive ratio of Algorithm 2 is \[f_{p}^{\textsc{det}}(g):=\frac{1-g(2g(p-2)p+g+2)}{(1-g)(1-gp)}\] for all \(p,g\) complying with the premise.4 Footnote 4: If one wants to use the original definition of the competitive ratio, then by properly re-scaling the searched space (just by scaling the intended turning points), one can achieve a competitive ratio which is additively off by at most \(\epsilon\) from the achieved value, for any \(\epsilon>0\). Next we optimize the function \(f_{p}^{\textsc{det}}(g)\). When \(p\leq\frac{1}{4}\left(2-\sqrt{2}\right)\), the optimal expansion factor (optimizer of \(f_{p}^{\textsc{det}}(g)\)) is \[g_{0}(p):=\frac{-\left(\sqrt{2}+2\right)p+\sqrt{2}+1}{1-2p^{2}}.\] It is easy to see that \(2\leq g_{0}(p)<1/p\), for all \(p\leq\frac{1}{4}\left(2-\sqrt{2}\right)\) (in fact the strict inequality holds for all \(0<p<1\)). In this case the induced competitive ratio becomes \(2\left(\sqrt{2}+2\right)\approx 6.82843\). When \(p\geq\frac{1}{4}\left(2-\sqrt{2}\right)\), the optimal expansion factor (at least \(2\)) is \(g=2\), in which case the competitive ratio becomes \(6-4p+\frac{1}{1-2p}\). Interestingly, the competitive ratio becomes at least \(9\) for \(p\geq\frac{1}{8}\left(\sqrt{17}-1\right)\approx 0.390388\). In other words, \(0.390388\) is a threshold for the probability associated with the agent's faultiness beyond which, at least for the proposed algorithm, the probabilistic faultiness is no longer useful towards beating the provably optimal deterministic bound of \(9\). 
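As an informal sanity check on the analysis above (our own sketch, not part of the paper), the deterministic faulty zig-zag can be simulated directly: the trajectory attempts to turn at the intended points \(\pm g^{i}\), each attempt failing independently with probability \(p\). For a distant target, the empirical ratio of expected termination time to target distance should be comparable to the closed form \(f_{p}^{\textsc{det}}(g)\); exact agreement is not expected for a finite target, only rough agreement.

```python
import math
import random

def hitting_time(g: float, p: float, target: float, rng: random.Random) -> float:
    """One run of the faulty zig-zag: the agent starts at the origin moving right,
    intends to turn back at distance g**i in round i, and each turn attempt fails
    independently with probability p (on failure it keeps going on the same side)."""
    t, pos, d, i = 0.0, 0.0, 1, 0
    while True:
        goal = d * g**i                        # next intended turning point on the current side
        if min(pos, goal) <= target <= max(pos, goal):
            return t + abs(target - pos)       # the target lies on the upcoming segment
        t += abs(goal - pos)
        pos = goal
        if rng.random() >= p:                  # the turn succeeds with probability 1 - p
            d = -d
        i += 1                                 # the intended radius grows either way

def f_det(g: float, p: float) -> float:
    """Limiting competitive ratio f_p^det(g) from the text (valid for 0 < p and 2 <= g < 1/p)."""
    return (1 - g * (2 * g * (p - 2) * p + g + 2)) / ((1 - g) * (1 - g * p))

def g_det(p: float) -> float:
    """Expansion factor used by the deterministic algorithm (closed form for small p, else 2)."""
    if p <= (2 - math.sqrt(2)) / 4:
        return (-(math.sqrt(2) + 2) * p + math.sqrt(2) + 1) / (1 - 2 * p * p)
    return 2.0

p = 0.1
g = g_det(p)
rng = random.Random(0)
target = g**8 * 1.001                          # a distant target placed just past an intended turning point
runs = 4000
estimate = sum(hitting_time(g, p, target, rng) for _ in range(runs)) / (runs * target)
print(round(estimate, 3), round(f_det(g, p), 3))   # empirical ratio vs. the closed-form limit
```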
On a relevant note, recall that by Lemma 1, there is a threshold probability \(p^{\prime}\) such that any algorithm (even randomized) has competitive ratio at least \(9\) when searching with a \(p\)-faulty agent, when \(p\geq p^{\prime}\). ## 5 Searching with One Randomized (Improved Deterministic) \(p\)-Faulty Agent In this section we equip the \(p\)-faulty agent with the power of randomness, and we show the next positive result. Note that due to Theorem 1, the results can be simulated by a deterministic \(p\)-faulty agent too, up to any precision. Theorem 3: _Let \(p_{0}=\frac{1}{8}\left(5-\sqrt{1+\ln^{2}(2)+6\ln(2)}-\ln(2)\right)\approx 0.241516\). For each \(p\leq p_{0}\), let also \(g_{p}\) denote the unique root, no less than \(2\), of_ \[h_{p}(g):=(1-gp)(1+g(1-2p))-g(1-p)\ln g. \tag{1}\] _Then for each \(p\in(0,1/2)\), \(p\)-PDLS with one agent admits a randomized algorithm with competitive ratio at most_ \[\left\{\begin{array}{ll}g_{p}\left(\frac{1-p}{1-g_{p}p}\right)^{2}+1,&p\in( 0,p_{0}]\\ (1-p)\frac{1+2(1-2p)}{(1-2p)\ln(2)}+1,&p\in(p_{0},1/2).\end{array}\right.\] Elementary calculations show that the competitive ratio of Theorem 3 remains at most \(9\), as long as \[p\leq\frac{1}{8}\left(7+\sqrt{1+256\ln^{2}(2)-96\ln(2)}-16\ln(2)\right)\approx 0.436185.\] First we argue that the premise regarding \(g_{p}\) of Theorem 3 is well defined. Indeed, consider \(h_{p}(g)\) as in (1). We have that \(\partial h_{p}(g)/\partial p=4g^{2}p-g^{2}-3g+g\ln g\leq-3g+g\ln g<0\), for all \(p\leq 1/4\). Therefore, \(h_{p}(2)=8p^{2}-10p+2p\ln(2)+3-2\ln(2)\) is decreasing in \(p\), and hence \(h_{p}(2)>h_{p_{0}}(2)=0\), by the definition of \(p_{0}\). Also \(h_{p}(4)=32p^{2}-28p+4p\ln(4)+5-4\ln 4\) is decreasing in \(p\) too, and therefore \(h_{p}(4)\leq h_{0}(4)=5-4\ln 4\approx-0.545177<0\). This means that \(h_{p}(g)\) has indeed a root, with respect to \(g\), in \([2,4)\), for all \(p\in(0,p_{0}]\). Next we show that this root is unique. Indeed, we have \(\partial h_{p}(g)/\partial g=4gp^{2}-2gp+p\ln g-\ln g-2p\leq-\ln g<0\), for \(p\in[0,1/2]\). Therefore, for all \(p\leq p_{0}<1/2\), the function \(h_{p}(g)\) is decreasing in \(g\), and hence any root is unique; that is, \(g_{p}\) is indeed well-defined and lies in the interval \([2,4)\), for all \(p\in(0,p_{0}]\). Next, we observe that as \(p\to 0\), the competitive ratio promised by Theorem 3 when searching with a \(p\)-faulty agent (i.e. a nearly non-faulty agent) is given by \(g_{0}\) being the root of \(g-g\ln g+1\), i.e. \(g_{0}=\frac{1}{W(\frac{1}{e})}\approx 3.59112\), where \(W(\cdot)\) is the Lambert W-Function.5 Moreover, the induced competitive ratio is \(1+g_{0}\approx 4.59112\), which is exactly the competitive ratio of the optimal randomized algorithm for the original linear search problem due to Kao et al. [18]. We view this also as a sanity check regarding the correctness of our calculations. Footnote 5: The Lambert W-Function is the inverse function of \(L(x)=xe^{x}\). Not surprisingly, our algorithm that proves Theorem 3 is an adaptation of the celebrated optimal randomized algorithm for linear search with a \(0\)-faulty agent of [18]. As before, the search algorithm, see Algorithm 3 below, is determined by some expansion factor \(g\) that represents the (intended, in our case, due to faults) factor by which the searched space is expanded each time the direction of movement changes. The randomized algorithm makes two queries to a random oracle. First it chooses a random bit, representing an initial random direction to follow. 
Second, the algorithm samples a random variable \(\epsilon\) that takes a value in \([0,1)\), uniformly at random. The variable \(\epsilon\) quantifies a random scaling of the intended turning points. It is interesting to note that setting \(\epsilon=0\) in Algorithm 3 deterministically, and removing the initial random direction choice, gives rise to the previous deterministic Algorithm 2. ``` 0:\(g\geq 2\) and \(0<p<1/2\) 1: Choose \(\epsilon\) from \([0,1)\) uniformly at random 2: Set \(i\gets\epsilon\) and \(d\gets 1\) [\(d\in\{1,-1\}\) represents direction (1 going right and \(-1\) going left)] 3: Run Algorithm 1 with parameters \(g,p,i,d\). ``` **Algorithm 3** Faulty Randomized Linear Search We show next how Theorem 3 follows from the following technical lemma, whose proof appears in Section D. Lemma 3: _For any \(p\in(0,1/2)\) and any \(g\in[2,1/p)\), if the target is placed in the interval \((g^{t},g^{t+1}]\), then the competitive ratio of Algorithm 3 is at most_ \[1+\frac{1}{(1-gp)\ln g}\left(\begin{array}{c}(1-p)(1-g(-1+2p))(1+(-1+2p)^{t })\\ +\frac{2(1-p)}{g^{t}}+2g(1-p)(p+g^{t}(-1+p)(-1+2p)^{t})\end{array}\right).\] Recall that \(p\in(0,1/2)\), so taking the limit of the expression of Lemma 3 when \(t\to\infty\), and after simplifying the expression algebraically, shows that the competitive ratio of Algorithm 3 is at most \[f_{p}^{\textsc{rand}}(g):=1+\frac{(1-p)(1+g(1-2p))}{(1-gp)\ln g},\] for all \(p,g\) complying with the premise. Next we optimize \(f_{p}^{\textsc{rand}}(g)\) for all parameters \(p\in(0,1/2)\), under the constraint that \(2\leq g<1/p\). In particular, we show that the optimizers of \(f_{p}^{\textsc{rand}}(g)\) are \(g_{p}\), if \(p\leq p_{0}\), and \(g=2\) otherwise, resulting in the competitive ratios as described in Theorem 3. First we compute \(\frac{\partial}{\partial g}f_{p}^{\text{\tiny{rand}}}(g)=\frac{1-p}{g(1-gp)^{2} \ln^{2}g}h_{p}(g)\), where \(h_{p}(g)\) is the same as (1). As already proven below the statement of Theorem 3, we have that \(h_{p}(g)\) has a unique root in the interval \([2,4)\). Next we show that \(f_{p}^{\text{\tiny{rand}}}(g)\) is convex. Indeed, we have that \[\frac{\partial^{2}}{\partial g^{2}}f_{p}^{\text{\tiny{rand}}}(g)=\frac{1-p}{g^ {2}(1-gp)^{3}\ln^{3}(g)}s_{p}(g),\] where \(s_{p}(g)=\sum_{i=0}^{3}\alpha_{i}p^{i}\) is a degree \(3\) polynomial in \(p\) with coefficients \[\alpha_{0} =2g+g(-\ln g)+\ln(g)+2\] \[\alpha_{1} =2g\left(-2g+g\ln^{2}g-\ln g-4\right)\] \[\alpha_{2} =g^{2}\left(2g-2\ln^{2}g+g\ln g+3\ln g+10\right)\] \[\alpha_{3} =-2g^{3}(\ln g+2)\] As a result, it is easy to verify that \(s_{p}(g)\) remains positive for all \(p\in(0,1/2)\), conditioned on \(g\in[2,4]\) (in fact the optimizers as described in Theorem 3 do satisfy this property). We conclude that \(f_{p}^{\text{\tiny{rand}}}(g)\) is convex in \(g\). Together with our previous observation, this means that, under the constraint \(g\geq 2\), the function \(f_{p}^{\text{\tiny{rand}}}(g)\) is minimized at the unique root of \(\frac{\partial}{\partial g}f_{p}^{\text{\tiny{rand}}}(g)\) when \(p\leq p_{0}\), and at \(g=2\) when \(p\in[p_{0},1/2)\). These are exactly the optimizers described in Theorem 3, where in particular the competitive ratio \(f_{p}^{\text{\tiny{rand}}}(g)\) is simplified taking into consideration that for the chosen value of \(g\) we have that \(h_{p}(g)=0\), for all \(p\leq p_{0}\). Lastly, it remains to argue that all optimizers of \(f_{p}^{\text{\tiny{rand}}}(g)\) are indeed at most \(1/p\). 
For this, it is enough to show that the unique root \(g_{p}\) of \(h_{p}(g)\) is at most \(1/p\), for all \(p\leq p_{0}\) (since for larger values of \(p\) we use expansion factor \(g=2\)). For this, and since \(g\geq 2\), we have \(h_{p}(g)\leq 2g^{2}p^{2}-g^{2}p-3gp+gp\ln 2+g-g\ln 2+1\). The latest expression is a polynomial in \(p\) of degree \(2\), which has only one of its roots positive, namely \[\frac{-\sqrt{(-3p+p\ln 2+1-\ln 2)^{2}-4\left(2p^{2}-p\right)}+3p+p(-\ln 2)-1+ \ln 2}{2p\left(2p-1\right)}.\] Simple calculations then can show that the latter expression is at most \(1/p\) for all \(p\in(0,1/2)\). In fact one can show that the expression above, multiplied by \(p\), is strictly increasing in \(p\) and at \(p=1/4>p_{0}\) becomes \[\frac{1}{4}\left(1-\ln 8+\sqrt{9+(\ln 8-2)\ln 8}\right)\approx 0.486991<1/2.\] That shows that for each \(p<p_{0}\) we have that for the unique root \(g_{p}\) of \(h_{p}(g)\) the inequality \(g_{p}<1/2p<1/p\) is valid, as desired. ## 6 Searching with Two \(p\)-Faulty Agents In this section we present algorithms for \(p\)-\(\mathrm{PDLS}\) for two faulty agents, for all \(p\in(0,1/2)\). Central to our initial results is the following subroutine that, at a high level, will be used by a \(p\)-faulty agent, the _Leader_, in order to make a "forced" turn, with the help of a _Follower_, which can be either a distinguished immobile point, e.g. the target or the origin, or another \(p\)-faulty agent. In this process the \(p\)-faulty agent who undertakes the role of the Follower may need to either slow down or even halt for some time, still complying with the probabilistic faulty turns (once halted, she can continue moving in the previous direction, but changing it is subject to a fault). It will be evident, in the proof of Lemma 4 below, that Algorithm 4 will allow an agent to change direction arbitrarily close to an intended turning point, and with arbitrary concentration (both controlled by parameter \(\gamma\)). The next lemma refers to a task that two \(p\)-faulty agents can accomplish independently of the underlying communication model. At a high level, the lemma establishes that two \(p\)-faulty agents can bypass the probabilistic faults at the expense of giving up the independence of the searchers' moves. ``` 0:\(\gamma\) small real number, Follower either mobile or immobile. 1:repeat 2: Change direction {This fails with probability \(p\)} 3: Move for time \(\gamma\) if Follower is mobile, and \(\gamma-2\gamma\) if Follower is immobile. 4:until You meet with follower 5: Communicate to Follower the Leader's direction ``` **Algorithm 4** Force Change Direction (Instructions for a Leader) Lemma 4.: _For every \(p\in[0,1/2)\), two \(p\)-faulty agents can simulate the trajectory of a deterministic \(0\)-faulty agent within any precision (and any probability concentration)._ Proof.: Consider two \(p\)-faulty agents that are initially collocated at the origin. We show how the agents can simulate (at any precision) a deterministic trajectory. For this we need to show how the two agents can successfully make a turn at any point without deviating (in expectation) from that point. Indeed, consider a scheduled turning point and consider the two \(p\)-faulty agents approaching that point. For some \(\gamma>0\) small enough, at time \(2\gamma\) before the agents arrive at the point, the two agents undertake two distinguished roles, that of a Leader and that of a Follower. The roles can remain invariant throughout the execution of the algorithm. 
The Follower instantaneously slows down so that when the distance between the Leader and the Follower becomes \(2\gamma\), the Follower is \(\gamma/(1-p)\) before the turning point (which is strictly more than \(\gamma\) and strictly less than \(2\gamma\)), and as a result the Leader has passed the turning point by \(\frac{1-2p}{1-p}\gamma\). At this moment, the Follower resumes full speed, and both agents move in the same direction as before. Then, the Leader runs Algorithm 4 with the mobile Follower being the other \(p\)-faulty agent. Note that if at any moment the two agents meet, it is because after a successful turn the two have moved towards each other for time \(\gamma\), whereas if a turn is unsuccessful the two preserve their relative distance \(2\gamma\). Therefore, since the moment of the first turning attempt, the two agents meet in expected time \(\sum_{i=0}^{\infty}(i+1)\gamma(1-p)p^{i}=\frac{\gamma}{1-p}\). We conclude that the expected meeting (and turning) point is the original turning point. Most importantly, the probability that the resulting turning point is far from the given turning point drops exponentially (each additional deviation of \(\gamma\) occurs with an extra factor of \(p\)), and the deviation is also proportional to \(\gamma\), which can be independently chosen to be arbitrarily small. Lemma 4 is quite powerful, since it shows how to simulate deterministic turns with arbitrarily small deviation from the actual turning points. More importantly, that deviation can be chosen to drop arbitrarily fast, dynamically, hence we can achieve smaller expected deviation later in the execution of the algorithm, compensating this way for the passed time. Therefore, we obtain the following theorem. Theorem 4.: _For all \(p\in[0,1/2)\), two deterministic faulty agents can solve \(p\)-\(\textsc{PDLS}\) with competitive ratio \(9+\epsilon\), for every \(\epsilon>0\), independently of the underlying communication model. Also, the performance is concentrated arbitrarily close to \(9+\epsilon\)._ It is worth noticing that the agents' movements in the underlying algorithm of Theorem 4 are still probabilistic, due to the probabilistic faulty turns. However, by choosing appropriate parameters every time Algorithm 4 is invoked, one can achieve arbitrary concentration around the expected performance of the algorithm, hence the bound of \(9+\epsilon\) can be practically treated as deterministic. In contrast, using the same trick, we can achieve a much better competitive ratio, equal to the one of [18], but only in expectation (with uncontrolled concentration). To see how, note that by the proof of Theorem 1, the two \(p\)-faulty agents can stay together in order to collect sufficiently many random bits and simulate any finite number of queries to a random oracle. Then using Lemma 4, the agents simulate the optimal randomized algorithm of [18] with performance \(4.59112\), designed originally for one randomized \(0\)-faulty agent that makes only \(2\) queries to the uniform distribution. In other words, the two deterministic \(p\)-faulty agents can overcome their faulty turns using Lemma 4, and the lack of a random oracle by invoking Theorem 1. To conclude, we have the following theorem, which requires that \(p>0\). 
Theorem 5.: _Two deterministic faulty agents can solve \(p\)-\(\textsc{PDLS}\) with competitive ratio \(4.59112+\epsilon\), for every \(\epsilon>0\) and for every \(p\in(0,1/2)\), independently of the underlying communication model._ In our final main result we show that two \(p\)-faulty agents operating in the wireless model can do better than \(9\) for all \(p<1/2\), as well as better than \(4.59112\) for a large spectrum of \(p\) values. Note that the achieved competitive ratio is at least \(3\), which is the optimal competitive ratio for searching with two \(0\)-faulty agents in the wireless model, and that our result matches this known bound when \(p\to 0\). Theorem 6: _Two deterministic \(p\)-faulty agents in the wireless model can solve \(p\)-PDLS with competitive ratio \(3+4\sqrt{p(1-p)}+\epsilon\), for every \(\epsilon>0\) and for every \(p\in[0,1/2)\)._ ``` 0:\(p\)-faulty agents with distinct roles of Leader and Follower, \(s<1\) and \(\gamma>0\). 1: Agents search in opposite directions until the target is found and reported. 2: Target finder becomes Leader, and non-finder becomes Follower. 3: Non-finder changes speed to \(s\), attempts a turn (that fails with probability \(p\)), and continues moving until she meets with the finder. 4: Finder moves in the same direction for \(\gamma>0\) and runs Algorithm 4 (the target plays the role of Follower), until the target is reached again. 5: Finder (Leader) continues until she meets the non-finder. 6: Non-finder (Follower) stays put until met by the Finder (Leader) again. 7: Leader continues moving in the same direction (away from the target and the Follower) for time \(\gamma\) and then runs Algorithm 4 with the Follower being the immobile agent, in order to turn. 8: When the Leader turns successfully, she picks up the Follower, and continuing in the same direction, together, they move to the target. ``` **Algorithm 5** Search with two wireless \(p\)-faulty agents Proof (sketch of Theorem 6; see Section E for the details.): The proof is given by the performance analysis of Algorithm 5 for a proper choice of speed \(s<1\). In this simplified proof sketch, we make the assumption that the target finder (using the target) as well as the two agents when walking together can make a deterministic turn. Indeed, using Algorithm 4, we show in the full proof (Section E) how the actual probabilistically faulty turns have minimal impact on the competitive ratio. We assume that the target is reported by the finder at time \(1\), when the distance between the two agents is \(2\). As for the non-finder, when she receives the wireless message she turns successfully with probability \(1-p\). Since the finder moves towards her, their relative speed is \(1+s\). This means that they meet in additional time \(2/(1+s)\), during which time the non-finder has moved closer to the target by \(2s/(1+s)\). Hence, when the two agents meet, they are at distance \(2-2s/(1+s)\) from the target. On the other hand, with probability \(p\) the non-finder fails to turn, and the two agents continue to move in the same direction, only that the non-finder's new speed is \(s\). So, their relative speed in this case is \(1-s\). This means that they meet in additional time \(2/(1-s)\), during which time the non-finder has moved further from the target by \(2s/(1-s)\). Hence, when the two agents meet, they are at distance \(2+2s/(1-s)\) from the target. 
Also recall that when they meet, they can make a forced turn together (which affects the termination time only minimally), inducing in this way a total termination time (and competitive ratio, since the target was at distance \(1\)) \[3+p\left(\frac{2}{1-s}+\frac{2s}{1-s}\right)+(1-p)\left(\frac{2}{1+s}-\frac{2 s}{1+s}\right)=\frac{5-s(s+4-8p)}{1-s^{2}}.\] We choose \(s=s(p)=\frac{1-2\sqrt{p-p^{2}}}{1-2p}\), the minimizer of the latter expression, which can easily be seen to attain values in \((0,1)\) for all \(p\in(0,1/2)\), hence it is a valid choice for a speed. Now we substitute back into the formula for the competitive ratio, and after we simplify algebraically, the expression becomes \(3+4\sqrt{(1-p)p}\). It is worth noticing that the upper bound \(4.59112\) of Theorem 5 holds in expectation, without being able to control the deviation. The upper bound of Theorem 6 also holds in expectation, but the resulting performance can be concentrated around the expectation with arbitrary precision. Moreover, the derived competitive ratio is strictly increasing in \(p<1/2\), and ranges from \(3\) to \(5\). Hence, the drawback of Theorem 6 is that for high enough values of \(p\) (\(p>0.197063\)), the induced competitive ratio exceeds \(4.59112\). Therefore, we have the incentive to choose either the algorithm of Theorem 6, when \(p\leq 0.197063\), or the algorithm of Theorem 5 otherwise. It would be interesting to investigate whether a hybrid algorithm, combining the two ideas, could accomplish an improved result. ## 7 Conclusion In this paper we studied a new mobile agent search problem whereby an agent's ability to navigate in the search space exhibits probabilistic faults, in that every attempt by the agent to change direction is an independent Bernoulli trial (the agent fails to turn with probability \(p<1/2\)). When searching with one agent, our best performing algorithm has optimal performance \(4.59112\) as \(p\to 0\), performance less than \(9\) for \(p\leq 0.436185\), and performance that is unbounded as \(p\to 1/2\) but optimal up to a constant factor. When searching with two faulty agents, we provide three algorithms with different attributes. One algorithm has (expected) performance \(9\) with arbitrary concentration, the second has performance \(4.59112\), and the third has performance \(3+4\sqrt{p(1-p)}\) (ranging between \(3\) and \(5\)), again with arbitrary concentration. It is rather surprising that even in this probabilistic setting with one searcher we can design algorithms that outperform the well-known zig-zag algorithm for linear search, whose competitive ratio is \(9\), and that the problem with two searchers admits a bounded competitive ratio for all \(p\in(0,1/2)\), unlike the single-searcher problem. Interesting questions for further research could arise in the study of similar, related "probabilistic navigation" faults, either for their own sake or in conjunction with "communication" faults, in more general search domains (e.g., in the plane or the more general cow-path problem with \(w\) rays) and for multiple (possibly collaborating) agents.
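To illustrate numerically the trade-off between the last two two-agent guarantees summarized above (a small illustration of our own, with the \(\epsilon\) terms ignored), the following evaluates \(3+4\sqrt{p(1-p)}\) against the \(4.59112\) benchmark and recovers the crossover point \(p\approx 0.197063\).

```python
import math

def wireless_ratio(p: float) -> float:
    """Competitive ratio 3 + 4*sqrt(p(1-p)) of the wireless two-agent algorithm (epsilon ignored)."""
    return 3 + 4 * math.sqrt(p * (1 - p))

RANDOMIZED_BENCHMARK = 4.59112   # ratio achievable by two agents simulating the randomized algorithm

# crossover: 3 + 4*sqrt(p(1-p)) = c  <=>  p = (1 - sqrt(1 - 4*((c - 3)/4)**2)) / 2  (smaller root)
c = RANDOMIZED_BENCHMARK
crossover = (1 - math.sqrt(1 - 4 * ((c - 3) / 4) ** 2)) / 2
print(round(crossover, 6))       # ~0.197063: below this, the wireless algorithm is preferable

for p in (0.05, 0.15, 0.35):
    better = "wireless algorithm" if wireless_ratio(p) <= c else "simulated randomized algorithm"
    print(p, round(wireless_ratio(p), 5), better)
```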
2309.07257
The Potential of Ridesharing Adoption and its Effects on CO2 Emissions and Customer Experience
Taxi services are an integral part of urban transport and are a major contributor to air pollution and traffic congestion, which adversely affect human life and health. Sharing taxi rides is one way to reduce the unfavorable effects of cab services on cities. However, this comes at the expense of passenger discomfort, quantified in terms of longer travel times. Taxi ridesharing is a sophisticated mode of urban transport that combines individual trip requests with similar spatiotemporal characteristics into a shared ride. We propose a one-to-one sharing strategy that pairs trips with similar starting and ending points. We examine the method using an open dataset with trip information on over 165 million taxi rides. We show that the cumulative journey time can be reduced by 48 percent while maintaining a relatively low level of passenger inconvenience, with a total average delay compared to an individual mobility case of 6 minutes and 42 seconds. This advantage is accompanied by decreases in emissions of 20.129 tons on an ordinary day and a potential fare reduction of 49 percent, which could point to a widespread passenger acceptance of shared taxi services. Overall, a matching rate of 13 percent is reached while a 27 percent matching rate is attained for high-demand areas. Compared to many-to-many sharing dynamic routing methodologies, our scheme is easier to implement and operate, making fewer assumptions about data availability and customer acceptance.
Maximilian Kaufmann, Jan Nagler
2023-07-21T10:38:12Z
http://arxiv.org/abs/2309.07257v1
# The Potential of Ridesharing Adoption and its Effects on CO2 Emissions and Customer Experience ###### Abstract Taxi services are an integral part of urban transport and are a major contributor to air pollution and traffic congestion, which adversely affect human life and health. Sharing taxi rides is one way to reduce the unfavorable effects of cab services on cities. However, this comes at the expense of passenger discomfort, quantified in terms of longer travel times. Taxi ridesharing is a sophisticated mode of urban transport that combines individual trip requests with similar spatiotemporal characteristics into a shared ride. We propose a one-to-one sharing strategy that pairs trips with similar starting and ending points. We examine the method using an open dataset with trip information on over 165 million taxi rides. We show that the cumulative journey time can be reduced by 48 percent while maintaining a relatively low level of passenger inconvenience, with a total average delay compared to an individual mobility case of 6 minutes and 42 seconds. This advantage is accompanied by decreases in emissions of 20.129 tons on an ordinary day and a potential fare reduction of 49 percent, which could point to a widespread passenger acceptance of shared taxi services. Overall, a matching rate of 13 percent is reached while a 27 percent matching rate is attained for high-demand areas. Compared to many-to-many sharing dynamic routing methodologies, our scheme is easier to implement and operate, making fewer assumptions about data availability and customer acceptance. ## 1 Introduction One of the biggest problems facing cities worldwide is vehicular traffic congestion and the resulting air pollution. It has a high price in terms of money and life (Santi et al., 2014). The widespread use of private cars to meet daily mobility needs is one of the most significant issues with managing the planet's finite resources. In addition to being utterly inefficient, it involves massive unneeded material transport, excessive energy waste, urban traffic jams, air pollution, and the exploitation of human resources for driving cars rather than engaging in paid employment or leisure activities (Herminghaus, 2019). Meanwhile, the World Health Organization estimates that outdoor air pollution is one of the significant risks for non-communicable diseases, which is primarily brought on by vehicular traffic and is responsible for over three million deaths per year worldwide. About ninety percent of people living in urban areas are exposed to levels of fine particulate matter that exceed WHO Air Quality Guidelines. As New York City constitutes the basis of the cab trips referred to in this research, we need to mention that air pollution has significantly decreased in many high-income countries, including the US, because of efforts to reduce smog-forming emissions as well as particulate matter (WHO, 2016). Another major problem that almost every populous city must contend with is congested roads, from which, in addition to higher pollution levels, economic losses result (Sweet, 2011). According to a study from the Partnership of New York City, the annual cost of traffic congestion in the New York metropolitan area for 2018 alone is estimated to be 20 Billion USD (Partnership-for-NYC, 2018). To curb this issue to some extent, The City Council of New York City is considering introducing a tolling program that would price every trip into southern Manhattan at up to 23 USD (Ley, 2022). 
However, with the recent advancements in communication technologies, new business models and approaches could be installed that have the potential to address the problems of road congestion and air pollution in urban areas. For instance, using real-time data creates new opportunities to exploit spare capacity and enables unprecedented monitoring of the urban mobility infrastructure. These developments open up the potential to create new, more innovative transportation systems based on the sharing of cars and effectively provide services that could replace old public transportation infrastructure with on-demand taxi sharing. The advantage of the widespread use of smartphones and their ability to run real-time applications is another driver for implementing ridesharing services (Shaheen and Cohen, 2019). UberPool or Lyft Shared Rides provides examples for successfully implementing ridesharing services, which may raise hopes for a more efficient future. On the other hand, ridesharing can also come with disadvantages. The overall service and waiting times may increase, and passengers may be concerned about the reliability of the shared ride. Moreover, privacy and security issues may be related to traveling with strangers in a shared vehicle. Lastly, the potential positive environmental effects must be treated with caution, as the reduced cost of taxi rides could also lead to increased demand or other rebound effects (Lopez et al., 2014; Storch et al., 2021). Here, we analyze the potential of taxi ridesharing in New York City using an approach that could be installed in real-life cases. We also address the effects of such an approach on the environment and ride quality. The structure of our paper unfolds as follows. Chapter 2 outlines the data structure and substance. The central part is presented in Chapter 3, where the one-to-one ridesharing approach is portrayed, and the empirical findings are described. Chapter 4 discusses the main findings and how a ridesharing provider may benefit. #### Ridesharing When using ridesharing, two or more trip requests with similar origins and destinations are combined to transport multiple passengers simultaneously in one vehicle ((Alonso-Mora et al., 2017)). By combining different trips into one shared ride, ridesharing increases the average number of people per vehicle, decreases the number of vehicles needed to meet the exact demand, and reduces traffic and other adverse effects of urban mobility on the environment (Merlin, 2019). Taxi ridesharing, also known as shared taxis or collective taxis, is an innovative mode of public transport with flexible scheduling and routing that matches, in real-time, at least two different trip requests with similar spatiotemporal characteristics to a shared taxi (Barann et al., 2017). Earlier studies on what we today call ridesharing have focused on small-scale ride-pooling approaches, which are not based on analyzing taxi trip records but rather on surveys. One example is an analysis of (Caulfield, 2009) on one-day data of commuting trips reported as part of a census survey made in Dublin. Caulfield found that 4 % of respondents carpooled to work. They estimated that this ridesharing saved 12,674 tons of CO2 emissions per year. #### Dynamic Ridesharing The developments in information and communication technologies have not just benefitted businesses that could implement a better ridesharing approach using real-time data. It also allowed the analysis of large-scale data using a dynamic approach. 
Dynamic ridesharing allows ridesharing to occur at short notice and between strangers who do not know each other's itineraries. The greater flexibility of dynamic ridesharing provides additional opportunities to maximize the benefits of sharing and improve the system's efficiency (Lokhandwala & Cai, 2018). In a dynamic ridesharing system, matching the appropriate drivers to form the shared ride is critical. Therefore, many researchers are focused on developing algorithms for perfect ride-matching. These calculations get more complex the more dimensions someone considers (Lokhandwala & Cai, 2018). For this reason, one can see that the recent works (Santi et al., 2014) differ from the more recent ones (Alonso-Mora et al., 2017) regarding the number of ride requests pooled into one vehicle. Santi and coworkers (Santi et al., 2014) introduced the concept of shareability networks and proposed a mathematical model to quantify the benefits of ridesharing. This work is seen as a pioneer in dynamic ridesharing as its results and methodology refer to numerous newer research pieces (Alonso-Mora et al., 2017, Lokhandwala & Cai, 2018, Barann et al., 2017). (Santi et al., 2014) analyzed taxi trip data in New York City and concluded that ridesharing could reduce cumulative trip length by more than 40 %. This model restricted the sharing to a maximum of two ride requests per driver, ignoring the potential benefits of a more flexible system. Moreover, it assumed that all riders' tolerance level for journey delays was the same for drivers and passengers. #### Ridesharing with Autonomous Vehicles With the rapid development of autonomous vehicles, the interest in the potential of ridesharing using such automobiles has received significant attention (Alonso-Mora et al., 2017,Fagnant & Kockelman, 2018). Unlike taxis, whose drivers need to change shifts and take breaks, autonomous vehicles can be available around the clock. Moreover, there is no need to consider the taxi drivers' preferences and waiting times. Therefore, shared autonomous taxis can offer additional benefits compared to traditional ridesharing. The high-capacity ridesharing model of autonomous vehicles by (Alonso-Mora et al., 2017) can satisfy up to 99 % of requests, with an average total delay of 2.5 minutes, by reducing the active operating taxis by 25 %. #### Main Idea of Static Ridesharing Although the implementation of a flexible and dynamic method has the potential to optimally address the problem of sharing taxi rides, these systems reveal inconveniences in real life scenarios because customers might be reluctant to accept picking up and dropping off multiple passengers, almost all of them strangers, during a shared ride (Barann et al., 2017). As mentioned, the many-to-many approach relies on complex matching algorithms, such as optimal routing, combining, and rerouting. Therefore, it could impose operational challenges for taxi operators (Barann et al., 2017). An excellent example of a recent study focusing on ridesharing using a non-dynamic approach was created by (Barann et al., 2017). As an example of large-class spatial sharing problems, this work proposes a framework that enables the analysis of fundamental trade-offs between the advantages and drawbacks of taxi ridesharing systems at the city level. Dividing taxi trips into clusters based on their pickup and drop-off destinations and grouping after a spatiotemporal constraint of matched trips, the research indicated that 48% of taxi trips could be shared. 
The underlying assumption made implies that every passenger can walk to an assigned pickup destination if the distance is at most 500 meters. The drop-off of all passengers would happen in the same manner, and the targeted location must be reached on foot too. (Barann et al., 2017) argues that this model makes taxi ridesharing easier to implement. It reduces customers' perceived inconvenience by having all passengers meet at the pickup destination and letting them decide whether to share the ride with the other party (Barann et al., 2017). For the reasons named above, the benefits of a static approach may outweigh those of a dynamic approach. Especially implementing a static model that considers a multi-ridesharing scenario may be easy to realize for a shared service provider. Therefore, we employ the ridesharing potential using a one-to-one model, in contrast to (Barann et al., 2017), by proposing a system that does not cluster the demands but uses a grid system of hexagons to define the pickup area. Moreover, our approach assumes smaller walking distances at the pickup locations and no walking distance at the drop-off location, assuming that a city walking difference of a maximum of 500 meters is unrealistic and drop-offs should be made at every target destination. Overall, this should limit the total delays. A time penalty for every drop-off is installed to account for a reasonable drop-off delay. #### Economic Incentives and Psychological Barriers Despite the numerous benefits of ridesharing, some challenges need to be addressed in the analysis to get a realistic understanding of the whole. One of the main challenges is understanding the factors that drive individuals to adopt ridesharing and how to effectively encourage more people to use shared rides instead of traditional modes of individual mobility. The current understanding of these conditions needs to be improved, making it challenging to implement effective strategies for promoting the widespread adoption of ridesharing (Storch et al., 2021). Recent research (Storch et al., 2021) suggests that people considering ridesharing services weigh four primary incentives when deciding whether to request a single or shared ride: financial discounts, anticipated detours, the unknown length of the journey, and the inconvenience of riding in a car with strangers. (Storch et al., 2021) predicts a sharp transition to high-sharing adoption for any given user preferences, implying that even a moderate increase in financial incentives or a minor improvement in service quality may disproportionately increase ridesharing adoption of user groups currently in the low-sharing regime under a variety of conditions. In our analysis, these significant incentives are well considered. In particular, we study the relative potential of the fare reduction in a shared scenario and compare airport fares with total fares showed the possible financial incentives based on the pickup and drop-off locations. To determine the resulting delays of this approach, the three primary variables assessing the trip delay are the size of the pickup and drop-off hexagon, the time interval or departure interval, and the drop-off rank. For every drop-off, a ridesharing participant can expect a delay of 20 seconds. The inconvenience of riding with strangers can also be contained using this methodology. Every trip passenger, like in public transportation, can first assess with whom they would potentially share the trip and decide whether they want to do it. 
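To make the one-to-one pairing and delay accounting concrete, the following is a schematic sketch (our own illustration with hypothetical field names; a plain square grid is used as a stand-in for the hexagonal partition, and the cell size and interval length are arbitrary): trips are keyed by pickup cell, drop-off cell, and departure interval, matched pairwise, and each matched pair is charged the per-drop-off penalty mentioned above.

```python
from collections import defaultdict

DROPOFF_PENALTY_S = 20      # per-drop-off delay assumed in the text
CELL_DEG = 0.002            # illustrative cell size in degrees (the study uses hexagons)
INTERVAL_S = 120            # illustrative departure-interval length in seconds

def cell(lat: float, lon: float) -> tuple:
    """Snap a coordinate to a coarse grid cell (square stand-in for a hexagon)."""
    return (round(lat / CELL_DEG), round(lon / CELL_DEG))

def match_one_to_one(trips: list) -> list:
    """Pair trips sharing pickup cell, drop-off cell and departure interval.
    Each trip is a dict with hypothetical keys: id, pickup_ts (epoch seconds),
    pu_lat, pu_lon, do_lat, do_lon."""
    buckets = defaultdict(list)
    for t in trips:
        key = (cell(t["pu_lat"], t["pu_lon"]),
               cell(t["do_lat"], t["do_lon"]),
               int(t["pickup_ts"] // INTERVAL_S))
        buckets[key].append(t)
    pairs = []
    for group in buckets.values():
        group.sort(key=lambda t: t["pickup_ts"])
        for a, b in zip(group[0::2], group[1::2]):       # greedy one-to-one pairing
            pairs.append({"ids": (a["id"], b["id"]),
                          "wait_s": b["pickup_ts"] - a["pickup_ts"],  # earlier rider waits for the later one
                          "extra_dropoff_s": DROPOFF_PENALTY_S})
    return pairs
```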
Compared to a dynamic approach, a ridesharing participant is thus not forced to share the vehicle with arbitrary co-riders at arbitrary times (Barann et al., 2017). Finally, by choosing a suitably small drop-off resolution, only journeys that ended within a limited distance of each other were matched. As a result, we only consider trips that do not require lengthy detours to other places, and because of the small size of the drop-off cells there was no need to add a separate distance constraint to reach comprehensive results.

## 2 Data Profiling

The dataset is accessible through NYC OpenData and contains pickup and drop-off geo-locations in the form of longitude and latitude as well as the respective trip dates and times, accurate to the second. Other variables include the passenger count, payment type, fare amount, extra costs, tax, airport fee, tolls, tips, and total amount in US dollars (cityofnewyork, 2014). All initially considered variables are listed in Table 1.

### Data Availability

To assess the ridesharing potential using the one-to-one approach, we evaluate a publicly available dataset on yellow taxi trips from 2014, published by the NYC Taxi and Limousine Commission (TLC). The TLC provides data from 2009 to 2021, including several hundred million journeys conducted by yellow taxis, green taxis, and ride-hailing services. In recent years, the TLC has stopped recording the precise geo-coordinates of pickup and drop-off points. Consequently, instead of working with a more recent dataset, this research must rely on older data from 2014, 2015, or 2016 (cityofnewyork, 2014). The 2014 dataset consists of more than 165 million trips and has a file size of 25 gigabytes (cityofnewyork, 2014). It was preferred over the others because ride-hailing services like Uber and Lyft were not yet widely adopted at that time and thus did not noticeably influence taxi demand. Therefore, the 2014 trip data is more representative of the city's total demand than the 2015 and 2016 data (NYC-TLC, 2014).

### Data Cleaning

#### 2.2.1 Geo Locations

Severe outliers in the pickup and drop-off geo-locations were filtered first using the grid filter (see Figure 1); two percent of observations were omitted in this step. Next, the data had to be cleaned of outliers close to New York City but located in impossible places that could not be addressed by the grid filter (see Figure 1). A reason for these distortions could be uncontrolled biases resulting from urban canyons that may have slightly distorted GPS locations during data collection (Santi et al., 2014). Using the publicly available nycgeo library, it is possible to filter for geographical and administrative borders in New York City (Herman, n.d.). This New York City filter, however, resulted in only a small loss of observations (see Figure 1).

\begin{table} \begin{tabular}{|l|l|} \hline **Variable** & **Description** \\ \hline Pickup Location & Longitude and Latitude \\ Pickup Datetime & Accurate to the Second \\ Drop-off Location & Longitude and Latitude \\ Drop-off Datetime & Accurate to the Second \\ Trip Distance in KM & Trip Distance in Miles * 1.609 \\ Passenger Count & Passengers excl. Taxi Driver \\ Fare Amount in USD & Total Fare Amount - Tip Amount \\ Avg. Speed in Km/h & (Trip Distance / Trip Duration) * 3600 \\ Trip Duration in sec. & Drop-off Datetime - Pickup Datetime \\ Great Circle Distance & Pickup vs. Drop-off Coordinates \\ \hline \end{tabular} \end{table} Table 1: Variables

Figure 1: Geometric Locations and Outlier Detection

Taxis that operate mainly outside Manhattan, in the outer boroughs of the City, are called green taxis and are not considered in this research. La Guardia and JFK airports are the only two dense pickup or drop-off destinations outside Manhattan served by yellow taxis (NYC-TLC, 2014). Visual inspection shows that, apart from the airports, the density of pickups and drop-offs outside Manhattan decreases dramatically. The last step therefore separates Manhattan, La Guardia, and JFK Airport journeys from the rest; it leads to a total data loss of 15 percent.

#### 2.2.2 Trip Distance

The great circle distance, i.e., the direct distance taking into account the spherical shape of the Earth, was used as a reference for all travel distances. The recorded trip distance was sometimes less than the great circle distance, which is geometrically impossible (Donovan & Work, 2017); those observations were excluded from the dataset. Additional cleaning procedures were carried out to eliminate implausible entries. Trips shorter than 500 meters or longer than 60 kilometers were removed, as manual investigation revealed that such trips frequently had inflated times or distances. Additionally, all journeys with nearly identical origins and destinations were discarded by excluding all trip records with great circle distances below 100 meters (Santi et al., 2014). Trips with extreme durations, for instance shorter than a minute or longer than two hours, and trips with speeds below 5 km/h or above 88 km/h, the latter corresponding to the maximum New York State speed limit of 55 mph, were eliminated as well (Barann et al., 2017, NY-Safety-Council, n.d.).

#### 2.2.3 Passenger Count

According to the TLC (NYC-TLC, n.d.), the maximum number of passengers allowed in a yellow cab is four or five, excluding the driver, depending on whether the trip is operated with a four- or five-person taxicab. However, an additional passenger below the age of seven is allowed when held on an adult's lap. Hence, all trips with a passenger count above six are excluded (Costello, 2018).

#### 2.2.4 Trip Fares

The research article (Zhang, 2012), based on the 2009 yellow taxi trip data, shows that the distribution of fares closely resembles the distribution of trip length, demonstrating that the primary determinant of the fare amount is the trip distance. Nevertheless, (Zhang, 2012) also suggests separating pickups and drop-offs inside and outside the downtown area and considering rush hours, which affect the trip duration and, therefore, the fare amount. Analyzing Figure 2, we find that airport pickups and drop-offs usually come with the highest trip fares; the reason is that the two airports, JFK and La Guardia, are the only two considered locations outside the Manhattan borough. We can also see a clear correlation between the fare amount and the trip distance. The histogram of the total ride fare per kilometer allows a visual assessment of outliers: all values below $1.5 and above $8.5 per kilometer were excluded from further analysis.

### Data Exploration

Figure 3 confirms the naive expectation that fare amount, trip duration, and trip distance are highly correlated. Yet, the trip fare exhibits a higher correlation with the distance than with the duration, which also aligns with the analysis by Zhang and coworkers (Zhang, 2012). We further assume that the trip fare is even better approximated using the great circle distance instead of the trip duration.
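As an illustration, the cleaning rules above and the great circle distance used throughout can be applied to a trip table along the following lines. This is a minimal sketch only: it assumes the trips have been loaded into a pandas DataFrame with the hypothetical column names shown, and the thresholds are those stated in Sec. 2.2.

```python
import numpy as np
import pandas as pd

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance in kilometers between two coordinate arrays."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

def clean_trips(df: pd.DataFrame) -> pd.DataFrame:
    """Apply the filters of Sec. 2.2 (column names are illustrative assumptions)."""
    df = df.copy()
    df["duration_s"] = (df["dropoff_datetime"] - df["pickup_datetime"]).dt.total_seconds()
    df["distance_km"] = df["trip_distance_miles"] * 1.609
    df["speed_kmh"] = df["distance_km"] / df["duration_s"] * 3600
    df["gc_km"] = great_circle_km(df["pickup_lat"], df["pickup_lon"],
                                  df["dropoff_lat"], df["dropoff_lon"])
    df["fare_per_km"] = df["fare_amount"] / df["distance_km"]
    keep = (
        (df["distance_km"] >= df["gc_km"])      # recorded distance cannot undercut the great circle
        & df["distance_km"].between(0.5, 60)    # 500 m to 60 km
        & (df["gc_km"] >= 0.1)                  # origin and destination at least 100 m apart
        & df["duration_s"].between(60, 7200)    # 1 minute to 2 hours
        & df["speed_kmh"].between(5, 88)        # 5 km/h up to the 55 mph state limit
        & df["passenger_count"].between(1, 6)   # at most six passengers
        & df["fare_per_km"].between(1.5, 8.5)   # fare-per-km outliers
    )
    return df[keep]
```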
Confirming that the great circle distance is indeed the better fare predictor would, however, require a more careful statistical analysis. We also note the positive correlation between trip length and travel costs for the trips to and from the airport. The highest demand for taxi rides, measured by the number of pickups, is visualized in Figure 4: the darker a hexagon is colored, the more pickups or drop-offs have been recorded in the area. The areas with the highest demand are clearly concentrated, as the top ten areas are all located in southern Manhattan. The two locations with the highest taxi demand are both at Madison Square Garden, with 340 thousand registered pickups in March; that area accounts for 2.75% of all recorded taxi trips of that month. The ten visualized hexagonal areas comprise 1.25 million pickups and represent 10% of all registered pickups in Manhattan. The most popular exit points (see Figure 4, right) display a similar picture: 330 thousand drop-offs are recorded at Madison Square Garden, and the top ten drop-off destinations sum up to 1.2 million taxi trips, representing 9.6% of all rides. These figures raise hopes that a static approach can yield good results.

Figure 2: Trip Fare vs. Trip Distance

Figure 3: Correlation Matrix

Figure 4: Busiest Pickup (left) and Drop-off (right) Locations

In order to assess the hours with the highest travel demand, the rides in March are visualized grouped by hour of the day (see Figure 5). The peak demand periods are colored blue: the morning-noon peak, from 08:00 to 15:00, and the evening peak, from 17:00 to 22:00, show the most rides per hour. This is important because we can later assess how the ridesharing potential reacts if the service is implemented solely within these peak demand periods.

#### 2.3.1 Unused Capacity

Inspection of Figure 6 suggests that, despite the highest taxi trip fares due to the longest average taxi journeys to and from the two airports (see Figure 2), the vehicle occupancy rates on these trips are low compared to the average occupancy of New York's taxis. The only rides outside Manhattan we consider in this analysis, due to their high request rate, are those to and from JFK and La Guardia airports. Only about 30% of taxi trips to and from the airports have a load factor of more than one person, compared to the average total load factor of 65%. Since the financial incentive should be highest for trips to and from the airports due to the higher travel costs, we examine this aspect separately in the following.

## 3 Ridesharing Potential

Here, using a simple static approach, we analyze the ridesharing potential for densely populated cities. Our approach is based on clustering pickup and drop-off points (Barann et al., 2017), while also adopting the methodology of (Liu et al., 2022), where the urban area is divided into hexagonal partitions. With our analysis, we can evaluate the ridesharing potential without restricting ourselves in terms of car capacity (Barann et al., 2017), whereas in the dynamic routing systems proposed by (Alonso-Mora et al., 2017, Santi et al., 2014) the routing becomes more complex with an increasing number of potentially shareable rides, as an increasing number of nodes and therefore routes has to be taken into account.
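The grouping logic of our static approach, described in detail in Sec. 3.1 below, is simple enough to sketch directly on the cleaned trip table. The snippet assumes that hexagon ids have already been assigned (e.g., with an H3-style grid) and uses illustrative column names; it is not the authors' implementation but a sketch of the idea.

```python
import pandas as pd

def sharing_stats(df: pd.DataFrame, interval_min: int = 10) -> dict:
    """One-to-one matching: bin trips by pickup cell, drop-off cell, and departure window.

    Assumes 'pickup_hex' and 'dropoff_hex' hold precomputed hexagon ids and
    'pickup_datetime' is a timestamp column.
    """
    window = df["pickup_datetime"].dt.floor(f"{interval_min}min")
    sizes = df.groupby(["pickup_hex", "dropoff_hex", window]).size()

    total = int(sizes.sum())               # individual trips
    matched = int(sizes[sizes > 1].sum())  # trips with at least one partner
    vehicles = int(len(sizes))             # one vehicle per group (shared or not)

    return {
        "sharing_potential": matched / total,           # fraction of trips that can be pooled
        "vehicle_reduction": (total - vehicles) / total,
    }
```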
The quality-of-service parameter is chosen as an absolute time rather than a relative increase of the travel time, both because this is consistent with similar realizations in the literature and because absolute delay information is likely more helpful to potential customers of a shared taxi service than a percentage increase of the travel time (Santi et al., 2014).

### Model

As a starting point, we divide the urban space into hexagonal partitions L = {l1, l2, ..., ln}. The duration of a day is discretized into time intervals T = {t1, t2, ..., tn}, which represent the time windows at the pickup destination. To create these hexagons, the latitude and longitude of pickup and drop-off points are converted to an H3 index, a string representation of the hexagon that the point falls in (Uber-Technologies, n.d., crazycapivara, n.d.). The data is then grouped by the pickup H3 index, the drop-off H3 index, and the pickup time within a given time window. Whenever the number of trips within a group exceeds one, there is a demand for ridesharing, and we assume that all trips in the group can be shared using one vehicle. As the pooled ride requests and their passengers sometimes exceed the capacity of a yellow taxi, we assume that at the determined pickup points there is always the option to switch to a larger vehicle, such as a van. The total carpooling potential can be determined by comparing the number of trips that start and end in the same H3 cells in a ridesharing scenario with the number of trips in a non-ridesharing scenario. As a result, the ridesharing potential can be described as the ratio between the trips that can be pooled in a ridesharing scenario and all taxi trips. However, similar to more intricate many-to-many taxi ridesharing systems, this approach is subject to several restrictions and assumptions listed below.

Figure 5: Peak Hours

Figure 6: Occupancy - Total Trips vs. Airport Trips

Figure 7: Methodology

#### 3.1.1 Vehicle Constraints and Assumptions

A regular yellow taxicab is limited to five passengers (NYC-TLC, n.d.), and most ridesharing situations can be fulfilled with this type of vehicle (Barann et al., 2017). Here, we nevertheless allow for multi-ridesharing because this enables us to examine the full ridesharing potential; a capacity limitation can drastically reduce this potential (Santi et al., 2014). In a real-life situation, multi-ridesharing that exceeds the 5-passenger limit can be performed using vans, minibuses, or large buses. Allowing multi-ridesharing, on the other hand, has a negative influence on waiting times (Barann et al., 2017). However, the passengers' inconvenience could be compensated by the increased savings due to the greater total distance saved.

#### 3.1.2 Spatial Constraints and Assumptions

The **starting point** of every participant in a shared ride scenario is the original pickup location of the ride request. We assume that the new **pickup location** is always the center of a hexagon, neglecting that this exact point could lie in an inaccessible location, for instance in the middle of a street or a park. The **pickup distance** is the great circle distance between the starting point and the pickup location; we assume every participant walks this distance. The **drop-off location** is the same location as in a non-ridesharing scenario.
The **total trip distance** in a shared scenario is the maximum trip distance of all ride requests, or individual mobility trips, that are pooled into a shared ride. This is consistent with the assumption that the drop-offs do not cause any detours that could lengthen or shorten the trip. The **individual trip distance** stays the same as in the original individual mobility case, as all participants are dropped off at their desired destination. However, as we relocate the pickup location to the hexagon center in a ridesharing scenario, the individual riding distance changes depending on the starting point: if the starting point is closer to the drop-off location than the pickup point (hexagon center), the individual trip distance increases, and vice versa. By comparing the great circle distance between all starting points and drop-offs in an individual mobility scenario with the corresponding great circle distances in a shared scenario, we observe that the difference in trip distance averages zero (see Figure 8). Based on this observation, we assume that the individual trip distance of every ridesharing participant stays approximately the same as in an individual mobility scenario.

#### 3.1.3 Temporal Constraints and Assumptions

The **starting time** of every participant in a shared ride scenario is the original pickup time of the ride request. The **arrival time** is the starting time plus the walking duration to the previously defined pickup location. The **walking duration** is based on a study (Amanda & Director, 2006) in which the walking speed and other attributes of pedestrians on lower Manhattan's sidewalks were observed. The median speed of all observed pedestrians was 4.67 kilometers per hour, which is slower than found in previous studies (Fruin, 1992, Knoblauch et al., 1996) but could be explained by the fact that the majority of the observations took place in the middle of the day (Amanda & Director, 2006). Since roughly 50% of the individuals in that study walked slower than the median, many of the observed pedestrians were likely not walking towards a designated location such as a taxi pickup point, whereas a passenger heading for a pickup would presumably not be slowed down by distractions; using 4.67 km/h is therefore a conservative choice. We approximate the walking distance by the pickup distance and compute the walking duration from it using the speed of 4.67 km/h. The **time window**, or interval, is the time difference between the trip announcement and the departure time. Within this time window, we assume a vehicle with unlimited capacity is waiting at the previously defined pickup location. The number of daily intervals is accordingly 1440 minutes divided by the interval length. The **departure time** is constant and depends on the size of the time window, but in every sharing scenario the departure time must be greater than or equal to the arrival time. The **drop-off rank** is introduced to approximate the individual delay for dropping off passengers, as we assume the taxi drops everyone off at their individual location. The rank is based on the individual trip length within the shared taxi: the passenger with the shortest trip is ranked first, and the passenger with the longest trip receives a rank equal to the number of pooled taxi trips in the vehicle. By ranking every trip request in a shared scenario, we penalize those with longer trip distances, as passengers are dropped off according to their individual rank.
To specify the **drop-off delay**, we assume that every drop-off delays the total trip duration by 20 seconds. This delay is added for the passengers remaining in the vehicle and depends on the drop-off rank. The **individual trip duration** stays the same as in the original individual mobility case, for the reasons described for the individual trip distance.

* _Drop-off delay = 20 sec × drop-off rank − 20 sec_
* _… = arrival time + drop-off delay_
* _Drop-off time = departure time + individual trip duration + drop-off delay_

Figure 8: Great Circle Distance

Using the **great circle distance** between the starting point and the drop-off location, we approximate the walking duration of the whole trip. Knowing the total walking duration of a trip, we exclude every case in which a passenger would reach the drop-off point earlier by walking than by participating in ridesharing. Another constraint imposes a fixed **drop-off delay** for every passenger in case the number of ride requests in one shared vehicle becomes too large; too large means that the accumulated drop-off delay of all passengers would exceed the delay they would suffer if they were all dropped off at once and walked to the drop-off location.

### What-If Analysis and Robustness

Previous studies show a positive correlation between ridesharing opportunities and both the time interval and the cluster size (Barann et al., 2017). In general, a what-if analysis allows for the data-intensive modeling of complicated systems and analyzes the results under alternative hypotheses (Golfarelli et al., 2006); in short, it helps to assess the robustness of parameter choices. Here, it serves to study how the resolution size and the time interval influence the ridesharing opportunities. The following what-if study evaluates the ridesharing potential while varying the resolution and interval parameters to quantify their impact on the outcomes of the proposed one-to-one ridesharing model (see Table 2). The analysis considers taxi trips between the 19th and 26th of March. We consider time intervals from 1 to 25 minutes and a pickup resolution of either 9 or 10. To add some flexibility while minimizing inconvenience for passengers, we set the drop-off resolution to size 9, since the drop-offs are reached by car; a resolution of 10 would likely be too small and drastically limit the sharing potential, whereas a resolution of 8 is unreasonable because the side length of such a hexagon is 530 meters, too large given the presented drop-off methodology.

#### 3.2.1 Hexagon Resolution

The two proposed resolution sizes are resolution 9 (see Figure 9, left), with an average side length, i.e., maximum walking distance to the pickup destination, of 200 meters, and resolution 10 (see Figure 9, right), with an average side length, i.e., maximum walking distance, of 75 meters (Uber-Technologies, n.d.). Comparing the results from Figure 10 for the given time intervals indicates that resolution 9 performs better in the ridesharing scenario while still restraining the total delay. Especially for higher time intervals, the coarser resolution is more beneficial in terms of delay and sharing potential. For example, comparing the settings at an average delay of 135 seconds, the ridesharing potential at resolution 9 is 10% but only 4% at resolution 10 (see Table 2). This becomes even more evident when looking at Figure 10.
The bigger hexagon size yields more sharing potential per second of total delay; the only regime in which resolution 10 outperforms resolution 9 is below 3 minutes of total delay.

Figure 9: Resolution 9 (left), Resolution 10 (right)

Figure 10: Sharing Potential vs. Total Delay

Table 2: Resolution Size vs. Delay (sharing potential and total delay for pickup resolutions 9 and 10, at drop-off resolution 9, as a function of the time interval T)

#### 3.2.2 Trip Intervals

In order to determine a reasonable time window for the common departure of all taxis at the hexagon centers at resolution 9, Figure 11 shows the growth of the ridesharing potential as a function of the interval size. A first glance reveals that for larger time intervals the ridesharing potential flattens and its growth approaches zero. Throughout the 25 time intervals, the total delay of shared rides grows almost linearly. The percentage growth of the ridesharing potential decreases steadily until interval 10 and fluctuates from interval to interval thereafter. Following this investigation, we consider a time interval of 10 minutes with a pickup resolution of 9 to be a reasonable setting; it results in the best sharing potential relative to the total delay (see Figure 11, right).

Figure 11: Interval Size vs. Sharing Potential & Delay

### Two-Scenario-Analysis

As shown in Table 2, a resolution of 9 at both the pickup and drop-off destination with a 10-minute time window results in a sharing potential of 12.1% with the presented approach. It is now essential to determine not just the overall average but also to analyze how the sharing potential behaves on an ordinary and an extraordinary day (Barann et al., 2017), and to figure out how far our findings can be generalized to times with lower and higher taxi demand (Santi et al., 2014). As can be observed in Figure 12, there is a clear connection between the number of trips within a day and the potential to participate in ridesharing. The sharp decline in potentially shared rides on the 9th, 16th, and 23rd of March can be traced back to the below-average number of total trips on these days. Another interesting aspect observable in Figure 12 is that even though the ridesharing potential is lowest on Sundays, Sunday is only the second lowest weekday in terms of average taxi demand; nevertheless, the three individual days with the lowest taxi demand in March were all Sundays. Comparing the color spectrum and the size of the bars shows a clear positive correlation between taxi demand and ridesharing potential, in line with other studies (Alonso-Mora et al., 2017, Barann et al., 2017, Santi et al., 2014). By comparing all daily taxi demands and ridesharing potentials in March, we choose an ordinary day based on the average daily demand of 383 thousand trips and the average ridesharing potential of 12.9%. Wednesday, March 20, is likely to represent an ordinary day, with a ridesharing potential of 13% and a total taxi demand of 388 thousand, whereas Sunday, March 9, with the minimum sharing potential of 10.9% in the observed month and the fourth lowest taxi demand, represents the extraordinary day (Barann et al., 2017).
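To illustrate the temporal constraints of Sec. 3.1.3, the following sketch computes per-passenger delays for one matched group, combining the walk to the hexagon center, the wait until the common departure, and the rank-dependent drop-off penalty. The function signature and variable names are illustrative assumptions, and the decomposition of the total delay into waiting-until-departure plus drop-off penalty is our reading of the delay figures reported below (Table 5).

```python
from datetime import datetime, timedelta
from typing import List

WALK_SPEED_KMH = 4.67      # median pedestrian speed used in the text
DROPOFF_PENALTY_S = 20.0   # per-drop-off delay

def passenger_delays(pickup_dists_km: List[float],
                     trip_dists_km: List[float],
                     starting_times: List[datetime],
                     departure_time: datetime) -> List[float]:
    """Total delay (seconds) per passenger in one shared group."""
    # Rank passengers by individual trip length: shortest trip is dropped off first.
    order = sorted(range(len(trip_dists_km)), key=lambda i: trip_dists_km[i])
    ranks = {i: r + 1 for r, i in enumerate(order)}

    delays = []
    for i, t0 in enumerate(starting_times):
        walk_s = pickup_dists_km[i] / WALK_SPEED_KMH * 3600.0
        arrival = t0 + timedelta(seconds=walk_s)
        assert departure_time >= arrival, "departure must not precede arrival"
        wait_s = (departure_time - t0).total_seconds()       # walking + waiting at the center
        dropoff_s = DROPOFF_PENALTY_S * ranks[i] - DROPOFF_PENALTY_S
        delays.append(wait_s + dropoff_s)
    return delays
```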
#### 3.3.1 Trip Savings and Environmental Benefits

As mentioned earlier, the trip distance of a shared taxi trip equals the maximum individual trip distance of the pooled passengers. The analysis reveals that, on average, this approach saves 48.6% of the cumulative trip distance in a shared scenario. For multi-ridesharing groups with more than five pooled ride requests, the average cumulative trip distance saved is as high as 82% on both ordinary and extraordinary days. From these findings and the total distance saved in a day we can estimate the daily emission savings (see Table 3). The emission calculation is based on the average emissions of vehicles in the US in 2016, approximately 250 grams of CO2 per kilometer (EPA, n.d.); for the ordinary day this amounts to 80,517 km × 250 g/km ≈ 20.1 tons of CO2. Within this approach, we assume that there is no capacity limitation; therefore, some trips would have to be made with higher-capacity vehicles with higher emissions per kilometer, a fact we neglect in the saved-CO2 calculation. Nevertheless, knowing which vehicle capacities must be available for the shared rides is crucial. Table 4 shows how many vehicles of each type would be needed to serve the ridesharing demand with this approach. For regular cars, a passenger limit of 5 was assumed; we further assumed a capacity limit of 10 passengers for vans, while minibuses are taken to transport up to 30 passengers.

Figure 12: Taxi Demand and Ridesharing Potential

\begin{table} \begin{tabular}{l l l} Metrics & Ordinary & Extraordinary \\ \hline Individual Trips & 338,099 & 314,066 \\ Shared Trips & 50,582 & 38,538 \\ Total Reduction of Trips & 27,865 & 21,205 \\ \% Vehicle reduction & 8.24 \% & 6.75 \% \\ Total Distance saved & 80,517 km & 64,743 km \\ \% Avg. trip distance saved & 48.56 \% & 48.63 \% \\ Saved CO2* & 20,129 kg & 16,186 kg \\ \end{tabular} \end{table} Table 3: Saved Distance and Emissions

\begin{table} \begin{tabular}{l l l} Metrics & Ordinary & Extraordinary \\ \hline Car Total Trips & 17,625 & 13,073 \\ \% Passengers carried & 37.8 \% & 53.7 \% \\ Van Total Trips & 4,631 & 3,746 \\ \% Passengers carried & 55.9 \% & 37.2 \% \\ Minibus Total Trips & 461 & 514 \\ \% Passengers carried & 6.3 \% & 9.1 \% \\ \end{tabular} \end{table} Table 4: Vehicle Types

#### 3.3.2 Delay

As shown in Table 5, the level of taxi demand has a negligible effect on the total delays. This favors the presented approach: unlike in the model of (Barann et al., 2017), demand has little effect on the delay here. The delay is largely independent of demand because we assume a static departure time at the pickup location.

\begin{table} \begin{tabular}{l l l} Metrics & Ordinary & Extraordinary \\ \hline Average Total Delay & 402 sec. & 404 sec. \\ Average Walking Duration & 96 sec. & 94 sec. \\ Average Walking Distance & 124 m & 122 m \\ Average Delay until Departure & 390 sec. & 392 sec. \\ Average Delay at Drop-off & 10 sec. & 12 sec. \\ \end{tabular} \end{table} Table 5: Delay

#### 3.3.3 Possible Cost Savings

The price of a taxi trip depends on the length and duration of the trip. This section focuses solely on the potential cost savings for passengers and neglects that the taxi driver also suffers a delay, due to the waiting time at the pickup destination and the additional delay of 20 seconds for every drop-off in the multi-ridesharing situation. Analogously to the total trip distance, we take the maximum fare amount in a ridesharing situation as the fare to be paid and regard the remaining fares of the ridesharing passengers as the maximum potential cost savings.

\begin{table} \begin{tabular}{l l l} Metrics & Ordinary & Extraordinary \\ \hline Average cost-saving potential & 49 \% & 49 \% \\ \end{tabular} \end{table} Table 6: Cost-Saving Potential

### Cost Savings within Airport Trips

As mentioned in the previous analysis, car occupancy is significantly lower for trips to and from the airports, while the financial incentives for such trips should be higher because the trip fares are comparatively expensive (Storch et al., 2021). Contrary to our expectations, the ridesharing potential at the airports JFK and La Guardia is 5 percentage points below the monthly average for New York City as a whole, at an 8% sharing potential. However, sharing a taxi trip with pickup or drop-off at the airports comes with a higher cost benefit, as these trips are, on average, the longest (see Figure 2). Taxis also charge an extra fee, as they operate at a different rate for pickups at JFK or La Guardia Airport (NYC-TLC, 2014). In relative terms, the average cost savings in the shared-ride scenario at the airports amount to 50%, similar to the ordinary case demonstrated above. However, on average, a trip from or to the airport, shared or unshared, costs $42, whereas a regular trip within Manhattan costs $12.50. Hence, the incentive to split fare costs using ridesharing services is higher for trips from and to the airport than for ordinary taxi trips in NYC.

### Peak-Hour Scenario

Next, we study whether higher demand leads to a better performance of the one-to-one sharing approach, and whether it would be worthwhile to implement a shared service that is only available during high-demand hours. In the peak-hour scenario, the observations in low-demand hours (see Figure 5) are filtered out so that the sharing potential is assessed for the peak hours of the day only. This examination shows that the total ridesharing potential is 13.1%, given a 10-minute interval; compared to the base case, this is a negligible increase. We therefore conclude that the ridesharing potential increases with growing demand, but less drastically than in the static one-to-one approach of (Barann et al., 2017). We also find that the average waiting time in such high-demand situations is 390 seconds, about 5% lower than in the base case that includes the off-peak hours.

### Locations with the Highest Sharing Potential

Exploring all drop-off and pickup locations from March 2014 provides a clear picture of the destinations with the highest ridesharing potential. The majority of taxi trips are in central Manhattan; however, the best ridesharing opportunities are in southern Manhattan (see Figure 13). Based on these findings, a taxi ridesharing service in New York City should primarily serve Manhattan's central and southern areas to reach the highest ridesharing potential. A closer look at the pickup hotspots (see Figure 13) reveals three significant clusters with the highest ridesharing opportunity: Madison Square Garden, Grand Central Station, and the intersection of 42nd Street and 8th Avenue. The most prominent clusters for drop-off locations in this specific month are Madison Square Park and the Javits Center. If the ridesharing approach is limited to the ten highest-demand pickup areas (visualized in Figure 4), we achieve a matching rate of about 27% (see Table 7). Restricting the model to these ten hotspots excludes 90% of New York City's taxi demand.
Consequently, this simple approach has the potential to transform 2.5% of New York taxi rides into a shared service on an ordinary day, and around 2% on an extraordinary day, by focusing solely on ten pickup areas.

\begin{table} \begin{tabular}{l l l} Metrics & Ordinary & Extraordinary \\ \hline Ride Sharing Potential & 26.7 \% & 22.9 \% \\ Representation of total rides & 10.5 \% & 9.2 \% \\ \end{tabular} \end{table} Table 7: Hotspot Matching Rate

Figure 13: Pickup (left) and Drop-off (right) locations with the highest demand

## 4 Discussion

We have proposed an easy and much-needed sharing strategy that pairs trips with similar starting and ending points. For a dataset of more than 165 million taxi rides, we have shown that on an ordinary day the cumulative trip distance can be reduced by 48 percent, fares can be reduced by 49 percent, and emissions can be decreased by about 20 tons of CO2. Compared to many-to-many dynamic routing schemes, our scheme is substantially easier to implement and operate. We have demonstrated how a ridesharing provider can adjust the distance and matching-time constraints simultaneously. The more patient a passenger is, the more significant the matching opportunities that occur; details of passenger behavior, however, vary and are often scarce, making it difficult to develop a robust framework. We have shown that reducing the matching time interval has a significant detrimental effect on the matching rate. One may therefore argue that a 10-minute time frame is a good choice for a taxi ridesharing service in New York City, whereas in other places passengers might not be willing to accept a delay of roughly 400 seconds to save up to 49% of the trip costs. Nevertheless, the time limitation mitigates the effect of the distance constraint, and its choice also determines how fairly the disadvantages are distributed among the passengers of a shared trip. As the distance limitation increases, the resolution size decreases and more distant passengers must walk farther to join a shared vehicle (Barann et al., 2017). Our analysis robustly demonstrates that a varying trip density does not render our one-to-one approach inefficient. A lower trip density results in a lower matching rate, but the resulting 13% matching rate on an ordinary day, 10.9% on a low-demand day, and 13.1% when considering only high-demand hours demonstrate the robustness of our conclusions. In addition, our approach is more robust than a previously suggested one-to-one model (Barann et al., 2017), which showed a 53% ridesharing potential at high demand but only 21% at low demand. Nonetheless, as the number of ride requests increases, the impact of the trip density on the matching rate diminishes; in fact, we achieve a matching rate of 27% on an ordinary day by concentrating on the local pickup hotspots. Information on the spatial and temporal characteristics of such high-demand trips and matching situations may further be exploited to implement a public transportation service with multi-ridesharing potential. (Santi et al., 2014) demonstrated that a flexible, dynamic ridesharing system may achieve close to maximum matching using 25% of daily cab journeys in New York City, or around 100,000 trips per day, and therefore concluded that ridesharing is also effective in low-density cities.
In contrast, (Stiglic et al., 2015) concluded that the potential of traditional ridesharing depends heavily on the spatial and temporal density of the announced trips (Barann et al., 2017). According to related studies, passengers are unlikely to adopt ridesharing even when they are financially incentivized, and (Stiglic et al., 2015) demonstrated that ridesharing initiatives may fail. Understanding this is ultimately important and particularly applicable to our suggested one-to-one ridesharing strategy, as passengers must wait for potential co-riders and have until departure to decide whether they want to share the ride, especially since they can see with whom they would be sharing. Our analysis suggests that the proportion of matched rides in New York City may reach up to 13%, providing evidence that the proposed strategy can be applied successfully. As we have mapped out high-demand areas and hotspots for pickups and drop-offs, a ridesharing service implemented in these areas could perform at an even higher rate. Taxi firms should consolidate shared journeys by using hotspots, such as transit hubs or notable landmarks, as starting and ending sites (Furuhata et al., 2013). Models suggest that choosing meeting locations with care can considerably enhance the number of matched rides in traditional ridesharing scenarios (Stiglic et al., 2015). Our findings indicate that multimodal transportation hubs such as Madison Square Garden could be ideal for a one-to-one sharing service. The airports, with their high financial incentives, may likewise serve as shared-ride pickup hotspots and help increase adoption, even though the matching success there is lower than in Manhattan.
2302.05732
Entanglement in the Quantum Spherical Model -- a Review
We review some recent results on entanglement in the Quantum Spherical Model (QSM). The focus lies on the physical results rather than the mathematical details. Specifically, we study several entanglement-related quantities, such as entanglement entropies and the logarithmic negativity, in the presence of quantum and classical critical points, and in magnetically ordered phases. We consider both the short-range and the long-range QSM. The study of entanglement properties of the QSM is feasible because the model is mappable to a Gaussian system in any dimension. Despite this fact, the QSM is an ideal theoretical laboratory to investigate a wide variety of physical scenarios, such as non-mean-field criticality, the effect of long-range interactions, and the interplay between finite-temperature fluctuations and genuine quantum ones.
Sascha Wald, Raul Arias, Vincenzo Alba
2023-02-11T15:56:05Z
http://arxiv.org/abs/2302.05732v1
# Entanglement in the Quantum Spherical Model - a Review

###### Abstract

We review some recent results on entanglement in the Quantum Spherical Model (QSM). The focus lies on the physical results rather than the mathematical details. Specifically, we study several entanglement-related quantities, such as entanglement entropies and the logarithmic negativity, in the presence of quantum and classical critical points, and in magnetically ordered phases. We consider both the short-range and the long-range QSM. The study of entanglement properties of the QSM is feasible because the model is mappable to a Gaussian system in any dimension. Despite this fact, the QSM is an ideal theoretical laboratory to investigate a wide variety of physical scenarios, such as non-mean-field criticality, the effect of long-range interactions, and the interplay between finite-temperature fluctuations and genuine quantum ones.

_Keywords_: entanglement; entanglement gap; Schmidt gap; entanglement negativity; universality; phase transition; quantum phase transition; classical and quantum fluctuations; long-range interactions

## 1 Introduction

Quantifying entanglement in strongly interacting many-body systems has become an important research theme in recent years and has provided useful insights into the structure of quantum correlations [1, 2, 3, 4, 5]. In principle, quantum states are entangled whenever they cannot be written as a product state. A plethora of entanglement witnesses has been introduced to quantify the extent to which quantum states are entangled. Widely used tools that play an important role in entanglement studies comprise, amongst others, Rényi entropies, the mutual information, the entanglement negativity, and the entanglement spectrum; we introduce these quantities in more detail in Sec. 3. It is important to note that there is no single entanglement quantifier that works for all setups and systems, reflecting the intricacy of quantum entanglement in many-body systems. Crucially, the study of entanglement in many-body systems relies heavily on numerical simulations that are quite demanding, even with modern computing hardware. Thus, obtaining reliable scaling predictions or extracting qualitative behaviors in the thermodynamic limit is challenging and often not attainable. As in the study of continuous phase transitions, a viable option to overcome computational limitations is to study simplified systems that allow for analytical investigations and predictions [6]. The spherical model [7, 8, 9, 10] has firmly established itself as a reference system whenever investigations in generic many-body systems prove to be challenging. The spherical model and its quantum formulation [9, 10, 11, 12, 13] are analytically solvable in a variety of scenarios, including arbitrary spatial dimension, temperature, and external fields. Moreover, the model possesses a phase transition separating a paramagnetic from a ferromagnetic phase that is generally not in the mean-field universality class. Not surprisingly, the QSM and closely related models have proved useful for understanding entanglement properties of quantum many-body systems [14, 15, 16, 17, 18, 19, 20, 21]. Crucially, the QSM makes it possible to derive the precise finite-size scaling of entanglement-related quantities, often analytically. This happens because the QSM is mappable to a Gaussian bosonic system with a constraint.
This implies that entanglement properties are obtained from the two-point correlation functions [22], which are accessible analytically [11, 13]. The aim of this review is to give a few examples of the wide variety of physical scenarios in which the behavior of entanglement can be addressed in the controlled setup of the QSM, while retaining the complexity of a quantum many-body system. Specifically, we focus here on the main results of Refs. [15, 16, 17, 18]. This review is organized as follows. In Sec. 2 we review relevant properties of the QSM. In particular, we highlight how the spherical model was conceived, how a quantum version is formulated, and sketch how to solve the model in general. Phase diagrams of all scenarios considered in this review are also discussed. In Sec. 3 we introduce all relevant entanglement-related quantities that we investigate in this work, such as the entanglement entropy, the negativity, and the entanglement spectrum. We also briefly discuss how entanglement quantities are calculated in the QSM. Sections 4, 5, 6 and 7 are dedicated to the main results. In Sec. 4 we focus on the interplay between quantum and classical fluctuations at finite-temperature criticality in the three-dimensional QSM. In particular, we show that the logarithmic negativity is able to distinguish genuine quantum correlations from classical ones. In Sec. 5 we explore the entanglement gap (or Schmidt gap), which is the lowest-lying gap of the so-called entanglement spectrum (ES). We consider the zero-temperature QSM in two dimensions and, by using dimensional reduction, compute the scaling behavior of the entanglement gap at criticality. In Sec. 6 we show that in the ferromagnetic phase the entanglement gap can be written in terms of standard magnetic correlation functions, due to the presence of a Goldstone mode. Finally, in Sec. 7 we study the entanglement gap in the one-dimensional QSM with long-range interactions at zero temperature. In Sec. 8 we summarize and conclude.

## 2 Quantum spherical model

The Ising model, see Ref. [23] for an overview, has significantly contributed to our modern understanding of collective phenomena [6]. Despite its apparent simplicity it finds applications in a wide variety of fields, see, e.g., [24, 25, 26, 27, 28] for a brief list that is by no means exhaustive. The classical Ising model is analytically solvable in one spatial dimension, see, e.g., [29], but it does not possess a phase transition. In two dimensions the model is still exactly solvable and exhibits a finite-temperature transition; the Ising universality class of this transition is one of the most studied in statistical physics [30]. The three-dimensional Ising model, however, has not been solved analytically to date [31]. To overcome this problem, Berlin and Kac suggested in 1952 a generalization of the Ising model in which the discrete Ising spin degrees of freedom are replaced by continuous ones, supplemented with a constraint that enforces some of the properties of the original Ising degrees of freedom [7]. Specifically, since each Ising spin on a lattice \(\mathcal{L}\) satisfies \(\sigma_{i}^{2}=1\), it is obvious that \(\sum_{i\in\mathcal{L}}\sigma_{i}^{2}=V\), with \(V\) being the system volume.

Figure 1: **Illustration of configuration spaces**: On the left we show all possible configurations of two Ising spins (vertices of a square). On the right we show the extension of the configuration space to two spherical spins.
Replacing the Ising spins with continuous degrees of freedom, i.e., \(\sigma_{i}\to S_{i}\in\mathbb{R}\), while simultaneously enforcing the external constraint \(\sum_{i\in\mathcal{L}}S_{i}^{2}=V\), yields the original formulation of the classical spherical model, see Fig. 1. This classical spherical model is exactly solvable in any dimension and supports a finite-temperature phase transition in more than two dimensions, \(d>2\). Later in the same year, after the paper of Berlin and Kac, it was shown that the strict spherical constraint can be relaxed so as to be satisfied only on average, i.e., \(\sum_{i\in\mathcal{L}}\left\langle S_{i}^{2}\right\rangle=V\), without affecting the universal bulk behavior of the model [8]. Interestingly, the spherical model is related to more realistic spin systems like the \(O(N)\) Heisenberg model in the limit \(N\to\infty\) of infinite spin dimensionality [32]. The spherical model with the average constraint also admits a quantum generalization [9, 10, 11, 12, 13]. The Hamiltonian of this quantum spherical model (QSM) reads \[H=\sum_{n\in\mathcal{L}}\left(\frac{g}{2}p_{n}^{2}+\frac{1}{2}\sum_{m\in \mathcal{L}}u_{nm}x_{n}x_{m}\right),\quad\mbox{with}\quad\sum_{n\in\mathcal{L }}\left\langle x_{n}^{2}\right\rangle=V. \tag{1}\] Here the classical spin degrees of freedom \(S_{i}\) are replaced by position operators \(x_{i}\), and associated momentum operators \(p_{i}\) are introduced which satisfy \([x_{n},p_{m}]=\mathrm{i}\hbar\delta_{nm}\). We take \(\mathcal{L}\) to be a \(d\)-dimensional hypercubic lattice. In addition to the finite-temperature transitions of the classical spherical model, the QSM supports a zero-temperature quantum phase transition [11, 13]. If \(u_{nm}\) is short-ranged, e.g., a nearest-neighbor interaction, then the quantum phase transition exists only for \(d>1\). This transition is generally in the same universality class as the thermal transition of the \(d+1\) dimensional classical spherical model [11]. Conversely, for long-ranged interactions a quantum phase transition is also present for \(d=1\) [11]. The universality class depends on the long-range exponent \(\alpha\) (see below), although in a simple manner. Generally, \(u_{nn}=2\mu\), where \(\mu\) is a Lagrange multiplier (chemical potential) that enforces the spherical constraint. This \(\mu\) plays the physical role of a mass for the model or, equivalently, of the inverse correlation length. Hence, the critical line of the model is retrieved from the constraint at \(\mu=0\) [11, 13].

Figure 2: **Phase diagrams of the QSM in different dimensions**: In \(d=3\) (left panel) the QSM with nearest-neighbor interactions exhibits a thermal critical line and a quantum phase transition at \(T=0\). In \(d=2\) (center panel) only a quantum phase transition is present. For long-range interactions in \(d=1\) (right panel) the zero-temperature critical behavior depends on the long-range exponent \(\alpha\). For \(\alpha<2/3\) the transition is mean-field (purple).

Here, we only consider translation-invariant systems with periodic boundary conditions, such that the Hamiltonian decouples in Fourier space. Let us introduce the Fourier-transformed operators as \[x_{n}=\frac{1}{\sqrt{V}}\sum_{k\in\mathcal{B}}e^{\mathrm{i}kn}q_{k},\qquad p_{n} =\frac{1}{\sqrt{V}}\sum_{k\in\mathcal{B}}e^{-\mathrm{i}kn}\pi_{k} \tag{2}\] with the \(d\)-dimensional Brillouin zone \(\mathcal{B}\).
We recast the QSM Hamiltonian in the form \[H=\sum_{k\in\mathcal{B}}\frac{g}{2}\pi_{k}\pi_{-k}+\frac{u_{k}}{2}q_{k}q_{-k}= \sum_{k\in\mathcal{B}}E_{k}\left(b_{k}^{\dagger}b_{k}+\frac{1}{2}\right) \tag{3}\] with \(u_{k}\) being the Fourier transform of the interaction potential \(u_{nm}\), \(E_{k}=\sqrt{gu_{k}}\) the eigenenergies of the QSM, and \(b_{k}\), \(b_{k}^{\dagger}\) are adequately chosen bosonic ladder operators. This allows us to explicitly write the equilibrium correlation functions for a system at temperature \(T\) as \[\mathbb{X}_{nm} :=(x_{n}x_{m})=\frac{g}{2V}\sum_{k}e^{\mathrm{i}(n-m)k}\frac{1}{E _{k}}\coth\left(\frac{E_{k}}{2T}\right), \tag{4}\] \[\mathbb{P}_{nm} :=(p_{n}p_{m})=\frac{1/g}{2V}\sum_{k}e^{-\mathrm{i}(n-m)k}E_{k} \coth\left(\frac{E_{k}}{2T}\right). \tag{5}\] In the following sections we shall focus on entanglement quantities in the QSM. Specifically we consider the following situations * QSM at finite temperature \(T>0\) for \(d=3\) with nearest neighbor interactions, viz., \[u(k)=2\mu+2(3-\cos k_{x}-\cos k_{y}-\cos k_{z})\] (6) * QSM at \(T=0\) for \(d=2\) with nearest neighbor interactions, viz., \[u(k)=2\mu+2(2-\cos k_{x}-\cos k_{y})\] (7) * QSM at \(T=0\) for \(d=1\) with long-range interactions [33, 34], viz., \[u(k)=2\mu+\left(2(1-\cos k)\right)^{\alpha/2}.\] (8) In Eq. (8), \(\alpha\) is the exponent governing the decay with distance of the long-range interactions. The phase diagrams for these systems are depicted in Fig. 2. An important ingredient for the further analysis is to understand the finite-size scaling of the spherical parameter \(\mu\). Specifically, we use that \(\mu\to 0\) in the ferromagnetic phase and at criticality for \(L\rightarrow\infty\). For finite \(L\) conversely, \(\mu\) is always finite. A variety of works have considered this scaling in the classical spherical model, see, e.g., Refs. [35, 36, 37, 38]. ## 3 Entanglement in the quantum spherical model In this section we introduce several relevant quantum-information-motivated quantities, which have attracted a lot of attention in the statistical and high energy theory communities in the last few years. Consider a many-body quantum system described by a Hilbert space \(\mathcal{H}\) and a density matrix \(\rho=|\psi_{0}\rangle\langle\psi_{0}|\) in the corresponding zero-temperature ground state \(|\psi_{0}\rangle\). Upon partitioning the system into two parts \(A\) and \(B\), see Fig. 3, with corresponding Hilbert spaces \(\mathcal{H}=\mathcal{H}_{A}\otimes\mathcal{H}_{B}\) we can define the reduced density matrix \(\rho_{A}\) of subsystem \(A\) by tracing out subsystem \(B\), viz., \[\rho_{A}=\mathrm{Tr}_{B}(\rho). \tag{9}\] Although \(\rho\) is pure, \(\rho_{A}\) is typically a mixed state because the zero-temperature ground-state is not separable. In this scenario, the entanglement entropy \[S_{A}=-\mathrm{Tr}\rho_{A}\log\rho_{A} \tag{10}\] is a measure of entanglement between the two subsystems. In terms of the entanglement spectrum [4], i.e., the eigenvalues \(\lambda_{i}\) of the reduced density matrix \(\rho_{A}\), we can express the entanglement entropy as [1, 2, 4, 39] \[S_{A}=-\sum_{j}\lambda_{j}\ln\lambda_{j}. \tag{11}\] If conversely, the density matrix \(\rho\) is not pure, e.g., at finite temperature, or if \(\rho\) is pure but one is interested in the entanglement between two non-complementary regions (see Fig. 3 c), then the von Neumann entropy is not a good entanglement witness. 
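As an elementary illustration of Eqs. (10) and (11), not specific to the QSM, consider two spins-1/2 in the singlet state \(|\psi_{0}\rangle=(|{\uparrow\downarrow}\rangle-|{\downarrow\uparrow}\rangle)/\sqrt{2}\), with \(A\) containing the first spin. Tracing out the second spin gives \(\rho_{A}=\frac{1}{2}(|{\uparrow}\rangle\langle{\uparrow}|+|{\downarrow}\rangle\langle{\downarrow}|)\), i.e., \(\lambda_{1}=\lambda_{2}=1/2\), and hence \[S_{A}=-2\times\frac{1}{2}\ln\frac{1}{2}=\ln 2,\] the maximal value for a two-level subsystem, whereas for a product state \(\rho_{A}\) is pure and \(S_{A}=0\). This extreme case illustrates the statement above: \(S_{A}\) measures how far the reduced state is from being pure, which is meaningful only as long as the global state itself is pure.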
A useful quantifier in these cases is the logarithmic negativity [40, 41, 42, 43, 44, 45]. The negativity is defined from the so-called partially transposed reduced density matrix. Given a partition of \(A\) as \(A=A_{1}\cup A_{2}\) (see Fig. 3\(c\))), the matrix elements of the partial transpose \(\rho_{A}^{T_{2}}\) with respect to the degrees of freedom of \(A_{2}\) are defined as \[\langle\varphi_{1}\varphi_{2}|\rho_{A}^{T_{2}}|\varphi_{1}^{\prime}\varphi_{2 }^{\prime}\rangle:=\langle\varphi_{1}\varphi_{2}^{\prime}|\rho_{A}|\varphi_{1} ^{\prime}\varphi_{2}\rangle. \tag{12}\] Here, \(\{\varphi_{1}\}\) and \(\{\varphi_{2}\}\) are orthonormal bases for \(A_{1}\) and \(A_{2}\) respectively. In contrast to the eigenvalues of the reduced density matrix \(\rho_{A}\), the eigenvalues \(\zeta_{i}\) of \(\rho_{A}^{T_{2}}\) can be positive or negative. The logarithmic negativity is then defined as \[\mathcal{E}_{A_{1}:A_{2}}=\ln\mathrm{Tr}|\rho_{A}^{T_{2}}|. \tag{13}\] The behavior of the logarithmic negativity has been fully characterized in systems that are described by conformal field theory at zero temperature [46] and at finite temperature [47]. Generally, the negativity follows an area law scaling as observed in a variety of systems, see, e.g., Refs. [48, 49, 50, 14, 51, 15]. Finally, we introduce the entanglement spectrum (ES), viz., \(\{\xi_{i}=-\ln(\lambda_{i})\mid\lambda_{i}\in\mathrm{spec}(\rho_{A})\}\). The lowest entanglement gap (Schmidt gap) is defined by \[\delta\xi=\xi_{1}-\xi_{0}, \tag{14}\] where \(\xi_{0}\) and \(\xi_{1}\) are the lowest and the first excited ES level, respectively. The ES has received a lot of attention following the observation that it contains universal information about the edge modes in fractional quantum Hall systems [52]. Subsequently, the ES was investigated in a variety of setups, e.g., in conformal field theory [53, 54, 55, 56], in quantum Hall systems [57, 58, 59, 60, 61, 62], in frustrated and magnetically ordered systems [63, 64, 65, 66, 20, 67, 68, 69, 70, 71, 72, 73, 74, 75] or systems with impurities [76]. The main topic of this review is to investigate the entanglement-related quantities introduced above in the QSM in \(d=1\), \(d=2\) and \(d=3\) spatial dimensions. Since the QSM is mappable to a Gaussian system of bosons (cf. Eq. (3)), entanglement-related quantities can be extracted from the position and momentum correlators (cf. Eqs. (4) and (5)) \(\mathbb{X}\equiv\langle x_{n}x_{m}\rangle\) and \(\mathbb{P}\equiv\langle p_{n}p_{m}\rangle\) (see [22] for a review). First, we consider the correlators restricted to subsystem \(A\), denoting them as \(\mathbb{X}[A]\) and \(\mathbb{P}[A]\). The single-particle eigenvalues \(\epsilon_{i}\), with \(i\in[1,|A|]\), of the entanglement Hamiltonian \(H_{A}\), which is defined as \(\rho_{A}=\exp(-H_{A})\), are readily related to the eigenvalues \(e_{i}\) of the matrix product \(\mathbb{C}_{A}=\mathbb{X}[A]\mathbb{P}[A]\), viz., \[\sqrt{e_{i}}=\frac{1}{2}\coth\left(\frac{\epsilon_{i}}{2}\right). \tag{15}\] The eigenvalues of the entanglement Hamiltonian \(H_{A}\) are constructed by filling the single-particle levels \(\epsilon_{i}\) in all possible ways. This allows also to obtain the von Neumann entropy \(S_{\rm vN}\) in terms of the eigenvalues \(e_{j}\) as \[S_{\rm vN}=\sum_{j}\left[\left(\sqrt{e_{j}}+\frac{1}{2}\right)\ln\left(\sqrt{e _{j}}+\frac{1}{2}\right)-\left(\sqrt{e_{j}}-\frac{1}{2}\right)\ln\left(\sqrt{ e_{j}}-\frac{1}{2}\right)\right]. 
\tag{16}\] Hence, diagonalizing \(\mathbb{C}_{A}\) allows us to deduce the full entanglement spectrum. In particular, assuming that the single-particle entanglement spectrum levels are ordered as \(\epsilon_{1}\leq\epsilon_{2}\leq\cdots\leq\epsilon_{|A|}\), the Schmidt gap is simply given by \(\delta\xi=\epsilon_{1}\). For Gaussian bosonic systems, the logarithmic negativity can also be constructed from the two-point correlation functions [77]. First, we define the transposed matrix \(\mathbb{P}[A^{T_{2}}]\) as \[\mathbb{P}[A^{T_{2}}]\equiv\mathbb{R}[A^{T_{2}}]\mathbb{P}[A]\mathbb{R}[A^{T _{2}}], \tag{17}\] where the matrix \(\mathbb{R}[A^{T_{2}}]\) acts as the identity matrix on \(A_{1}\) and as minus the identity matrix on \(A_{2}\). The eigenvalues \(\nu_{i}^{2}\) of \(\mathbb{X}[A]\mathbb{P}[A^{T_{2}}]\) form the single-particle negativity spectrum. In terms of them the negativity can be written as [77] \[\mathcal{E}=\sum_{i}\max(0,-\ln(2\nu_{i})). \tag{18}\]

Figure 3: **Geometry of partitions**: In panel a) a generic three-dimensional system is depicted. In panel b) the system is bipartitioned into two subsystems \(A\) and \(B\) with \(B\) the complement of \(A\). In panel c) the subsystems \(A_{1}\) and \(A_{2}\) are non-complementary.

## 4 Quantum and classical fluctuations at finite temperature criticality

Understanding the interplay of classical and quantum fluctuations is an important but challenging task [78, 79, 75]. One way of approaching this question is to study entanglement witnesses in the vicinity of a finite-temperature phase transition that is driven by classical fluctuations. It has been observed that a variety of entanglement witnesses are sensitive to classical criticality; for instance, the negativity develops cusp-like singularities [14, 51]. In this section we review our investigation from Ref. [15] of entanglement-related quantities at the finite-temperature transition in the \(d=3\) dimensional QSM, see Fig. 2 (left panel). First, we discuss the von Neumann entropy \(S_{\rm vN}\) for the bipartition of the system into two equal parts (see Fig. 3). As mentioned in Sec. 3, \(S_{\rm vN}\) is not a valid entanglement witness at finite temperature; in fact, it reduces to the standard thermal entropy. Indeed, as shown in Fig. 4 (left panel), \(S_{\rm vN}\) satisfies a standard volume-law scaling. Being sensitive to both quantum and classical correlations, the von Neumann entropy overestimates the amount of entanglement, which is expected to scale with the boundary between the two subsystems. Moreover, the von Neumann entropy does not show any singularity at the critical point, because singular terms, although present, vanish at the transition and are overshadowed by the analytic background.

Figure 4: **von Neumann entropy and entanglement spectrum**: (Left panel) Volume-law scaling of the von Neumann entropy across the finite-temperature transition in the three-dimensional QSM. Singular terms are present but they vanish at the transition and are overshadowed by the analytic background. Non-analytic contributions are more visible in the entanglement spectrum (notice the divergence of the largest negative single-particle ES level in the right panel). In the inset we show \(\delta_{n}\) (cf. Eq. (19)), which measures the nonanalyticity of the levels across the transition. Here \(\delta_{n}\neq 0\) signals nonanalytic behavior.
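Before examining the singular behavior in more detail, we note that the correlation-matrix machinery of Sec. 3 is easy to implement numerically. The sketch below uses a plain periodic harmonic chain with a small mass as an illustrative stand-in for the dimensionally reduced QSM (all parameter values are arbitrary) and extracts the single-particle entanglement spectrum and the von Neumann entropy from \(\mathbb{X}[A]\) and \(\mathbb{P}[A]\) via Eqs. (15) and (16).

```python
import numpy as np

def chain_correlators(L, mass):
    """Ground-state correlators X_nm = <x_n x_m>, P_nm = <p_n p_m> of a periodic
    harmonic chain with dispersion w_k^2 = mass^2 + 2(1 - cos k); a small mass
    regularizes the k = 0 mode."""
    n = np.arange(L)
    k = 2 * np.pi * n / L
    w = np.sqrt(mass**2 + 2 * (1 - np.cos(k)))
    phases = np.exp(1j * np.outer(n, k))                 # e^{i k n}
    X = (phases * (1 / (2 * w))) @ phases.conj().T / L   # (1/L) sum_k e^{ik(n-m)} / (2 w_k)
    P = (phases * (w / 2)) @ phases.conj().T / L         # (1/L) sum_k e^{ik(n-m)} w_k / 2
    return X.real, P.real

def entanglement_data(X, P, sites):
    """Single-particle ES levels and von Neumann entropy of the block `sites`."""
    XA, PA = X[np.ix_(sites, sites)], P[np.ix_(sites, sites)]
    e = np.linalg.eigvals(XA @ PA).real                  # eigenvalues of C_A, all >= 1/4
    nu = np.sqrt(np.clip(e, 0.25 + 1e-12, None))         # nu = sqrt(e) >= 1/2
    eps = np.log((nu + 0.5) / (nu - 0.5))                # from 2*nu = coth(eps/2), cf. Eq. (15)
    s_vn = np.sum((nu + 0.5) * np.log(nu + 0.5) - (nu - 0.5) * np.log(nu - 0.5))
    return np.sort(eps), s_vn

X, P = chain_correlators(L=64, mass=0.05)
levels, entropy = entanglement_data(X, P, sites=np.arange(32))
schmidt_gap = levels[0]   # lowest single-particle ES level, cf. Eq. (14)
```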
Singularities are more visible in the single-particle entanglement spectrum, as illustrated in the right panel of Fig. 4. In the figure we show the entanglement spectrum for two adjacent blocks of linear size \(\ell\) embedded in an infinite system. The eigenvalues decay quickly upon increasing their index, and most of them satisfy \(e_{n}\approx 1/4\). Clearly, only the eigenvalues with low index can yield potentially singular contributions. In the inset we investigate the singularity using the quantity \[\delta_{n}=(e_{n})^{\prime}_{+}-(e_{n})^{\prime}_{-} \tag{19}\] that measures the difference between the right and left derivatives of \(e_{n}\) with respect to \(g\) at \(g_{c}\). Clearly, \(\delta_{n}\neq 0\) indicates a non-analyticity, and we observe this for small \(n\). Next, we discuss the entanglement negativity. As outlined in Sec. 3, the negativity is a proper entanglement witness and, as such, obeys an area law, see the center and right panels of Fig. 5. Crossing the thermal transition at fixed finite \(T\), i.e., changing the quantum driving parameter \(g\), the negativity decays slowly as \(1/g\) for large \(g\), see Fig. 5, right panel. This is in contrast to the behavior when crossing the transition with the temperature \(T\) at fixed \(g\), as depicted in the center panel of Fig. 5: here, the negativity exhibits a sudden death and remains exactly zero upon further increasing \(T\). We also see that the negativity does not show any cusp singularity across the finite-temperature transition but develops a cusp when approaching low temperatures (see the inset of the right panel in Fig. 5). This signals that singularities, although present, are strongly suppressed. Furthermore, in Fig. 5 (left panel) we map out the negativity in the full phase diagram. In the figure, the dashed line is the critical line separating the paramagnetic and the ferromagnetic phase. We observe that the negativity generally attains a maximum at the quantum phase transition, hinting at a strongly entangled quantum state. We also observe that the negativity increases upon lowering the temperature and is largest for \(T=0\). We also highlight the numerical death line in Fig. 5 (dotted line in the left panel) above which the negativity is zero.

Figure 5: **Entanglement negativity in the three-dimensional QSM**: The left panel is an overview of the negativity in the \(g-T\) plane, with \(T\) the temperature and \(g\) the quantum coupling (cf. Eq. (1)). Here we always consider the half-system negativity. The critical line is reported as a black dashed line. The red dashed line is the death line above which the negativity is exactly zero. The death line as extracted from the negativity between two adjacent spins is reported as a red solid line. The orange solid line is the behavior as \(\sqrt{g}\) at small \(g\). The centre (right) panel shows the negativity across the phase transition varying the temperature (quantum parameter). In the right panel the continuous line is the result in the thermodynamic limit. The inset shows the behavior of the negativity at \(T=0.2\), i.e., close to the quantum phase transition.

Interestingly, most of these findings can be understood quantitatively by considering two adjacent sites embedded in an infinite system. This setup allows for analytic investigations, as shown in Ref. [15]. For large \(g\) and constant \(T\), this approach qualitatively predicts the slow negativity decay of Fig. 5 (right panel), viz., \[\mathcal{E}=-\ln\left(\frac{g-2}{g}\right)\overset{g\to\infty}{\simeq}\frac{2 }{g}.
\tag{20}\] Similarly, it predicts the existence of the death line (continuous line in the figure), and correctly captures its onset for small \(g\) as \(\sqrt{g}\), see Fig. 5 (left panel). ## 5 Entanglement gap at 2D quantum criticality As we have seen in Sec. 4 the low-laying entanglement spectrum encodes relevant information about the critical properties of the system. To further investigate this aspect we review in this section our studies in Refs. [16, 17] of the behavior of the Schmidt gap at quantum criticality and in the ferromagnetic phase in the two dimensional QSM. Interest in the behavior of the Schmidt gap has spiked in the last decade. In the left panel of Fig. 6 we show the numerical findings for the behavior of \(\delta\xi\) across the phase diagram, see Fig. 2. Here, we consider a bipartition of the system into two equal halves. In the paramagnetic regime, we observe that the gap converges rapidly Figure 6: **Overview of the entanglement gap in the two-dimensional QSM**: The left panel shows the gap \(\delta\xi\) across the whole phase diagram. We observe distinct behaviors in the paramagnetic phase (finite gap) and the ferromagnetic phase (vanishing gap). We always consider the half-system entanglement spectrum. The continuous line in the left panel is the result in the thermodynamic limit [16]. In the center panel we show the rescaled gap \(\delta\xi\ln(L)\). At the critical point the gap decays as \(1/\ln(L)\), which implies that the rescaled gap should exhibit a crossing. In the right panel we plot \(\delta\xi\ln(L)\) versus \((g-g_{c})L^{1/\nu}\). Upon approaching the thermodynamic limit \(L\to\infty\) a data collapse is expected. The cross symbol is \(\delta\xi\ln(L)\) at \(g=g_{c}\) and \(L\to\infty\). Still, subleading terms are too large to observe the collapse, as confirmed in the inset. to a finite number upon increasing the linear system size \(L\). Hence, the gap remains finite in the thermodynamic limit \(L\to\infty\). Conversely, the behavior at the critical point [16] and in the ferromagnetic phase [17] differs from that in the paramagnetic phase. In the ferromagnetic phase, the entanglement gap scales as [17] \[\delta\xi\simeq\frac{\Omega}{\sqrt{L\ln(L)}}, \tag{21}\] where the constant \(\Omega\) is known analytically [17], and depends on low-energy properties of the QSM and on the geometry. For instance, \(\Omega\) is sensitive to the presence of corners in the boundary between \(A\) and the rest. Hence, the gap closes algebraically, involving logarithmic corrections. At criticality we find that the Schmidt gap still closes, i.e., \(\delta\xi\to 0\) albeit significantly slower. Precisely, the gap vanishes as [15] \[\delta\xi\simeq\frac{\pi^{2}}{\ln(L)}. \tag{22}\] This result is obtained for the bipartition into equal halves as follows. Since we use periodic boundary conditions in both directions, and the bipartition does not introduce corners, the momentum \(k_{y}\) remains a good quantum number also for the reduced density matrix. This allows to exploit dimensional reduction [80] mapping the problem to a one-dimensional one. Hence, we may use the analytical result for a one dimensional massive harmonic chain [22] in order to obtain Eq. (22). Since the harmonic chain result is derived using the corner transfer matrix on two infinite halves, whereas we have periodic boundary conditions also along the \(x\) direction, Eq. (22) is exact only at leading order in \(L\). Our results are numerically confirmed in Fig. 7. 
In the figure we consider the largest eigenvalue \(e_{1}\) of \(\mathbb{C}_{A}\) (see Section 3). This is related to the entanglement gap via Eq. (15) and Eq. (14). In particular, a diverging \(e_{1}\) implies a vanishing entanglement gap. Fig. 7 shows that the leading behavior of \(e_{1}\) at large \(L\) is correctly captured by the analytic result (full line). Again, Fig. 7 confirms that the entanglement gap is finite in the paramagnetic phase, whereas in the ordered phase a faster divergence is observed (cf. Eq. (21)). In the right panel of Fig. 7 we subtract the analytic prediction for the leading behavior of \(e_{1}\). The continuous line is a fit to \(A_{0}+A_{1}\ln(L)\). The result of the fit confirms the presence of a logarithmic correction to the leading behavior. ## 6 Entanglement gap and symmetry breaking in the QSM In the ferromagnetically ordered phase of the QSM, the dispersion develops a zero mode, which reflects the Goldstone mode associated to symmetry breaking. This implies that the position correlation function (see Eq. (4)) diverges. Here we show that this fact is sufficient to fully determine the scaling of the entanglement gap. First, the divergence in Eq. (4) is reflected in the fact that the eigenvector of \(\mathbb{C}_{A}\) associated with the largest eigenvalue becomes flat in the thermodynamic limit [16, 17]. Hence, we may rewrite the position correlator up to leading order as \[\mathbb{X}\simeq\chi^{x}\left|1\right\rangle\left\langle 1\right|,\quad\text{ with }\chi^{x}\propto L, \tag{23}\] with \(\left|1\right\rangle\) being a normalized flat vector. First, we note that \(\left\langle 1\right|\mathbb{X}\left|1\right\rangle=\chi^{x}\). Furthermore, it is easy to verify that \(\left\langle 1\right|\mathbb{P}\) and \(\left|1\right\rangle\) are left and right eigenvectors of \(\mathbb{C}\). Both correspond to the largest (diverging) eigenvalue \(e_{1}\). Hence, it is straightforward to identify \[e_{1}=\left\langle 1\right|\mathbb{X}\left|1\right\rangle\left\langle 1\right| \mathbb{P}\left|1\right\rangle:=\chi^{x}\chi^{t}, \tag{24}\] where \(\chi^{t}:=\left\langle 1|\mathbb{P}|1\right\rangle\). Here we should observe that \(\chi^{x}\) resembles a susceptibility for the position variables \(x_{i}\), whereas \(\chi^{t}\) is the susceptibility of the \(p_{i}\). Eq. (24) establishes a remarkable correspondence between the entanglement gap and standard quantities such as \(\chi^{x}\) and \(\chi^{t}\). A similar decomposition as in Eq. (24) was employed in Ref. [81] to treat the zero-mode contribution to entanglement in the harmonic chain. We now verify numerically that, as expected, the eigenvector of \(\mathbb{C}_{A}\) corresponding to its largest eigenvalue \(e_{1}\) becomes flat in the ferromagnetic phase, which ensures the validity of Eq. (24). Our results are reported in Fig. 8 where we show the overlap \(\varphi\) between the flat vector and the exact eigenvector of the correlation matrix. At criticality, the eigenvector is not flat in the thermodynamic limit \(L\to\infty\) (see Fig. 8). On the other hand, below the critical point the eigenvector becomes flat in the thermodynamic limit. ### Entanglement gap in the ferromagnetic phase of the 2D QSM Let us now discuss the application of Eq. (24) in the ordered phase of the two-dimensional QSM. To employ Eq. 
(24) we have to evaluate the flat vector expectation Figure 7: **Scaling of the largest eigenvalue \(e_{1}\) of \(\mathbb{C}_{A}\)**: The left panel shows the scaling of \(e_{1}\) in the paramagnetic and the ferromagnetic phase as well as at criticality in the \(2d\) QSM. In the paramagnetic phase \(e_{1}\) converges to a finite value and in the ferromagnetic phase \(e_{1}\) diverges (hence \(\delta\xi\to 0\)). At criticality we see a slower divergence than in the ordered phase. The full grey line is a fit to \(1/\pi^{4}\ln(8L/\gamma_{2})+A_{0}+A_{1}\ln(8L/\gamma_{2})\), with \(A_{0},A_{1}\) fitting parameters. The leading part \(\sim\ln^{2}(L)\) is obtained analytically. Here \(\gamma_{2}\) is a known constant [16]. In the right panel we investigate the subleading logarithmic term. values of the position and momentum correlation matrix. The standard way is to decompose them in a thermodynamic and a finite size part, i.e., \[\left\langle 1\right|\mathbb{X}\left|1\right\rangle=\left\langle 1\right| \mathbb{X}^{\left(\mathrm{th}\right)}\left|1\right\rangle+\left\langle 1 \right|\mathbb{X}^{\left(L\right)}\left|1\right\rangle \tag{25}\] and similar for \(\left\langle 1\right|\mathbb{P}\left|1\right\rangle\), using the Poisson summation formula \[\sum_{n=a}^{b}f(n)=\frac{f(a)+f(b)}{2}+\int_{a}^{b}f(x)\mathrm{d}x+2\sum_{p= 1}^{\infty}\int_{a}^{b}f(x)\cos(2\pi px)\mathrm{d}x. \tag{26}\] The FSS of these contributions is then obtained from the FSS of the spherical parameter \(\mu\) and from standard methods such as stationary phase methods, Euler-Maclaurin formulas, and Mellin transform techniques. One finds [17] \[\left\langle 1\right|\mathbb{X}^{\left(\mathrm{th}\right)}\left|1\right\rangle \sim L^{2},\quad\left\langle 1\right|\mathbb{X}^{\left(L\right)}\left|1 \right\rangle\sim L^{2},\quad\left\langle 1\right|\mathbb{P}^{\left( \mathrm{th}\right)}\left|1\right\rangle\sim\frac{\ln(L)}{L}. \tag{27}\] Notice that \(\chi^{t}\) vanishes at \(L\to\infty\). Thus, the entanglement gap scales as \(\delta\xi\sim 1/\!\sqrt{L\ln(L)}\). This is in agreement with our numerical simulations [17]. Furthermore, it has been suggested that for continuous symmetries, the gap should vanish as [20]\(\delta\xi\sim 1/(L\ln(L))\) which differs from our result. This could be specific of the QSM, although the issue would require further clarification. Finally, one can assume that the decomposition in Eq. (24) also holds at the critical point [16]. This gives \[\left\langle 1\right|\mathbb{X}^{\left(\mathrm{th}\right)}\left|1\right\rangle \sim L,\quad\left\langle 1\right|\mathbb{X}^{\left(L\right)}\left|1\right\rangle \sim L,\quad\left\langle 1\right|\mathbb{P}^{\left(\mathrm{th}\right)}\left|1 \right\rangle\sim\frac{\ln(L)}{L},\quad\left\langle 1\right|\mathbb{P}^{\left(L \right)}\left|1\right\rangle\sim\frac{\ln(L)}{L}. \tag{28}\] This implies that the entanglement gap vanishes as \(\delta\xi\sim 1/\!\sqrt{\ln(L)}\). Although this scaling is not correct, reflecting that Eq. (24) does not hold at criticality, it still captures the logarithmic character of the entanglement gap. Figure 8: **Lowest eigenvector of \(\mathbb{C}_{A}\)**: We show the overlap \(\varphi\) between the eigenvector of \(\mathbb{C}_{A}\) corresponding to the largest eigenvalue and the flat vector. In the ordered phase of the QSM one has that \(\varphi\to 1\) in the thermodynamic limit. This is not the case at the critical point and in the ferromagnetic phase. 
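For a given pair of correlation matrices on \(A\), the decomposition of Eq. (24) and the flat-vector overlap \(\varphi\) of Fig. 8 can be checked with a few lines of code. The sketch below is generic and illustrative; constructing \(\mathbb{X}\) and \(\mathbb{P}\) for the QSM itself requires the model dispersion and the spherical parameter, which are not reproduced here.

```python
import numpy as np

def flat_vector_check(X, P):
    """Sketch: compare the largest eigenvalue e_1 of C_A = X P with the
    flat-vector approximation chi^x * chi^t of Eq. (24), and compute the
    overlap phi between the corresponding eigenvector and the flat vector."""
    n = X.shape[0]
    one = np.ones(n) / np.sqrt(n)            # normalized flat vector |1>
    chi_x = one @ X @ one                    # <1|X|1>
    chi_t = one @ P @ one                    # <1|P|1>

    evals, evecs = np.linalg.eig(X @ P)      # C_A = X P (non-symmetric in general)
    k = np.argmax(evals.real)
    e1 = evals.real[k]                       # largest eigenvalue of C_A
    phi = abs(one @ evecs[:, k])             # overlap with the flat vector (cf. Fig. 8)
    return e1, chi_x * chi_t, phi
```

In the ordered phase one expects the returned `e1` and `chi_x * chi_t` to agree and `phi` to approach 1 as the subsystem grows, whereas at criticality the flat-vector approximation is only qualitative.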
## 7 Entanglement gap in 1D QSM with long-range interactions Recently, there has been increasing interest in quantum systems with long-range interactions [82, 83], also due to significant experimental advances [84]. Since long-range interactions affect the structure of quantum correlations between subsystems, it is interesting to study entanglement witnesses in these systems. Indeed, the study of entanglement in long-range systems has seen a significant surge of interest [85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95]. Arguably one of the paradigmatic systems is the long-range QSM at \(T=0\) in one spatial dimension that we introduced in section 2. In terms of the long-range exponent \(\alpha\) the interaction strength between two lattice sites behaves as \(u_{nm}\sim|n-m|^{-(1+\alpha)}\). The parameter \(\alpha\) satisfies \(0\leq\alpha<2\) where \(\alpha=2\) would correspond to nearest-neighbor interaction and \(\alpha=0\) is essentially an infinite range interaction. The zero temperature phase diagram is reported in Fig. 2 (right panel). For \(0<\alpha<2/3\) the transition is of the mean-field type, whereas for \(2/3<\alpha<2\) the model shows a non-mean-field transition [11, 18]. Crucially, despite the long-range nature of the model, in the ordered phase Eq. (24) holds true. Thus, the study of the scaling of the entanglement gap proceeds as outlined in section 6. The susceptibilities \(\chi^{x}\) and \(\chi^{t}\) can be analyzed for \(L\rightarrow\infty\) with regularization techniques involving the Mellin transform [96, 18] (details can be found in Ref. [18]). In Fig. 9 we present in the left and center panel a numerical analysis of the entanglement gap across the phase diagram. We observe that \(\delta\xi\) remains finite in the paramagnetic phase, whereas it shows a vanishing behavior in the ferromagnetic phase upon increasing \(L\). As in the two dimensional QSM it is useful to decompose the correlation matrices into a thermodynamic and a finite size part. Using Mellin techniques we obtain [18] \[\langle 1|\,\mathbb{X}^{\rm(th)}\,|1\rangle\sim L,\quad\langle 1|\,\mathbb{X} ^{(L)}\,|1\rangle\sim L^{\alpha/2},\quad\langle 1|\,\mathbb{P}^{\rm(th)}\,|1 \rangle\sim L^{-\alpha/2},\quad\langle 1|\,\mathbb{P}^{\rm(L)}\,|1\rangle\sim L^{- \alpha/2}. \tag{29}\] Figure 9: **Entanglement gap in the long-range \(1d\) QSM**: The left and center panel show the entanglement gap for different values of the long-range exponent \(\alpha\) (\(\alpha=1\) and \(\alpha=1.5\) respectively) across the phase transition. We observe a qualitative difference between the paramagnetic phase and the ferromagnetic phase. The right panel shows the closure of the entanglement gap for different values of \(\alpha\) in the ferromagnetic phase. Symbols are numerical results from exact diagonalization and the continuous line corresponds to Eq. (24). Black dashed lines correspond to the analytical prediction. Further details such as the precise prefactors, subleading contributions, the behavior at criticality and the numerical benchmarks for all these results can be found in Ref. [18]. These results allow us to deduce the entanglement gap using Eq. (24). We obtain \[\delta\xi\sim L^{-(1/2-\alpha/4)}. \tag{30}\] In the right panel of Fig. 9 we show a comparison of exact diagonalization results and Eq. (30), finding perfect agreement. Note that the entanglement gap decays algebraically with \(L\), and multiplicative logarithmic corrections are absent. 
This differs strictly from the logarithmic behavior encountered in the previous section. ## 8 Summary and Conclusions We provided an overview of several results on the entanglement scaling in the QSM. In particular, we investigated a variety of scenarios comprising \(d=1,2,3\) dimensional quantum systems with long and short range interactions. Precisely, we discussed the interplay of classical and quantum fluctuations at thermal transitions in the \(3d\) QSM. We presented results for the entanglement entropy, the entanglement negativity and the entanglement spectrum in Sec. 4. In particular, we mapped out the negativity across the whole phase diagram (see Fig. 5). A more detailed analysis can be found in our work in Ref. [15]. In Sec. 5 and Sec. 6 we focussed on the quantum phase transition and on the ferromagnetic phase at zero temperature in two spatial dimensions. We discussed the behavior of the entanglement gap in the different quantum phases and at criticality. We showed that the entanglement gap is capable of detecting criticality in the QSM, although logarithmic corrections are present. We refer to Refs. [16, 17] for more details on the entanglement spectrum, the precise finite-size scaling, and for the study of the effect of corners in the entanglement spectrum. Finally, we reviewed entanglement properties of a one dimensional spherical quantum chain with long-range interactions [18]. Again, we considered the behavior of the entanglement gap across the zero temperature quantum phase transition for different long-range interactions. Remarkably, the QSM allows for a detailed analytical investigation of the entanglement gap in the ordered phase of the model. In particular, it is possible to understand how the entanglement gap is affected by the long-range nature of the interactions. The spherical model - here in its quantum formulation - has again proven itself as a remarkably useful tool to study collective phenomena in strongly interacting many-body systems. Clearly, the QSM will continue to serve as a reference system for future studies of entanglement-related quantities. For example it would be interesting to further investigate the influence of corners on the entanglement patterns. Furthermore, it would be enlightening to consider the QSM on quasi two-dimensional structures, such as ladders. It would also be interesting to study the influence of disorder, or explore the full entanglement Hamiltonian explicitly. Moreover, it has been shown that the criticality in the QSM (in and out of equilibrium) can be exploited as a resource in quantum metrology [97]. It would be interesting to explore to which extend entanglement might affect and support quantum metrology protocols. Furthermore, non-equilibrium and relaxational quantum dynamics has been extensively studied in the QSM in the past years [98, 99, 100, 101, 102, 103]. It would be interesting to derive the spreading of entanglement in such scenarios in the QSM, possibly exploiting the results from Refs. [104, 105, 106, 103]. An intriguing idea is to study the entanglement using the Kibble-Zurek dynamics [107, 108]. ## Acknowledgement It is our pleasure to dedicate this work to our friend and colleague Malte Henkel on the occasion of his 60\({}^{\mathrm{th}}\) birthday. SW would like to further thank Malte for years of excellent and interesting collaborations, exchanges and guidance.
2302.13150
A Semi-Intelligence Algorithm For Charging Electric Vehicles
The penetration of Electric Vehicles (EVs) in Distribution Networks (DNs) has many challenges for distribution companies, such as voltage drop, network losses, etc. To overcome these problems, it is recommended to use coordinated charging methods. However, these methods require high-cost telecommunication, measurement, and processing infrastructure and can only be implemented in smart grids. This article introduces a Semi-Smart Method (SSM) to charge EVs that does not require complex infrastructure. This method, using a simple and inexpensive local automation system, charges EVs during off-peak hours of the DN, thus improving its parameters. Since EVs are charged at low-traffic time intervals, the proposed method benefits EV owners. To get real results, 4- wire DN is considered to model the effect of the neutral wire. To confirm the effectiveness of the proposed method, it is compared to various uncontrolled loading methods. A standard 19-bus test system is used for simulations.
Arash Mousaei, Hesamodin allahyari
2023-02-25T20:27:37Z
http://arxiv.org/abs/2302.13150v1
# A Semi-Intelligence Algorithm For Charging Electric Vehicles ###### Abstract The penetration of Electric Vehicles (EVs) in Distribution Networks (DNs) has many challenges for distribution companies, such as voltage drop, networklosses, etc. To overcome these problems, it is recommended to use coordinated charging methods. However, thesemethods require high-cost telecommunication, measurement, and processing infrastructure and can only be implemented in smart grids. This article introduces a Semi-Smart Method (SSM) to charge EVs that does not require complex infrastructure. Thismethod, using a simple and inexpensive local automation system, charges EVs during off-peak hours of the DN, thus improving its parameters. Since EVs are charged at low-traffic time intervals, the proposed method benefits EV owners. To get real results, 4-wire DN is considered to model the effect of the neutral wire. To confirm the effectiveness of the proposed method, it is compared to various uncontrolled loading methods. A standard 19-bus test system is used for simulations. Electric Vehicles (EVs), Distribution Networks (DNs), Semi-Smart Method (SSM) ## I Introduction The reduction of fossil fuel resources and the expansion of environmental pollutants have caused the expansion of the use of Electric Vehicles (EVs) in the transportation industry. In general, EVs include various structures that Plug-in Hybrid Electric Vehicles (PHEVs) are the most efficient. PHEVs use the Internal Combustion (ICE) system and electric battery in a combined way and have the ability to charge/discharge from/to the electricity Distribution Network (DN). Despite this, the expansion of more and more PHEVs and their need for electric power has created many problems for electric distribution companies, such as severe voltage fluctuations, increasing losses, and increasing the possibility of blackouts due to overload. This issue is considered an important challenge in terms of demand-side management for electricity distribution companies. These problems can be solved by renovating the structure of the power systems and creating smart electricity DN and expanding telecommunication and control infrastructures. In fact, in smart grids, EVs are considered controllable loads, whose charging schedules can be programmed. This type of charging of EVs, which is done using infrastructures of communication, measurement, and processing, is called coordinated, controlled, or intelligent charging. A lot of research has been done on the effect of intelligent charging of EVs on various parameters of the network, such as the voltage profile, losses, reliability, harmonics, etc., and its advantages compared to uncontrolled or random charging have been mentioned. In [9], a model for the controlled charging of EVs is presented in which a three-objective function is used with the goals of minimizing losses, optimizing the load factor, and minimizing load changes. In [10], binary linear programming is used for the problem of synchronous charging of EVs. The reference [11] uses the valley filling strategy for the controlled charging of EVs. The reference [12] for the coordinated charging of EVs, there is a compromise between the total production cost correction. To solve the optimization problem, it uses a decentralized approach. The reference [13] deals with the issue of charging EVs from the standpoint of the emission of polluting gases. 
The objective function used in this reference is to minimize the emission of CO\({}_{2}\), and the costs related to charging EVs. In [14], a model for the synchronous charging of EVs is presented, in which the definition of two coefficients called the capacity margin coefficient and the charging priority coefficient are used. The model used in [15], is a multi-layer composite programming model that is used for the smart charging of EVs in the DN. The objective function of the above model minimizes the total charging costs and the ratio of the peak load to the average load. In [16], the problem of charging EVs in the presence of distributed generation resources is done. The model used in this reference is a multi-objective optimization problem whose objective function includes operating costs, pollutant reduction, and load changes. The reference [17] presents a smart charging method in which the amount of energy received from the upstream network is minimized and the amount of energy produced by renewable energy sources is maximized. In [18], a multi-objective optimization problem is presented in the presence of EVs, in which the cost of energy production and the emission of polluting gases is minimized. The reference [19] provides a scenario-based optimization model to solve the problem of placing scattered production resources in the presence of renewable energy sources of electric vehicles. In [20], the types of EVs charging methods have been examined in detail. As mentioned, the controlled charge methods are proposed against the uncontrolled charge method. In the uncontrolled charging method, it is assumed that the owner of every EV will connect his car to electricity and start charging the car as soon as he gets home. Considering that the time when people arrive at home is usually during peak hours (16:00 to 18:00), therefore using the uncontrolled charging method will put a lot of pressure on the DN. Therefore, the use of charging methods controlled or intelligent will be very effective. Because in such methods, the information related to the network is collected at every moment and sent to the central processor, and finally the charging time and also the charging rate of each EV are determined. It is necessary to implement smart methods, the existence of telecommunication, measurement, and processing infrastructures, the construction of which has very high costs. In this article, a Semi-Smart Method (SSM) is presented in which a simple, practical, and inexpensive automation system is used to charge electric vehicles in times of network shortage. Using the proposed method, on the one hand, will reduce the negative impact of charging EVs during peak times on network parameters, and on the other hand, it will make the owner of an EV pay the cost of charging his car based on the off-peak tariff. To accurately model, the DN and arrive at the real answers, the network is considered as an unbalanced 4-wire network and the effect of the presence of the neutral wire in the charging of EVs is investigated. This is a topic that has been addressed in few articles. In better words, in the research carried out in this field, usually, the DN is either considered balanced or if it is unbalanced, the cable is not modeled. The arrival and departure time of the car, the initial charge, and the capacity of the car's battery are taken into account, and the Monte Carlo method and appropriate probability functions are used to model them. 
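To illustrate this Monte Carlo step, the sketch below draws the uncertain EV parameters from simple truncated distributions. The distribution shapes and numerical values are placeholders chosen only for the example; the probability functions actually used in the study are those referred to later as Table II.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ev_fleet(n_ev):
    """Illustrative Monte Carlo draw of the uncertain EV parameters.
    All distributions and bounds below are placeholder assumptions."""
    arrival = np.clip(rng.normal(18.0, 1.5, n_ev), 14.0, 23.0)   # arrival hour at home
    departure = np.clip(rng.normal(7.5, 1.0, n_ev), 5.0, 10.0)   # next-morning departure hour
    isoc = np.clip(rng.normal(0.5, 0.15, n_ev), 0.1, 0.9)        # initial state of charge
    batt_kwh = rng.choice([16.0, 19.0, 24.0, 26.0, 30.0], n_ev)  # battery capacity (kWh)
    return arrival, departure, isoc, batt_kwh
```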
To check the efficiency of the presented method, a sample DN is used and the proposed SSM charging is compared with the controlled charging method and some other charging strategies. In each of the DN, different parameters of the network including distribution network losses, voltage profile, maximum voltage drop, and also the amount of neutral wire's voltage in different buses of the network are calculated. The obtained results confirm the effectiveness of the proposed method. Therefore, the innovations of this article are: 1. Providing a SSM for charging EVs that has the following advantages: * Inexpensive, without the need for various measurement, communication, and processing equipment, and infrastructures. * Prevention of the adverse effect of uncontrolled charging of EVs during peak load times on network parameters. * Minimizing the cost of charging electric cars. 2. Considering an unbalanced 3-phase 4-wire DN to observe the effect of the presence of the neutral wire and arrive at real answers. ## II Distribution network modeling DNs usually have a ring structure that is used radially. The current consumption in the DN is single-phase and 3-phase, and due to the different behavior of existing subscribers, the electric load of the network is generally unbalanced. As long as the network load is balanced, no current will pass through the loom wire. But in the unbalanced state, the current of the neutral wire is opposite to zero, and this issue causes the difference of the neutral voltage in different buses of the DN. Since a load of DN is unbalanced, it should be considered as a quadrilateral for accurate modeling of this network and to reach real answers. Figure 1 shows the equivalent circuit related to two consecutive buses in a 4-wire radial DN. According to this figure, the equations related to the voltage drop in the phase lines (ph) and neutral (n) of two consecutive buses i and j at time t can be considered in the following form. \[\left\{\begin{aligned} & V_{i,ph,t}-V_{j,ph,t}=Z_{ij,ph}I_{ij,ph,t}\\ &\forall i,j\in B\quad ph\in PH\quad t\in T\end{aligned}\right. \tag{1}\] \[\left\{\begin{aligned} & V_{i,n,t}-V_{j,n,t}=Z_{ij,n}I_{ij,n,t}\\ &\forall i,j\in B\quad t\in T\end{aligned}\right. \tag{2}\] Also, the equations of the phase current and the voltage related to the bus in time will be as follows \[\left\{\begin{aligned} &\sum_{\begin{subarray}{c}k=1\\ k<i\end{subarray}}^{N}I_{k,ph,t}-I_{i,ph,t}^{Load}=\sum_{\begin{subarray}{c}j =1\\ j>i\end{subarray}}^{N}I_{l,j,ph,t}\\ &\forall i,j,k\in B\quad ij,k\in L\quad ph\in PH\quad t\in T\\ &\left\{\begin{aligned} &\sum_{\begin{subarray}{c}k=1\\ k<i\end{subarray}}^{N}I_{k,n,t}+\sum_{ph}I_{i,ph,t}^{Load}=\sum_{ \begin{subarray}{c}j=1\\ j>i\end{subarray}}^{N}I_{l,j,n,t}\\ &\forall i,j,k\in B\quad ij,k\in L\quad ph\in PH\quad t\in T\\ &\left\{\begin{aligned} &\sum_{\begin{subarray}{c}k=1\\ k<i\end{subarray}}^{N}I_{l,j,ph,t}+I_{l,j,n,t}=0\\,&\forall i,j,k\in B\quad ij\in L\quad ph\in PH\quad t\in T\end{aligned} \right.\end{aligned}\right. \tag{4}\] The amounts of the active power, and the reactive power in each node are determined based on the voltage and current of the same node using the following relations: \[\left\{\begin{aligned} & P_{i,ph,t}^{Load}+jQ_{i,ph,t}^{Load}=(V_{i,ph,n,t})(I_{i,ph,t}^{Load})^{*}\\ &,\forall i,j\in B\quad ph\in PH\quad t\in T\\ &\left\{\begin{aligned} & V_{i,ph,n,t}=V_{i,ph,t}-V_{i,n,t}\\ &,\forall i\in B\quad ph\in PH\quad t\in T\end{aligned} \right.\end{aligned}\right. 
\tag{7}\] One of the conventional methods for solving the equations of load distribution in 4-wire radial DNs is to use the backward-forward method. ## III Modeling of electrical loads of the distribution network In this article, electrical loads have been divided into two categories: ordinary household loads and loads related to EVs, and the modeling of each of them will be discussed in the continuation of this section. ### _Modeling of ordinary household loads_ In this article, household loads are of single-phase type and have known active and reactive power. In this way, we will face an unbalanced distribution network in which the available loads have uncertainty. To determine the load of each of the subscribers, a basic load curve similar to Figure 2 is considered. For this purpose, the value of the load curve in each time period has been considered as the average value of the normal distribution function and the standard deviation equal to 20%. Fig. 1: The equivalent circuit related to two consecutive buses in a 4-wire radial DN LV. Fig. 2: Base load curve for ordinary household loads ### _Modeling of EVs_ Another type of loads considered in this article are EVs, which are considered among controllable electric loads. Electric cars, like ordinary household goods, have various uncertainties, including: battery capacity, time of entering the house, amount of initial charge and time of leaving the house. In this article, for the modeling of the above parameters, the same method is used to determine the curve of the household load of the subscribers. with the difference that instead of the normal distribution function, appropriate probability distribution functions have been used for each of the above parameters based on the information of the available articles in this field. Another important point in the modeling of EVs is determining their power consumption when charging. Usually, two types of slow and fast chargers are used to charge EVs. In the slow charging method, which is usually done in homes, the current drawn by the EV is equal to 16 A or 32 A under a voltage of 220 V, which is equivalent to power. 3.5 kW and 7 kW. In fast charging, the consumption power of the EV increases up to 50 kW or higher, which is usually used in special parking lots for charging EVs. In addition, in order to reach the optimal values of various parameters of the smart DN, the charging power rate can also be controlled. But in this article, it is assumed that the network lacks special communication and control infrastructures, and for this reason, its value is 3.5 kW. Considering that EVs use only active power, \(P_{l,p,h,t}^{Load}\) and \(Q_{l,p,h,t}^{Load}\) in (6) can be considered as the following relations: \[\left\{\begin{aligned} & P_{l,p,h,t}^{Load}=P_{l,p,h,t}^{Conv}+P_{l,p,h,t}^{ PHEV}\\ &\forall i\in B\quad ph\in PH\quad t\in T\end{aligned}\right. \tag{8}\] \[\left\{\begin{aligned} & Q_{l,p,h,t}^{Load}=Q_{l,p,h,t}^{Conv}\\ &\forall i\in B\quad ph\in PH\quad t\in T\end{aligned}\right. \tag{9}\] When each car arrives at home and knowing the initial charge of the car battery, battery capacity, and charging power, the time needed to fully charge the EV can be obtained from the relationship below. \[\left\{\begin{aligned} & TCT_{l,ph}=\frac{BC_{l,ph}\times(0.95-ISOC_{l, ph})}{ChP}\\ &,\forall i\in B\quad ph\in PH\end{aligned}\right. \tag{10}\] It should be mentioned that usually, to prevent battery damage and reduce its useful life, the battery is charged up to 95% of its capacity. 
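Before turning to the charging schedule itself, the backward-forward method mentioned at the beginning of this section can be made concrete. The sketch below solves Eqs. (1)-(7) on a radial 4-wire feeder; the data layout, the bus-numbering assumption, and the 220 V default are illustrative choices, not details taken from the paper.

```python
import numpy as np

def backward_forward_sweep(parent, Z_ph, Z_n, S_load, V_slack=220.0,
                           max_iter=50, tol=1e-8):
    """Minimal sketch of a backward-forward sweep for a radial 4-wire feeder,
    consistent with Eqs. (1)-(7). Assumes every bus has a lower-numbered parent.

    parent : parent[i] is the upstream bus of bus i (parent[0] = -1 for the slack bus)
    Z_ph   : complex (n, 3) series impedances of the phase conductors of branch parent[i]->i
    Z_n    : complex (n,)   series impedances of the neutral conductor of the same branches
    S_load : complex (n, 3) apparent power P + jQ drawn at each bus and phase
    """
    n = len(parent)
    a = np.exp(-2j * np.pi / 3)
    V_ph = np.tile(V_slack * np.array([1.0, a, a**2]), (n, 1))   # phase voltages
    V_n = np.zeros(n, dtype=complex)                             # neutral voltages

    for _ in range(max_iter):
        # Load currents, Eq. (6): P + jQ = (V_ph - V_n) I*.
        I_load = np.conj(S_load / (V_ph - V_n[:, None]))
        # Backward sweep, Eqs. (3)-(5): accumulate branch currents towards the root.
        I_br_ph = I_load.copy()                 # row 0 (slack bus) is unused
        I_br_n = -I_load.sum(axis=1)
        for i in range(n - 1, 0, -1):
            p = parent[i]
            if p > 0:
                I_br_ph[p] += I_br_ph[i]
                I_br_n[p] += I_br_n[i]
        # Forward sweep, Eqs. (1)-(2): propagate voltage drops from the slack bus.
        V_old = V_ph.copy()
        for i in range(1, n):
            p = parent[i]
            V_ph[i] = V_ph[p] - Z_ph[i] * I_br_ph[i]
            V_n[i] = V_n[p] - Z_n[i] * I_br_n[i]
        if np.max(np.abs(V_ph - V_old)) < tol:
            break
    return V_ph, V_n, I_br_ph, I_br_n
```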
The main point after determining the time required to charge EVs, is determining the time to charge the car. In the next section, a SSM for EV charging will be presented. ## IV Semi-Smart charging of electric vehicles As mentioned in the previous sections, two methods are usually used to charge EVs, which are: (1) controlled or smart method; 2) uncontrolled or random method. In the uncontrolled method, every EV starts charging as soon as it enters the house and connects to the network. In this case, there is no need for any telecommunications infrastructure to control the charging of cars, and considering the random behavior of car owners, it is non-deterministic in nature. But in controlled mode, the goal is: "Determining the optimal charging time and power rate for charging EVs in the network." Specifically, in the smart method, the existence of communication platforms and smart measurement equipment and a central processing and control system is unavoidable. The method of semi-smart charging of EVs is presented, which does not require network smart and the use of expensive equipment on the one hand, and on the other hand, avoids the negative effects of uncontrolled charging of EVs during peak load times. As can be seen from Figure 2, the load curve of a typical household consumer can be divided into three areas: 1) low load, 2) intermediate load, and (3) high load or peak load. In order to avoid putting pressure on the DN during peak times, generally, electricity tariff is also divided into three regions according to Figure 3, which are: (1) green region (low tariff), (2) blue region (medium tariff) and (3) red region (high tariff). As it can be seen from Figure 3, charging of EVs in the low load area on the one hand will cause less pressure on the DN and as a result of improving the parameters of the DN, and on the other hand, it will have a lower charging cost for the car owners. This is when most car owners prefer to connect their car to the network as soon as they reache home so that the car is ready for the next morning with a full charge. Since the charging is not controlled, it puts a lot of pressure on the network and the use of the smart charging method is also expensive, in this article a SSM is presented in which, by using cheap equipment such as programmable key EVs are charged during times of network shortage. The working method is as follows: as soon as the car owner reaches home, he connects his car to the network through a programmable key, and the information related to the time of leaving the house, the initial charge, and it enters the battery capacity of its car as input information to the key. According to the input information and using (8), the time required to fully charge the EV's battery is calculated by the key processor and the EV charging time is determined in such a way that the end of it is the moment the car owner leaves the house in the morning. Be later the charging start time is calculated based on the time needed to charge the car and also the charging power rate of the car or the same 3.5 kW. 
In other words, the amount \(P_{l,p,h,t}^{Load}\) for all EVs is determined locally by programmable keys installed in the parking lots of homes and based on the input information of the car owner, using the below equations: \[\left\{\begin{aligned} & P_{l,p,h,t}^{Load}=ChP,Dep_{l,ph}-TCT_{l,ph} \leq t\leq Dep_{l,ph}\\ & 0,\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\ Simulation To check and evaluate the efficiency of the proposed method, a 380V three-phase 19-bus feeder was used according to figure 4, and the information related to the resistance and inductance of the phase lines is given in Table I. The specifications of the neutral conductor are also included. It is assumed that in each bus, three home consumers are connected to each of the phases, and therefore the total number of subscribers in the network is equal to 57. The studied period is a day and night which is divided into 15-minute intervals. The load curve of the household sharers has been determined according to the method presented in section (3-A), and the peak load and common power factor have been taken into account as 2 kW and 0.91 lead-phase, respectively. In order to assign EVs to network subscribers, the working method is in the order that the penetration coefficient of EVs is considered equal to 60%. In other words, 60% of household subscribers, that is, 34 out of 57 subscribers located in different phases, are the owners of EVs, which were randomly selected. To model the uncertainty in the parameters of EVs, the same method presented in section (3-A) is used for modeling household loads, with the difference that instead of the normal function, the probability distribution functions shown in Table II are used. Table III shows the locations of randomly selected EVs, and their various parameters based on the above explanations. As it can be seen from Table III, for example, in bus 1, only the users of phases a and b have electric cars, whose battery capacity is equal to 26 and 19 kwh, respectively. 
To observe the effect of the presence of EVs on the network parameters, the network is first simulated without any EVs, which is called the base case. The results of the proposed SSM for EV charging are also compared with three other charging strategies. Simulations are therefore performed in five different cases:

1. First case: DN without the presence of EVs.
2. Second case: DN with EVs, charged in an uncontrolled manner.
3. Third case: DN with EVs, charged using a timer switch; all cars are assumed to start charging at 24:00.
4. Fourth case: DN with EVs, charged using a timer switch together with DN zoning; the DN is divided into 3 regions and the cars of each region start charging from 23:00 onwards at one-hour intervals. The division into regions and the charging start time of each region are shown in Table IV.
5. Fifth case: DN with EVs, charged using a programmable key (the proposed SSM).

In all cases, the backward-forward method has been used to perform the load flow in each time period. The phase voltage of bus 1 is set to 220 V and its neutral voltage to zero.

Fig. 4: LV feeder for distribution of 19 buses of the DN

The first parameter used to compare the SSM with uncontrolled charging is the voltage profile along the feeder. Figures 5 to 7 show the phase-to-neutral voltages of the different phases at the worst bus for the five cases described above, where the worst bus is the bus with the largest voltage drop. The voltage values at the worst bus for the different cases are listed in Table V.
As it is clear from the results, the highest amount of voltage drop is related to the case where the cars start charging by using the timer switch and at a specific time, the same 24:00. This issue can be solved by dividing the network into three zones and allocating separate charging times to each zone, and it is observed that by using this method, the amount of voltage drop is improved compared to the uncontrolled charging method. It is also observed that in order to reach a better answer, a programmable key can be used and the amount of voltage drop can be brought to an acceptable value. The zero-voltage value is another parameter whose changes in the worst bus have been compared for different conditions and the results are shown in Figure 8. It is necessary to mention that the worst bus is meant here, the bus where the highest amount of zero voltage occurs. As expected, due to the load imbalance of the network, the current passing through the neutral wire is not zero, and as a result, the voltage of the Fig. 5: Voltage changes of phase (A) compared to zero in the worst bus Fig. 6: Voltage changes of phase (B) compared to zero in the worst bus Fig. 7: Voltage changes of phase (C) compared to zero in the worst bus neutral wire will also be zero in the different buses of the opposite network. Uncontrolled entry of EVs into the network causes the network to become more unbalanced and the amount of zero voltage to increase. However, it is observed that the amount of zero voltage is less in the case of using a programmable key than in other cases. To clarify, it should be noted that in the uncontrolled method, the charging of cars is uneven and happens randomly during the peak times of the network. In the methods related to the use of a timer key, there is also a kind of synchronization between the entry of large unbalanced loads into the network, which can increase the level of imbalance of the basic network. However, in the presented method, the charging of the cars is done asynchronously in times of network shortage, and this results in obtaining better results in comparison with other charging methods. Network losses are among other parameters that are very important in the distribution network. Figure 9 shows the amount of total wasted energy related to the first to fifth modes during day and night. As can be seen, the amount of wasted energy in the absence of EVs is equal to 192.803 kwh, which increases to 287.249 kwh when EVs enter the grid and their controlled charging. By using SSM and charging EVs during off-peak hours, the amount of network losses is reduced, so that with the use of the timer switch, the value is 271.949 kwh, and with the use of the timer switch and zoning of the network, it decreases to 265.830 kwh. This means that by using simple equipment and without network intelligence, the amount of network losses will be improved by 5.6% and 8% respectively by using the above methods. Finally, by using the programmable key, the amount of network losses reaches 256.240 kwh, which means a 12.1% reduction in network losses. It is observed that the semi-smart charging method, using a programmable key, improves the network parameters in the presence of EVs compared to the uncontrolled charging method, and makes the car charging costs for the owners as low as possible. It is necessary to mention that by generalizing the presented model, the ability to inject power from the Vehicle to the Grid (V2G) can also be added to it. This is a topic that is being researched. 
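For completeness, the loss figures discussed above amount to summing the resistive dissipation of every conductor over the day. A small sketch of that bookkeeping, assuming the branch currents from the load flow are available at every 15-minute step, could look as follows.

```python
import numpy as np

def daily_energy_losses(I_br_ph, I_br_n, R_ph, R_n, dt_h=0.25):
    """Sketch: total daily energy dissipated in the feeder (kWh).

    I_br_ph : complex array (n_steps, n_branch, 3) of phase branch currents
    I_br_n  : complex array (n_steps, n_branch) of neutral branch currents
    R_ph    : array (n_branch, 3) of phase conductor resistances (ohm)
    R_n     : array (n_branch,) of neutral conductor resistances (ohm)
    """
    p_ph = (np.abs(I_br_ph) ** 2 * R_ph).sum(axis=(1, 2))    # W in each time step
    p_n = (np.abs(I_br_n) ** 2 * R_n).sum(axis=1)
    return (p_ph + p_n).sum() * dt_h / 1000.0                # kWh over the day
```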
## VI Conclusion In this article, a SSM for charging EVs in the DN has been presented. Compared to smart charging methods, this method has a very small cost, and compared to the uncontrolled charging method, it improves the parameters of the DN in the presence of EVs. In this method, the charging rate of EVs is assumed to be fixed and the goal is to determine the charging interval of each car. To do this, a programmable key determines the EV charging interval such that it ends at the car's departure time on the following morning. Therefore, the car owner must enter the car's information as input to the programmable key. The efficiency of the presented method has been evaluated by applying it to the 19-bus DN and comparing its results with different uncontrolled charging methods. In each case, the main network parameters, including losses, voltage drop, and neutral voltage, have been calculated. The obtained results show that the proposed method improves the network losses by 12.1% and the network voltage drop by 3.3%. Importantly, the proposed method also minimizes the cost of charging EVs for their owners, because EV charging takes place during off-peak (low-load) hours, when the electricity tariff is lowest.

### List of symbols

* \(i, j, k\): Indices of network buses
* \(ph\): Phase index
* \(n\): Neutral-conductor index
* \(t\): Time-period index
* \(B\): Set of all network buses
* \(L\): Set of all network lines
* \(PH\): Set of all phases of the network \{a, b, c\}
* \(T\): Set of all time intervals
* \(BC_{i,ph}\): Battery capacity of the EV located at phase \(ph\) of bus \(i\)
* \(ChP\): Car charging power rate (3.5 kW)
* \(Dep_{i,ph}\): Next-morning departure time of the EV located at phase \(ph\) of bus \(i\)
* \(ISOC_{i,ph}\): Initial state of charge of the EV located at phase \(ph\) of bus \(i\)
* \(P_{i,ph,t}^{Conv}\): Active power of the ordinary household load at phase \(ph\) of bus \(i\) at time \(t\)
* \(Q_{i,ph,t}^{Conv}\): Reactive power of the ordinary household load at phase \(ph\) of bus \(i\) at time \(t\)
* \(Z_{ij,ph}\): Impedance of phase \(ph\) between bus \(i\) and bus \(j\)
* \(Z_{ij,n}\): Impedance of the neutral conductor between bus \(i\) and bus \(j\)
* \(I_{ij,ph,t}\): Current of phase \(ph\) between bus \(i\) and bus \(j\) at time \(t\)
* \(I_{ij,n,t}\): Current of the neutral conductor between bus \(i\) and bus \(j\) at time \(t\)
* \(I_{i,ph,t}^{Load}\): Load current drawn between phase \(ph\) and the neutral conductor at bus \(i\) at time \(t\)
* \(P_{i,ph,t}^{Load}\): Active power of the load between phase \(ph\) and the neutral conductor at bus \(i\) at time \(t\)
* \(Q_{i,ph,t}^{Load}\): Reactive power of the load between phase \(ph\) and the neutral conductor at bus \(i\) at time \(t\)
* \(TCT_{i,ph}\): Time required to charge the EV located at phase \(ph\) of bus \(i\)
* \(V_{i,ph,t}\): Voltage of phase \(ph\) of bus \(i\) at time \(t\)
* \(V_{i,n,t}\): Voltage of the neutral conductor of bus \(i\) at time \(t\)
* \(V_{i,ph,n,t}\): Potential difference between phase \(ph\) of bus \(i\) and the neutral conductor at time \(t\)
2305.11938
XTREME-UP: A User-Centric Scarce-Data Benchmark for Under-Represented Languages
Data scarcity is a crucial issue for the development of highly multilingual NLP systems. Yet for many under-represented languages (ULs) -- languages for which NLP re-search is particularly far behind in meeting user needs -- it is feasible to annotate small amounts of data. Motivated by this, we propose XTREME-UP, a benchmark defined by: its focus on the scarce-data scenario rather than zero-shot; its focus on user-centric tasks -- tasks with broad adoption by speakers of high-resource languages; and its focus on under-represented languages where this scarce-data scenario tends to be most realistic. XTREME-UP evaluates the capabilities of language models across 88 under-represented languages over 9 key user-centric technologies including ASR, OCR, MT, and information access tasks that are of general utility. We create new datasets for OCR, autocomplete, semantic parsing, and transliteration, and build on and refine existing datasets for other tasks. XTREME-UP provides methodology for evaluating many modeling scenarios including text-only, multi-modal (vision, audio, and text),supervised parameter tuning, and in-context learning. We evaluate commonly used models on the benchmark. We release all code and scripts to train and evaluate models
Sebastian Ruder, Jonathan H. Clark, Alexander Gutkin, Mihir Kale, Min Ma, Massimo Nicosia, Shruti Rijhwani, Parker Riley, Jean-Michel A. Sarr, Xinyi Wang, John Wieting, Nitish Gupta, Anna Katanova, Christo Kirov, Dana L. Dickinson, Brian Roark, Bidisha Samanta, Connie Tao, David I. Adelani, Vera Axelrod, Isaac Caswell, Colin Cherry, Dan Garrette, Reeve Ingle, Melvin Johnson, Dmitry Panteleev, Partha Talukdar
2023-05-19T18:00:03Z
http://arxiv.org/abs/2305.11938v2
# Xtreme-Up: A User-Centric Scarce-Data Benchmark ###### Abstract Data scarcity is a crucial issue for the development of highly multilingual NLP systems. Yet for many under-represented languages (ULs)--languages for which NLP research is particularly far behind in meeting user needs--it is feasible to annotate small amounts of data. Motivated by this, we propose Xtreme-Up, a benchmark defined by: its focus on the **scarce-data** scenario rather than zero-shot; its focus on **user-centric** tasks--tasks with broad adoption by speakers of high-resource languages; and its focus on **under-represented** languages where this scarce-data scenario tends to be most realistic. Xtreme-Up evaluates the capabilities of language models across 88 under-represented languages over 9 key user-centric technologies including ASR, OCR, MT, and information access tasks that are of general utility. We create new datasets for OCR, autocomplete, semantic parsing, and transliteration, and build on and refine existing datasets for other tasks. Xtreme-Up provides methodology for evaluating many modeling scenarios including text-only, multimodal (vision, audio, and text), supervised parameter tuning, and in-context learning.1 We evaluate commonly used models on the benchmark. We release all code and scripts to train and evaluate models.2 Footnote 1: While Xtreme-Up supports in-context learning, our results indicate that few-shot in-context learning is less effective than fine-tuning on 100s of examples for ULs. We advocate for comparing such approaches directly as the community explores Xtreme-Up. Footnote 2: [https://github.com/google-research/xtreme-up](https://github.com/google-research/xtreme-up) ## 1 Introduction The development of natural language processing (NLP) technology that serves most of world's languages is hindered by the stark lack of data for most languages Joshi et al. (2020). While there is increasing interest in developing datasets and models for under-represented languages (ULs), existing datasets are often informed by established research directions in the NLP community de Marneffe et al. (2021). While linguistic tasks such as syntactic parsing have become less practically relevant Glavas and Vulic (2021), other tasks such as news summarization or sentiment analysis are informed by the availability of data in high-resource language settings and may be less useful for speakers of ULs Varab and Schluter (2021); Muhammad et al. (2022). Impactful capabilities such as question answering or virtual assistants Asai et al. (2021), on the other hand, often depend on ancillary technologies such as language ID, data filtering, automatic speech recognition (ASR), or optical character recognition (OCR) that are typically under-performing or unavailable for ULs Caswell et al. (2020); Bapna et al. (2022); Kreutzer et al. (2022); Rijhwani et al. (2021); Khare et al. (2021). As a result, speakers of ULs will not be able to reap the benefits of such capabilities, even if the development of models is successful. In order to make progress on NLP for ULs, we should thus focus on building datasets and evaluating models on tasks that are most likely to benefit speakers of those languages.3 To this end, we propose Xtreme-Up (Under-Represented and User-Centric with Paucal4 Data), a benchmark focusing on evaluation of multilingual models on user-centric tasks in a few-shot setting. 
Footnote 3: Speakers of ULs have many different needs ranging from standard NLP technology to language documentation and revitalization Bird (2022). Our focus is on standardized, institutional, and contact languages including dialects and non-standard language varieties spoken by large speaker populations. Footnote 4: We borrow the term _paucal_—meaning few—from linguistics, to emphasize the scarce-data nature of Xtreme-Up. We focus on tasks that technology users encounter regularly in their daily lives: i) information access tasks, which represent generally useful NLP capabilities; and ii) input/output tasks that enable other technologies. We show the corresponding tasks and their role in typical interactions with language technology in Figure 1. Moving away from the standard cross-lingual zero-shot setting (Hu et al., 2020; Ruder et al., 2021), we introduce a standardized multilingual in-language fine-tuning setting based on the amount of data that can realistically be annotated or generated within 8h for a language. Our results highlight the limitations of current models on ULs, demonstrate the potential of language models (LMs) to improve user-centric applications, and show the benefit of byte-based approaches, among other findings. In this work, we contribute the first massively-multilingual few-example benchmark including: **a)** newly created data for QA, OCR, autocomplete, semantic parsing, and sentence-level transliteration; **b)** new task setups for named entity recognition (NER) enabling evaluation on natural--rather than tokenized--text; and for QA and retrieval providing a more interesting setting than the gold passage (GoldP) setup while offering a lower barrier-to-entry than the full TyDi QA Clark et al. (2020) or XOR (Asai et al., 2021) tasks; **c)** carefully-designed experimental setups, standardizing in-language fine-tuning and in-context learning and focusing on the information access scenario for ULs for ASR and MT; **d)** baseline results for all datasets on commonly-used subword and byte-based models. ## 2 Related Work Multilingual benchmarksSome studies employ highly multilingual individual datasets for the evaluation of multilingual models, including Universal Dependencies (de Marneffe et al., 2021) or XLSum (Hasan et al., 2021). At the same time, there is increasing work on datasets in ULs for a variety of applications (Niyongabo et al., 2020; Winata et al., 2023; Muhammad et al., 2023). Due to their rapidly growing capabilities, NLP models are increasingly evaluated on a suite of datasets. Existing multi-task multilingual benchmarks such as XTREME (Hu et al., 2020), XGLUE (Liang et al., 2020), and XTREME-R (Ruder et al., 2021) cover 20-50 mainly high-resource languages and prioritize tasks with available data, regardless of their utility to speakers. In contrast, XTreme-Up focuses on under-represented languages and user-centric tasks, creating new data for under-represented tasks and languages. Multilingual evaluationThe choice of the experimental setting and aggregation metric are important considerations in multilingual evaluation. Prior work focused on zero-shot cross-lingual transfer (Hu et al., 2020), which--despite being compelling from a scientific perspective (Artexe et al., 2020)--is less practically useful. While in-language fine-tuning has been explored before (Lauscher et al., 2020; Hedderich et al., 2020), XTreme-Up is the first to standardize the setting across tasks based on realistic annotation costs. 
Different frameworks aggregate performance in different ways across languages. Blasi et al. (2022) assess the utility of a task by weighting model performance based on the size of the speaker population while Khanuja et al. (2023) introduce the Gini coefficient to quantify performance disparity across languages. XTreme-Up opts for a simple average over ULs, emphasizing intuitiveness and accessibility of the results. ## 3 XTreme-Up ### Design Principles Xtreme-Up is motivated by the following design principles: Figure 1: The tasks in Xtreme-Up and their role in language technology. Left: enabling access to language technology; middle: facilitating information access as part of larger systems (question answering, information extraction, virtual assistants); right: making information accessible in the speaker’s language. Under-represented languagesWe follow the ontology of Joshi et al. (2020) in defining ULs based on available data. Specifically, we select languages in categories 1-3 (e.g., Amharic, Estonian, Kin-yarwanda) as under-represented, leaving categories 4-5 as high-resource languages (e.g., English, German, Hindi). We focus on tasks with existing data in ULs and tasks where we can efficiently collect such data at scale (see Appendix A for an overview of ULs in Xtreme-Up). User-centric tasksWe focus on widely adopted user-facing tasks benefiting speakers of high-resource languages. We further break these down into two major groups: **1)** input/output tasks; and **2)** information access tasks (see Figure 1). Scarce dataWe focus on a realistic scenario where a small amount of data is available in each UL. Mirroring reality, we do _not_ restrict the amount of training data available in high-resource languages, but rather provide only as many labeled training examples as can be annotated in a realistic amount of time for ULs (see Section 3.2). EfficiencyWe focus on massively multilingual evaluation settings that can still be run efficiently with a modest amount of compute. Text-centric, yet multi-modalWe focus on tasks that can be tackled using textual data alone and provide baseline systems that do so. We frame multi-modal tasks (OCR and ASR) so that natively multi-modal models can be evaluated fairly alongside text-only models. We accomplish this by releasing original audio, image, and text model inputs while also providing baseline system output that can be fed to second-stage text-only systems. We hope to see fully multi-modal models take up this challenge over the coming years. We provide an overview of the tasks in Xtreme-Up in Table 1. We discuss motivation and high-level information in the next section and provide more details for each task in Appendix B. ### How much data? To ensure a realistic amount of training data, we limit the training data in each task per language to the number of examples that can be annotated in 8 hours. We believe this reflects the real difficulty of annotating training and evaluation data for a very large number of languages. In this way, we design for the _task first_ and will let the research to develop technology that addresses these challenges follow. For each task, we estimate how long it takes to annotate a single example for a trained annotator.5 We base our estimates on prior work and our own annotation efforts.6 We show the data annotation time estimates in Table 1. 
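For concreteness, the per-language cap implied by this budget is just the 8-hour allotment divided by the per-example annotation cost listed in Table 1; at, say, 2 minutes per example this allows 240 examples per language. Below is a minimal sketch of the resulting sub-sampling step; the function names, the uniform random sampling, and the fixed seed are illustrative assumptions, not the exact procedure used to build the released splits.

```python
import random

def max_examples_per_language(minutes_per_example: float, budget_hours: float = 8.0) -> int:
    """Number of training examples that fit in the per-language annotation budget."""
    return int((budget_hours * 60) / minutes_per_example)

def subsample_training_set(examples: list, minutes_per_example: float, seed: int = 0) -> list:
    """Sub-sample one language's training data down to the 8-hour budget."""
    cap = max_examples_per_language(minutes_per_example)
    if len(examples) <= cap:
        return examples
    rng = random.Random(seed)
    return rng.sample(examples, cap)
```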
For tasks with larger training datasets, we sub-sample the available data accordingly. Table 1 shows the sub-sampled data sizes. We show an example instance of each task in Table 2.

Table 1: Overview of the tasks in Xtreme-Up, listing for each task the amount of training data (summed over high-resource languages and ULs, and averaged per UL), the validation and test data (summed across ULs), the number of ULs covered, the evaluation metric, and the estimated annotation cost in minutes per example.

### Input / Output Tasks

Automatic speech recognition (ASR; B.1) The goal of ASR is to transcribe speech into human-readable text. It thus serves as a fundamental step for enabling natural language understanding applications on speech input. In many scenarios, users may strongly prefer to speak rather than type, and so high-quality ASR is an enabling factor for such user interactions. We employ the FLEURS dataset (Conneau et al., 2023) consisting of recordings in 102 languages for sentences from FLORES-101 (Goyal et al., 2022), which were translated from English Wikipedia to 101 languages. We evaluate on the under-represented portion of the data, which covers 77 languages.

Optical character recognition (OCR; B.2) OCR, the process of converting text from images into machine-readable formats, is used in a wide range of applications, from extracting language data locked in paper books (Rijhwani et al., 2020) and imaging legal documents (Singh et al., 2012), to improving accessibility for people with low vision or blindness (Mowar et al., 2022). It is especially important for under-represented languages, where both training data and content that users may wish to access may not be abundant as digital text on the web. While most existing datasets focus on higher-resourced languages (Nayef et al., 2017; Rigaud et al., 2019), there has been recent interest in developing OCR for ULs. This includes the creation of a small dataset for endangered languages (Rijhwani et al., 2020) and a synthetic dataset for 60 languages (Ignat et al., 2022). We create a dataset that aims to fill the gaps and augment previous work in OCR for ULs by focusing on larger-scale, typologically-diverse, and user-centric data. Our dataset contains transcriptions for books in seven languages: Amharic (am), Bengali (bn), Kannada (kn), Myanmar (Burmese; my), Sanskrit (sa), Sinhala (si), and Swahili (sw). The books domain is the primary use-case for a large number of downstream users, but is one of the most challenging for OCR models (Rigaud et al., 2019). The dataset consists of transcriptions of entire pages and thus enables leveraging the full context understanding capabilities of large language models. To demonstrate these capabilities, we use the approach of "OCR post-correction": training language models to correct recognition errors in transcriptions from existing OCR systems (Hammarstrom et al., 2017; Rijhwani et al., 2020).
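As a concrete illustration of this post-correction setup, the baseline can be framed as plain text-to-text inference: the first-pass transcription from an off-the-shelf OCR system goes in, and the corrected page text comes out. The sketch below assumes a byte-level encoder-decoder that has already been fine-tuned on pairs of first-pass OCR output and gold page transcriptions; the checkpoint name and sequence length are placeholders rather than the exact configuration used for the baselines.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Frame OCR post-correction as text-to-text: noisy first-pass OCR in, corrected page text out.
tokenizer = AutoTokenizer.from_pretrained("google/byt5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/byt5-base")

def correct_page(first_pass_ocr: str, max_len: int = 1024) -> str:
    """Run post-correction on one page transcription produced by an existing OCR system."""
    inputs = tokenizer(first_pass_ocr, return_tensors="pt",
                       truncation=True, max_length=max_len)
    output_ids = model.generate(**inputs, max_length=max_len)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```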
Autocomplete (B.3)Autocomplete (or predictive text), i.e., predicting the rest of a word a user is typing, is a useful technology that speeds up human-computer interaction (Anson et al., 2006). As such, autocomplete has become a technology that users have come to expect and rely on for input in high-resource languages. The standard next word prediction task (Sundermeyer et al., 2012) does not accurately reflect this practical setting as it relies on predicting entire units (words, subwords, or characters); similarly, perplexity-based evaluation makes comparisons across segmentations and languages difficult (Mielke, 2019) while ignoring important threshold effects associated with the typical top-k predictions in a user interface (Tam and Wells, 2009). To fill this gap, we introduce a new autocomplete task that unifies character, subword, and token-level LM settings by focusing on a "word" as the predictive unit. Models are required to complete the next word based on a left context of \(N\) words and an optional character n-gram prefix. We use accuracy@3 for evaluation to reflect the requirement of displaying a limited number of candidates to the user. We process high-quality natural language data from Universal Dependencies (de Marneffe et al., 2021), which we deduplicate against mC4 (Xue et al., 2021), the most common multilingual pretraining corpus in order to test models predictive rather than memorization capabilities. Transliteration (B.4)Transliteration is the conversion of text between writing systems (Wellisch, 1978). Unlike translation, it does not change content but only script. For example, the Hindi sentence ("the thing is blue") might be written "vastu neela hai" in the Latin script (which is often called _romanization_).7Transliteration is important because it allows users to type in their preferred script (e.g., Latin script) even if it is different than their preferred display script (e.g. Devanagari) and is used internally by many machine translation systems to rewrite names from different scripts. Footnote 7: Informal romanization of this sort is very common, e.g., in South Asia, where languages are sometimes written by different communities in both Perso-Arabic and Brahmic scripts. We extend the Dakshina dataset Roark et al. (2020), which provides romanizations of Wikipedia sentences written in the native scripts of 12 South Asian languages. To this data, we added: a) romanizations of native script Wikipedia for one new language (Amharic); and b) transliteration to a third script (Shahmukhi) for one already covered language (Punjabi). The resulting task covers 13 languages from three language families. For all these languages transliteration occurs from the Latin script to the native script of the language, and vice versa and between Shahmukhi (Pexo-Arabic), Gurmukhi (Brahmic), and Latin for Punjabi, leading to a total of 30 transliteration directions. Machine translation (MT; App. B.5)MT is an important technology for users of ULs wishing to read text written in a different language. However, most current approaches require large amounts of parallel training data to achieve good performance, which are often not available for ULs Haddow et al. (2022). 
We focus on the information dissemination scenario where content from high-resource languages (including from tasks such as cross-lingual QA) is translated to enable information access by common users; as such, XTREME-UP includes translations from English into 93 languages, covering a wide range of high-resource and UL languages. Only 39 ULs are used for evaluation; the high-resource languages are included to allow for transfer learning.8 The dataset is adapted from FLORES-101 Goyal et al. (2022), repurposing half of the dataset's original development set as a training set. See SS6 for a detailed discussion of how we distinguish freely-available unsupervised data versus purpose-annotated supervised data in Xtreme-Up. Footnote 8: Our baseline results were trained only on the 39 UL pairs for efficiency. ### Information Access Tasks Question Answering (B.6)Question answering is an important capability that enables responding to natural language questions with answers found in text Kwiatkowski et al. (2019). We focus on the _information-seeking_ scenario where questions are asked (and therefore written by dataset annotators) without knowing the answer--it is the system's job to locate a suitable answer passage (if any); this is in contrast to the school-like reading comprehension scenario where questions are written while looking at text, which is guaranteed to contain the answer. Importantly, information-seeking question-answer pairs tend to exhibit less lexical and morphosyntactic overlap between the question and answer since they are written separately. We include two variants of the task: in the **language QA** task, both the question and passage are in the same language. In this task, original questions and passages are from the TyDi QA dataset Clark et al. (2020). In the **cross-language QA** task, the question is in the user's native language while the passage and answer are in a high-resource language having a large amount of available answer content (English). For this task, we use examples from TyDi XOR Asai et al. (2021) in 7 languages. We additionally collect new data in 23 new Indic languages for cross-lingual QA by professionally translating questions and answers from existing Indic languages in XOR QA. This methodology mitigates the issue of translating Western-centric English data to locales with different topical interests. Cross-lingual QA is especially important for ULs since they may lack plentiful in-language answer content on the web. In Xtreme-Up's QA task, a system is given a question, title, and a passage and must provide the answer--if any--or otherwise return that the question has "no answer" in the passage.9 To this end, we generalize the gold passage Clark et al. (2020) setting, augmenting it with negative examples. These negatives are obtained from (a) passages within the same article as a passage containing the answer and (b) question-answer pairs from the full TyDi QA dataset where no answer was found in the candidate Wikipedia article. The data is split into training, validation, and test splits in such a way to avoid deduplication and overlap of splits, even across our various QA tasks.10 Footnote 9: This format follows the SQuAD v2 setup Rajpurkar et al. (2018). Footnote 10: This turns out to be non-trivial given the different splits strategies across the various datasets and our decision to create a train, validation, and test set even where only a train and validation set were previously available for public download. 
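To make the generalized gold-passage format concrete, a text-to-text baseline can serialize each (question, title, passage) triple into a single input string and train the model to emit either the answer text or a designated no-answer marker for the negative passages. The field prefixes and marker string below are illustrative assumptions, not the exact serialization shipped with the benchmark.

```python
from typing import Optional

NO_ANSWER = "no answer"  # illustrative marker for negative (unanswerable) passages

def serialize_qa_input(question: str, title: str, passage: str) -> str:
    """Pack one QA instance into a single string for a text-to-text model."""
    return f"question: {question} title: {title} context: {passage}"

def serialize_qa_target(answer: Optional[str]) -> str:
    """Gold target: the answer text, or the no-answer marker when the passage has no answer."""
    return answer if answer else NO_ANSWER
```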
Retrieval for QA (B.6)Within the information-seeking QA scenario, the above core QA task assumes answer candidate passages as an input. In practice, a passage retrieval system for question-answering allows for the extraction of relevant text from a vast text corpus. The retrieved passages can then be used by a question-answering system to extract or generate an answer to the user's question. In Xtreme-Up, we separate retrieval into two distinct tasks, **in-language retrieval** and **cross-language retrieval**. For in-language retrieval, both the questions and passages are in the same language. The preparation of negatives, deduplication, and splits are identical to the QA task above. For validation and test, we create an index of 271k in-language passages (447k English passages for the cross-language task) making for a small enough index for efficient experimentation, while containing distractors that make for a challenging task, since these distractors are drawn from the same articles containing the target passages. #### Named entity recognition (NER; B.7) NER is an important capability for information access systems that users depend on with applications ranging from recognizing requests for entity lookups to performing information extraction to populate the knowledge graphs that handle those requests. NER is also a capability needed in spell-checking and localization systems [11]. Identifying entities in ULs poses challenges due to the use of different scripts, lack of capitalization, different numerical representations, etc. We build on MasakhaNER [2] and MasakhaNER 2.0 [2], two large NER datasets in African languages, which provide data in the standard CoNLL tokenized format [13]. In order to enable evaluation in a setting that is closer to the real world, we automatically map the annotated spans to the original raw text. The combined data with byte-level span annotations--termed MasakhaNER-X--covers 20 languages.12 Footnote 11: We emphasize the word _capability_ here since we recognize that stand-alone NER _systems_ may not be strictly necessary in the long run; however, the capability of recognizing and properly handling entities will remain. Footnote 12: We remove the Fon and Hausa subsets of MasakhaNER 2.0 due to quality issues in the annotated data. #### Semantic parsing (App. B.8) Semantic parsing is the task of mapping a natural language utterance to a logical form or a structured interpretation that can be executed by a system such as a virtual assistant. For example a user utterance can be classified into an intent and parsed into slots: "_wake me at 8 am_" would be mapped to the "_CreateAlarm_" intent and would have a single "time" slot with "_8 am_" as value. Then the assistant may use this interpretation to create an alarm at the specified time. While modern models are becoming very capable of responding to users' language inputs, we believe this task is especially timely as users will increasingly want to turn their interactions with assistants and chat-like dialog systems into actions on external systems, which require API calls; this capability is what the semantic parsing task evaluates. Recently, researchers published more multilingual semantic parsing datasets that focus on virtual assistant domains [11, 22, 23]. 
We extend a portion of an existing semantic parsing dataset to new languages targeting the following features: a) high-quality utterances produced by professional translators; b) a wide range of domains and intents; c) inclusion of different language families and some underrepresented languages; d) sentences with culturally relevant entities; and e) code-mixed sentences, i.e., multiple language within the same sentence--a common phenomenon in multilingual societies. We adapt the test split of MTOP13[11] with professional translators/annotators to the following 15 languages: Amharic, Belarusian, Bengali, Brazilian Portuguese, Finnish, German, Hausa, Hungarian, Japanese, Russian, Swahili, Tamil, Turkish, Yoruba, and Zulu. Together with the original MTOP languages, the new MTOP++ dataset covers a total of 20 languages. The data we collect, differently from MTOP, is localized (i.e., Western-centric entities are replaced with more culturally relevant entities for the target language), following recent trends in multilingual benchmarking [11, 12, 13]. Footnote 13: All the other datasets were not yet available at the start of the project and annotation tasks. Still, such datasets are not focused on ULs. We also extend MTOP to three widely spoken but under-represented Indic languages in a code-switching setting: Hindi-English, Bengali-English and Tamil-English. We automatically convert the test-split of MTOP to code-mixed utterances using PaLM [10] and run human verification on such utterances. ### Overall Evaluation For each task, we evaluate model performance by computing a task-specific score. We employ character-level metrics such as character error rate (CER) and character n-gram F-score (chrF; Popovic, 2015) rather than their word-level counterparts as they enable more fine-grained evaluation and are better suited to morphologically rich lan guages. We obtain a final score by averaging the scores of all tasks. For each task, we only average performance over ULs (discussed in SS3.1). For metrics such as character error rate (CER) where lower is better, we invert the scores before averaging scores across tasks. For mean reciprocal rank (MRR), which is in the 0.0-1.0 range, we renormalize it to the 0-100 range before averaging. While this scalar provides a quick overall impression of a system's quality across a broad range of tasks, it is not a substitute for analyzing performance on individual tasks, languages, or types of examples. ## 4 Experiments ### Experimental setting Multilingual fine-tuningIn contrast to prior benchmarks that focus on zero-shot cross-lingual transfer from English, Xtreme-Up focuses on the more realistic scenario of fine-tuning on a small amount of data in the target language. To make this scenario scalable in a massively multilingual setting, Xtreme-Up fine-tunes a single model on the combined training data across the available languages for each task. The data for each language is sub-sampled to emulate data sizes that can be realistically annotated within a reasonable time frame (see SS3.2). In-language in-context learningWe also provide a 5-shot in-context learning setting where a model is provided with an English instruction and 5 exemplars in the target language in order to evaluate the progress on few-shot learning with large models for ULs. We provide the instruction for each task in Appendix C.14 Footnote 14: The choice of prompt and exemplars can have a significant impact on performance (Zhao et al., 2021, 2021). 
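A minimal sketch of how such a five-shot prompt can be assembled from an English instruction and target-language exemplars follows; the "Input:"/"Output:" field labels and the overall layout are illustrative assumptions (the actual instructions are listed in Appendix C).

```python
from typing import Dict, List

def build_icl_prompt(instruction: str, exemplars: List[Dict[str, str]], query: str) -> str:
    """Assemble a 5-shot prompt: English instruction, 5 target-language exemplars, then the query."""
    lines = [instruction, ""]
    for ex in exemplars[:5]:
        lines.append(f"Input: {ex['input']}")
        lines.append(f"Output: {ex['output']}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)
```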
We provide a single instruction and set of exemplars per task and language for replicability and leave the search for better instructions and exemplars to future work. ### Baselines We provide results on a handful of baseline systems that have already been developed by the research community. Given that our focus in this paper is on the dataset and task setup rather than system building, we do not focus on offering novel modeling types nor do we exhaustively evaluate all possible models; rather we view these results as estimating a starting point from some well-known modeling approaches and seeding contributions from the broader research community.15 Footnote 15: Xtreme-Up offers a **public results tracker** for use in tracking the community’s progress on Xtreme-Up. We conceptualize these results not as a competition, but as offering insights about different models and their trade-offs, each justifying and explaining how it should be compared to the others and how it informs the research landscape. Submissions can be made via self-service git pull requests. Multilingual fine-tuning baselinesFor the main experimental setting of multilingual fine-tuning, we provide the following baselines: **mT5-base**(Xue et al., 2021) and a subword-based multilingual encoder-decoder model; **ByT5-base**(Xue et al., 2022), a byte-based multilingual encoder-decoder model. In-context learning baselineFor the in-context learning setting, we employ **Flan-PaLM**(Chung et al., 2022), an instruction-tuned version of PaLM (Chowdhery et al., 2022). We provide additional information on the baseline systems in Table 3. To offer baseline systems that allow experimentation with text-only models, we use upstream models to provide initial output for ASR and OCR, and present text-based baselines that use these as inputs. We expect these baselines to give way to fully multi-modal models as research progresses. These initial ASR and OCR outputs should be seen as part of a baseline system, _not_ part of the Xtreme-Up benchmark itself. For ASR, we augment the data with predictions of the state-of-the-art Maestro-U (Chen et al., 2023) and then use a downstream text model to improve the outputs Bassil and Alwani (2012). Similarly, for OCR, we use the off-the-shelf Google Vision OCR16 to get first-pass outputs, and train language models to improve them (Dong and Smith, 2018; Rijhwani et al., 2020). Footnote 16: [https://cloud.google.com/vision/docs/ocr](https://cloud.google.com/vision/docs/ocr) InfrastructureModels were trained using seqio and T5X (Roberts et al., 2022) on TPUs (Kumar et al., 2019; Pope et al., 2022). \begin{table} \begin{tabular}{l l c c c} \hline \hline Model & Eval & \# of & Vocab & \% of non-en \\ & setting & params & units & pre-train data \\ \hline mT5-Base & FT & 580M & Subwords & 94.3 \\ ByT5-Base & FT & 580M & Bytes & 94.3 \\ Flan-PaLM & ICL & 62B & Subwords & 22.0 \\ \hline \hline \end{tabular} \end{table} Table 3: Additional information on baseline models including the setting in which we evaluate them (fine-tuning vs in-context learning), their size, their vocabulary, and the fraction of non-English pre-training data. ### Results We show the baseline results in Table 4.17 Footnote 17: Detailed per-language results are available at [https://github.com/google-research/xtreme-up](https://github.com/google-research/xtreme-up). Byte-based models outperform subword-based on ULs.The byte-based ByT5 outperforms the subword-based mT5 across most tasks. 
Gains are particularly pronounced for tasks that require dealing with information on the character level such as autocomplete and transliteration and for predicting information on the word level such as for NER and semantic parsing. These results demonstrate that as we train and evaluate our models on under-represented languages, standard modeling choices such as subword representations fall short. In-context learning underperforms fine-tuning on limited data.The Flan-PaLM model generally performs worse than the models using fine-tuning, despite being much larger. Nevertheless, it achieves reasonable performance on machine translation, which is likely reflected in the pre-training data. On other tasks, however, it fails to reliably apply its English-centric knowledge to ULs. Despite fine-tuned models performing relatively well on NER, the in-context learning model is unable to consistently generalize to the task in a few-shot setting in under-represented languages. On semantic parsing, the model fails to generalize to the large number of domain-specific intents and slots using standard prompting in ULs.18 The autocomplete tasks in particular demonstrate the lack of robust cross-lingual information in the English-centric PaLM model: it struggles to complete a sentence given a character prefix and fails to reliably convert between different scripts in the same language. Xtreme-Up thus provides a strong challenge to test the generalization abilities of in-context learning methods to ULs. Footnote 18: We leave the exploration of multilingual adaptive prompting and dynamic exempler selection (Drozdović et al., 2023) methods to future work. There is a lot of headroom left to improve performance on ULs.Overall, across all tasks there is still a considerable amount of headroom left. For ASR, OCR and transliteration, around 10% of characters are still incorrectly predicted. On auto-complete, models only make the correct prediction in about one fourth of all cases. For MT, on average only about a third of n-grams in the hypothesis are also present in the reference, and vice versa. For QA and retrieval, there are large performance differences between in-language and cross-language settings and much headroom still left. On NER, models perform relatively well but are still far from perfect performance on the task. Finally, on semantic parsing models are only able to produce the correct output in around a third of all cases. ## 5 Analyses Lowest-performing languagesModels generally perform poorly on African languages. On transliteration, models perform relatively worst on the newly added Amharic language. 
On NER, which covers only African languages, performance is lowest for Amharic--likely due to its different script--and the extremely under-represented Ghomala'. Similarly, translation models underperform in Amharic and Yoruba. On ASR, the lowest-performing language is Yoruba, but models also struggle with other languages such as Gaelic, and with many Southeast Asian languages such as Lao, Khmer, and Burmese.

\begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline  & \multicolumn{5}{c}{Input \& Output Tasks} & \multicolumn{4}{c}{Information Access Tasks} \\  & ASR & OCR & Autocomplete & Transliteration & MT & QA & Retrieval & NER & Semantic Parsing \\  & CER\(\downarrow\) & CER\(\downarrow\) & Acc@3\(\uparrow\) & CER\(\downarrow\) & chrF\(\uparrow\) & F1\(\uparrow\) & MRR\(\uparrow\) & F1\(\uparrow\) & EM\(\uparrow\) \\ \hline \multicolumn{10}{l}{_Multilingual fine-tuning_} \\ mT5-Base & \(8.5\) & (11.1)\(\bigstar\) & \(12.7\) & \(37.6\) & \(22.5\) & 59.7 (74.9 / 44.6) & 0.23 (0.41 / 0.07) & \(74.0\) & \(21.8\) \\ ByT5-Base & \(8.2\) & (11.1)\(\bigstar\) & \(27.6\) & \(14.6\) & \(26.9\) & 71.4 (82.3 / 60.5) & 0.29 (0.45 / 0.18) & \(84.0\) & \(37.5\) \\ \hline \multicolumn{10}{l}{_In-context learning_} \\ Flan-PaLM-62B & \(23.2\) & — & \(0.0\)\({}^{\dagger}\) & \(77.4\) & \(32.1\) & 22.9 (20.9 / 24.9) & — & \(12.9\) & \(0.1\) \\ \hline \hline \end{tabular} \end{table} Table 4: Overall results of baselines across all Xtreme-Up v1.0 tasks for the test split. Scores on Xtreme-Up average over evaluation scores of _under-represented_ languages. QA and retrieval performance is the average of in-language and cross-language settings (indicated in brackets as in-language / cross-language). For OCR, we do not apply any additional models (mT5 nor ByT5) on top of the baseline OCR system; we show these results in parentheses. We do not attempt in-context learning (ICL) results for retrieval since ICL is typically only used for text-in, text-out use cases. \(\bigstar\) For OCR, we use the Google OCR API. \({}^{\dagger}\) For autocomplete, while we observe reasonable performance on English completions, we find that the model typically does a very poor job outside of English.

Task-specific observations ByT5 provides the best performance, while the size of the model does not seem to impact performance much. Several aspects of the data lead to higher error rates in transliteration: the model struggles with input in the Perso-Arabic script and with producing Latin-script output from a different source script. For autocomplete (see Appendix B.3), our analyses indicate that models perform better on text that uses the Latin script.

## 6 Recommendations

In this section, we make recommendations to researchers who plan to make use of this benchmark.

Use of splits Xtreme-Up offers a train, validation, and test split for each task. We recommend using the training split for learning the parameters of your model or as exemplars for in-context learning while iteratively checking your progress on the validation (i.e., development) split. The test split should _not_ be used for iterative evaluation of your models or other sorts of hill-climbing; instead, it should be reserved for reporting your results and comparing _after_ you have finished development on your models. Experiments that follow this customary scientific rigor should expect to show better generalization and less overfitting to the test split.
Use of additional pre-training dataOne potential confounder for results along different pre-trained models is the variation in pre-training data; where this data overlaps with the targets (outputs) in Xtreme-Up validation and test splits, results can be artificially inflated, providing a sense that results are better than they are in reality--if the validation or test data leaked into the pre-training data via contamination during large-scale data scraping, then it's unlikely that the system would truly perform as well for new unseen inputs. Therefore, we recommend that when researchers modify the pre-training data for a model, they explicitly report overlap (contamination) between the targets of the Xtreme-Up validation/test splits and their pre-training corpus.19 Footnote 19: We recognize that this is a very large-scale undertaking, requiring a fairly large amount of compute. As such, we suggest that it’s may only be needed when making claims that compare systems (e.g. that the system with possibly-contaminated pre-training data is equivalent, better, or almost as good as some other system). Note, this analysis only needs to be done once for each pre-training corpus (e.g., once for mC4) and it is very likely that organizations with enough compute to pre-train a new model on a new corpus would also have sufficient compute to calculate overlap. Use of additional supervised dataIt is entirely possible that the community will find creative ways to improve models based on supervised data not included with Xtreme-Up. However, researchers should bear in mind how this might affect the comparability of their results with other models. The following axes should be considered: 1. _Any_ additional data from high resource languages is always allowed in the Xtreme-Up setting. 2. Supervised data (e.g. parallel data for MT) harvested from the web, religious, books, and other opportunistic sources will typically be out-of-domain and is therefore admissible; conversely, supervised data from ULs from highly similar tasks or domains should generally be considered against the spirit of the Xtreme-Up benchmark. 3. Monolingual data from UL is admissible with the caveat that one should measure overlap with targets, as discussed above. Avoid off-the-shelf MT systemsData augmentation via automatically translating high-resource supervised data to languages with less supervised data has proven a very effective means of improving system quality. However, it is not necessarily realistic to use a pre-existing MT system (e.g. an API or an open-source model) since those systems have typically been trained on a large amount of parallel data--or at least unknown data. This means that additional supervised data would then be leaking into the experimental setup, which is otherwise intended to reflect the reality that most under-represented languages have very little supervised data. If data augmentation via translation is used, we encourage researchers to report the parallel data sources used and argue why this experimental setup is realistic--or to clearly point out such usage in their experiments as an unavoidable confound and discuss the limitations this sets on what conclusions can be drawn about how results will extrapolate to the breadth of under-represented languages. 
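As one simple, small-scale illustration of the overlap reporting recommended above, the fraction of validation/test targets that appear verbatim in a pre-training corpus can be measured directly; the sketch below ignores the sharding and approximate matching that a web-scale corpus such as mC4 would require, and the normalization choices are assumptions.

```python
import re
from typing import Iterable, List

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so trivial formatting differences don't hide overlap."""
    return re.sub(r"\s+", " ", text.lower()).strip()

def target_overlap_rate(targets: Iterable[str], pretraining_docs: Iterable[str]) -> float:
    """Fraction of eval targets that appear verbatim inside any pre-training document."""
    docs: List[str] = [normalize(d) for d in pretraining_docs]
    norm_targets = [normalize(t) for t in targets if t.strip()]
    hits = sum(any(t in d for d in docs) for t in norm_targets)
    return hits / max(len(norm_targets), 1)
```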
In all cases, researchers should rigorously report what additional data was used and how; each use case comes with its own considerations and, above all, researches should make a well-reasoned argument that their use of data (i) does not artificially inflate evaluation scores and (ii) reflects a real-world scenario of finding and applying data. ## 7 Conclusion We have presented Xtreme-Up, a multilingual benchmark distinguished by its being (i) scarce-data, (ii) user-centric, and (iii) focused on under-represented languages. The benchmark contains input modalities of text, images, and audio while still allowing experimentation with text-only models. We hope this benchmark will be useful in accelerating research that is useful to speakers of under-represented languages and in highlighting both the progress and limitations of current models of language. ## Acknowledgements We thank Slav Petrov, Jason Riesa, Raphael Hoffmann, Dipanjan Das, Clara Rivera, Chris Alberti, Michel Reid, and Timothy Dozat for helpful discussions and feedback. We are grateful to Noah Constant for a review of a draft of the paper. We also gratefully acknowledge the contributions of the researchers who built the datasets that have gone into Xtreme-Up; we recommend that all component datasets be cited individually when using Xtreme-Up in a paper such that dataset authors (many of whom are not authors of this article) receive credit for their work and so that those original sources remain easily discoverable in the literature. ## Contributions In this section, we provide more detail about the contributions of each author. ## General overview **Project leads**: Sebastian Ruder, Jonathan Clark **Primary contributors and task owners**: Alexander Gutkin, Mihir Kale, Min Ma, Massimo Nicosia, Shruti Rijhwani, Parker Riley, Jean-Michel Sarr, Xinyi Wang, John Wieting **Major contributors**: Nitish Gupta, Anna Katanova, Christo Kirov, Dana Dickinson, Brian Roark, Bidisha Samanta, Connie Tao **Supporting contributors**: David Adelani, Vera Axelrod, Isaac Caswell, Colin Cherry, Dan Garrette, Reeve Ingle, Melvin Johnson, Dmitry Panteleev, Partha Talukdar **By Task ASR**: Min Ma **Autocomplete**: Jean-Michel Sarr, Vera Axelrod, Colin Cherry, Sebastian Ruder, Jonathan Clark **MT**: Parker Riley, Isaac Caswell, Colin Cherry, Jonathan Clark **NER**: Sebastian Ruder, David Adelani, Dan Garrette **OCR**: Shruti Rijhwani, Dana Dickinson, Reeve Ingle, Dmitry Panteleev, Sebastian Ruder **QA**: Mihir Kale, John Wieting, Nitish Gupta, Partha Talukdar, Jonathan Clark **Retrieval**: John Wieting **Semantic parsing**: Massimo Nicosia, Bidisha Samanta, Partha Talukdar **Transliteration**: Alexander Gutkin, Anna Katanova, Christo Kirov, Brian Roark ## Appendix A By Contribution **Evaluation framework and public results tracker**: Xinyi Wang **Program management**: Connie Tao **Fine tuning and modeling**: Alexander Gutkin, Mihir Kale, Min Ma, Massimo Nicosia, Shruti Rijhwani, Parker Riley, Jean-Michel Sarr, John Wieting, Sebastian Ruder, Jonathan Clark **Data processing**: Alexander Gutkin, Mihir Kale, Min Ma, Massimo Nicosia, Shruti Rijhwani, Parker Riley, Jean-Michel Sarr, Xinyi Wang, John Wieting, David Adelani, Vera Axelrod, Isaac Caswell, Colin Cherry, Dan Garrette, Reeve Ingle, Dmitry Panteleev, Sebastian Ruder, Jonathan Clark **Data collection**: Massimo Nicosia, Bidisha Samanta, Nitish Gupta, Anna Katanova, Christo Kirov, Dana Dickinson, Brian Roark **In-context learning**: Sebastian Ruder **Benchmark design**: Jonathan Clark, Sebastian 
Ruder, Melvin Johnson
2301.05262
Neural Shadow Mapping
We present a neural extension of basic shadow mapping for fast, high quality hard and soft shadows. We compare favorably to fast pre-filtering shadow mapping, all while producing visual results on par with ray traced hard and soft shadows. We show that combining memory bandwidth-aware architecture specialization and careful temporal-window training leads to a fast, compact and easy-to-train neural shadowing method. Our technique is memory bandwidth conscious, eliminates the need for post-process temporal anti-aliasing or denoising, and supports scenes with dynamic view, emitters and geometry while remaining robust to unseen objects.
Sayantan Datta, Derek Nowrouzezahrai, Christoph Schied, Zhao Dong
2023-01-12T19:16:32Z
http://arxiv.org/abs/2301.05262v1
# Neural Shadow Mapping ###### Abstract We present a neural extension of basic shadow mapping for fast, high quality hard and soft shadows. We compare favorably to fast pre-filtering shadow mapping, all while producing visual results on par with ray traced hard and soft shadows. We show that combining memory bandwidth-aware architecture specialization and careful temporal-window training leads to a fast, compact and easy-to-train neural shadowing method. Our technique is memory bandwidth conscious, eliminates the need for post-process temporal anti-aliasing or denoising, and supports scenes with dynamic view, emitters and geometry while remaining robust to unseen objects. Shadow mapping, neural networks ## 1. Introduction
Shadows provide important geometric, depth and shading cues. Real-time hard and soft shadow rendering remains a challenge, especially on resource-limited systems. Pre-filtering based methods (Anneh et al., 2007; Donnelly and Lauritzen, 2006; Peters and Klein, 2015) are fast but approximate. They are prone to light leaking artifacts, reduced shadow contrast, and limited contact hardening. Interactive ray-tracing (Keller and Waechter, 2009) coupled with post-process denoising (Chaitanya et al., 2017; Schied et al., 2017) and upscaling (Xiao et al., 2020) can deliver high quality dynamic shadows, but even the fastest GPU ray-tracers fall short of the performance demands of interactive graphics. Low ray-tracing hardware adoption and the added engineering complexity of integrating GPU ray tracers into rasterization-based pipelines are further limitations. Pre-computation based methods (Mildenhall et al., 2020; Sloan et al., 2002) do not generally support dynamic objects or near-field light transport, and require significant memory. We propose a machine learning-based method that generates high quality hard and soft shadows for dynamic objects in real-time.
Our approach does not require ray-tracing hardware, has high performance (\(<\) 6ms), requires little memory (\(<\) 1.5MBs), and is easy to deploy on commodity low-end GPU hardware. We use the output of "vanilla" rasterization-based shadow mapping (i.e., no cascades, etc.) to hallucinate temporally-stable hard and soft shadows. We design a compact neural architecture based on the statistics of penumbra sizes in a diversity of scenes. The network admits rapid training and generalizes to unseen dynamic objects. We demonstrate improved quality over state of the art in high-performance pre-filtering based methods while retaining support for dynamic scenes and approaching reference-quality results. We show that careful feature engineering, application and memory aware architecture design, combined with a novel temporal stability loss results in a system with many favorable properties: apart from compactness and high-performance, our output precludes the need for post-process temporal anti-aliasing (TAA), further reducing the renderer's bandwidth requirements. We parameterize our network by emitter size, allowing us to encode both hard and soft shadow variation into a single trained net. We demonstrate its effectiveness on several scenes with dynamic geometry, camera, and emitters. Our results are consistently better than workhorse interactive methods, and they also rival much slower (and more demanding, system- and hardware-wise) interactive ray-tracing and denoising-based pipelines. Finally, we discuss scene dependent optimizations that further reduce our network size and runtime. ## 2. Related Work Shadow mapping (Williams, 1978) and its variants are efficient shadowing methods for point and directional lights in dynamic scenes. Shadow map resolution and projection leads to shadow aliasing artifacts, with solutions (e.g., depth biasing (Dou et al., 2014; King, 2004)) leading to secondary issues and trade-offs. Modern shadow mapping relies on delicately engineered systems that combine many _cascaded maps_(Engel, 2006; Zhang et al., 2006). Here, we refer readers to a comprehensive survey (Eisemann et al., 2011). Filtering-based methods prefilter (in emitter-space) depth-based visibility to reduce aliasing. One simple such method weights nearby depth samples (Reeves et al., 1987); this _percentage closer filtering_ remains a commonly used technique in interactive applications, with a recent variant that modulates the filter size based on the relative blocker and receiver positions is used to approximate soft shadows (Fernando, 2005). More recently, a new class of filtering methods replace binary depth samples with statistical proxies, allowing for more sophisticated pre-filtering (Anneh et al., 2007, 2008; Donnelly and Lauritzen, 2006) and coarse approximations of soft shadows. Moment shadow maps (Peeters and Klein, 2015) are the state of the art of these methods, but it can suffer from banding, aliasing, light leaking in scenes with high depth complexity. Screen-space methods treat G-buffers, including screen-projected shadow map data, leveraging image-space locality and GPU parallelization for efficient filtering in a deferred shading pipeline. Here, accurate filtering here requires the determination of an potentially-anisotropic filter kernel (due to perspective distortion), and so depends non-linearly on the viewing angle (Zheng and Saito, 2011) and pixel depths (MohammadBagher et al., 2010). 
Our method similarly treats image-space G-buffer data, but we instead _learn_ compositional filters from data. High-fidelity soft shadows also benefit from occluder depth estimates from both the emitter and shade point, of which only the former is readily available from the shadow map and the latter can be approximated using min- (2018) or average-filtering (2010) of the projected shadow map. Again, we rely on learning compositions of convolution and pooling layers to model (the effects of) these depth estimates. Ray tracing hardware opens up an exciting new avenue for dynamic hard and soft shadows. These methods, however, remain power-inefficient and typically require post-process denoising (traditional (Schied et al., 2017) or machine learning-based (Chaitanya et al., 2017; Munkberg and Hasselgren, 2020)) and TAA (Edelsten et al., 2019; Xiao et al., 2020) to attain modest interactivity.

## 3. Overview

Overall, our approach is straightforward; we generate a set of screen-space buffers using a G-buffer and a shadow mapping pass before passing them as inputs to our network. The output of the network is compared against ray-traced shadows as the target during training. Although straightforward, simply using a UNet (Nalbach et al., 2016; Ronneberger et al., 2015) without our proposed training and optimization methodology yields a network that is temporally unstable, bandwidth limited, heavy (>25MBs) and too slow (>100ms) for real-time use. As such, our methodology is focused on making conscious choices to preserve memory bandwidth while having minimal impact on quality. We train our network using screen-space buffers as features and corresponding ray-traced shadows as targets. The approach allows for easy integration into the rendering pipeline while providing room for integration (possible future work) into supersampling (Xiao et al., 2020) and neural-shading (Nalbach et al., 2016). Our approach is also suitable for tiled rendering, which is popular among mobile devices. We develop a methodology to select a compact set of features that preserve necessary information without increasing memory bandwidth. We then show a simple technique to encode shadows with variable emitter size in a single network. We design a loss function to enhance the temporal stability of the network without using historical buffers, thus further reducing bandwidth requirements. We provide several network architecture optimizations aimed at reducing memory and compute requirements. While our general architecture supports flexible emitter sizes, we show a recipe to further optimize our network for a fixed emitter size, enabling further trimming of network layers.

Figure 2. Visualizing supervised learning pairs. The network inputs are the rasterization buffers modulated by the size of the emitter (\(r_{e}\)). The targets are generated using ray-tracing according to the corresponding emitter size.

### Supervised training pairs

We use supervised learning to train our neural network. The training examples are generated using rasterization and ray-tracing for features and targets, respectively. The rasterization pipeline includes a G-buffer pass followed by a shadow mapping pass.
Together they generate the following screen-space buffers: * view-space depth \(d\) and normal \(\mathbf{n}\), * emitter-to-occluder depth \(z\) and emitter-space normal \(\mathbf{n_{e}}\), * pixel-to-emitter distance \(z_{f}\), the emitter radius (size) \(r_{e}\) for spherical sources, and * dot products \(\{c_{e},c_{v}\}\) of \(\mathbf{n}\) with the emitter direction and \(\mathbf{n}\) with the viewing direction. The ray-tracing pass generates converged images of hard and soft shadows using brute-force Monte-Carlo sampling and a mild Gaussian filter. An 8x multi-sample anti-aliasing (MSAA) pass is also applied to the ray-traced targets. We do not, however, use MSAA in the rasterization pipeline, as we expect the network to implicitly learn anti-aliasing from the target images. #### 3.1.1. Softness control We train a single network to predict a range of shadows with varying softness. Note that the same input from the rasterization is used to generate both hard and soft shadows. The softness is controlled (on a continuous scale) using a scalar parameter indicating the size of the emitter. For training, the emitter sizes are encoded as integer textures between 0 and 4, where 0 indicates a point light and 4 indicates the largest emitter size (diameter 50 cm). Rather than passing the scalar values to the network as an additional constant screen-space buffer, a more bandwidth-efficient approach is to add the scalar as a dc-offset to an already existing buffer. We choose the cosine texture (\(c_{e}\)) to add the emitter-size (\(r_{e}\)) dc-offset to. The network targets are also changed corresponding to the selected emitter size. See figure 2. During inference, the network accepts a scalar between 0 and 4 and intrinsically interpolates across the discrete emitter values the network is trained with. ### Feature selection While dumping the entire content of the rasterization pass through a network works, it is bandwidth inefficient and adds a 2.5ms penalty to the cost of evaluating the network. This inefficient technique involves adding a feature extraction network, cascaded before the main network, and training the two networks end to end. The feature extraction network, consisting of several layers of 1x1 convolutions, compresses all 15 channels of rasterizer output down to 4. A 1x1 convolution layer acts as a fully connected layer across channels without performing any convolution across pixels. We tested a 2-layer deep 1x1 convolution network and recorded an overall error of \(6.64\times 10^{-3}\) across a suite of scenes involving hard and soft shadows. Our approach eliminates the need for a feature extraction network by systematic evaluation and selection of the rasterization output buffers. We first introduce the notion of _sensitivity_, a metric we use to quantify the importance of a feature. _Sensitivity_ measures the change in the network output due to a small perturbation in the input. Intuitively, sensitivity is lower if a channel's contribution in explaining the output variation is lower. Absolute sensitivity \(S_{i}\) for the \(i^{th}\) input channel \(f_{i}\) is given by \[S_{i}=\mathbb{E}\left[\frac{(\phi(f_{i}+\epsilon_{i})-\phi(f_{i}))}{0.1\sigma_{i}}\right],\ \ \ \epsilon_{i}\sim\mathcal{N}(0,0.1\sigma_{i}) \tag{1}\] where \(\phi\) is the network and the random perturbation texture \(\epsilon_{i}\) is obtained by sampling a normal distribution. The standard deviation \(\sigma_{i}\) corresponding to the \(i^{th}\) channel is empirically estimated by aggregating all pixels in the dataset for that channel.
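In practice, the expectation in equation (1) is estimated by Monte-Carlo sampling. A minimal PyTorch sketch, assuming \(\phi\) is an `nn.Module` and the buffers are stacked as a `(B, C, H, W)` tensor (both assumptions made for illustration, not the paper's actual implementation), is:

```python
import torch

def channel_sensitivity(phi, x, sigma, n_rounds=8):
    """Monte-Carlo estimate of per-channel (absolute) sensitivity, Eq. (1).

    phi   : trained network (nn.Module), evaluated without gradients
    x     : (B, C, H, W) batch of rasterized input buffers
    sigma : (C,) empirical per-channel standard deviations over the dataset
    """
    phi.eval()
    n_channels = x.shape[1]
    sens = torch.zeros(n_channels)
    with torch.no_grad():
        y0 = phi(x)
        for _ in range(n_rounds):
            for i in range(n_channels):
                eps = torch.randn_like(x[:, i]) * 0.1 * sigma[i]
                xp = x.clone()
                xp[:, i] += eps
                # mean output change, normalized by the perturbation scale;
                # abs() keeps the estimate positive ("absolute sensitivity")
                sens[i] += (phi(xp) - y0).abs().mean() / (0.1 * sigma[i])
    return sens / n_rounds

def relative_sensitivity(sens):
    """Relative sensitivity s_i = S_i / sum_j S_j used for feature rejection."""
    return sens / sens.sum()
```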
The formula is repeated several times to reduce sampling noise. To compare the sensitivities across different training instances, we compute **relative sensitivity** as \(s_{i}=S_{i}/\sum_{i}S_{i}\). Armed with relative sensitivity as our yardstick, our problem is thus selecting a subset of features from a set of features \(U=\left\{d,\mathbf{n},z,\mathbf{n_{e}},z_{f},c_{e},c_{e}\right\}+\left\{z-z_ {f},z/z_{f},c_{e}/d,\mathbf{n}\cdot\mathbf{n_{e}}\right\}\). The first set of buffers are obtained directly from the rasterization while the second set is a composition from the first set. We take a tiered approach for selecting the best features. In the first pass, we train our network with all buffers in set \(U\) and reject buffers with low relative sensitivity. We repeat the process until all buffers have sensitivity higher than 1.5%. Refer to supplemental1 material, section 1.0.2 for more Figure 3. Relative sensitivity of the selected features for various scenes. details. Our final set of buffers is as shown in figure 3. We obtain nearly the same error (\(6.67\times 10^{-3}\)) as having a feature extraction while saving an extra 2.5ms. ### Loss function and temporal stability The loss function plays two main role in defining the characteristics of our network. It shapes the network to better fit the hard edges for shadow silhouette and geometry, essentially performing post-process anti-aliasing. Second, our loss function improves temporal stability without using any historical buffers for training and inference. Our approach not only saves memory bandwidth but also enables easier integration into tiled renderers. We achieve the first objective using a weighted combination of per-pixel difference and VGG-19 (Vgg et al., 2015) perceptual loss. The effect of VGG loss on the final output is shown in figure 4 and clearly shows the anti-aliasing effect of VGG-19 on hard edges. Existing methods typically improves temporal stability using historical buffers to better support the network during inference (Edelsten et al., 2019; Xiao et al., 2020) and also to reshape the loss function (Holden et al., 2019; Kaplanyan et al., 2019) during training. In our case, we do not use historical buffers but use random perturbations of the input buffers for reshaping the loss landscape during training. Temporal instabilities arise due to shadow-map aliasing, where shadow map levels do not align one-to-one with screen pixels. As such small, movement in camera or emitter can cause large changes in depth comparisons, especially around shadow silhouettes. Inspired from noise-to-noise training (Lehtinen et al., 2018), we train our network to learn from pairs of noisy inputs, in addition to the traditional supervised learning pair. Our network intrinsically learns to damp sharp changes due to small perturbations with minimal impact on overall quality as shown in figure 5. At each training iteration, we perturb the camera and emitter position by a small value proportional to the distance from the scene and size of emitter. For each perturbation, we collect the input buffers for training. The target is evaluated for only one of the perturbations. We evaluate the network on each perturbations and minimize the differences in the perturbed outputs as an additional loss function. 
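A minimal training-time sketch of this perturbation term, assuming a PyTorch setup and a `base_loss` callable that combines the per-pixel and VGG-19 terms above (the names and structure here are illustrative, not the reference implementation):

```python
import torch

def perturbation_training_loss(net, base_loss, x0_buffers, perturbed_buffers, target):
    """Supervised loss plus perturbation-based temporal regularization.

    x0_buffers        : input buffers for the unperturbed camera/emitter
    perturbed_buffers : list of input buffers rendered under small random
                        camera/emitter perturbations of the same frame
    base_loss         : e.g. alpha * L1 + (1 - alpha) * VGG-19 perceptual loss
    """
    y0 = net(x0_buffers)              # gradients flow through this pass only
    loss = base_loss(y0, target)      # supervised term against the ray-traced target
    for xb in perturbed_buffers:
        with torch.no_grad():         # same weights, no backpropagation through y_i
            yi = net(xb)
        loss = loss + base_loss(y0, yi)   # consistency between perturbed outputs
    return loss
```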
All network instances evaluating the perturbed inputs share the same weights while backpropagation is only enabled through one instance, as \[\mathcal{L}=L(x_{0},\tilde{x})+\sum_{i=1}^{p}L(x_{0},x_{i}), \tag{2}\] where \(L(y,\tilde{y})=\alpha\cdot|y-\tilde{y}|+(1-\alpha)\cdot VGG19(y,\tilde{y})\), and \(x_{i}\) and \(\tilde{x}\) are the network outputs and target. Only one network output-\(x_{0}\) has backpropagation enabled through it. We set \(\alpha=0.9\) and the number of perturbations \(p=3\). ### Temporal stability measurement Several techniques exist for measuring perceptual similarity with respect to a single reference frame (Andersson et al., 2020; Wang et al., 2003) or reference video (Mantiuk et al., 2021). These techniques measure the spatio-temporal difference which indicates the overall reconstruction quality across screen-space and time. Since we sacrifice spatial quality for temporal stability, using these metric may not indicate a reduction in temporal instability due perturbation loss, even when there is a clear visual improvement in temporal stability. Thus we formulate our own metric to measure only the temporal changes without considering spatial similarity with reference. To measure flickering, we find the motion-vector adjusted per-pixel temporal difference (Yang et al., 2020). Since flickering can be quantified as an abrupt change in the pixel intensities between frames, we penalize the large differences more by passing the temporal pixel difference through an exponential. We aggregate the Figure 4. Effect of VGG loss on the final output. VGG loss produces sharper edges for geometry and shadow silhouettes. Figure 5. Comparing the temporal and spatial effect of perturbation loss. The application of perturbation loss reduces temporal instability while causing an increase in spatial blur as shown in the cutouts. We measure temporal instability by comparing the network output between consecutive frames while we measure spatial error by comparing the network output with reference. Temporal instability and spatial errors are represented using false colors purple/gold and red/blue respectively. result across all pixels and frames, with \[E=\frac{1}{P}\sum_{P:I}\left\{\exp(\alpha D_{t}(p))-1\right\}\, \tag{3}\] where \(D_{t}(p)=|I_{t}(p)-I_{t-1}(m(p))|\) is a per-pixel difference between two consecutive frames at time \(t\) and \(m(p)\) abstracts away the motion-vector adjusted lookup at pixel \(p\) in the previous frame. We set \(\alpha=3\), which controls the penalty for large changes in intensity through time, and the normalizing factor \(P\) is the total number of pixels. We reject pixels that fail depth and normal comparison with its reprojection. Figure 6 contrasts the effect on temporal stability between our loss and TAA (1 last frame): the improvement in temporal stability with our perturbation loss is strongest in scenes with dynamic emitters and non-negligible, albeit smaller with dynamic view. ### Network architecture and optimizations The original UNet (Ronneberger et al., 2015) architecture is too slow (\(>\)100ms) to fit into a real-time graphics pipeline. As such, we start with trimming the network down. Our generic network has 5 layers, however, each layer composed of one 3x3 convolution and one 1x1 convolution layers as opposed to the standard double 3x3 convolution. A major departure from the original UNet is using bi-linear interpolation instead of expensive transpose convolutions for upscaling. 
We also use algebraic sum instead of a concatenation layer for merging the skip connections on the decoder side. A positive side effect of using a sum layer is the reduction in the number of hidden units on the decoder side. With these modifications we reduce the network size from 25MB to just 2.5MB, while the runtime is minimized to 28ms. Quantizing the network to half-precision further reduces the size to 1.5MB and 17ms runtime. Other modifications for improving temporal stability without affecting performance includes using Average-pool instead of Maxpool and removing the skip connection in the first layer. Replacing max-pool with average-pool reduces extremities during processing and smooths out the output. As raw shadow map depth values are prone to aliasing noise, removing the first skip connection ensures the noisy input does not affect the output directly. At this stage, we analyze the performance and error of our network before optimizing it further. Our validation error is \(6.67\times 10^{-3}\) over an ensemble test scenes. From figure 7, we see that the first layer (combined encoder-decoder) requires more time compared to the rest of the layers combined. Moving from inner (\(\sharp\)4) to outer layers (\(\sharp\)0), the resolution is quadrupled while the number of channels is halved. Consequently, the effective cached memory bandwidth doubles as we move from inner to outer layers; however, with increasing resolution, memory operations are also more prone to cache misses. In practice, we see more than doubling of runtime as we move towards the outer layers. Refer supplemental section 3. We further optimize by changing the first layer which consumes disproportionately more time. A naive approach is to replace the first layer with a downsampler on the encoder side and upsampler on the decoder side of UNet. However, simple downsampling and upsampling loses information contained in the input and also produces less sharp output. Instead, we flatten a 2x2 square of pixels into 4 separate channels and use the restructured buffer as the input Figure 8. Figure showing the effect of all optimizations in section 3.5. Figure 6. Comparing reduction in temporal instability between _Perturbation Loss_ and TAA. Reference is trained without perturb. loss or TAA. TAA is implemented as an additional pass after network (trained without perturbation loss) evaluation requiring extra 1.3ms. Figure 7. Layer-wise performance optimization for a 1024\(\times\)2048 input. for second layer. Thus we rearrange the input and change the buffer dimensions from \((h\times w\times ch)\) to \((h/2\times w/2\times ch)\). More concretely, instead of feeding the first layer with full resolution (\(1024\times 2048\)) input with 4 channels, we feed the second layer directly with quarter resolution (\(512\times 1024\)) input with 16 channels. We do the inverse on the decoder side; rearrange 4 output channels into a 2x2 pixel square. Note that the rearrangement of buffers does not add any extra temporary storage for the second layer while removing the first layer (Conv2D operations) completely. On the decoder side, to improve training convergence, we upscale the first output channel to full resolution using a bi-linear interpolation. We then add rest of the three channels to the interpolated output, filling in rest of the details. The performance of our optimized network is 5.8ms. ### Network depth optimizations Our optimizations so far are generic and apply across scene and emitter configurations. 
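As a concrete illustration, a minimal PyTorch sketch of such a generic trimmed network is given below; the channel widths, the number of levels, and the exact detail-reconstruction step are guesses for illustration rather than the deployed architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Block(nn.Module):
    """One level of the trimmed UNet: a 3x3 conv followed by a 1x1 conv
    (instead of the standard double 3x3), with ReLU activations."""
    def __init__(self, cin, cout):
        super().__init__()
        self.conv3 = nn.Conv2d(cin, cout, 3, padding=1)
        self.conv1 = nn.Conv2d(cout, cout, 1)

    def forward(self, x):
        return F.relu(self.conv1(F.relu(self.conv3(x))))

class CompactShadowNet(nn.Module):
    """Hedged sketch with fewer levels than the actual five-layer design."""
    def __init__(self, in_ch=4):
        super().__init__()
        # The full-resolution first layer is removed: each 2x2 pixel square of the
        # input buffers is flattened into 4x the channels at half resolution.
        self.unshuffle = nn.PixelUnshuffle(2)
        self.e1 = Block(in_ch * 4, 32)
        self.e2 = Block(32, 64)
        self.bottleneck = Block(64, 64)
        self.d2 = Block(64, 64)
        self.squeeze = nn.Conv2d(64, 32, 1)
        self.d1 = Block(32, 32)
        self.out = nn.Conv2d(32, 4, 1)   # 4 half-resolution output channels

    def forward(self, x):
        f0 = self.unshuffle(x)                         # (B, 16, H/2, W/2)
        e1 = self.e1(f0)                               # no skip from the noisy raw input
        e2 = self.e2(F.avg_pool2d(e1, 2))              # average-pool, not max-pool
        b = self.bottleneck(F.avg_pool2d(e2, 2))
        up = F.interpolate(b, scale_factor=2, mode="bilinear", align_corners=False)
        d2 = self.d2(up + e2)                          # sum-based skip, not concatenation
        up = F.interpolate(d2, scale_factor=2, mode="bilinear", align_corners=False)
        d1 = self.d1(self.squeeze(up) + e1)
        o = self.out(d1)                               # (B, 4, H/2, W/2)
        # First channel upscaled bilinearly as a base; remaining detail folded back
        # to full resolution via pixel-shuffle (the exact arrangement is approximated).
        base = F.interpolate(o[:, :1], scale_factor=2, mode="bilinear", align_corners=False)
        return base + F.pixel_shuffle(o, 2)
```

The sum-based skips require matching channel counts, which is why the decoder mirrors the encoder widths in this sketch.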
Below, we explore _scene specific_ optimizations and tune our network architecture for compactness. Shallower networks have many pragmatic benefits: it has exponentially (power of 2) fewer parameters, is faster to train, and admits faster runtime inference. Instead of relying on adhoc architecture tuning, we will choose architectures based on their ability to capture the shadowing effects we target. Specifically, we will build a simple model to estimate the maximum penumbra size for a scene configuration, and then relate this size exactly to the depth of the network suited to reproducing them. We empirically validate our model. To compute the world space penumbra size, our simplified model assumes a spherical occluder (or, conservatively, a bounding sphere around occluding geometry). When generating training samples, Figure 9. Comparing our network’s ability to generalize to unseen objects (buddpha, bunny, dragon) with other competing techniques (MSM, PCSS, Raytracing & Denoising) for hard and soft shadows. MSM-3 and MSM-9 are Moment Shadow Map variants using 3\(\times\)3 and 9\(\times\)9 prefiltering kernels. we additionally measure the minimum and maximum occluder distances \(z_{max}\), \(z_{min}\) (figure 2, a). We then estimate the penumbra width at a pixel as the sum of inner (\(x_{a}\)) and outer (\(x_{b}\)) penumbra (figure 2, b) as \(\{x_{a},x_{b}\}=z_{f}\,\tan(\theta\pm\theta_{\delta})-r_{e}\). We derive the parameters \(\theta,\theta_{\delta}\) in the supplemental, section 4. After computing a histogram of penumbra sizes in screen-space for each pixel across all the training frames, we select the highest 95th percentile penumbra size as a conservative bound on the _receptive field_ size requirements for our neural architecture. We can modulate the per-layer convolutional layer parameters (kernel size, stride) and pooling operation parameters in order to meet the target receptive field requirements. If we set each convolutional layers to halve the spatial resolution, the _effective receptive field_ of the network grows with \(\times 2^{l}\) for an \(l\)-layer network. Exclusively using \(3\times 3\) kernels, we can solve for \(l=\log_{2}(p_{w}/3)\), where \(p_{w}\) is the conservative screen-space penumbra _width_. Table 1 provides an empirical validation of our technique. We train three networks with 3,5, and 7 layers using the same dataset. The dataset consist of scenes with mixture of penumbra sizes. The penumbra size estimates are computed using our model. During inference, networks with _receptive field_ lower than the predicted penumbra size perform poorly as marked in red color. A more detailed analysis of the empirical verification is provided in the supplemental section 4.1. ## 4. Results and Comparisons We demonstrate our method on a diversity of scenarios. We augment static environments (e.g., rooms) included in our training set to include untrained objects at runtime, illustrating an important use case for interactive settings like games (figure 6). We train a single network on the Bistro interior scene with varying emitter sizes, emitter positions and camera trajectories and introduce the (untrained objects) Buddha, bunny, and dragon for validation. 
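As a small illustration of the depth-selection rule from the previous subsection, the required number of layers can be derived from the penumbra-width histogram as follows (a sketch; the percentile bookkeeping is simplified):

```python
import math

def layers_for_penumbra(penumbra_widths_px, percentile=0.95, kernel=3):
    """Choose the network depth whose receptive field covers the conservative
    (95th-percentile) screen-space penumbra width, following l = log2(p_w / 3)."""
    widths = sorted(penumbra_widths_px)
    p_w = widths[int(percentile * (len(widths) - 1))]        # conservative bound
    layers = max(1, math.ceil(math.log2(max(p_w, kernel) / kernel)))
    return layers, kernel * 2 ** layers                      # depth, receptive field (px)

# e.g. a 90-pixel conservative penumbra suggests a 5-layer network (96-px receptive
# field), consistent with the 3/5/7-layer comparison in Table 1.
```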
ComparisonIn the soft-shadowing regime, our comparisons focus on two classes of baseline methods: first, high-performance rasterization-based approximations such as Moment Shadow Maps (MSM) (Peters and Klein, 2015) and Percentage Closer Soft Shadows (PCSS) (Fernando, 2005) that align with our engineering (i.e., fully rasterization-based; no ray-tracing) and performance targets as our primary baseline; second, we use interactive GPU ray-tracing with post-process denoising as a more accurate "interactive" baseline, i.e., 5-SPP raytracing with SVGF (Schied et al., 2017). Note that, unlike our method, neither MSM nor PCSS allow explicit control of penumbra style using emitter size; as such, we adjust the pre-filtering kernel size for MSM and PCSS to achieve a penumbra size that most closely matches reference renderings. We use kernel sizes of 3\(\times\)3 for MSM and 9\(\times\)9 for PCSS. Our PCSS baseline also includes a depth-aware post-filtering. Our method consistently improves shadow quality at competitive performances (figure 9). Refer to our video to observe the temporal stability of our results. For hard shadows, we compare to \(3\times 3\) MSM, obtaining alias-free shadows without any light leaking. Please refer to the supplemental section 7 for more results, comparisons. Runtime comparisons are measured at a resolution of 2k\(\times\)1k on an AMD 5600X CPU and Nvidia 2080Ti GPU. The timings in all figures, both main paper and supplemental, exclude G-Buffer generation which consistently requires an additional 2-3 ms (depending on the scene) across all techniques. Each scene is trained on \(\leq\) 400 \begin{table} \begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{\begin{tabular}{c} Predicted Penumbra \\ Width (in pixels) \\ \end{tabular} } & \multicolumn{3}{c|}{ \begin{tabular}{c} Network layers \(\rightarrow\) \\ Receptive field size (in pixels) \\ \end{tabular} } \\ \cline{2-4} & \(3\to 24\) & \(5\to 96\) & \(7\to 384\) \\ \hline \(21\) & 0.005 & 0.005 & 0.012 \\ \hline \(90\) & 0.083 & 0.009 & 0.018 \\ \hline \(180\) & 0.161 & 0.042 & 0.019 \\ \hline \end{tabular} \end{table} Table 1. MSE for variable penumbra sizes and with 3/5/7-layer nets. Figure 11. Generalization to untrained objects (red, first row) in a trained scene. Second row visualizes false color error w.r.t. reference. From left to right, emitter size increases linearly from 0 (hard shadow) to 50 cm diameter. Figure 10. A simplified model to estimate the penumbra size at a given pixel. We assume our occluder is spherical in shape, forming a convex bounding-sphere around the occlusion geometry as shown in figure (a) on the left. Figure (b) shows a simplified diagram to estimate the parameters \(\theta,\theta_{\delta}\). images of resolution 2k\(\times\)1k on a cluster for roughly 16 hours (75 training epochs). ## 5. Limitations Our technique shares similar limitations to other screen-space methods. The unavailability of layered depth information, both in camera- (Ritschel et al., 2009) and emitter-space (Jesus Gumbau, 2018) leads to an ill-posedness of the problem that results in approximation error. In camera space, the lack of peeled-depth data complicates the determination of mutual visibility between pixels. Similarly, computing the blur kernel size to soften shadow silhouettes relies on the distance between shading points and occluders, which is also unavailable in our setting. 
Our network and training methodology are effectively designed to compensate for this ill-posedness, bridging the visual gap in a diversity of object/scene arrangements by leveraging complex patterns inherent in the data. Figure 12 highlights a standard failure case and our supplemental includes additional discussion (section 8). ## 6. Conclusions We presented a compact, fast neural method and training loss suited to temporally-coherent hard and soft shadows synthesis using only basic shadow map rasterized inputs. We showed that - with a careful, problem-specific architecture design and a new, simple temporal loss - a single small network can learn to hallucinate hard and soft shadows from varying emitter sizes and for a diversity of scenes. It is robust to the insertion of unseen objects, requires only a modest training budget, and precludes the need for any post-process denoising and/or TAA. Our approach yields stable hard and soft shadows with performance similar to workhorse interactive approximations and higher quality than (more expensive) GPU-raytracing and denoising (and TAA) alternatives. Rasterization-based approaches for soft and hard shadows rely on heuristics and brittle manual tuning to achieve consistent, visually-desirable results. Our data-driven approach precludes such tuning, improving shadow quality at a modest cost, producing plausible and temporally-coherent soft shadows without any ray-tracing. Ours is a compact neural shading-based framework (Nalbach et al., 2016) suitable for low-power tiled-rendering systems, striking an interesting trade-off in a complex design space. We demonstrate benefits that largely offset the added training and integration complexity. In the future, pursuing more aggressive neural architecture optimizations, including quantization and procedural architecture search, could likely further improve inference performance. When coupled with sparsification using, e.g., _lottery ticket-based_ method (Frankle and Carbin, 2018), we suspect that significant additional performance gains are possible, all without sacrificing quality. ## Acknowledgments We thank the reviewers for their constructive feedback, the ORCA for the Amazon Lumberyard Bistro model (Lumberyard, 2017), the Stanford CG Lab for the Bunny, Buddha, and Dragon models, Marko Dabrovic for the Sponza model and Morgan McGuire for the Bistro, Conference and Living Room models (McGuire, 2017). This work was done when Sayantan was an intern at Meta Reality Labs Research. While at McGill University, he was also supported by a Ph.D. scholarship from the _Fonds de recherche du Quebec - nature et technologies_.
2308.09153
Alternatives to Contour Visualizations for Power Systems Data
Electrical grids are geographical and topological structures whose voltage states are challenging to represent accurately and efficiently for visual analysis. The current common practice is to use colored contour maps, yet these can misrepresent the data. We examine the suitability of four alternative visualization methods for depicting voltage data in a geographically dense distribution system -- Voronoi polygons, H3 tessellations, S2 tessellations, and a network-weighted contour map. We find that Voronoi tessellations and network-weighted contour maps more accurately represent the statistical distribution of the data than regular contour maps.
Isaiah Lyons-Galante, Morteza Karimzadeh, Samantha Molnar, Graham Johnson, Kenny Gruchalla
2023-08-17T18:50:02Z
http://arxiv.org/abs/2308.09153v1
# Alternatives to Contour Visualizations for Power Systems Data ###### Abstract Electrical grids are geographical and topological structures whose voltage states are challenging to represent accurately and efficiently for visual analysis. The current common practice is to use colored contour maps, yet these can misrepresent the data. We examine the suitability of four alternative visualization methods for depicting voltage data in a geographically dense distribution system -- Voronoi polygons, H3 tessellations, S2 tessellations, and a network-weighted contour map. We find that Voronoi tessellations and network-weighted contour maps more accurately represent the statistical distribution of the data than regular contour maps. Visualization, Power systems, Tessellation ## 1 Introduction Ensuring a reliable power supply and maintaining voltage within acceptable limits is a regulatory obligation for electricity utility companies [2]. The voltage on each electrical junction (bus) in the grid should be within 5% of the expected value (0.95-1.05 per unit, or p.u.) to ensure hardware functions properly. A single utility can have millions of buses to manage, necessitating effective visualization. In recent years, increased renewable energy penetration has driven power systems models to grow larger and more complex, which further drives the need to accurately visualize these systems. Integration of renewable sources and storage makes the spatial and topological relationships between network nodes especially important, because the grid is affected by both geographical and electrical phenomena such as changing weather or electric vehicle charging. Electrical grid bus voltages present a unique geovisualization problem because they are both spatial and topological, and because they have a highly variable spatial density. This makes them difficult to aggregate and visualize accurately. Colored contour maps have become a standard practice for visualizations among the power systems community [8]. They assign color to each pixel based on a weighted average of the voltage values of the \(n\) closest buses. Each neighboring bus voltage value is weighted proportionally to the inverse distance to the pixel [18]. Since every pixel is colored, this method produces a continuous distribution of bus values across a geographical area (see Figure 2). They are common in research [9], operations [4], planning [10, 13], and commercial power system tools [16]. However, colored contour maps have limitations in accurately representing power systems data [8]: the algorithm confounds the geographical and topological proximity of bus values, potentially leading to misinterpretation. Furthermore, aggregation in these maps can result in artifacts and the loss of extreme values that may be crucial for identifying critical areas. The contour map smoothing algorithm can cover up spatial discontinuities present in the data. The violin plot at the bottom of Figure 2 shows how the voltage data's statistical distribution differs from that depicted on the contour map. Finally, building the contour maps can be computationally expensive for large data. This can limit visualization systems both in scale and in scope of application. This paper asks: are there alternative visualization methods to contour maps that can more accurately and efficiently show the state of a grid? Here, we propose alternatives to contour maps and evaluate their effectiveness in representing bus voltage in a dense distribution system.
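For reference, the inverse-distance weighting at the core of such contour maps can be sketched as follows, assuming NumPy and SciPy and an illustrative neighbor count; this is a simplified stand-in, not the exact implementation used in commercial tools:

```python
import numpy as np
from scipy.spatial import cKDTree

def idw_contour(bus_xy, bus_voltage, pixel_xy, n_neighbors=100, eps=1e-9):
    """Color every pixel with the inverse-distance-weighted mean voltage of its
    n nearest buses (the standard colored contour-map construction).
    bus_xy: (N, 2), bus_voltage: (N,), pixel_xy: (P, 2) NumPy arrays."""
    tree = cKDTree(bus_xy)
    dist, idx = tree.query(pixel_xy, k=n_neighbors)   # (P, n) distances and indices
    weights = 1.0 / (dist + eps)                      # inverse-distance weights
    return np.sum(weights * bus_voltage[idx], axis=1) / np.sum(weights, axis=1)
```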
We create and compare four alternative strategies to contouring for visualizing distribution grid voltages: (a) Voronoi tessellation, (b) hexagonal tessellations, (c) four-sided tessellations with multiple resolutions, and (d) networked contours. We evaluate all methods on their effectiveness at accurately depicting the statistical distribution of voltage data, preserving anomalous voltage values, and preserving areas of high variability. We also compare the computational efficiency of each method by using the theoretical algorithm complexity and actual compute times. Finally, we discuss challenges and opportunities with each method, and opportunities for further exploration. ## 2 Related Work While colored contour maps have widespread use for power system data, further research has explored both contour variations and some alternatives. This includes using force-directed network layouts [19], using marker clusters in visual analytics [11] using pseudo-geographical mosaics [14], and varying the kernel used for contouring [18]. However, each of these methods either do not represent the geography of the grid or are prone to the limitations of contouring discussed above. Perhaps the simplest method is directly representing the bus values with a geometric primitive (a mark or a glyph) that encodes the value by size and color [3] (see Figure 3). In an empirical research study, power system engineers were significantly better at identifying voltage anomalies using glyph-based representations than colored contour maps of bus values [8]. However, glyphs are prone to limitations on large networks. In low-density regions, the white space between glyphs can be undesirable and has the potential to be misinterpreted. In high-density regions, where the separation between buses approaches the size of a pixel, the glyphs become over-plotted and obscure some of the variation in the grid. Fig. 1: Voltage on 24,000+ buses represented with Voronoi polygons that accurately depict the system voltage. Tessellating is an alternative method for aggregating spatial data. A tessellation is the covering of a surface with one or more geometric shapes without gaps or overlaps. One common technique is with Voronoi polygons, where each polygon contains a single point [6], in our context, a bus. Alternatively, a surface can be tessellated with a regular polygon, typically triangles, quadrilaterals, or hexagons. For a geospatial tessellation, the most popular GitHub library for hexagons is H3 [17] and for quadrilaterals is S2 [5]. However, there is minimal research on the suitability of these geographic tessellations to visualize topological power systems data. Unlike previous work building on contour maps, this paper explores alternative geographical visualizations that allow for spatial discontinuities or that consider topological relationships between nodes, while still creating a continuous covering. We also present the use statistical measures to evaluate the effectiveness of each visualization. ## 3 Methods **Overview:** To generate our alternative visualizations, we vary both the (a) geographic tessellation shape, (b) the number of layers, and (c) the smoothing algorithm. Within (a) we try three tessellation shapes: Voronoi polygons, hexagons, and quadrilaterals. For (b), we use between 1-3 layers of tiles to show local variability while preserving a full-area covering. Within (c), we try a smoothing algorithm that considers both spatial and network distance. 
Finally, we evaluate the effectiveness of each with statistical measures like median, kurtosis, range, and standard deviation. **Dataset:** The data used for the visualizations are from the SFO SmartDS synthetic model [15]. This synthetic data was created for high-fidelity power distribution system simulations. It constitutes over 24,000 buses in a few square miles in East Oakland. The voltage data are the output of an actual model simulation. We consider the voltage on every bus at a single snapshot in time. Voltage is measured as the actual voltage divided by the expected voltage, or per unit (p.u.), with 1.0 being the expected value. We filter out outlier voltage values of 0 p.u. (about 30 buses), and we average the voltage values on buses with multiple phases (about 3000 buses). We used the same data as Gruchala et al. [7] to allow for comparisons across studies. The code used for analysis is available on GitHub [12]. Below is a description of how each alternative visualization is created. **Voronoi Tessellation:** A polygon is created around each bus such that all points within the polygon share that point as their nearest bus. The color of each polygon is then selected depending on the voltage value of the bus within the polygon. The vector nature of the Voronoi polygons means that the resolution of the visualization is limited only by number of pixels in the final print. **H3 Tessellations:** We extract the H3 tile index of each bus at H3 resolutions 10-12, ranging from about \(15,000m^{2}\) down to \(300m^{2}\) per cell. The lowest resolution (10) has tiles large enough to ensure that the tessellated surface has no gaps while still having at least one bus in every tile. Meanwhile, the highest resolution (12) has tiles about 50X smaller, close in size to the spacing of buses in dense areas. This means that most of these tiles contain just 1 or 2 buses, which helps preserve anomalous values. The three resolutions of hexes are then colored by their mean voltage and plotted on top of one another, with the highest resolution on top. **S2 Tessellations:** We extract the S2 index of each bus from resolutions 16-18, ranging from about \(25,000m^{2}\) down to about \(1500m^{2}\). Like with H3, the lowest resolution is selected to ensure the entire space is covered with no gaps with at least one bus per tile, along with the next two higher resolutions. The highest resolution is only 16X smaller, meaning most of these tiles contain several buses. Note that S2 resolutions do not correspond 1:1 with H3 resolutions. Next, we compute the summary voltage statistics of mean and standard deviation (SD) for each S2 tile. The tiles are colored by their mean voltage, and plotted in layers, but this time with the lowest resolution on top. Then, any tile from the lower resolutions with a SD > 0.003 is filtered out. This exposes the higher resolution tiles beneath, providing more detail in the areas of high variation. Unlike with H3 tiles, the S2 tiles of higher resolutions can nest nearly inside lower resolutions. This allows for a single layer with multiple resolutions to cover the entire study area seamlessly. **Networked Contour Map:** First, we create a network distance matrix of all nodes in the grid using Dijkstra's algorithm. We then recalculate the voltage at each bus \(i\) as a weighted average of itself and its \(n\) network-nearest neighbors, including itself, that are \(h\) network hops away. 
This is expression in equation (1) below: \[V^{\prime}_{i}=\frac{\sum_{j=1}^{n}V_{j}w_{ij}}{\sum_{j=1}^{n}w_{ij}} \tag{1}\] where \(w_{ij}=\frac{1}{k_{ij}+1}\). Next, we overlay a regular grid of cells on the Fig. 3: Voltage represented in circular glyphs. Color represents the nominal voltage value, and glyph size represents the deviation from nominal. The visualized voltage distribution is shifted away from 1.0 p.u. Fig. 2: Voltage represented in a contour map. Values at each cell are computed with an inverse distance weighting function considering the 100 nearest neighbors. Below is a violin plot that compares the distribution of voltage values represented in the visualization (top) versus the distribution of voltage values in the actual data (bottom). The range of the distribution is reduced as values are smoothed in the contour map. study area. For this dataset, the grid has about 100,000 cells, making each cell have an area of roughly \(100m^{2}\), chosen to approximate the size of a display pixel in the final print. For each cell \(x\) in the grid, the nearest \(k\) bus values at euclidean distance from bus \(i\) at \(d_{xi}\), are averaged and mapped to a color. This is expressed in equation (2) below: \[V_{x}=\frac{\sum_{i=1}^{k}V_{i}^{\prime}w_{xi}}{\sum_{i=1}^{k}w_{xi}} \tag{2}\] where \(w_{xi}=\frac{1}{d_{xi}}\). The two key parameters here are \(n\) and \(k\). For \(n=1\) and \(k=1\), we have essentially a pixelated Voronoi tessellation. For \(n=1\) and \(k=100\), we have a regular colored contour map. Here, we want to emphasize network distance over spatial distance, and so we chose \(n=10\) and \(k=1\) for the result in the next section. **Figures:** The buses are represented geographically as small points to illustrate their spatial relationships. Thin gray lines represent the lines that connect them to illustrate their topological relationships, all overlaid on the colored tiles. The coloring is bounded to an area within 100 meters of the grid lines to avoid showing color where there are no buses. The color scale used is blue-red diverging with white in the middle for nominal values of 1.0 per unit [p.u.]. The colors correspond to absolute voltage values to allow for identification of anomalous values and for comparability with prior research. A background street map is added for visual scale. **Evaluation:** Visualizations methods are then compared with key statistics such as the range, median, and kurtosis. The distribution of the actual data is compared with the distribution of the visualized data using a kernel density estimate [1] for a visual comparison of the fidelity of each visualization. Computational complexity is determined both theoretically and experimentally by recording the duration in seconds that the visualization took to create, both before and after considering the voltage data. Durations are based on a 3.1 GHz Quad-Core Intel Core i7 processor. The list of evaluation criteria are: 1. Accurately depicts the statistical distribution of the real data in color space (median and kurtosis) 2. Preserves areas of high variability (standard deviation) 3. Highlights anomalous values (range) 4. Is computationally efficient to create (complexity and runtime) ## 4 Results The resulting figure from each method is included above (see Figures 4-7). For each figure, the summary statistics are compiled (see Table I for a quantitative comparison of each. Finally, the evaluation criteria are applied to each of the visualizations with a pass or fail (see Table II). 
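For concreteness, the networked contour construction of equations (1) and (2) can be sketched as follows, assuming networkx and SciPy; unweighted hop counts stand in for the Dijkstra distance matrix, and the array ordering convention is an assumption of this sketch:

```python
import numpy as np
import networkx as nx
from scipy.spatial import cKDTree

def networked_contour(G, bus_xy, bus_voltage, cell_xy, n=10, k=1):
    """Network-weighted contour map. bus_xy and bus_voltage are assumed to be
    ordered consistently with list(G.nodes); cell_xy holds grid-cell centers."""
    buses = list(G.nodes)
    index = {b: i for i, b in enumerate(buses)}
    v_net = np.empty(len(buses))
    for b in buses:
        # hop distances to the n network-nearest buses, including the bus itself (Eq. 1)
        hops = nx.single_source_shortest_path_length(G, b)
        nearest = sorted(hops.items(), key=lambda kv: kv[1])[:n]
        w = np.array([1.0 / (h + 1.0) for _, h in nearest])
        v = np.array([bus_voltage[index[u]] for u, _ in nearest])
        v_net[index[b]] = np.sum(w * v) / np.sum(w)

    # color each cell from its k spatially nearest buses (Eq. 2); k = 1 in the paper
    tree = cKDTree(bus_xy)
    dist, idx = tree.query(cell_xy, k=k)
    if k == 1:
        return v_net[idx]
    w = 1.0 / np.maximum(dist, 1e-9)
    return np.sum(w * v_net[idx], axis=1) / np.sum(w, axis=1)
```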
Below, we discuss each of the resulting visualizations one at a time. We acknowledge that though we have tried to use as realistic of a dataset as possible, these results are specific to our dataset and would likely vary for a dataset with a different number of buses, different topology or geography, different voltage distributions, and for visualizations with different resolutions. In the **Voronoi tessellation** (see Figure 4), we see much more of the data variability preserved than in the contour map. Network-adjacent polygons blend together to form linear segments following the network branches, while sharp boundaries form between spatially-close but topologically-distant branches. These combine to help illustrate the topology of the grid. Outliers are not smoothed but are limited to the number of pixels in their respective polygons. The distribution mean shifts slightly left because some of the lower voltage buses are in spatially-sparse regions, leading to larger Voronoi polygons and, therefore, visual over representation. In the **H3 tessellation** (see Figure 5), we again see more of the data variability than in the contour map. However, some of the data's variability is lost, as outlying buses are aggregated. The maximum actual bus value of 1.073 is averaged with an adjacent bus of 1.020, bringing the maximum hex voltage down to 1.047. Without the grid lines overlaid, the network structure is not obvious. However, the regular hexagonal tiles do create a homogeneous spatial unit for understanding the variation throughout the grid. Though the algorithmic complexity is the same as the regular contour, the computation time is significantly less because the resolution is low. A comparably sized resolution would likely lead to the same runtime. In the **S2 tessellation** (see Figure 6), we see subareas of high variation represented with more cells. However, there is influential aggregation happening, and outlier voltages are aggregated with other buses, as with H3. The four-sided cells also do not communicate the topology of the grid. The distribution tails are slightly reduced. The computation time is essentially the same as that for the H3 tessellation. In the **networked contour map** (see Figure 7), we can see the branches of the grid emerge in a way similar to the Voronoi contour map, and we have smoother gradients along the network lines. However, Fig. 4: Voronoi tessellation showing the bus voltage value within each polygon. Each bus voltage is represented in its own polygon. Fig. 5: Hexagonal multi-layered tessellation created with H3 with each hexagon showing the mean bus voltage value of its area. High resolutions are plotted on top of low resolutions. this has come at the cost of aggregating some of the outliers (while preserving others). While the statistical distribution of data is more faithful than the original contour map, the distribution tails are still reduced and anomalous buses all but eliminated. The computation time is larger because of the network distance calculations that need to be done before computing the contour map. ## 5 Discussion Given the four criteria for evaluating the visualizations, the top performing visualization is the Voronoi tessellation for this dataset. Anomalies are depicted because each bus has its own polygon. The statistical distribution of the visualized versus actual data does shift the mean slightly but does not bias against the tails, which allows for the identification of areas with high variability. 
Finally, the Voronoi structure is static; once calculated we can efficiently render new bus values onto the polygons. The potential limitation, however, is for small-scale maps, where multiple polygons may form a single pixel or small pixels that are not visually discernible. We would then be prone to the same over-plotting limitations that apply to glyphs, and a certain degree of smoothing would become necessary. Nonetheless, the Voronoi tessellation is able to both faithfully represent bus values while providing a full contour-like covering for quick assessment of network trends, making it a strong alternative to contour maps and glyph maps. The H3 and S2 tessellations offer promising alternatives to contour maps for their improved representation of the statistical data distribution and regular spatial units, but they are prone to the same issues of aggregating outlying values and failing to relay the structure of the network. And in our judgment, the S2 mapping is aesthetically noisy. Further research could try selecting the highest deviation value for each cell in order to preserve outliers. Finally, the network-weighted contour map builds on the regular contour map by allowing for discontinuities that convey the network structure, however is prone to smoothing some of the larger outliers based on the choice of n and k, which might be inevitable in small-scale maps. Further research should compare the networked contour map side by side with the traditional contour map for given values of \(k\) and \(n\) to control for effective size of the kernel. As mentioned, these results hold true for this dataset of about 24,000 buses in an urban area with a relatively high but variable density. Lower density regions or smaller grids, where there are many pixels per bus, can afford to depict each bus with methods such as glyphs or Voronoi polygons, which preserves the full variability of the original data. However, larger grids with 100,000+ buses may have more buses than pixels, and are forced to aggregate the data in some way. Here, we can look to tiling or to contour maps. Using a small kernel or high resolution tiles helps preserve the variability of the original dataset. Using tiles or networked contour maps allows for discrete boundaries in the visualization, helping to preserving the spatial discontinuities in the data. Future research should evaluate these visualization methods on a large dataset that necessitates aggregation. Additionally, future research should also explore adding in time-series data to the network through animations or other methods. Further topological information could be conveyed by overlaying multiple types of visualizations or by designing linked views. Finally, follow up research should conduct a human factors study by surveying the preferences of power systems users or by testing human readability of each of these visualizations. We hope to inspire further investigation into the proposed methods and new alternative methods to better display power systems data in an intuitive, efficient, and beautiful way. ## 6 Conclusion In summary, we took on the challenge of visualizing the current state of an urban electrical grid with over 24,000 buses. Instead of the standard method of colored contour maps, we tried alternative methods that vary the tile shape and size, add multiple layers, and consider the network topology. 
We evaluated the effectiveness of each of these visualizations by their ability to accurately depict the data with the same statistical distribution and to do so with computational efficiency. We found that a Voronoi tessellation works well for this dataset, which has more pixels than buses, and that a networked contour map holds promise for this and larger datasets. We look forward to further research that considers time series, linked views, and a human factors study. Fig. 6: Quadrilateral multi-resolution tessellation created with S2, with each four-sided tile showing the mean bus voltage value of its area. Areas with high voltage variance are broken down into higher resolutions. Fig. 7: Networked contour map created by first taking a weighted average of the voltage value at each bus with its 10 closest topological neighbors. After voltages are recalculated, the study area is gridded with cells. Each cell is colored with the voltage value from its closest spatial neighbor. ###### Acknowledgements. The authors would like to thank Kristi Potter for her contribution to this research. This work was authored by the National Renewable Energy Laboratory, managed and operated by Alliance for Sustainable Energy, LLC for the U.S. Department of Energy (DOE) under Contract No. DE-AC36-08GO28308. Funding provided by the U.S. Department of Energy Office of Energy Efficiency and Renewable Energy. The views expressed in the article do not necessarily represent the views of the DOE or the U.S. Government. The U.S. Government retains and the publisher, by accepting the article for publication, acknowledges that the U.S. Government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this work, or allow others to do so, for U.S. Government purposes.
2303.13279
Parameterized Algorithms for Topological Indices in Chemistry
We have developed efficient parameterized algorithms for the enumeration problems of graphs arising in chemistry. In particular, we have focused on the following problems: enumeration of Kekul\'e structures, computation of Hosoya index, computation of Merrifield-Simmons index, and computation of graph entropy based on matchings and independent sets. All these problems are known to be $\# P$-complete. We have developed FPT algorithms for bounded treewidth and bounded pathwidth for these problems with a better time complexity than the known state-of-the-art in the literature. We have also conducted experiments on the entire PubChem database of chemical compounds and tested our algorithms. We also provide a comparison with naive baseline algorithms for these problems, along with a distribution of treewidth for the chemical compounds available in the PubChem database.
Giovanna K. Conrado, Amir K. Goharshady, Harshit J. Motwani, Sergei Novozhilov
2023-03-23T14:00:29Z
http://arxiv.org/abs/2303.13279v1
# Parameterized Algorithms for Topological Indices in Chemistry # Parameterized Algorithms for Topological Indices in Chemistry Giovanna K. Conrado, Amir K. Goharshady, Harshit J. Motwani, Sergei Novozhilov **Abstract:** We have developed efficient parameterized algorithms for the enumeration problems of graphs arising in chemistry. In particular, we have focused on the following problems: enumeration of Kekule structures, computation of Hosoya index, computation of Merrifield-Simmons index, and computation of graph entropy based on matchings and independent sets. All these problems are known to be \(\#P\)-complete. We have developed FPT algorithms for bounded treewidth and bounded pathwidth for these problems with a better time complexity than the known state-of-the-art in the literature. We have also conducted experiments on the entire PubChem database of chemical compounds and tested our algorithms. We also provide a comparison with naive baseline algorithms for these problems, along with a distribution of treewidth for the chemical compounds available in the PubChem database. **Keywords:** Computational Chemistry, Parameterized Algorithms, Topological Indices. **Acknowledgment:** The research was partially supported by the Hong Kong RGC ECS Project 26208122, the HKUST-Kaisa Grant HKJRI3A-055, the HKUST Startup Grant R9272, UGent Grant BOF21/DOC/182 and the Sofina-Boel Fellowship Program. Authors are ordered alphabetically. ###### Contents * 1 Introduction * 1.1 Enumeration problems in chemistry * 1.2 Importance * 1.3 Our contribution * 1.3.1 Parameterized Algorithms * 1.3.2 Pathwidth and Treewidth * 1.4 Related works * 1.5 Structure of the paper * 2 Notations and Preliminaries * 2.1 Basic definitions * 2.2 Technical Lemmas * 3 Our Algorithms * 3.1 Counting Perfect Matchings * 3.1.1 Bounded Pathwidth * 3.1.2 Bounded Treewidth * 3.2 Counting Matchings * 3.2.1 Bounded Pathwidth * 3.2.2 Bounded Treewidth * 3.3 Counting Independent Sets * 3.3.1 Bounded Pathwidth * 3.3.2 Bounded Treewidth * 3.4 Graph Entropy based on Matchings * 3.4.1 Bounded Pathwidth * 3.4.2 Bounded Treewidth Graph Entropy Based on Independent Sets * 3.5.1 Bounded Pathwidth * 3.5.2 Bounded Treewidth * 3.6 Naive Baseline Algorithms * 4 Experimental Results * 4.1 Statistical analysis of molecules * 4.2 Performance of our Algorithms * 4.3 Comparision with baseline algorithms * 4.4 Implementation Details * 4.4.1 Workflow * 4.4.2 System details * 5 Conclusion ## 1 Introduction ### Enumeration problems in chemistry We studied the enumeration problems on the molecular graphs in chemistry. We focused on the following problems: * The Kekule structure for a molecule is essentially a perfect matching of the underlying graph. Therefore, enumeration of Kekule structures is equal to computing total number of perfect matchings of the graph [11]. * The Hosoya index, also known as Z index, of a graph is the total number of matchings of the graph [10]. * The Merrifield-Simmons index of a graph is the total number of independent sets of the graph [14]. * The computation of graph entropy based on matchings and independent sets is equivalent to determining all matchings and independent sets for every possible size [1]. All these problems are known to be \(\#P\)-complete. The existing theoretical results for these problems are not useful for real-world applications in chemistry where graphs may have a large number of vertices and edges. 
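For small graphs these quantities can still be computed directly by brute force, which makes the definitions concrete. A minimal sketch using networkx (exponential-time enumeration, not the parameterized algorithms developed later in this paper):

```python
import itertools
import networkx as nx

def hosoya_index(G):
    """Number of matchings of G, including the empty matching (the Hosoya Z index)."""
    edges = list(G.edges)
    count = 0
    for r in range(len(edges) + 1):
        for subset in itertools.combinations(edges, r):
            endpoints = [v for e in subset for v in e]
            if len(endpoints) == len(set(endpoints)):   # no two edges share a vertex
                count += 1
    return count

def merrifield_simmons_index(G):
    """Number of independent sets of G, including the empty set."""
    nodes = list(G.nodes)
    count = 0
    for r in range(len(nodes) + 1):
        for subset in itertools.combinations(nodes, r):
            if not any(G.has_edge(u, v) for u, v in itertools.combinations(subset, 2)):
                count += 1
    return count

print(hosoya_index(nx.path_graph(4)), merrifield_simmons_index(nx.path_graph(4)))  # 5 8
```

For the path graph on four vertices this prints 5 and 8, matching the Fibonacci-type closed forms for paths.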
### Importance Topological indices are numerical graph invariants that characterize the topology of a graph [11]. These indices play a vital role in computational chemistry. In particular, they are used as molecular descriptors for QSAR (Quantitative structure-activity relationship) and QSPR (Quantitative structure-property relationship) studies [15]. In other words, the topology of the hydrogen-suppressed graph of a molecule can be used to quantitatively describe its physical and chemical properties. The problems under consideration are some of the most well-studied examples of molecular descriptors in the chemistry literature. ### Our contribution We provide new algorithms for the above-mentioned problems that work in polynomial time for graphs that have small treewidth and small pathwidth. We can handle more than 99.9% of molecules in the PubChem database. Moreover, we also provide a statistical distribution of treewidth for the entire PubChem database [10]. Hence, we think this result would serve as an invitation to both chemists and computer scientists to use parameterized algorithms more often for the problems arising in computational chemistry and biology. #### 1.3.1 Parameterized Algorithms An efficient parameterized algorithm solves a problem in polynomial time with respect to the size of the input, but possibly with non-polynomial dependence on a specific aspect of the input's structure, which is called a "parameter". A problem that can be solved by an efficient parameterized algorithm is called fixed-parameter tractable (FPT). #### 1.3.2 Pathwidth and Treewidth Pathwidth and treewidth are well-studied parameters for graphs. Informally, pathwidth is a measure of the path-likeness of a graph and treewidth is a measure of its tree-likeness. Many hard graph problems in computer science are known to have efficient solutions when restricted to graphs with bounded treewidth and bounded pathwidth. In this work we show that more than \(99.9\%\) of molecules in the entire PubChem database have treewidth less than \(5\), and that fewer than \(100\) molecules have treewidth greater than \(20\), in the entire database of more than \(100\) million compounds. For more detailed statistics on the entire database, refer to Section 4. This supports our argument for developing parameterized algorithms for bounded treewidth and pathwidth for the above-mentioned problems. ### Related works * In theory, there are known FPT algorithms, parameterized by treewidth, for counting matchings and independent sets using monadic second-order logic [1]. However, it is impractical to implement and use them for graphs with the large numbers of vertices and edges arising in chemistry. * In [14], the authors provided explicit parameterized algorithms for computing graph entropies corresponding to the Hosoya Index and the Merrifield-Simmons Index. Our algorithms are better than their approach by a linear factor for computing the graph entropies and by a quadratic factor for computing the corresponding indices. We also provide a practical implementation for all the graphs obtained from the PubChem database. ### 1.5 Structure of the paper The paper is divided into three main sections. We first start with basic notations and preliminaries in Section 2. Our main algorithms and theoretical results are described in Section 3. We provide all the experimental results, statistics and implementation details in Section 4. Finally, we end the paper with a conclusion in Section 5.
## 2 Notations and Preliminaries ### Basic definitions **Definition 1** (Path Decomposition [15]).: _A path decomposition of a graph \(G=(V,E)\) is a sequence of bags \(\mathcal{P}=\{X_{1},\ldots,X_{r}\}\), where each \(X_{i}\subseteq V\) such that following conditions hold:_ 1. _For each_ \(v\in V\)_, there exists a pair of indices_ \(1\leq l(v)\leq r(v)\leq r\) _such that_ \[v\in X_{i}\Longleftrightarrow l(v)\leq i\leq r(v).\] _In other words each graph vertex maps to continuous subpath of the decomposition._ 2. _For each_ \(uv\in E\)_, there exists an_ \(i\) _such that_ \(\{u,v\}\subseteq X_{i}\)_._ **Definition 2** (Nice Path Decomposition [15]).: _A nice path decomposition is a path decomposition, where additional conditions hold:_ 1. \(X_{1},X_{r}=\varnothing\)_._ 2. _Each bag, except_ \(X_{1}\)_, either introduce or forget node._ 3. _If_ \(X_{i+1}\) _is a forget node, then there exists_ \(v\in V\) _such that_ \(X_{i+1}=X_{i}\setminus\{v\}\) _._ 4. _If_ \(X_{i+1}\) _is an introduce node, then there exists_ \(v\in V\) _such that_ \(X_{i+1}=X_{i}\cup\{v\}\)_._ **Definition 3** (Pathwidth [15]).: _The width of a path decomposition \(\mathcal{P}=\{X_{1},\ldots,X_{r}\}\) is \(max_{1\leq i\leq r}|X_{i}|-1\). The pathwidth of a graph \(G\), denoted by pw(G), is the minimum possible width of a path decomposition of \(G\)._ **Definition 4** (Tree Decomposition [15]).: _A tree decomposition of a graph \(G\) is a pair \(\mathcal{T}=(T,\{X_{t}\}_{t\in V(T)})\), where \(T\) is a rooted tree with a root \(r\), each bag \(X_{t}\subseteq V(G)\) and the following conditions hold:_ 1. _For every_ \(uv\in E(G)\)_, there exists a node_ \(t\in V(T)\) _such that_ \(\{u,v\}\subseteq X_{t}\)_._ 2. _For every_ \(v\in V(G)\)_, the set_ \(T_{v}:=\{t\in V(T):v\in X_{t}\}\)_, induced graph_ \(T[T_{v}]\) _is nonempty subtree of the_ \(T\)_._ **Definition 5** (Nice Tree Decomposition [15]).: _A nice tree decomposition is a tree decomposition, where additional conditions hold:_ 1. _The root bag is empty:_ \(X_{r}=\varnothing\)_._ 2. _If_ \(l\) _is a leaf of the_ \(T\)_, then_ \(X_{l}=\varnothing\)_._ 3. _Each non-leaf node of tree_ \(T\) _is of one of the three types: introduce, forget or join node._ 4. _If_ \(X_{b}\) _is a forget node, it has exactly one child_ \(X_{c}\) _and there is a vertex_ \(v\in X_{c}\) _such that_ \(X_{b}=X_{c}\setminus\{v\}\)_._ 5. _If_ \(X_{b}\) _is an introduce node, it has exactly one child_ \(X_{c}\) _and there is a vertex_ \(v\in V(G)\setminus X_{c}\) _such that_ \(X_{b}=X_{c}\cup\{v\}\)_._ 6. _If_ \(X_{b}\) _is a join node, it has exactly two children_ \(X_{c_{1}}\) _and_ \(X_{c_{2}}\) _such that_ \(X_{b}=X_{c_{1}}=X_{c_{2}}\)_._ **Definition 6** (Treewidth [15]).: _The width of tree decomposition \(\mathcal{T}=(T,\{X_{t}\}_{t\in V(T)})\) equals \(max_{t\in V(T)}|X_{t}|-1\), that is, the maximum size of its bag minus 1. The treewidth of a graph \(G\), denoted by \(tw(G)\), is the minimum possible width of a tree decomposition of \(G\)._ **Remark 1**.: _Notice that (nice) path decomposition is a special case of a (nice) tree decomposition. 
For a path decomposition \(\mathcal{P}=\{X_{1},\ldots,X_{r}\}\) we think that the \(X_{1}\) is a leaf bag and the \(X_{r}\) is a root bag._ **Notation 1**.: _For each \(t\in G(T)\) we denote subtree rooted at this vertex as \(T_{t}\), and denote a graph induced by all vertices in all bags correspond to \(T_{t}\):_ \[G_{t}^{\downarrow}:=G\left[\bigcup\{X_{t}:t\in T_{t}\}\right].\] **Example 1**.: _In this example, we explain the above defintions by explicitly showing tree decomposition for underlying graph of Caffeine molecule (See Figure 1)._ _We can see from Figure 2, that Caffeine molecule has treewidth \(2\)._ Figure 1: Caffeine ### Technical Lemmas We now present two technical lemmas that are crucial for development of algorithms in Section 3. These lemmas are consequences of separation properties of the tree decomposition. **Lemma 1**.: _If \(X_{b}\) is an introduce nodes, \(X_{b}=X_{c}\cup\{v\}\), then \(N(v)\cap G_{b}^{\downarrow}\subseteq X_{c}\)._ Proof.: Let \(T^{\prime}\) be a subtree of the tree \(T\) rooted at \(b\). Observe that the corresponding tree decomposition \(\mathcal{T}^{\prime}=\{T^{\prime},\{X_{t}\}_{t\in V(T^{\prime})}\}\) is a tree decomposition of \(G_{b}^{\downarrow}\). Notice that by Definition 4, \(T^{\prime}_{v}\) forms an induced subtree of the \(T^{\prime}\). As \(v\notin X_{c}\) and \(c\) is the only vertex in the open neighbourhood of \(b\), this implies that \(T^{\prime}_{v}=\{b\}\). Let \(u\) be a vertex in \(N(v)\cap G_{b}^{\downarrow}\), then by Definition 4, if there is an edge \(uv\) in \(G_{b}^{\downarrow}\), then there exists a bag containing both \(u\) and \(v\). But, as we have just shown that the only bag containing \(v\) is \(X_{b}\). As \(u\) is a vertex in \(N(v)\), this implies that \(u\in X_{c}\). **Lemma 2**.: _If \(X_{b}\) is a join node with two children \(X_{b}=X_{c_{1}}=X_{c_{2}}\), then in \(G_{b}^{\downarrow}\) there is no edge between \(V(G_{c_{1}}^{\downarrow})\setminus X_{b}\) and \(V(G_{c_{2}}^{\downarrow})\setminus X_{b}\)._ Proof.: Let \(T^{\prime},T^{\prime\prime},T^{\prime\prime\prime}\) be the subtrees of \(T\) rooted at \(b,c_{1},c_{2}\) respectively. Let \(\mathcal{T}^{\prime},\mathcal{T}^{\prime\prime},\mathcal{T}^{\prime\prime\prime}\) be the corresponding tree decompositions. We will prove the given statement by contradiction. Therefore, let us assume that \(u\in V(G_{c_{1}}^{\downarrow})\setminus X_{b}\), \(v\in V(G_{c_{2}}^{\downarrow})\setminus X_{b}\), and \(uv\in E(G_{b}^{\downarrow})\). As \(\mathcal{T}^{\prime}\) is a tree decomposition of \(G_{b}^{\downarrow}\), there exists at least one bag \(X_{t}\), such that \(t\in\mathcal{T}^{\prime}\) and \(u,v\in X_{t}\). From our initial assumption, we know that \(u,v\notin X_{b}\), this implies that \(t\neq b\). Therefore, either \(t\in T^{\prime\prime}\) or \(t\in T^{\prime\prime\prime}\). Without loss of generality, let us assume that \(t\in T^{\prime\prime}\). As \(v\in V(G_{c_{2}}^{\downarrow})\setminus X_{b}\), there exists at least one bag \(X_{s}\) such that, \(s\in T^{\prime\prime\prime}\) and \(v\in X_{s}\). Observe that \(v\) appears in both bags \(X_{s}\) and \(X_{t}\). Now by Definition 4, we know that \(T^{\prime}_{v}\) is a connected subtree and any path connecting \(s\) and \(t\) goes through \(b\). This implies that \(X_{b}\in T^{\prime}_{v}\) and \(v\in X_{b}\). This contradicts our initial assumption that \(v\notin X_{b}\), hence completes the proof. 
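To make Example 1 easy to reproduce, the hydrogen-suppressed graph of caffeine can be built directly and its treewidth bounded with the min-degree elimination heuristic shipped with NetworkX, the graph library used in Section 4.4.1. The heuristic only guarantees an upper bound on the treewidth, but for this small fused-ring graph it should match the exact value of 2 reported above. The atom labels below are ours, chosen for readability, and the snippet is an illustration rather than the paper's FlowCutter-based pipeline.

```python
import networkx as nx
from networkx.algorithms import approximation as approx

# Hydrogen-suppressed graph of caffeine (cf. Example 1 and Figures 1-2).
caffeine = nx.Graph([
    # six-membered ring
    ("N1", "C2"), ("C2", "N3"), ("N3", "C4"), ("C4", "C5"), ("C5", "C6"), ("C6", "N1"),
    # five-membered ring (shares the C4-C5 edge)
    ("C5", "N7"), ("N7", "C8"), ("C8", "N9"), ("N9", "C4"),
    # substituents: two carbonyl oxygens and three N-methyl carbons
    ("C2", "O2"), ("C6", "O6"), ("N1", "C10"), ("N3", "C11"), ("N7", "C12"),
])

# Min-degree heuristic: returns an upper bound on the treewidth together with a
# tree decomposition whose nodes are bags (frozensets of vertices).
width, decomposition = approx.treewidth_min_degree(caffeine)
print("heavy atoms:", caffeine.number_of_nodes())        # 14
print("treewidth upper bound:", width)                   # expected: 2, as in Example 1
print("largest bag:", max(len(bag) for bag in decomposition.nodes))
```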
## 3 Our Algorithms ### Counting Perfect Matchings Enumeration of Kekule structures in organic molecules is equivalent to finding total number of perfect matchings for the underlying chemical graph of the molecule. We present parametrized algorithms for the cases when the underlying chemical graph have bounded pathwidth and bounded treewidth respectively. #### 3.1.1 Bounded Pathwidth We will use a dynamic programming approach over the nice path decomposition of the given graph \(G\). First we will define the dynamic programming state as follows: **Definition 7** (Respectful Perfect Matchings).: _Let us fix a nice path decomposition \(\mathcal{P}=\{X_{1},\ldots X_{r}\}\) for the given graph \(G\). For each \(b\in[r]\) and each \(M\subseteq X_{b}\), we define \(\mathcal{RP}(b,M)\) as the set of all perfect matchings \(F\) in a \(G_{b}^{\downarrow}\setminus M\) such that each matching edge \(uv\in F\) has at least one point in \(G_{b}^{\downarrow}\setminus X_{b}\): \(u\notin X_{b}\) or \(v\notin X_{b}\)._ Figure 2: Graph representation and tree decomposition of Caffeine. **Definition 8** (Dynamic Programming State for Perfect Matchings).: _Let us fix a nice path decomposition \(\mathcal{P}=\{X_{1},\ldots X_{r}\}\) for the given graph \(G\). For each \(b\in[r]\) and each \(M\subseteq X_{b}\), we define \(\operatorname{PerfMatch}[b,M]\) to be the number of elements in the set \(\mathcal{RP}(b,M)\)._ **Remark 2**.: _Notice that if \(X_{r}=\varnothing\) is the root node, then \(G^{\downarrow}_{r}=G\), and perfect matchings of \(G\) are in one-to-one correspondence with respectful perfect matchings \(\mathcal{RP}(r,\varnothing)\), which implies \(\operatorname{PerfMatch}[r,\varnothing]\) is the total number of perfect matchings of \(G\)._ We now provide a bottom-up approach for filling up the dynamic table. The dynamic table relations depend on the type of the bag. We need to consider three cases when \(X_{b}\) is: leaf node, introduce node and forget node respectively. We refer the reader to follow proof along with Example 2 for better clarity of the dynamic programming approach. * **Leaf node:** If \(X_{l}\) is a leaf bag then \(\mathcal{RP}(b,M)\) contains only empty matching as \(X_{l}=\varnothing\). Therefore, \(\operatorname{PerfMatch}[l,\varnothing]=1\). * **Introduce node:** Let \(X_{b}\) be an introduce bag such that \(X_{b}=X_{c}\cup\{v\}\). Then the dynamic programming state corresponding to \((b,M)\) satisfies the following recurrence relations: \[\operatorname{PerfMatch}[b,M]=\begin{cases}\operatorname{PerfMatch}[c,M\setminus v ],&v\in M,\\ 0,&v\not\in M.\end{cases}\] In order to derive the above recurrence relations, we have to consider two possibilities for \(v\). 1. If \(v\in M\), then by Definition 7, all the elements of the set \(\mathcal{RP}(b,M)\) are in one-to-one correspondence with all the elements of the set \(\mathcal{RP}(c,M\setminus v)\). 2. If \(v\notin M\), then for every matching \(F\in\mathcal{RP}(b,M)\), \(v\) should be covered by some edge \(vw\in F\). By Lemma 1, \(w\in X_{c}\subset X_{b}\), which contradicts the Definition 7. Therefore, \(\operatorname{PerfMatch}[b,m]=0\) for this case, as there are no matchings satisfying the given conditions. This transition from \(X_{c}\) to \(X_{b}\) can be computed in \(\mathcal{O}(1)\) time. * **Forget node:** Let \(X_{b}\) be the forget node such that \(X_{b}=X_{c}\setminus\{v\}\). 
Then the dynamic programming state corresponding to \((b,M)\) satisfies the following recurrence relation: \[\operatorname{PerfMatch}[b,M]=\operatorname{PerfMatch}[c,M]+\sum_{u\in X_{b} \setminus M:\ uv\in E(G)}\operatorname{PerfMatch}[c,M\cup\{v,u\}]\] In order to derive the above recurrence relation, let \(F\) be a matching from the set \(\mathcal{RP}(b,M)\). Note that \(v\notin M\), as \(M\subseteq X_{b}\). Therefore, \(v\) will be matched by some edge \(uv\in F\). Now there are two possible cases i.e., either \(u\notin X_{b}\) or \(u\in X_{b}\). For the former case when \(u\notin X_{b}\), it is clear that there is a bijection between all such matchings \(F\) and all elements of \(\mathcal{RP}(c,M)\). Therefore, number of such matchings correspond to the first term \(\operatorname{PerfMatch}[c,M]\) of the recurrence relation. Now let's consider the latter case when \(u\in X_{b}\), then the matching \(F\setminus uv\in\mathcal{RP}(c,MM\cup\{u,v\})\), and number of such matchings correspond to the second term \(\sum_{u\in X_{b}\setminus M:\ uv\in E(G)}\operatorname{PerfMatch}[c,M\cup\{v, u\}]\) of the recurrence relation. This transition to \(X_{b}\) can be computed in \(\mathcal{O}(|X_{b}|)=\mathcal{O}(\operatorname{pw}(G))\) time. **Proposition 3**.: _The time complexity for finding the total number of perfect matchings for a graph \(G=(V,E)\), with \(|V|=n\) and pathwidth \(\operatorname{pw}\), using the above algorithm is \(\mathcal{O}(n\cdot\operatorname{poly(pw)}\cdot 2^{\operatorname{pw}})\)._ Proof.: For each bag, we have at most \(2^{\operatorname{pw}}\) dynamic programming states, and for each state we spend at most \(\mathcal{O}(\operatorname{poly(pw)})\) time. It is known that there are at most \(\mathcal{O}(n\cdot\operatorname{pw})\) bags for any nice path decomposition of \(G\)[14, Chapter 7]. Therefore, we spend at most \(\mathcal{O}(n\cdot\operatorname{poly(pw)}\cdot 2^{\operatorname{pw}})\) time to completely fill the dynamic programming table. #### 3.1.2 Bounded Treewidth We will use a dynamic programming approach over the nice tree decomposition of the given graph \(G\). The definitions of respectful perfect matchings and dynamic programming states used here are same as Section 3.1.1. Let \(\mathcal{T}=\{T,\{X_{t}\}_{t\in V(T)}\}\) be a nice tree decomposition of the graph \(G=(V,E)\) under consideration. We now provide a bottom-up approach for filling up the dynamic table. The dynamic table relations depend on the type of the bag. There are four possible cases for the type of bags, i.e., when \(X_{b}\) is: leaf node, introduce node, forget node and join node respectively. The reccurrence relations for leaf node, introduce node and forget node remains the same as Section 3.1.1. Therefore, we only need to consider the case of join node which is described as follows: * **Join node:** Let \(X_{b}\) be the join node with \(X_{c_{1}}\) and \(X_{c_{2}}\) as its children. Then the dynamic programming state corresponding to \((b,M)\) will satisfy the following recurrence relation: \[\mathrm{PerfMatch}[b,M]=\sum_{H_{1}\sqcup H_{2}=X_{b}\setminus M}\mathrm{PerfMatch }[c_{1},M\cup H_{2}]\cdot\mathrm{PerfMatch}[c_{2},M\cup H_{1}]\] where \(H_{1}\sqcup H_{2}=X_{b}\setminus M\) means \(H_{1}\cup H_{2}=X_{b}\setminus M\) and \(H_{1}\cap H_{2}=\varnothing\). In order to derive the above recurrence relation, let \(F\) be a matching from the set \(\mathcal{RP}(b,M)\). 
By Lemma 2, \(F\) doesn't have any edges between \(G^{\downarrow}_{c_{1}}\setminus X_{b}\) and \(G^{\downarrow}_{c_{2}}\setminus X_{b}\). Therefore, it can be split into two matchings, \(F_{1}:=E(G^{\downarrow}_{c_{1}})\cap F\) and \(F_{2}:=E(G^{\downarrow}_{c_{2}})\cap F\). Let \(H_{1}:=V(F_{1})\cap X_{b}\) and \(H_{2}:=V(F_{2})\cap X_{b}\). It is clear from the Definition 7, that \(F_{1}\in\mathcal{RP}(c_{1},M\cup H_{2})\) and \(F_{2}\in\mathcal{RP}(c_{2},M\cup H_{1})\). If we choose \(H_{1}\) and \(H_{2}\) such that \(H_{1}\sqcup H_{2}=X_{b}\setminus M\), then for all such matchings \(F_{1}\in\mathcal{RP}(c_{1},M\cup H_{2})\) and \(F_{2}\in\mathcal{RP}(c_{2},M\cup H_{1})\), we get a matching \(F=F_{1}\cup F_{2}\), such that \(F\in\mathcal{RP}(b,M)\). This leads to our recurrence relation. **Proposition 4**.: _The time complexity for finding the total number of perfect matchings for a graph \(G=(V,E)\), with \(|V|=n\) and treewidth \(\mathrm{tw}\), using the above algorithm is \(\mathcal{O}(n\cdot\mathrm{poly(tw)}\cdot 3^{\mathrm{tw}})\)._ Proof.: The transition for a join node is the most expensive operation for this algorithm, the rest of the cases are similar to the dynamic programming on nice path decomposition in Section 3.1.1. Note that the recurrence relation for the dynamic programming state of join node can be rewritten as: \[\sum_{H\subseteq X_{b}\setminus M}\mathrm{PerfMatch}[c_{1},M\cup H]\cdot \mathrm{PerfMatch}[c_{2},X_{b}\setminus H]\] For each bag \(X_{b}\) and for each \(M\subseteq X_{b}\), we spend at most \(\mathcal{O}(\mathrm{poly(tw)}\cdot 2^{|X_{B}|-|M|})\) time. By using the binomial theorem, the total time spend for each bag \(X_{b}\) for all possible subsets \(M\) is \(\mathcal{O}(\mathrm{poly(tw)}\cdot 3^{|X_{B}|})\). It is known that there are at most \(\mathcal{O}(n\cdot\mathrm{tw})\) bags for any nice tree decomposition of \(G\)[15, Chapter 7]. Therefore, we spend at most \(\mathcal{O}(n\cdot\mathrm{poly(tw)}\cdot 3^{\mathrm{tw}})\) time to completely fill the dynamic programming table. **Example 2**.: _In this example, we explain the above algorithm explicitly with figures for all cases of nodes in the tree decomposition. Red color nodes in the following figures are elements of the set \(M\) defined in Definition 7._ * _Introduce Node:_ Figure 3: Example for introduce node. * _Forget Node:_ * _Join Node:_ ### Counting Matchings #### 3.2.1 Bounded Pathwidth We will use a dynamic programming approach over the nice path decomposition of the given graph \(G\). First we will define the dynamic programming state as follows: **Definition 9** (Respectful Matchings).: _Let us fix a nice path decomposition \(\mathcal{P}=\{X_{1},\ldots X_{r}\}\) for the given graph \(G\). For each \(b\in[r]\) and each \(M\subseteq X_{b}\), we define \(\mathcal{RM}(b,M)\) as the set of all matchings \(F\) in a \(G^{1}_{b}\setminus M\) such that each matching edge \(uv\in F\) has at least one point in \(G^{1}_{b}\setminus X_{b}\): \(u\notin X_{b}\) or \(v\notin X_{b}\) and each point from \(X_{b}\setminus M\) is covered by \(F\)._ **Definition 10** (Dynamic Programming State for Matchings).: _Let us fix a nice path decomposition \(\mathcal{P}=\{X_{1},\ldots X_{r}\}\) for the given graph \(G\). For each \(b\in[r]\) and each \(M\subseteq X_{b}\), we define \(\mathrm{Match}[b,M]\) to be the number of elements in the set \(\mathcal{RM}(b,M)\)._ Figure 4: Example for forget node. Figure 5: Example for join node. 
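Before continuing with the counting of general matchings, a compact sketch of the pathwidth dynamic program of Section 3.1.1 (the one illustrated in Example 2 and the figures above) may be helpful. The sketch below assumes, purely for illustration, that the nice path decomposition is supplied as a sequence of introduce/forget events starting and ending with the empty bag; it pushes each state \(\mathrm{PerfMatch}[b,M]\) forward through the leaf, introduce and forget transitions, and checks the result on a 4-cycle, which has exactly two perfect matchings. It is not the C++ implementation described in Section 4.

```python
from collections import defaultdict

def count_perfect_matchings(edges, events):
    """DP over a nice path decomposition given as ('intro', v) / ('forget', v) events."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    bag = set()                      # current bag X_b
    dp = {frozenset(): 1}            # leaf node: only the empty matching
    for kind, v in events:
        new_dp = defaultdict(int)
        if kind == "intro":          # introduce node: states with v not in M are 0
            bag.add(v)
            for M, cnt in dp.items():
                new_dp[M | {v}] += cnt
        else:                        # forget node
            bag.remove(v)
            for M, cnt in dp.items():
                if v not in M:       # v already covered by an edge leaving the bag
                    new_dp[M] += cnt
                else:                # match v now to some still-uncovered bag neighbour u
                    for u in adj[v] & bag:
                        if u in M:
                            new_dp[M - {v, u}] += cnt
        dp = dict(new_dp)
    return dp.get(frozenset(), 0)    # PerfMatch[r, {}] at the empty root bag

# 4-cycle a-b-c-d-a; a nice path decomposition of width 2 covering all four edges.
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
events = [("intro", "a"), ("intro", "b"), ("intro", "c"), ("forget", "b"),
          ("intro", "d"), ("forget", "a"), ("forget", "c"), ("forget", "d")]
assert count_perfect_matchings(edges, events) == 2
```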
**Remark 3**.: _Notice that if \(X_{r}=\varnothing\) is the root node, then \(G_{r}^{1}=G\), and all matchings of \(G\) are in one-to-one correspondence with respectful matchings \(\mathcal{RM}(r,\varnothing)\), which implies \(\operatorname{Match}[r,\varnothing]\) is the total number of matchings of \(G\)._ We now provide a bottom-up approach for filling up the dynamic table. The dynamic table relations depend on the type of the bag. We need to consider three cases when \(X_{b}\) is: leaf node, introduce node and forget node respectively. * **Leaf node:** If \(X_{l}\) is a leaf bag then \(\mathcal{RM}(b,M)\) contains only empty matching as \(X_{l}=\varnothing\). Therefore, \(\operatorname{Match}[l,\varnothing]=1\). * **Introduce node:** Let \(X_{b}\) be the introduce bag such that \(X_{b}=X_{c}\cup\{v\}\). Then the dynamic programming state corresponding to \((b,M)\) satisfies the following recurrence relations: \[\operatorname{Match}[b,M]=\begin{cases}\operatorname{Match}[c,M\setminus v],&v\in M,\\ 0,&v\not\in M.\end{cases}\] In order to derive the above recurrence relations, we have to consider two possibilities for \(v\). 1. If \(v\in M\), then by Definition 9, all the elements of the set \(\mathcal{RM}(b,M)\) are in one-to-one correspondence with all the elements of the set \(\mathcal{RM}(c,M\setminus v)\). 2. If \(v\notin M\), then for every matching \(F\in\mathcal{RP}(b,M)\), \(v\) should be covered by some edge \(vw\in F\). By Lemma 1, \(w\in X_{c}\subset X_{b}\), which contradicts the Definition 9. Therefore, \(\operatorname{Match}[b,m]=0\) for this case, as there are no matchings satisfying the given conditions. This transition from \(X_{c}\) to \(X_{b}\) can be computed in \(\mathcal{O}(1)\) time. * **Forget node:** Let \(X_{b}\) be the forget node such that \(X_{b}=X_{c}\setminus\{v\}\). Then the dynamic programming state corresponding to \((b,M)\) satisfies the following recurrence relation: \[\operatorname{Match}[b,M]=\operatorname{Match}[c,M]+ \operatorname{Match}[c,M\cup\{v\}]\] \[+\sum_{u\in X_{b}\setminus M:\;uv\in E(G)}\operatorname{Match}[c,M \cup\{v,u\}]\] In order to derive the above recurrence relation, let \(F\) be a matching from the set \(\mathcal{RM}(b,M)\). Note that \(v\notin M\), as \(M\subseteq X_{b}\). There are two possibilities for \(v\) i.e., either it will be covered by some edge in \(F\) or it remains unmatched. For the latter case, all such matchings \(F\) for which \(v\) remains unmatched are in one-to-one correpondence with all matchings of the set \(\mathcal{RM}(c,M\cup\{v\}\), and number of such matchings correspond to the second term \(\operatorname{Match}[c,M\cup\{v\}]\) of the recurrison. Now let us consider the second case, where \(v\) will be matched by some edge \(uv\in F\). Now there are again two possible cases to consider i.e., either \(u\notin X_{b}\) or \(u\in X_{b}\). For the former case when \(u\notin X_{b}\), it is clear that there is a bijection between all such matchings \(F\) and all elements of \(\mathcal{RM}(c,M)\). Therefore, number of such matchings correspond to the first term \(\operatorname{Match}[c,M]\) of the recurrence relation. Now let's consider the latter case when \(u\in X_{b}\), then the matching \(F\setminus uv\in\mathcal{RM}(c,MM\cup\{u,v\})\), and number of such matchings correspond to the last term \(\sum_{u\in X_{b}\setminus M:\;uv\in E(G)}\operatorname{Match}[c,M\cup\{v,u\}]\) of the recurrence relation. 
This transition to \(X_{b}\) can be computed in \(\mathcal{O}(|X_{b}|)=\mathcal{O}(\operatorname{pw}(G))\) time. **Proposition 5**.: _The time complexity for finding the total number of matchings for a graph \(G=(V,E)\), with \(|V|=n\) and pathwidth \(\operatorname{pw}\), using the above algorithm is \(\mathcal{O}(n\cdot\operatorname{poly}(\operatorname{pw})\cdot 2^{\operatorname{pw}})\)._ Proof.: Same proof as Proposition 3. #### 3.2.2 Bounded Treewidth We will use a dynamic programming approach over the nice tree decomposition of the given graph \(G\). The definitions of respectful matchings and dynamic programming states used here are same as Section 3.2.1. Let \(\mathcal{T}=\{T,\{X_{t}\}_{t\in V(T)}\}\) be a nice tree decomposition of the graph \(G=(V,E)\) under consideration. We now provide a bottom-up approach for filling up the dynamic table. The dynamic table relations depend on the type of the bag. There are four possible cases for the type of bags, i.e., when \(X_{b}\) is: leaf node, introduce node, forget node and join node respectively. The recurrence relations for leaf node, introduce node and forget node remains the same as Section 3.2.1. Therefore, we only need to consider the case of join node which is described as follows: * **Join node:** Let \(X_{b}\) be the join node with \(X_{c_{1}}\) and \(X_{c_{2}}\) as its children. Then the dynamic programming state corresponding to \((b,M)\) will satisfy the following recurrence relation: \[\operatorname{Match}[b,M]=\sum_{H_{1}\sqcup H_{2}=X_{b}\setminus M} \operatorname{Match}[c_{1},M\cup H_{2}]\cdot\operatorname{Match}[c_{2},M\cup H _{1}]\] where \(H_{1}\sqcup H_{2}=X_{b}\setminus M\) means \(H_{1}\cup H_{2}=X_{b}\setminus M\) and \(H_{1}\cap H_{2}=\varnothing\). In order to derive the above recurrence relation, let \(F\) be a matching from the set \(\mathcal{RM}(b,M)\). By Lemma 2, \(F\) doesn't have any edges between \(G^{\downarrow}_{c_{1}}\setminus X_{b}\) and \(G^{\downarrow}_{c_{2}}\setminus X_{b}\). Therefore, it can be split into two matchings, \(F_{1}:=E(G^{\downarrow}_{c_{1}})\cap F\) and \(F_{2}:=E(G^{\downarrow}_{c_{2}})\cap F\). Let \(H_{1}:=V(F_{1})\cap X_{b}\) and \(H_{2}:=V(F_{2})\cap X_{b}\). It is clear from the Definition 9, that \(F_{1}\in\mathcal{RM}(c_{1},M\cup H_{2})\) and \(F_{2}\in\mathcal{RM}(c_{2},M\cup H_{1})\). If we choose \(H_{1}\) and \(H_{2}\) such that \(H_{1}\sqcup H_{2}=X_{b}\setminus M\), then for all such matchings \(F_{1}\in\mathcal{RM}(c_{1},M\cup H_{2})\) and \(F_{2}\in\mathcal{RM}(c_{2},M\cup H_{1})\), we get a matching \(F=F_{1}\cup F_{2}\), such that \(F\in\mathcal{RM}(b,M)\). This leads to our recurrence relation. **Proposition 6**.: _The time complexity for finding the total number of matchings for a graph \(G=(V,E)\), with \(|V|=n\) and treewidth \(\operatorname{tw}\), using the above algorithm is \(\mathcal{O}(n\cdot\operatorname{poly}(\operatorname{tw})\cdot 3^{\operatorname{tw}})\)._ Proof.: Same proof as Proposition 4. ### Counting Independent Sets #### 3.3.1 Bounded Pathwidth We will use a dynamic programming approach over the nice path decomposition of the given graph \(G\). First we will define the dynamic programming state as follows: **Definition 11** (Respectful Independent Sets).: _Let us fix a nice path decomposition \(\mathcal{P}=\{X_{1},\ldots X_{r}\}\) for the given graph \(G\). 
For each \(b\in[r]\) and each \(M\subseteq X_{b}\), we define \(\mathcal{RI}(b,M)\) as the set of all independent sets \(I\) in \(G^{\downarrow}_{b}\) such that \(I\cap X_{b}=M\)._

**Definition 12** (Dynamic Programming State for Independent Sets).: _Let us fix a nice path decomposition \(\mathcal{P}=\{X_{1},\ldots X_{r}\}\) for the given graph \(G\). For each \(b\in[r]\) and each \(M\subseteq X_{b}\), we define \(\operatorname{Ind}[b,M]\) to be the number of elements in the set \(\mathcal{RI}(b,M)\)._

**Remark 4**.: _Notice that if \(X_{r}=\varnothing\) is the root node, then \(G^{\downarrow}_{r}=G\), and independent sets of \(G\) are in one-to-one correspondence with respectful independent sets \(\mathcal{RI}(r,\varnothing)\), which implies \(\operatorname{Ind}[r,\varnothing]\) is the total number of independent sets of \(G\)._

We now provide a bottom-up approach for filling up the dynamic table. The dynamic table relations depend on the type of the bag. We need to consider three cases when \(X_{b}\) is: leaf node, introduce node and forget node respectively.

* **Leaf node:** If \(X_{l}\) is a leaf bag then \(\mathcal{RI}(l,M)\) contains only the empty independent set, as \(X_{l}=\varnothing\). Therefore, \(\operatorname{Ind}[l,\varnothing]=1\).
* **Introduce node:** Let \(X_{b}\) be the introduce bag such that \(X_{b}=X_{c}\cup\{v\}\). Then the dynamic programming state corresponding to \((b,M)\) satisfies the following recurrence relations: \[\operatorname{Ind}[b,M]=\begin{cases}\operatorname{Ind}[c,M],&v\notin M,\\ \operatorname{Ind}[c,M\setminus v],&v\in M\text{ and }N(v)\cap M=\varnothing,\\ 0,&v\in M\text{ and }N(v)\cap M\neq\varnothing.\end{cases}\] In order to derive the above recurrence relations, we have to consider two possibilities for \(v\). 1. If \(v\in M\), there are again two subcases to consider, i.e., \(N(v)\cap M=\varnothing\) and \(N(v)\cap M\neq\varnothing\). For the former subcase, all such independent sets \(I\) are in one-to-one correspondence with the elements of the set \(\mathcal{RI}(c,M\setminus v)\): by Lemma 1 all neighbours of \(v\) in \(G^{\downarrow}_{b}\) lie in \(X_{c}\), so adding \(v\) to an independent set \(I^{\prime}\in\mathcal{RI}(c,M\setminus v)\) creates no conflict when \(N(v)\cap M=\varnothing\). Therefore, \(\operatorname{Ind}[b,M]=\operatorname{Ind}[c,M\setminus v]\) for this subcase. For the latter subcase, \(M\) contains both \(v\) and one of its neighbours, so there cannot exist any independent set \(I\) with \(I\cap X_{b}=M\). Therefore, \(\operatorname{Ind}[b,M]=0\) for this subcase. 2. If \(v\notin M\), then there is a bijection between \(\mathcal{RI}(b,M)\) and \(\mathcal{RI}(c,M)\). Therefore, the number of independent sets \(\operatorname{Ind}[b,M]\) is equal to \(\operatorname{Ind}[c,M]\). This transition from \(X_{c}\) to \(X_{b}\) can be computed in \(\mathcal{O}(1)\) time.
* **Forget node:** Let \(X_{b}\) be the forget node such that \(X_{b}=X_{c}\setminus\{v\}\). Then the dynamic programming state corresponding to \((b,M)\) satisfies the following recurrence relation: \[\operatorname{Ind}[b,M]=\operatorname{Ind}[c,M]+\operatorname{Ind}[c,M\cup\{v\}]\] In order to derive the above recurrence relation, let \(I\) be an independent set from the set \(\mathcal{RI}(b,M)\). Now there are two possible cases, i.e., either \(v\notin I\) or \(v\in I\). For the former case when \(v\notin I\), there is a bijection between all such independent sets \(I\) and all the elements of the set \(\mathcal{RI}(c,M)\). Therefore, the number of such independent sets is equal to \(\operatorname{Ind}[c,M]\), and this corresponds to the first term of the recurrence relation.
For the latter case when \(v\in I\), there is a bijection between all such independent sets \(I\) and all the elements of the set \(\mathcal{RI}(c,M\cup\{v\})\). Therefore, the number of such independent sets is equal to \(\operatorname{Ind}[c,M\cup\{v\}]\), which is the second term of the recurrence relation. This transition to \(X_{b}\) can be computed in \(\mathcal{O}(1)\) time.

**Proposition 7**.: _The time complexity for finding the total number of independent sets for a graph \(G=(V,E)\), with \(|V|=n\) and pathwidth \(\operatorname{pw}\), using the above algorithm is \(\mathcal{O}(n\cdot\operatorname{poly}(\operatorname{pw})\cdot 2^{\operatorname{pw}})\)._

Proof.: Same proof as Proposition 3.

#### 3.3.2 Bounded Treewidth

We will use a dynamic programming approach over the nice tree decomposition of the given graph \(G\). The definitions of respectful independent sets and dynamic programming states used here are the same as in Section 3.3.1. Let \(\mathcal{T}=\{T,\{X_{t}\}_{t\in V(T)}\}\) be a nice tree decomposition of the graph \(G=(V,E)\) under consideration. Before we proceed with the dynamic programming algorithm, we first prove the following technical lemma, which will be required for the derivation of the recurrence relations.

**Lemma 8**.: _Let \(X_{b}\) be the join node with \(X_{c_{1}}\) and \(X_{c_{2}}\) as its children, then \(|\mathcal{RI}(b,M)|=|\mathcal{RI}(c_{1},M)|\cdot|\mathcal{RI}(c_{2},M)|\)._

Proof.: Let \(f\) be the map from \(\mathcal{RI}(b,M)\) to \(\mathcal{RI}(c_{1},M)\times\mathcal{RI}(c_{2},M)\) defined as follows: \[f(I)\mapsto(I_{1},I_{2})\] where \(I_{1}=I\cap V(G^{\downarrow}_{c_{1}})\) and \(I_{2}=I\cap V(G^{\downarrow}_{c_{2}})\). We will show that \(f\) is a bijection.

First, we show that \(f\) is injective. Let \(I,J\in\mathcal{RI}(b,M)\) be such that \(f(I)=f(J)\). Since \(V(G^{\downarrow}_{b})=V(G^{\downarrow}_{c_{1}})\cup V(G^{\downarrow}_{c_{2}})\), we have \(I=I_{1}\cup I_{2}\) and \(J=J_{1}\cup J_{2}\). From \(f(I)=f(J)\) we get \((I_{1},I_{2})=(J_{1},J_{2})\), and hence \(I=J\). This implies that \(f\) is injective.

Now, we show that \(f\) is surjective. Let \((J,K)\in\mathcal{RI}(c_{1},M)\times\mathcal{RI}(c_{2},M)\); we need to show that there exists an \(I\in\mathcal{RI}(b,M)\) such that \(f(I)=(J,K)\). Let \(I\) be \(J\cup K\). In order to complete the proof, we need to show that \(I\in\mathcal{RI}(b,M)\) and \(f(I)=(J,K)\). For all \(u,v\in I\), we need to show that there is no edge between \(u\) and \(v\). We consider the following cases for \(u\) and \(v\):

* \(u,v\in J\) or \(u,v\in K\): In both cases, there is no edge between \(u\) and \(v\), as \(J\) and \(K\) are independent sets of \(G^{\downarrow}_{c_{1}}\) and \(G^{\downarrow}_{c_{2}}\) respectively.
* \(u\in J\setminus K\) and \(v\in K\setminus J\): Note that \(u,v\notin X_{b}\), since \(J\cap X_{b}=K\cap X_{b}=M\) and hence every vertex of \(X_{b}\) contained in \(J\cup K\) belongs to both sets. We know, using Lemma 2, that there is no edge between \(V(G^{\downarrow}_{c_{1}})\setminus X_{b}\) and \(V(G^{\downarrow}_{c_{2}})\setminus X_{b}\). Therefore, there is no edge between \(u\) and \(v\).

This completes the proof that \(I\) is an independent set of \(G^{\downarrow}_{b}\). As \(I=J\cup K\) and \(J\cap X_{b}=K\cap X_{b}=M\), this implies that \(I\cap X_{b}=M\). Therefore, this completes the proof that \(I\in\mathcal{RI}(b,M)\) and shows that \(f\) is a surjective map. 

We now provide a bottom-up approach for filling up the dynamic table.
The dynamic table relations depend on the type of the bag. There are four possible cases for the type of bags, i.e., when \(X_{b}\) is: leaf node, introduce node, forget node and join node respectively. The recurrence relations for leaf node, introduce node and forget node remains the same as Section 3.3.1. Therefore, we only need to consider the case of join node which is described as follows: * **Join node:** Let \(X_{b}\) be the join node with \(X_{c_{1}}\) and \(X_{c_{2}}\) as its children. Then the dynamic programming state corresponding to \((b,M)\) will satisfy the following recurrence relation: \[\mathrm{Ind}[b,M]=\mathrm{Ind}[c_{1},M]\cdot\mathrm{Ind}[c_{2},M]\] This recurrence relation follows from Definition 12 and Lemma 8. This transition to \(X_{b}\) can be computed in \(\mathcal{O}(1)\) time. **Proposition 9**.: _The time complexity for finding the total number of independent sets for a graph \(G=(V,E)\), with \(|V|=n\) and treewidth \(\mathrm{tw}\), using the above algorithm is \(\mathcal{O}(n\cdot\mathrm{poly(tw)}\cdot 2^{\mathrm{tw}})\)._ Proof.: Same proof as Proposition 3. ### Graph Entropy based on Matchings In this section, we will introduce dynamic programming algorithms for computing matchings of all sizes for the given graph \(G\) for bounded pathwidth and bounded treewidth respectively. We need to slightly adapt the dynamic programming states defined in Section 3.2, in order to count matchings of all sizes. The derivation of the corresponding recurrence relations of dynamic programming states remain almost the same as Section 3.2. So, we will provide proofs only for the cases which are significantly different from the earlier sections. #### 3.4.1 Bounded Pathwidth **Definition 13** (Respectful Matchings of size \(k\)).: _Let us fix a nice path decomposition \(\mathcal{P}=\{X_{1},\ldots X_{r}\}\) for the given graph \(G\). For each \(b\in[r]\) and each \(M\subseteq X_{b}\), we define \(\mathcal{RM}(b,k,M)\) as the set of all matchings \(F\) in a \(G^{1}_{b}\setminus M\) such that \(|F|=k\) each matching edge \(uv\in F\) has at least one point in \(G^{1}_{b}\setminus X_{b}\): \(u\notin X_{b}\) or \(v\notin X_{b}\) and each point from \(X_{b}\setminus M\) is covered by \(F\)._ **Definition 14** (Dynamic Programming State for Matchings of size \(k\)).: _Let us fix a nice path decomposition \(\mathcal{P}=\{X_{1},\ldots X_{r}\}\) for the given graph \(G\). For each \(b\in[r]\) and each \(M\subseteq X_{b}\), we define \(\mathrm{Match}[b,k,M]\) to be the number of elements in the set \(\mathcal{RM}(b,k,M)\)._ **Remark 5**.: _Notice that if \(X_{r}=\varnothing\) is the root node, then \(G^{1}_{r}=G\), and all matchings of \(G\) are in one-to-one correspondence with respectful matchings \(\mathcal{RM}(r,k,\varnothing)\), which implies \(\mathrm{Match}[r,k,\varnothing]\) is the total number of matchings of size \(k\) of \(G\)._ We now provide a bottom-up approach for filling up the dynamic table. The dynamic table relations depend on the type of the bag. We need to consider three cases when \(X_{b}\) is: leaf node, introduce node and forget node respectively. * **Leaf node:** If \(X_{l}\) is a leaf bag then \(\mathcal{RM}(b,k,M)\) contains only empty matching as \(X_{l}=\varnothing\). Therefore, \(\mathrm{Match}[l,0,\varnothing]=1\) and for any other \(k>0\)\(\mathrm{Match}[l,k,\varnothing]=0\). * **Introduce node:** Let \(X_{b}\) be the introduce bag such that \(X_{b}=X_{c}\cup\{v\}\). 
Then the dynamic programming state corresponding to \((b,k,M)\) satisfies the following recurrence relations: \[\mathrm{Match}[b,k,M]=\begin{cases}\mathrm{Match}[c,k,M\setminus v],&v\in M,\\ 0,&v\not\in M.\end{cases}\] This transition from \(X_{c}\) to \(X_{b}\) can be computed in \(\mathcal{O}(1)\) time. * **Forget node:** Let \(X_{b}\) be the forget node such that \(X_{b}=X_{c}\setminus\{v\}\). Then the dynamic programming state corresponding to \((b,k,M)\) satisfies the following recurrence relation: \[\mathrm{Match}[b,k,M]=\mathrm{Match}[c,k,M] +\mathrm{Match}[c,k,M\cup\{v\}]\] \[+\sum_{u\in X_{b}\setminus M:\;uv\in E(G)}\mathrm{Match}[c,k-1,M \cup\{v,u\}]\] This transition to \(X_{b}\) can be computed in \(\mathcal{O}(|X_{b}|)=\mathcal{O}(\mathrm{pw}(G))\) time. **Proposition 10**.: _The time complexity for finding the total number of matchings of all possible sizes for a graph \(G=(V,E)\), with \(|V|=n\) and pathwidth \(\mathrm{pw}\), using the above algorithm is \(\mathcal{O}(n^{2}\cdot\mathrm{poly}(\mathrm{pw})\cdot 2^{\mathrm{pw}})\)._ Proof.: Proof same as Proposition 3. #### 3.4.2 Bounded Treewidth We will use a dynamic programming approach over the nice tree decomposition of the given graph \(G\). The definitions of respectful matchings of fixed size and dynamic programming states used here are same as Section 3.4.1. Let \(\mathcal{T}=\{T,\{X_{t}\}_{t\in V(T)}\}\) be a nice tree decomposition of the graph \(G=(V,E)\) under consideration. We now provide a bottom-up approach for filling up the dynamic table. The dynamic table relations depend on the type of the bag. There are four possible cases for the type of bags, i.e., when \(X_{b}\) is: leaf node, introduce node, forget node and join node respectively. The recurrence relations for leaf node, introduce node and forget node remains the same as Section 3.4.1. Therefore, we only need to consider the case of join node which is described as follows: * **Join node:** Let \(X_{b}\) be the join node with \(X_{c_{1}}\) and \(X_{c_{2}}\) as its children. Let us define \(n_{b}=|V(G_{b}^{\downarrow})|\) and also assume without loss of generality that \(n_{c_{1}}\leq n_{c_{2}}\). Then the dynamic programming state corresponding to \((b,k,M)\) will satisfy the following recurrence relation: \[\mathrm{Match}[b,k,M]=\] \[\sum_{0\leq k_{c_{1}}\leq n_{c_{1}}}\sum_{H_{1}\sqcup H_{2}=X_{b} \setminus M}\mathrm{Match}[c_{1},k_{c_{1}},M\cup H_{2}]\cdot\mathrm{Match}[c_{ 2},k-k_{c_{1}},M\cup H_{1}]\] where \(H_{1}\sqcup H_{2}=X_{b}\setminus M\) means \(H_{1}\cup H_{2}=X_{b}\setminus M\) and \(H_{1}\cap H_{2}=\varnothing\). In order to determine the time taken to compute the above transition from \(X_{c}\) to \(X_{b}\), we need the following definition and lemma. **Definition 15** (Smaller Subtrees).: _Let \(\mathcal{T}=\{T,\{X_{t}\}_{t\in V(T)}\}\) be a nice tree decomposition of the graph \(G=(V,E)\). For all join nodes \(X_{b}\) of \(\mathcal{T}\), let \(X_{c_{1}}\) and \(X_{c_{2}}\) be its children, and without loss of generality assume \(n_{c_{1}}\leq n_{c_{2}}\). We define the set of smaller subtrees as the set of all such \(T_{c_{1}}\)'s. 
More precisely, it can be defined as follows:_ \[\mathrm{SStrees}(\mathcal{T})=\{\mathrm{SmallestChild}(X_{b}):X_{b}\in\mathrm{JoinNodes}(\mathcal{T})\}\] _where \(\mathrm{SmallestChild}\), for any join node \(X_{b}\in\mathcal{T}\) with \(X_{c_{1}}\) and \(X_{c_{2}}\) as its children, is defined as follows:_ \[\mathrm{SmallestChild}(X_{b})=\{T_{c_{1}}:n_{c_{1}}\leq n_{c_{2}}\}\]

**Lemma 11**.: _Let \(\mathcal{T}=\{T,\{X_{t}\}_{t\in V(T)}\}\) be a nice tree decomposition of the graph \(G=(V,E)\). Then \(\sum n_{a}\) over all smaller subtrees \(T_{a}\) in \(\mathrm{SStrees}(\mathcal{T})\) is at most \(\mathcal{O}(n\cdot\log n)\)._

Proof.: Let \(x_{v}\) be the number of times a vertex \(v\) appears in the smaller subtree of some join node. Then \[\sum_{T_{a}\in\mathrm{SStrees}(\mathcal{T})}n_{a}=\sum_{v\in V(G)}x_{v}.\] Observe that \(v\) can be part of the smaller subtree of at most \(\log n\) of its join-node ancestors. This can be shown by keeping track of the current subtree while iterating through the ancestors of \(v\): every time \(v\) lies in the smaller subtree of a join node, the size of the tracked subtree at least doubles, and this can happen at most \(\log n\) times. Therefore, we get the following bound on the total number of vertices over all smaller subtrees: \[\sum_{T_{a}\in\mathrm{SStrees}(\mathcal{T})}n_{a}=\sum_{v\in V(G)}x_{v}\leq\sum_{v\in V(G)}\log n\leq n\log n.\]

**Proposition 12**.: _The time complexity for finding the total number of matchings of all possible sizes for a graph \(G=(V,E)\), with \(|V|=n\) and treewidth \(\operatorname{tw}\), using the above algorithm is \(\mathcal{O}(n^{2}\cdot\log n\cdot\operatorname{poly}(\operatorname{tw})\cdot 3^{\operatorname{tw}})\)._

Proof.: The transition for a join node is the most expensive operation of this algorithm; the rest of the cases are similar to the dynamic programming on the nice path decomposition in Section 3.4.1. We fill the dynamic programming table in a bottom-up manner, computing all states for \(k\) in increasing order. For each \(k\), the time complexity for computing \(\operatorname{Match}[b,k,M]\) for all \(M\) is \(\mathcal{O}(n_{c_{1}}\cdot 3^{\operatorname{tw}})\), where \(n_{c_{1}}\) is the size of the smaller child of node \(b\): for each \(M\) we sum over at most \(n_{c_{1}}+1\) values of \(k_{c_{1}}\) and over all splits \(H_{1}\sqcup H_{2}=X_{b}\setminus M\), and by the binomial theorem the total number of pairs \((M,H_{1})\) is \(3^{|X_{b}|}\). Now, using Lemma 11, we get the following bound: \[\sum_{b\in\operatorname{Join\ nodes}}\mathcal{O}(n_{c_{1}}\cdot 3^{\operatorname{tw}})=\mathcal{O}(3^{\operatorname{tw}}\cdot n\log n).\] We need to compute this for all \(1\leq k\leq n\). Therefore, the total time complexity of the algorithm is \(\mathcal{O}(n^{2}\cdot\log n\cdot\operatorname{poly}(\operatorname{tw})\cdot 3^{\operatorname{tw}})\).

### Graph Entropy Based on Independent Sets

Similarly to the previous section, in this section we introduce dynamic programming algorithms for computing independent sets of all sizes of the given graph \(G\), for bounded pathwidth and bounded treewidth respectively. We need to slightly adapt the dynamic programming states defined in Section 3.3 in order to count independent sets of all sizes. The derivation of the corresponding recurrence relations remains almost the same as in Section 3.3, so we provide proofs only for the cases which differ significantly from the earlier sections.

#### 3.5.1 Bounded Pathwidth

**Definition 16** (Respectful Independent Sets of size \(k\)).: _Let us fix a nice path decomposition \(\mathcal{P}=\{X_{1},\ldots X_{r}\}\) for the given graph \(G\).
For each \(b\in[r]\) and each \(M\subseteq X_{b}\), we define \(\mathcal{RI}(b,k,M)\) as the set of all independent sets \(I\) in \(G_{b}^{\downarrow}\) such that \(I\cap X_{b}=M\) and \(|I|=k\)._

**Definition 17** (Dynamic Programming State for Independent Sets of size \(k\)).: _Let us fix a nice path decomposition \(\mathcal{P}=\{X_{1},\ldots X_{r}\}\) for the given graph \(G\). For each \(b\in[r]\) and each \(M\subseteq X_{b}\), we define \(\operatorname{Ind}[b,k,M]\) to be the number of elements in the set \(\mathcal{RI}(b,k,M)\)._

**Remark 6**.: _Notice that if \(X_{r}=\varnothing\) is the root node, then \(G_{r}^{\downarrow}=G\), and independent sets of \(G\) of size \(k\) are in one-to-one correspondence with respectful independent sets \(\mathcal{RI}(r,k,\varnothing)\), which implies \(\operatorname{Ind}[r,k,\varnothing]\) is the total number of independent sets of size \(k\) in \(G\)._

We now provide a bottom-up approach for filling up the dynamic table. The dynamic table relations depend on the type of the bag. We need to consider three cases when \(X_{b}\) is: leaf node, introduce node and forget node respectively.

* **Leaf node:** If \(X_{l}\) is a leaf bag then \(\mathcal{RI}(l,k,M)\) contains only the empty independent set, as \(X_{l}=\varnothing\). Therefore, \(\operatorname{Ind}[l,0,\varnothing]=1\) and \(\operatorname{Ind}[l,k,\varnothing]=0\) for any \(k>0\).
* **Introduce node:** Let \(X_{b}\) be the introduce bag such that \(X_{b}=X_{c}\cup\{v\}\). Then the dynamic programming state corresponding to \((b,k,M)\) satisfies the following recurrence relations: \[\mathrm{Ind}[b,k,M]=\begin{cases}\mathrm{Ind}[c,k,M],&v\notin M,\\ \mathrm{Ind}[c,k-1,M\setminus v],&v\in M\text{ and }N(v)\cap M=\varnothing,\\ 0,&v\in M\text{ and }N(v)\cap M\neq\varnothing.\end{cases}\]
* **Forget node:** Let \(X_{b}\) be the forget node such that \(X_{b}=X_{c}\setminus\{v\}\). Then the dynamic programming state corresponding to \((b,k,M)\) satisfies the following recurrence relation: \[\mathrm{Ind}[b,k,M]=\mathrm{Ind}[c,k,M]+\mathrm{Ind}[c,k,M\cup\{v\}]\] Note that \(G_{b}^{\downarrow}=G_{c}^{\downarrow}\) at a forget node, so the size of an independent set does not change between the two bags; only its intersection with the bag does. This transition to \(X_{b}\) can be computed in \(\mathcal{O}(1)\) time.

**Proposition 13**.: _The time complexity for finding the total number of independent sets of all possible sizes for a graph \(G=(V,E)\), with \(|V|=n\) and pathwidth \(\mathrm{pw}\), using the above algorithm is \(\mathcal{O}(n^{2}\cdot\mathrm{poly}(\mathrm{pw})\cdot 2^{\mathrm{pw}})\)._

Proof.: Same proof as Proposition 3.

#### 3.5.2 Bounded Treewidth

We will use a dynamic programming approach over the nice tree decomposition of the given graph \(G\). The definitions of respectful independent sets of fixed size and the dynamic programming states used here are the same as in Section 3.5.1. Let \(\mathcal{T}=\{T,\{X_{t}\}_{t\in V(T)}\}\) be a nice tree decomposition of the graph \(G=(V,E)\) under consideration. We now provide a bottom-up approach for filling up the dynamic table. The dynamic table relations depend on the type of the bag. There are four possible cases for the type of bags, i.e., when \(X_{b}\) is: leaf node, introduce node, forget node and join node respectively. The recurrence relations for leaf node, introduce node and forget node remain the same as in Section 3.5.1. Therefore, we only need to consider the case of the join node, which is described as follows:

* **Join node:** Let \(X_{b}\) be the join node with \(X_{c_{1}}\) and \(X_{c_{2}}\) as its children. Let us define \(n_{b}=|V(G_{b}^{\downarrow})|\) and also assume without loss of generality that \(n_{c_{1}}\leq n_{c_{2}}\).
Then the dynamic programming state corresponding to \((b,k,M)\) will satisfy the following recurrence relation: \[\mathrm{Ind}[b,k,M]=\sum_{0\leq k_{c_{1}}\leq n_{c_{1}}}\mathrm{Ind}[c_{1},k_{c_{1}},M]\cdot\mathrm{Ind}[c_{2},k-k_{c_{1}}+|M|,M]\] The term \(|M|\) accounts for the fact that, in the bijection of Lemma 8, the vertices of \(M\) are counted in both \(I_{1}=I\cap V(G_{c_{1}}^{\downarrow})\) and \(I_{2}=I\cap V(G_{c_{2}}^{\downarrow})\), so \(|I|=|I_{1}|+|I_{2}|-|M|\).

**Proposition 14**.: _The time complexity for finding the total number of independent sets of all possible sizes for a graph \(G=(V,E)\), with \(|V|=n\) and treewidth \(\mathrm{tw}\), using the above algorithm is \(\mathcal{O}(n^{2}\cdot\log n\cdot\mathrm{poly}(\mathrm{tw})\cdot 2^{\mathrm{tw}})\)._

Proof.: Analogous to the proof of Proposition 12; here the join transition costs only \(\mathcal{O}(n_{c_{1}}\cdot 2^{\operatorname{tw}})\) per value of \(k\), since no summation over splits of the bag is needed.

### Naive Baseline Algorithms

We now provide a brief treatment of naive baseline algorithms for computing the total number of perfect matchings, matchings and independent sets, together with a complexity analysis. We used these baseline algorithms as a benchmark for our experiments and give a detailed comparison with our algorithms in Section 4.

**Notation 2**.: _We denote the number of perfect matchings, matchings and independent sets of \(G\) by \(\mathrm{PerfMatch}(G),\mathrm{Match}(G)\) and \(\mathrm{Ind}(G)\) respectively._

We now present the baseline algorithms for computing perfect matchings, matchings and independent sets. Below, \(uv\) denotes an arbitrary edge of \(G\), \(G\setminus\{u,v\}\) denotes the graph obtained by deleting both endpoints of this edge, \(G-(u,v)\) denotes the graph obtained by deleting only the edge itself, and \(v\) denotes an arbitrary vertex of positive degree with closed neighbourhood \(N[v]\):

* Perfect matchings: \[\mathrm{PerfMatch}(G)=\begin{cases}\mathrm{PerfMatch}(G\setminus\{u,v\})+\mathrm{PerfMatch}(G-(u,v)),&E(G)\neq\varnothing,\\ 1,&E(G)=\varnothing,V(G)=\varnothing,\\ 0,&E(G)=\varnothing,V(G)\neq\varnothing,\end{cases}\]
* Matchings: \[\operatorname{Match}(G)=\begin{cases}\operatorname{Match}(G\setminus\{u,v\})+\operatorname{Match}(G-(u,v)),&E(G)\neq\varnothing,\\ 1,&E(G)=\varnothing.\end{cases}\]
* Independent sets: \[\operatorname{Ind}(G)=\begin{cases}\operatorname{Ind}(G\setminus N[v])+\operatorname{Ind}(G\setminus v),&E(G)\neq\varnothing,\\ 2^{|V(G)|},&E(G)=\varnothing,\end{cases}\]

**Theorem 15**.: _Let \(G\) be a graph with \(|E(G)|=m\) and \(|V(G)|=n\); then the time complexity for computing all the above recurrence relations is \(\mathcal{O}(2^{m}\cdot\operatorname{poly}(n))\)._

Proof.: The above baseline algorithms implement the following branching strategy: consider an arbitrary edge (or vertex of positive degree) of the graph \(G\); it either belongs to the structure we are counting or it does not. This implies that the branching in the above recurrence formulas has degree at most two. Also, observe that after each branching step the number of edges of the graph decreases by at least one. Therefore, the depth of the recursion trees is at most \(m\), and the number of leaves of the recursion trees is at most \(2^{m}\). This completes the proof that the time complexity of all the above baseline algorithms is \(\mathcal{O}(2^{m}\cdot\operatorname{poly}(n))\).

## 4 Experimental Results

### Statistical analysis of molecules

In this section, we provide a detailed distribution of treewidth for all the PubChem compounds. We observed that more than 99.9% of the compounds have treewidth less than 5. We also provide the distribution of treewidths for handpicked data sources in PubChem and observed the same trend again. In particular, molecules with treewidth greater than 15 are rare in the entire PubChem dataset.

### Performance of our Algorithms

We now present the time vs. treewidth characteristics of our algorithms for computing the number of perfect matchings, matchings and independent sets (see Figure 7). The following characteristics are for selected datasets from PubChem.
We have also conducted our experiments on the entire PubChem dataset. The experimental dataset will be available in the Zenodo repository.

Figure 6: Distribution of treewidth for molecules in the PubChem database.

### Comparison with baseline algorithms

In this section, we present a comparison of our parameterized algorithms with the naive baseline algorithms (see Figure 8). We present time vs. edges characteristics for graphs selected uniformly at random from selected PubChem data sources. For this comparison, we limited ourselves to graphs with at most 40 edges; for each edge size we sampled 50 graphs from the data sources and report the average time per edge size. (A direct Python transcription of these baseline recurrences is sketched after the conclusion.)

### Implementation Details

#### 4.4.1 Workflow

We first downloaded SMILES [14] for all available PubChem compounds using the FTP bulk download option. We used open-source libraries (RDKit [10] and pysmiles [11]) to parse these SMILES into NetworkX graphs [12] in Python 3. Next, we used the tree decomposition solver FlowCutter to obtain tree decompositions for these graphs. Finally, we implemented our dynamic programming algorithms in C++ using these tree decompositions. All the statistical analysis for the experiments is performed in Python 3, and a detailed notebook will be provided in the respective Zenodo repository.

#### 4.4.2 System details

It took us nearly 3 complete days to finish all the computations using a Linux-based laptop with the following specifications: Ubuntu 20.04.5 LTS and an AMD Ryzen 5 4600H CPU with Radeon Graphics. Our entire experimental datasets and code repository will be available on Zenodo and GitHub.

Figure 7: Time vs. treewidth characteristics for the algorithms.

## 5 Conclusion

In summary, we presented parameterized algorithms for \(\#P\)-complete problems corresponding to topological indices and graph entropies. Our algorithms beat the known state-of-the-art methods in both the computer science and chemistry literature. We also provide a first-of-its-kind statistical distribution of treewidths, along with a comparison of our algorithms with the baseline algorithms, for the entire PubChem database of chemical compounds. We hope that the techniques and analysis in this paper will serve as an invitation to both chemists and computer scientists to use parameterized methods for computationally challenging problems in chemistry.
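As a reference point for the comparison above, the naive baseline recurrences translate almost verbatim into code. The sketch below is a direct transcription (not the C++ benchmark implementation used in the experiments) that uses NetworkX only for graph bookkeeping; the asserted counts for the 4-cycle agree with the pathwidth/treewidth dynamic programs on the same graph.

```python
import networkx as nx

def count_perfect_matchings(G):
    if G.number_of_edges() == 0:
        return 1 if G.number_of_nodes() == 0 else 0
    u, v = next(iter(G.edges()))
    take = G.copy(); take.remove_nodes_from([u, v])        # edge uv in the matching
    skip = G.copy(); skip.remove_edge(u, v)                # edge uv not in the matching
    return count_perfect_matchings(take) + count_perfect_matchings(skip)

def count_matchings(G):
    if G.number_of_edges() == 0:
        return 1                                           # only the empty matching
    u, v = next(iter(G.edges()))
    take = G.copy(); take.remove_nodes_from([u, v])
    skip = G.copy(); skip.remove_edge(u, v)
    return count_matchings(take) + count_matchings(skip)

def count_independent_sets(G):
    if G.number_of_edges() == 0:
        return 2 ** G.number_of_nodes()                    # every vertex subset is independent
    v = next(u for u in G if G.degree(u) > 0)              # branch on a non-isolated vertex
    take = G.copy(); take.remove_nodes_from([v] + list(G.neighbors(v)))  # v in the set
    skip = G.copy(); skip.remove_node(v)                   # v not in the set
    return count_independent_sets(take) + count_independent_sets(skip)

C4 = nx.cycle_graph(4)
assert count_perfect_matchings(C4) == 2
assert count_matchings(C4) == 7
assert count_independent_sets(C4) == 7
```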
2303.14516
OVeNet: Offset Vector Network for Semantic Segmentation
Semantic segmentation is a fundamental task in visual scene understanding. We focus on the supervised setting, where ground-truth semantic annotations are available. Based on knowledge about the high regularity of real-world scenes, we propose a method for improving class predictions by learning to selectively exploit information from neighboring pixels. In particular, our method is based on the prior that for each pixel, there is a seed pixel in its close neighborhood sharing the same prediction with the former. Motivated by this prior, we design a novel two-head network, named Offset Vector Network (OVeNet), which generates both standard semantic predictions and a dense 2D offset vector field indicating the offset from each pixel to the respective seed pixel, which is used to compute an alternative, seed-based semantic prediction. The two predictions are adaptively fused at each pixel using a learnt dense confidence map for the predicted offset vector field. We supervise offset vectors indirectly via optimizing the seed-based prediction and via a novel loss on the confidence map. Compared to the baseline state-of-the-art architectures HRNet and HRNet+OCR on which OVeNet is built, the latter achieves significant performance gains on three prominent benchmarks for semantic segmentation, namely Cityscapes, ACDC and ADE20K. Code is available at https://github.com/stamatisalex/OVeNet
Stamatis Alexandropoulos, Christos Sakaridis, Petros Maragos
2023-03-25T16:52:42Z
http://arxiv.org/abs/2303.14516v2
# OVeNet: Offset Vector Network for Semantic Segmentation

###### Abstract

Semantic segmentation is a fundamental task in visual scene understanding. We focus on the supervised setting, where ground-truth semantic annotations are available. Based on knowledge about the high regularity of real-world scenes, we propose a method for improving class predictions by learning to selectively exploit information from neighboring pixels. In particular, our method is based on the prior that for each pixel, there is a seed pixel in its close neighborhood sharing the same prediction with the former. Motivated by this prior, we design a novel two-head network, named Offset Vector Network (OVeNet), which generates both standard semantic predictions and a dense 2D offset vector field indicating the offset from each pixel to the respective seed pixel, which is used to compute an alternative, seed-based semantic prediction. The two predictions are adaptively fused at each pixel using a learnt dense confidence map for the predicted offset vector field. We supervise offset vectors indirectly via optimizing the seed-based prediction and via a novel loss on the confidence map. Compared to the baseline state-of-the-art architectures HRNet and HRNet\(+\)OCR on which OVeNet is built, the latter achieves significant performance gains on two prominent benchmarks for semantic segmentation of driving scenes, namely Cityscapes and ACDC. Code is available at [https://github.com/stamatisalex/OVeNet](https://github.com/stamatisalex/OVeNet).

## 1 Introduction

Semantic segmentation is one of the most central tasks in computer vision. In particular, it is the task of assigning a class to every pixel in a given image. It has lots of applications in a variety of fields, such as autonomous driving [7, 32], robotics [3, 71], and medical image processing [1, 64], where pixel-level labeling is critical. The adoption of convolutional neural networks (CNNs) [54] for semantic image segmentation has led to a tremendous improvement in performance on challenging, large-scale datasets, such as Cityscapes [19], MS COCO [49] and ACDC [65]. Most of the related works [12, 15, 58, 74, 85] focus primarily on architectural modifications of the employed networks in order to better combine global context aggregation and local detail preservation, and use a simple loss that is computed on individual pixels. The design of more sophisticated losses [35, 56, 86] that take into account the structure which is present in semantic labelings has received significantly less attention. Many supervised techniques utilize a pixel-level loss function that handles predictions for individual pixels independently of each other. By doing so, they ignore the high regularity of real-world scenes, which can eventually profit the final model's performance by leveraging information from adjacent pixels. Thus, these methods misclassify several pixels, primarily near semantic boundaries, which leads to major losses in performance.

Figure 1: Real-world scenes have a high degree of regularity. We propose a semantic segmentation method which can exploit this regularity, by learning to selectively leverage information from neighboring _seed_ pixels. Our proposed Offset Vector Network (OVeNet) can improve upon state-of-the-art architectures [74] by estimating the offset vectors to such seed pixels and using them to refine semantic predictions.
Based on knowledge about the high regularity of real scenes, we propose a method for improving class predictions by learning to selectively exploit information from neighboring pixels. In particular, the general nature of this idea is applicable on SOTA models like HRNet [74] or HRNet \(+\) OCR [86] and can extend these models by adding a second head to them capable of achieving this goal. The architecture of our Offset Vector Network (OVeNet) is shown in Fig. 2. In particular, following the base architecture of the backbone model (e.g HRNet or HRNet \(+\) OCR), the first head of our network outputs the initial pixel-level semantic predictions. In general, two pixels \(\mathbf{p}\) and \(\mathbf{q}\) that belong to the same category share the same semantic outcome. If the pixels belong to the same class, using the label of \(\mathbf{q}\) for estimating class at the position of \(\mathbf{p}\) results in a correct prediction. We leverage this property by learning to identify _seed pixels_ which belong to the same class as the examined pixel, whenever such pixels exist, in order to selectively use the prediction at the former for improving the prediction at the latter. This idea is motivated by a prior which states for every pixel \(\mathbf{p}\) associated with a 2D semantically segmented image, there exists a seed pixel \(\mathbf{q}\) in the neighborhood of \(\mathbf{p}\) which shares the same prediction with the former. In order to predict classes with this scheme, we need to find the regions where the prior is valid. Furthermore, in order to point out the seed pixels in these regions, we must predict the offset vector \(\mathbf{o}(\mathbf{p})=\mathbf{q}-\mathbf{p}\) for each pixel \(\mathbf{p}\). As a result, we design a second head that generates a dense offset vector field and a confidence map. The predicted offsets are used to resample the class predictions from the first head and generate a second class prediction. The outcomes from the two heads are fused adaptively using the learnt confidence map as fusion weights, in order to down-weigh the offset-based prediction and rely primarily on the basic class prediction in regions where the prior is not valid. Thanks to using seed pixels for prediction, our network classifies several pixels with incorrect initial predictions, e.g., boundary pixels, to the correct classes. Thus, it improves the shape as well as the form of the corresponding segments, leading to more realistic results. Last but not least, we propose a confidence loss which supervises the confidence map explicitly and further improves performance. An illustrative example of this concept is depicted in Fig. 1, where OVeNet outperforms the baseline HRNet [74] model, since it enlarges correctly the road and the car segment (red and yellow frame correspondingly) and reduces the total number of misclassified pixels. We evaluate our method extensively on 2 primary benchmarks for semantic segmentation of driving scenes: Cityscapes [19] and ACDC [65]. We implement our offset vector branch both on HRNet [74] and HRNet\(+\)OCR [87]. Our approach significantly improves the initial models' output predictions by achieving better mean and per-class results. We conduct a thorough qualitative and quantitative experimental comparison to show the clear advantages of our method over previous SOTA techniques. 
## 2 Related Work Semantic segmentation architectures.Fully convolutional networks [66, 67] were the first models that re-architected and fine-tuned classification networks to direct dense prediction of semantic segmentation. They generated low-resolution representations by eliminating the fully-connected layers from a classification network (e.g AlexNet [38], VGGNet [68] or GoogleNet [69]) and then estimating coarse segmentation maps from those representations. To create medium-resolution representations [11, 12, 13, 42, 85], fully convolutional networks were expanded using dilated/atrous convolutions, which replaced a few strided convolutions and their associated ones. Following, in order to restore high-resolution representations from low-resolution representations an upsample process was used. This process involved a subnetwork that was symmetric to the downsample process (e.g VGGNet [68]), and included skipping connections between mirrored layers to transform the pooling indices (e.g. DeconvNet [58]). Other methods include duplicating feature maps, which is used in architectures like U-Net [64] and Hourglass [6, 18, 20, 33, 57, 70, 80, 82] or encoder-decoder architectures [2, 61]. Lastly, the process of asymmetric upsampling [5, 16, 30, 48, 60, 72, 78, 91] has also been extensively researched. The models' representations were then enhanced to include multi-scale contextual information [8, 9, 92]. PSPNet [93] utilized regular convolutions on pyramid pooling representations to capture context at multiple scales, while the DeepLab series [12, 13] used parallel dilated convolutions with different dilation rates to capture context from different scales. Recent research [27, 40, 81] proposed extensions, such as DenseASPP [81], which increased the density of dilated rates to cover larger scale ranges. Other studies [15, 25, 44] used encoder-decoder structures to exploit the multi-resolution features as the multi-scale context. Here belongs the HRNet [74], the baseline model of our method. HRNet connects high-to-low convolution streams in parallel. It ensures that high-resolution representations are maintained throughout the entire process and creates dependable high-resolution representations with accurate positional information by repeatedly merging the representations from various resolution streams. Applying additionally the OCR [86] method, HRNet \(+\) OCR is one of the leading models in the task of semantic segmentation. Lately, transformers have been successful in computer vision tasks demonstrating their effectiveness. ViT [22] was the first attempt to use the vanilla transformer architecture [73] for image classification without extensive modification. Unlike later methods, such as PVT [76] and Swin [53], that incorporated vision-specific inductive biases into their architectures, the plain ViT suffers inferior performance on dense predictions due to weak prior assumptions. To tackle this problem, the ViT-Adapter [17] was introduced, which allowed plain ViT to achieve comparable performance to vision-specific transformers and achieves the SOTA performance on this task. **Semantic segmentation loss functions.** Image segmentation has highly correlated outputs among the pixels. Converting pixel labeling problem into an independent problem can lead to problems such as producing results that are spatially inconsistent and have unwanted artifacts, making pixel-level classification unnecessarily challenging. 
To solve this problem, several techniques [14, 37, 52, 95] have been developed, such as integrating structural information into segmentation. For instance, Chen et al. [12] utilized denseCRF [37] for refining the final segmentation result. Subsequently, Zheng et al. [95] and Liu et al. [52] made the CRF module differentiable within the deep neural network. Other methods that have been used to encode structure include pairwise low-level image cues like grouping affinity [51, 55] and contour cues [4, 10]. GANs [63] are another alternative for imposing structural regularity on the neural network output. However, these methods may not work well when the visual appearance changes, or they may require expensive iterative inference procedures. Thus, Ke et al. [35] introduced AAFs, which are easier to train than GANs and more efficient than CRFs, without run-time inference. Another loss function [56], suited to real-time applications, pulls the spatial embeddings of pixels belonging to the same instance together.

**Offset vector-based methods** are useful for image analysis tasks that involve adjacent pixels. In particular, they can effectively exploit the information contained in neighbouring pixels, handle image distortions and noise, and improve the accuracy of various image analysis tasks. They can be used in applications such as depth estimation [59] or semantic segmentation [56, 88]. Based on knowledge about the high regularity of real 3D scenes, P3Depth [59], a method for depth estimation, learns to selectively leverage information from coplanar pixels to improve the predicted depth. In semantic segmentation, SegFix [88] is a model-agnostic post-processing scheme that improves the boundary quality of the segmentation result. Motivated by the empirical observation that the label predictions of interior pixels are more reliable, SegFix replaces the originally unreliable predictions of boundary pixels by the predictions of interior pixels. OVeNet provides another perspective on using offset vectors and structure modeling by matching the relations between neighbouring pixels in the label space. Although our approach is inspired by the P3Depth idea, we focus on a different, 2D task, namely semantic segmentation. Moreover, the key difference between our method and SegFix is the timing of their application: OVeNet integrates the offset vector learning process into the model training, while SegFix applies the offset correction as a separate post-processing step.

## 3 Method

In this section, we analyze our method shown in Fig. 2. First, in Sec. 3.1 we give some basic notation and terminology of semantic segmentation. As mentioned in Sec. 1, our network estimates semantic labels by selectively combining information from each pixel and its corresponding seed pixel. The intuition behind and the advantages of using seed pixels to improve the initial prediction of a model are described in detail in Sec. 3.2. Lastly, in Sec. 3.3 we introduce an additional confidence loss, which further enhances our method.

### Terminology

Semantic segmentation requires learning a dense mapping \(f_{\theta}:I(u,v)\to S(u,v)\), where \(I\) is the input image with spatial dimensions \(H\times W\), \(S\) is the corresponding output prediction map of the same resolution, \((u,v)\) are pixel coordinates in the image space and \(\theta\) are the parameters of the mapping \(f\).
In supervised semantic segmentation, a ground-truth semantically segmented map \(H\) is available for each image during training. The aim is to optimize the function parameters \(\theta\) such that the predicted output map is as close as possible to the ground-truth map across the entire training set \(T\). This can be achieved by minimizing the difference between the predicted and ground-truth maps: \[\min_{\theta}\sum_{(I,H)\in T}\mathcal{L}(f_{\theta}(I),H) \tag{1}\] where \(\mathcal{L}\) is a loss function that penalizes deviations between the prediction and the ground truth.

### Seed Pixel Identification

Let us assume we have one pixel \(\mathbf{p}\) which belongs to a segment of a semantically segmented image. By definition, every other pixel of this segment has the same class value. Thus, ideally, in order to get all class values correct, the network only has to predict the class at one of these pixels, \(\mathbf{q}\). This pixel can be interpreted as the seed pixel that determines the class of the segment. We let the network find this seed pixel and the corresponding region. Let us define a prior which is a relaxed version of this idea.

**Definition 1**: _For every pixel \(\mathbf{p}\) associated with a 2D semantically segmented image, there exists a seed pixel \(\mathbf{q}\) in the neighborhood of \(\mathbf{p}\) which shares the same prediction with the former._

In general, there may be numerous seed pixels for \(\mathbf{p}\) or none at all. Given that Definition 1 holds, the semantic segmentation task for \(\mathbf{p}\) can be solved by identifying \(\mathbf{q}\). For this reason, we let our network predict the offset vector \(\mathbf{o}(\mathbf{p})=\mathbf{q}-\mathbf{p}\). Thus, we design our model so that it features a second, offset head and let this head predict a dense offset vector field \(\mathbf{o}(u,v)\). The two heads of the network share a common main body and then follow different paths. We resample the initial logits \(\mathbf{C}\), predicted by the first head, using the estimated offset vector field via: \[\mathbf{C_{s}}(\mathbf{p})=\mathbf{C}(\mathbf{p}+\mathbf{o}(\mathbf{p})) \tag{2}\] To handle fractional offsets, bilinear interpolation is used. The resampled logits are then used to compute a second semantic segmentation prediction: \[S_{s}(u,v)=h(\mathbf{C_{s}}(u,v),u,v)\] \[\implies S_{s}(\mathbf{p})=S_{i}(\mathbf{p}+\mathbf{o}(\mathbf{p})) \tag{3}\] based on the seed locations. In our experiments, \(h\) is the softmax function. Since the prior is not always correct, the initial semantic prediction \(S_{i}\) may be preferable to the seed-based prediction \(S_{s}\). To account for such cases, the second head additionally predicts a confidence map \(F(u,v)\in[0,1]\), which represents the model's confidence in adopting the predicted seed pixels for semantic segmentation via \(S_{s}\). The confidence map is used to compute the final prediction by adaptively fusing \(S_{i}\) and \(S_{s}\): \[S_{f}(\mathbf{p})=(1-F(\mathbf{p}))S_{i}(\mathbf{p})+F(\mathbf{p})S_{s}( \mathbf{p}) \tag{4}\] We apply supervision to each of \(S_{f}\), \(S_{s}\), and \(S_{i}\) in our model, by optimizing the following loss: \[\mathcal{L}_{\text{semantic}}=\mathcal{L}(S_{f},H)+\kappa\mathcal{L}(S_{s},H) +\lambda\mathcal{L}(S_{i},H) \tag{5}\] where \(\kappa\) and \(\lambda\) are hyperparameters and \(H\) denotes the ground-truth train ids of each class for each pixel.
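As an illustration of Eqs. (2)-(5), the following is a minimal PyTorch-style sketch of the resampling, fusion, and supervision steps. It is not the authors' implementation: the tensor layouts, the use of `grid_sample`, the `ignore_index`, and the plain cross-entropy standing in for the OHEM variant are our own assumptions. Offsets are taken to be in normalized image coordinates, with channel 0 the horizontal and channel 1 the vertical component.

```python
import torch
import torch.nn.functional as F

def resample_and_fuse(logits, offsets, conf):
    """Eqs. (2)-(4): resample the class logits at the seed locations p + o(p)
    with bilinear interpolation, then fuse the two predictions using the
    confidence map.  logits: (N, C, H, W); offsets: (N, 2, H, W) in [-1, 1]
    normalized coordinates; conf: (N, 1, H, W) in [0, 1]."""
    n, _, h, w = logits.shape
    ys = torch.linspace(-1, 1, h, device=logits.device)
    xs = torch.linspace(-1, 1, w, device=logits.device)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    identity = torch.stack((gx, gy), dim=-1).expand(n, h, w, 2)    # grid for p itself
    grid = identity + offsets.permute(0, 2, 3, 1)                  # p + o(p)
    seed_logits = F.grid_sample(logits, grid, mode="bilinear", align_corners=True)
    s_i = logits.softmax(dim=1)             # initial prediction S_i
    s_s = seed_logits.softmax(dim=1)        # seed-based prediction S_s
    s_f = (1.0 - conf) * s_i + conf * s_s   # adaptive fusion, Eq. (4)
    return s_i, s_s, s_f

def semantic_loss(s_i, s_s, s_f, target, kappa=0.5, lam=0.5, ignore_index=255):
    """Eq. (5), with plain cross-entropy standing in for the OHEM variant."""
    def nll(prob):
        return F.nll_loss(torch.log(prob.clamp_min(1e-8)), target,
                          ignore_index=ignore_index)
    return nll(s_f) + kappa * nll(s_s) + lam * nll(s_i)
```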
In this way, we encourage the initial model's head to output an accurate prediction across all pixels, even when they have a high confidence value, and the offset vector head to learn high confidence values for pixels for which Definition 1 holds and low confidence values for pixels for which the prior does not hold.

Figure 2: **Overview of OVeNet.** OVeNet is a two-headed network. The first head outputs semantic logits (\(\mathbf{C}\)), while the second head outputs a dense offset vector field (\(\mathbf{o}\)) identifying positions of seed pixels along with a confidence map (\(F\)). The logits are passed through a softmax function and output the initial class prediction (\(S_{i}\)) of the model. Then, the offset vectors are used to resample the logits from the first head and generate a second class prediction (\(S_{s}\)). The two predictions are adaptively fused using the confidence map, resulting in the final prediction \(S_{f}\). For the visualization of the offset vectors we use the optical flow color coding from [29]. Smaller vectors are lighter and color represents the direction.

This formulation, however, comes with a drawback. The model is not supervised directly on the offsets. In fact, it could just predict zero offsets everywhere and still give valid \(S_{s}\) and \(S_{f}\) predictions that are equal to \(S_{i}\). In practice, this undesirable behavior is avoided because the initial predictions \(S_{i}\) are erroneously smoothed around semantic boundaries, owing to the regularity of the mapping \(f_{\theta}\) in the case of neural networks. As a result, predicting a non-zero offset that points away from the boundary yields a lower value of \(\mathcal{L}_{\text{semantic}}\) for pixels on either side of the boundary, because such an offset selects a seed pixel for \(S_{s}\) that lies further from the border and thus suffers less from smoothing. These non-zero offsets also propagate from the boundaries to the inner parts of smooth segments, helping the network predict non-trivial offsets, thanks to the regularity of the mapping that forms the offset vector field.

### Confidence-Based Loss

Our confidence loss is based on the idea that the confidence at a pixel should reflect whether its predicted seed pixel belongs to the same segment. For each pixel \(\mathbf{p}\), we define the confidence loss as follows: \[\mathcal{L}_{f}(\mathbf{p})=-\mathbb{1}_{H(\mathbf{p})=H(\mathbf{p}+\mathbf{o}(\mathbf{p}))}\log F(\mathbf{p})-\mathbb{1}_{H(\mathbf{p})\neq H(\mathbf{p}+\mathbf{o}(\mathbf{p}))}\log(1-F(\mathbf{p})) \tag{6}\] The confidence should be large for pixels whose offset vector points to a seed pixel of the same class, and small for pixels whose offset vector points to a seed pixel of a different class (see the code sketch below). In total, the complete loss is: \[\mathcal{L}_{\text{final}}=\mathcal{L}_{\text{semantic}}+\mathcal{L}_{f} \tag{7}\]

## 4 Experiments

To evaluate the proposed method, we carry out comprehensive experiments on the Cityscapes and ACDC datasets. The experimental results demonstrate that our method achieves the best performance, outperforming all baselines. In the following, we first introduce the datasets, evaluation metrics and implementation details in Sec. 4.1. We then compare our method to SOTA approaches in Sec. 4.2. Finally, we perform a series of ablation experiments on the Cityscapes dataset in Sec. 4.3.
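For concreteness, here is a minimal PyTorch-style sketch of the confidence supervision in Eq. (6). As with the previous sketch, the tensor layouts, the nearest-neighbour lookup of the ground truth at the seed location, and the handling of ignored pixels are our own assumptions rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def confidence_loss(conf, offsets, target, ignore_index=255):
    """Eq. (6): binary cross-entropy on the confidence map F, where the positive
    label indicates that the offset points to a seed pixel with the same
    ground-truth class.  conf: (N, 1, H, W); offsets: (N, 2, H, W) in normalized
    coordinates; target: (N, H, W) integer class ids."""
    n, _, h, w = offsets.shape
    ys = torch.linspace(-1, 1, h, device=offsets.device)
    xs = torch.linspace(-1, 1, w, device=offsets.device)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    grid = torch.stack((gx, gy), dim=-1).expand(n, h, w, 2) + offsets.permute(0, 2, 3, 1)
    # ground-truth class at the seed location p + o(p), via nearest-neighbour lookup
    seed_target = F.grid_sample(target.unsqueeze(1).float(), grid,
                                mode="nearest", align_corners=True).squeeze(1).long()
    same = (seed_target == target).float()        # indicator H(p) == H(p + o(p))
    valid = (target != ignore_index).float()
    bce = F.binary_cross_entropy(conf.squeeze(1).clamp(1e-6, 1 - 1e-6),
                                 same, reduction="none")
    return (bce * valid).sum() / valid.sum().clamp_min(1.0)
```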
### Experimental Setup

In this section, we present the Cityscapes and ACDC datasets, which are used to evaluate our approach. Evaluation on these datasets is performed using standard semantic segmentation metrics explained below.

**Cityscapes** [19] is a challenging dataset for urban scene understanding. It includes semantic, instance-wise, and dense pixel annotations for 30 classes divided into eight groups (flat surfaces, humans, vehicles, constructions, objects, nature, sky, and void). Of these 30 classes, only 19 are used for parsing evaluation. The collection comprises around 5000 high-quality, finely annotated images and 20000 coarsely annotated images. The data was collected in 50 cities over several months, at different times of day, and under good weather conditions. The 5000 finely annotated images are divided into 2975, 500, and 1525 images for training, validation and testing, respectively.

**ACDC** [65] is a demanding dataset for training and testing semantic segmentation methods under adverse visual conditions. It consists of 4006 images divided evenly between four frequent unfavorable conditions: fog, night, rain, and snow. It uses 19 semantic classes, coinciding exactly with the evaluation classes of the Cityscapes dataset. ACDC is manually split into four sets corresponding to the examined conditions: 1000 foggy, 1006 nighttime, 1000 rainy and 1000 snowy images. Each set is split into 400 training, 100 validation and 500 test images, except the nighttime set, which has 106 validation images. This results in a total of 1600 training and 406 validation images with public annotations and 2000 test images with annotations withheld for benchmarking purposes.

**Evaluation Metrics.** The mean of the class-wise intersection over union (mIoU) is adopted as the main evaluation metric. In addition, we report three further scores on the test set: IoU category (cat.), iIoU class (cla.) and iIoU category (cat.).

**Implementation Details.** Our network consists of two heads. The first head outputs 19 channels, one for each class. The second head outputs three channels: one for each coordinate of the offset vectors and one for the confidence. Both heads follow the structure of HRNet. Both OVeNet and the baseline HRNet are initialized with pre-trained ImageNet [38] weights; this initialization is important for achieving competitive results, as in [74]. Following the same training protocol as in [74], the data are augmented by random cropping (from 1024 x 2048 to 512 x 1024 in Cityscapes and from 1080 x 1920 to 540 x 960 in ACDC), random scaling in the range [0.5, 2], and random horizontal flipping. We use the SGD optimizer with a base learning rate of 0.01, a momentum of 0.9 and a weight decay of 0.0005. The number of epochs used for training is 484. For lowering the learning rate, a poly learning rate policy with a power of 0.9 is applied. The offset vectors are restricted via a tanh layer to have a maximum length of \(\tau\) in normalized image coordinates. Based on the ablation study in Sec. 4.3, we set \(\tau\) to 0.5 by default and branch the second head off the last (\(4^{th}\)) stage of HRNet. The confidence map is predicted through a sigmoid layer. For the \(S_{i}\), \(S_{s}\) and \(S_{f}\) predictions, the OHEM cross-entropy loss is used. Notably, an early variant that computed the \(S_{i}\) and \(S_{s}\) predictions directly led to low mIoU results, whereas the final model with the best results outputs the \(S_{i}\), \(S_{s}\), \(S_{f}\) logits.
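The following sketch illustrates two of the implementation details above: the poly learning-rate policy and the tanh-based restriction of the offset vectors. It is only a sketch under our own assumptions; in particular, applying tanh componentwise bounds each offset coordinate by \(\tau\) rather than the vector length, and the optimizer setup around it is illustrative.

```python
import torch

def poly_lr(base_lr, iteration, max_iterations, power=0.9):
    """Poly learning-rate policy: lr = base_lr * (1 - t / T)^power."""
    return base_lr * (1.0 - iteration / max_iterations) ** power

def restrict_offsets(raw_offsets, tau=0.5):
    """Bound the predicted offsets via tanh (componentwise here, which limits
    each coordinate to [-tau, tau]; a simplification of the length constraint)."""
    return tau * torch.tanh(raw_offsets)

# Illustrative optimizer loop (model, loader and max_iterations are placeholders):
# optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
#                             momentum=0.9, weight_decay=0.0005)
# for it, (image, label) in enumerate(loader):
#     for group in optimizer.param_groups:
#         group["lr"] = poly_lr(0.01, it, max_iterations)
#     ...
```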
The loss weights \(\kappa\) and \(\lambda\) in Eq. (5) are set to 0.5. In addition, the confidence-based loss is applied using the final semantic prediction \(S_{f}\).

### Comparison with the State of the Art

**Cityscapes.** The results on Cityscapes, the major benchmark for semantic understanding of urban street scenes, are shown below. We achieve better results on Cityscapes than both the initial HRNet and the HRNet \(+\) OCR model under similar training time, outperforming prior SOTA methods. Table 1 compares our method with SOTA methods on the Cityscapes test set. All results are with six scales and flipping. Two cases without using coarse data are evaluated: one where the model is learned on the train set, and one where the model is learned on the train \(+\) val set. In both cases our offset vector-based model achieves superior performance. More specifically, we achieve a relative performance gain of \(1.4\%\) in mIoU, \(2.4\%\) in iIoU cla., \(0.4\%\) in IoU cat. and \(1.4\%\) in iIoU cat. when the model is learned only on the train set, and a gain of \(0.5\%\) in mIoU when the model is learned on the train \(+\) val set. We can also observe that OVeNet built on HRNet and learned only on the training set outperforms the mIoU of the initial HRNet learned on the training set alone as well as on both the training and validation sets. Table 2 compares our approach with HRNet's per-class results in detail. As we can see, our method achieves better results in the majority of classes. Our offset vector-based model learns an implicit representation of the different objects, which benefits the overall semantic segmentation capability of the network. Regarding val set results, HRNet achieves \(81.8\%\) mIoU, while OVeNet built on it surpasses it, reaching \(82.4\%\) mIoU. Qualitative results on Cityscapes support the above findings, as shown in Fig. 3. From left to right, we depict the RGB input image, the GT image, and the predictions of CCNet [28], DANet [24], HRNet [74] and our model. Our model correctly reclassifies falsely predicted pixels (identified by a red frame) in the first row. Furthermore, the model accurately eliminates the terrain segment that had been incorrectly predicted in the image (blue frame). In the second row, OVeNet exhibits superior performance compared to previous models, as it accurately expands the sidewalk and the pole depicted in the blue and yellow frames, respectively. Furthermore, not only does it successfully eliminate discontinuities in the pink frame, it also yields a better representation of the bicycle's shape. In the third set of images, our model reduces the false prediction made by HRNet in the sidewalk segment, achieving a better final result. Last but not least, in the last set of images, our model effectively reduces the pole (pink frame) in the segmentation output. Despite some errors that were not present in the initial HRNet, our model significantly increases the number of pixels correctly assigned to the 'bus' category, resulting in a superior prediction compared to the baseline. To sum up, OVeNet surpasses HRNet's performance.

**ACDC.** The results on ACDC, which targets semantic understanding of driving scenes in adverse visual conditions, are shown below.
We also achieve far better results than the initial HRNet under similar training time, outperforming prior SOTA methods. Table 4 compares our approach with SOTA models on each individual condition as well as on all conditions of the ACDC test set. OVeNet improves the initial HRNet model by \(2.5\%\) mIoU in "All" conditions. Additionally, the per-class results shown in Table 3 for the different conditions show that our approach outperforms HRNet in the vast majority of classes as well as in the total mIoU score. Specifically, in foggy scenes, performance on classes with small instances, such as person, rider, and bicycle, is very low. This is likely because of the combined effect of contrast reduction and poor resolution for examples of these classes that are far from the camera.

\begin{table} \begin{tabular}{l|c|c c c c} \hline & backbone & mIoU & iIoU cla. & IoU cat. & iIoU cat. \\ \hline \multicolumn{6}{l}{_Model learned on the train set_} \\ \hline PSPNet [92] & D-ResNet-\(101\) & 78.4 & 56.7 & 90.6 & 78.6 \\ PSANet [94] & D-ResNet-\(101\) & 78.6 & - & - & - \\ PAN [39] & D-ResNet-\(101\) & 78.6 & - & - & - \\ AAF [34] & D-ResNet-\(101\) & 79.1 & - & - & - \\ HRNet [74] & HRNetV2-W48 & 80.4 & 59.2 & 91.5 & 80.8 \\ \hline OVeNet & HRNetV2-W48 & **81.8** & **61.6** & **91.9** & **82.2** \\ \hline \multicolumn{6}{l}{_Model learned on the train-val set_} \\ \hline GridNet [23] & - & 69.5 & 44.1 & 87.9 & 71.1 \\ LRR-As [26] & - & 69.7 & 48.0 & 88.2 & 74.7 \\ DeepLab [8] & D-ResNet-\(101\) & 70.4 & 42.6 & 86.4 & 67.7 \\ LC [41] & - & 71.1 & - & - & - \\ Piecewise [47] & VGG-16 & 71.6 & 51.7 & 87.3 & 74.1 \\ FRRN [62] & - & 71.8 & 45.5 & 88.9 & 75.1 \\ RefineNet [46] & ResNet-\(101\) & 73.6 & 47.2 & 87.9 & 70.6 \\ PEARL [31] & D-ResNet-\(101\) & 75.4 & 51.6 & 89.2 & 75.1 \\ DSSPN [43] & D-ResNet-\(101\) & 76.6 & 56.2 & 89.6 & 77.8 \\ LKM [60] & ResNet-\(152\) & 76.9 & - & - & - \\ DUC-HDC [75] & - & 77.6 & 53.6 & 90.1 & 75.2 \\ SAC [90] & D-ResNet-\(101\) & 78.1 & - & - & - \\ DepthSeg [36] & D-ResNet-\(101\) & 78.2 & - & - & - \\ ResNet38 [77] & WResNet-\(38\) & 78.4 & 59.1 & 90.9 & 78.1 \\ BiSeNet [83] & ResNet-\(101\) & 78.9 & - & - & - \\ DFN [84] & ResNet-\(101\) & 79.3 & - & - & - \\ PSANet [94] & D-ResNet-\(101\) & 80.1 & - & - & - \\ PADNet [79] & D-ResNet-\(101\) & 80.3 & 58.8 & 90.8 & 78.5 \\ CFNet [89] & D-ResNet-\(101\) & 79.6 & - & - & - \\ Auto-DeepLab [50] & - & 80.4 & - & - & - \\ DenseASPP [81] & DenseNet-\(161\) & 80.6 & 59.1 & 90.9 & 78.1 \\ SVCNet [21] & ResNet-\(101\) & 81.0 & - & - & - \\ ANN [96] & D-ResNet-\(101\) & 81.3 & - & - & - \\ CCNet [28] & D-ResNet-\(101\) & 81.4 & - & - & - \\ DANet [24] & D-ResNet-\(101\) & 81.5 & - & - & - \\ HRNet [74] & HRNetV2-W48 & 81.6 & 61.8 & 92.1 & 82.2 \\ HRNet + OCR [87] & HRNetV2-W48 & 81.9 & **62.0** & **92.0** & **82.5** \\ \hline OVeNet (_HRNet + OCR_) & HRNetV2-W48 & **82.4** & 61.6 & 91.9 & 82.2 \\ \hline \end{tabular} \end{table} Table 1: **Semantic segmentation results on Cityscapes test set.** We compare our method against SOTA methods as in [74]. D-ResNet-101 = Dilated-ResNet-101. By default, OVeNet is built on HRNet, unless stated otherwise.

There is also a huge performance boost of approximately \(10\%\) and \(15\%\) on the "bus" and "truck" class, respectively. Moreover, it is more difficult to separate classes at night that are often dark or poorly lit, such as buildings, vegetation, traffic signs, and the sky.
This behaviour is also observed in the offset vectors, which have small values when visibility is limited. Lastly, during night and snow conditions, road and sidewalk performance is at its lowest, which can be attributed to confusion between the two classes as a result of their similar appearance. As for the val set results, HRNet achieves \(75.5\%\) mIoU, while our OVeNet surpasses it, reaching \(75.9\%\) mIoU. Qualitative results on ACDC support the above findings, as shown in Fig. 4. From top to bottom, we depict the input RGB image, the GT, the initial HRNet's output and our model's output. In the first column, our model correctly enlarges the sidewalk segments in both the red and green frames and removes the erroneous terrain segment predicted by the initial model. In the second set of images, HRNet incorrectly classifies the sign on the house as a traffic sign (red frame). In contrast, our model corrects not only this mistake but also a discontinuity in the yellow frame. Last but not least, in the last set of images, our offset vector-based model correctly eliminates the sidewalk area (red frame), as there is no such area in the input image. To sum up, OVeNet surpasses HRNet's performance.

### Ablation study

In order to experimentally confirm our design choices for the offset vector-based model, we performed an ablation study, as shown in Table 5.

\begin{table} \begin{tabular}{l c c c c c c c c c c c c c c c c c c c c} \hline \hline Method & road & sidewalk & building & wall & fence & pole & traffic light & traffic sign & vegetation & terrain & sky & person & rider & car & truck & bus & train & motorcycle & bicycle & mIoU \\ \hline HRNet [74] & \(98.73\) & \(\mathbf{87.49}\) & \(\mathbf{93.65}\) & \(\mathbf{56.48}\) & \(61.57\) & \(\mathbf{71.57}\) & \(78.76\) & \(81.81\) & \(93.99\) & \(74.11\) & \(95.68\) & \(87.95\) & \(73.72\) & \(96.35\) & \(69.94\) & \(82.52\) & \(76.93\) & \(70.88\) & \(\mathbf{78.02}\) & \(80.40\) \\ OVeNet & \(\mathbf{98.74}\) & \(87.41\) & \(\mathbf{93.79}\) & \(\mathbf{61.65}\) & \(\mathbf{64.00}\) & \(71.35\) & \(\mathbf{78.98}\) & \(\mathbf{81.65}\) & \(\mathbf{94.00}\) & \(\mathbf{73.42}\) & \(\mathbf{95.81}\) & \(\mathbf{87.99}\) & \(\mathbf{74.36}\) & \(\mathbf{96.42}\) & \(\mathbf{74.76}\) & \(\mathbf{87.70}\) & \(\mathbf{82.83}\) & \(\mathbf{71.77}\) & \(77.86\) & \(\mathbf{81.82}\) \\ \hline HRNet + OCR & \(98.77\) & \(\mathbf{87.85}\) & \(93.72\) & \(57.75\) & \(63.92\) & \(\mathbf{71.74}\) & \(\mathbf{78.56}\) & \(\mathbf{81.77}\) & \(\mathbf{94.06}\) & \(\mathbf{73.69}\) & \(95.68\) & \(88.04\) & \(74.64\) & \(96.46\) & \(76.40\) & \(88.78\) & \(84.63\) & \(71.79\) & \(\mathbf{78.63}\) & \(81.90\) \\ OVeNet (_HRNet + OCR_) & \(\mathbf{98.79}\) & \(87.47\) & \(\mathbf{93.86}\) & \(\mathbf{62.97}\) & \(\mathbf{64.41}\) & \(70.80\) & \(78.45\) & \(81.13\) & \(93.99\) & \(73.31\) & \(\mathbf{95.72}\) & \(\mathbf{88.08}\) & \(\mathbf{74.90}\) & \(\mathbf{96.47}\) & \(\mathbf{76.95}\) & \(\mathbf{89.95}\) & \(\mathbf{88.48}\) & \(\mathbf{71.87}\) & \(78.43\) & \(\mathbf{82.42}\) \\ \hline \hline \end{tabular} \end{table} Table 2: **Per Class Results on Cityscapes test set.**

Figure 4: **Qualitative results of selected examples on ACDC.** Top to bottom: RGB, GT, HRNet [74], OVeNet. Best viewed on a screen and zoomed in.

Figure 3: **Qualitative results of selected examples on Cityscapes.** From left to right: RGB Input Image, GT, CCNet [28], DANet [24], HRNet [74], OVeNet.
Best viewed on a screen and zoomed in.

We trained and evaluated 7 different variations on Cityscapes. The performance of each model variation with respect to the ground-truth images was measured by means of the mIoU (as defined above). First, we initialized both heads of the network with the pre-trained ImageNet weights and set the maximum offset vector length to \(0.5\). Second, we froze the weights of both the main body and the initial head. The frozen part of our model was initialized with the corresponding final Cityscapes pre-trained weights; the only part trained was the second head, which was initialized with pre-trained ImageNet weights. As shown in Table 5, although the performance of this model is higher than that of the initial single-head model, it still remains lower than when both heads are trained simultaneously. Then, we deactivated the "frozen" setting and varied the maximum offset vector length on a logarithmic scale, additionally testing the values \(1\) and \(0.2\). We observed that in both cases the performance is lower than with \(0.5\). This is due to the fact that larger offset vectors point to more distant objects that may erroneously affect the final prediction, while smaller vectors do not exploit enough information from neighbouring pixels. Furthermore, we replaced the OHEM cross-entropy loss with the simple cross-entropy loss. As expected, the performance of the model was lower; OHEM penalizes high loss values more strongly and leads to better training of the model. Lastly, HRNet [74] consists of \(4\) stages. In all the previous cases, the branch occurred at the last (\(4^{th}\)) stage so as not to overload the new network with many extra parameters. When the branch occurred at the \(3^{rd}\) stage, the performance did not improve.

## 5 Conclusion

We have presented a supervised method for semantic segmentation which selectively exploits information from neighboring pixels. OVeNet substantially improves the initial model's output predictions, since it achieves not only better overall results but also better per-class results in the majority of classes under similar training time. It learns an implicit representation of different objects which benefits the overall semantic segmentation capability of the network. In fact, our approach not only achieves highly satisfactory results close to the GT images, but also outperforms the predictions output by the initial model's head. By reassigning falsely predicted pixels to the correct classes, it eliminates discontinuities and improves the shape of the corresponding segments, leading to more realistic results. This is a strong contribution which opens new pathways for real-world applications, such as self-driving cars or medical imaging and diagnostics.
\begin{table} \begin{tabular}{l l l l l l} \hline \hline **Method** & **Fog** & **Night** & **Rain** & **Snow** & **All** \\ \hline RefineNet [45] & \(65.7\) & \(55.5\) & \(68.7\) & \(65.9\) & \(65.3\) \\ DeepLabv2 [12] & \(54.5\) & \(45.3\) & \(59.3\) & \(57.1\) & \(55.3\) \\ DeepLabv3+ [15] & \(69.1\) & \(60.9\) & \(74.1\) & \(69.6\) & \(70.0\) \\ HRNet [74] & \(69.3\) & \(60.6\) & \(74.5\) & \(71.5\) & \(70.5\) \\ \hline OVeNet & \(\mathbf{72.1}\) & \(\mathbf{62.8}\) & \(\mathbf{76.6}\) & \(\mathbf{74.1}\) & \(\mathbf{73.0}\) \\ \hline \hline \end{tabular} \end{table} Table 4: **Comparison of the models on different conditions of ACDC.**
2309.01626
Order and chain polytopes of maximal ranked posets
For a class of posets we establish that the f-vector of the chain polytope dominates the f-vector of the order polytope. This supports a conjecture of Hibi and Li. Our proof reveals a combinatorial description of the faces of the corresponding chain polytopes.
Ibrahim Ahmad, Ghislain Fourier, Michael Joswig
2023-09-04T14:17:57Z
http://arxiv.org/abs/2309.01626v1
# Order and chain polytopes of maximal ranked posets

###### Abstract.

For a class of posets we establish that the \(f\)-vector of the chain polytope dominates the \(f\)-vector of the order polytope. This supports a conjecture of Hibi and Li. Our proof reveals a combinatorial description of the faces of the corresponding chain polytopes.

## 1. Introduction

For a given finite poset \(P\) with \(n\) elements, Stanley [14] introduced the _order polytope_ \[\mathcal{O}(P)=\left\{x\in\mathbb{R}_{\geq 0}^{n}\mid x_{p}\leq 1\text{ and }x_{p}\leq x_{q}\text{ if }p\prec q\right\} \tag{1}\] and the _chain polytope_ \[\mathcal{C}(P)=\left\{x\in\mathbb{R}_{\geq 0}^{n}\mid x_{p_{1}}+\cdots+x_{p_{s}}\leq 1\text{ for all }p_{1}\prec p_{2}\prec\cdots\prec p_{s}\right\}\;. \tag{2}\] Both polytopes have full dimension \(n\). The main result of [14] states that these two polytopes are Ehrhart-equivalent, i.e., they share the same Ehrhart polynomials. In particular, the number of lattice points in \(\mathcal{O}(P)\) equals the number of lattice points in \(\mathcal{C}(P)\). Here we are concerned with combinatorial properties of the order and chain polytopes. For an \(n\)-dimensional polytope the _\(f\)-vector_\((f_{0},f_{1},\ldots,f_{n-1})\) counts the faces per dimension. Stanley [14, Theorem 3.2] showed that \(f_{0}(\mathcal{O}(P))=f_{0}(\mathcal{C}(P))\). Hibi and Li [13] proved \(f_{n-1}(\mathcal{O}(P))\leq f_{n-1}(\mathcal{C}(P))\). Moreover, they proved that \(\mathcal{O}(P)\) and \(\mathcal{C}(P)\) are affinely isomorphic if and only if the poset \(P\) does not contain five pairwise distinct elements \(a,b,c,d,e\) such that \(a,b\prec c\prec d,e\). Subsequently, Hibi et al. [12] established \(f_{1}(\mathcal{O}(P))\leq f_{1}(\mathcal{C}(P))\). These results lead to the natural conjecture that the \(f\)-vector of the chain polytope always dominates the \(f\)-vector of the order polytope [13, Conjecture 2.4]. Our main result confirms the latter conjecture for a certain class of posets. A poset is _ranked_ if each maximal chain has the same length. For a vector \(\tau=(\tau_{1},\tau_{2},\ldots,\tau_{\ell})\) of positive integers we define the ranked poset \(P_{\tau}\) which has \(\tau_{i}\) elements in rank \(i\), and any two elements in distinct ranks are comparable. All ranked posets with \(\tau_{i}\) elements in rank \(i\) are subposets of \(P_{\tau}\). Therefore, we call \(P_{\tau}\) a _maximal ranked poset_.

**Theorem**.: _For any vector \(\tau\) of positive integers we have \(f_{i}(\mathcal{O}(P_{\tau}))\leq f_{i}(\mathcal{C}(P_{\tau}))\) for all \(0\leq i\leq n-1\)._

The proof uses an induction along chain-order polytopes, which form a class of polytopes interpolating between order and chain polytopes. It follows from our proof that the \(f\)-vector grows monotonically along the entire interpolation. While the face structure of the order polytopes is known, the faces of the chain polytopes are notoriously difficult to describe. Our proof provides a normal form for each face of \(\mathcal{C}(P_{\tau})\). The paper is organized as follows: we introduce in Section 2 the objects of interest, in Section 3 the normal form for faces of chain-order polytopes, and in Section 4 the proof of the main theorem. In Section 5 algorithmic aspects are discussed. **Acknowledgement:** The work of all authors is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation): "Symbolic Tools in Mathematics and their Application" (TRR 195, project-ID 286237555).
GF was further supported by "Sparsity and Singular Structures" (SFB 1481, project-ID 442047500). MJ was further supported by The Berlin Mathematics Research Center MATH\({}^{+}\) (EXC-2046/1, project ID 390685689). ## 2. Preliminaries We briefly recall facts on the order and chain polytopes. The face lattice of \(\mathcal{O}(P)\) can be described in terms of partitions. Let \(P\) be a poset and \[\pi=\{T_{i}\mid 1\leq i\leq s\}\] be a partition of \(P\). We say that \(\pi\) is _\(P\)-compatible_ if the relation \(\leq\) on \(\pi\) defined as the transitive closure of \[B\leq C\,\text{ if and only if }p\leq q\,\text{ for some }p\in B\text{ and }q\in C\] is antisymmetric. In this case, \(\leq\) is a partial order on \(\pi\). We also call \(\pi\)_connected_ if its blocks are connected as induced subposets. Let \(P\) be a poset. A \(P\)-compatible partition \(\pi\) of \(P\) gives rise to a poset \((P/\pi,\leq)\) where \((P/\pi,\leq)\) is the poset of blocks in \(\pi\). Furthermore, the quotient map \[q:(P,\leq) \to(P/\pi,\leq)\] \[x \mapsto B,\,\text{s.t. }x\in B\] is an order-preserving map, i.e. whenever \(x\leq y\) in \(P\), then \(q(x)\leq q(y)\) in \(P/\pi\). Thus, we can now state the combinatorial description of the faces of the order polytope. We extend \(P\) by a minimal and a maximal element \(\hat{P}=P\cup\{\hat{0},\hat{1}\}\). The first result in this direction goes back to Geissinger [1] and Stanley [14], see also [10]. **Theorem 2.1**.: _Let \(P\) be a poset. Then every non-empty face \(F\subseteq\mathcal{O}(P)\) can be identified via a partition \(\pi_{F}\) of \(\hat{P}\) that is connected, \(\hat{P}\)-compatible and \(\hat{0}\) and \(\hat{1}\) being in different blocks. Further, the map_ \[\mathcal{O}(\hat{P}/\pi_{F}) \to F\] \[\left(x_{B}\right)_{B\in(\hat{P}/\pi_{F})} \mapsto(x_{q(p)})_{p\in\hat{P}}\] _is an affine isomorphism. In particular, we have \(\dim(F)=|\pi_{F}|-2\)._ The chain-order polytopes, first introduced in [11] and then generalised in [13, 12], provide an Ehrhart-equivalent family of polytopes that interpolate between the order and chain polytope. These will turn out to be a key ingredient for the inductive argument for proving our main theorem. For a poset \(P\) consider a partition \(P=C\sqcup O\). Throughout we assume that \(\hat{0}\in C\) and \(\hat{1}\in O\). We are going to refer to the elements of \(C\) as the _chain elements_ and and the elements of \(O\) as _order elements_. **Definition 2.2**.: Then the _chain-order polytope_\(\mathcal{O}_{C,O}(P)\subseteq\mathbb{R}^{|P|}\) is given by 1. for each covering relation \(p\prec q\in O\) an inequality \(x_{p}\leq x_{q}\), 2. for each \(p\in C\) an inequality \(0\leq x_{p}\), 3. for each maximal chain \(p_{1}\prec p_{2}\prec\cdots\prec p_{r}\prec a\) with \(p_{i}\in C\), \(a\in O\) and \(r\geq 0\) an inequality \[x_{p_{1}}+\cdots+x_{p_{r}}\leq x_{a}\enspace.\] It is known that the inequalities (1) and (2) of the order and chain polytopes, respectively, are facet defining. In contrast the facet description of the chain-order polytopes are unknown in general; see [10, Conjecture 8.2]. However, in the case of the posets we are going to deal with, we can find a facet-defining description of the chain-order polytopes. **Proposition 2.3** ([10, Proposition 8.6]).: _Let \(P\) a ranked poset with partition \(P=C\sqcup O\). 
Then the inequality description for \(\mathcal{O}_{C,O}(P)\) in Definition 2.2 is facet defining._ The comparability graph of a poset \(P\) is the undirected graph \(\operatorname{Comp}(P)=(P,E)\) with \(\{p,q\}\in E\) if and only if \(p\) and \(q\) are comparable. It follows from the description (2) that the chain polytope \(\mathcal{C}(P)\) only depends on the graph \(\operatorname{Comp}(P)\). ## 3. Face normal form for chain-order polytope From now on, we consider maximal ranked posets, so let \(\tau=(\tau_{1},\tau_{2},\ldots,\tau_{\ell})\) with positive integers \(\tau_{i}\) and \(P_{\tau}\) the maximal ranked posets which has \(\tau_{i}\) elements in rank \(i\). For short, we denote the order and chain polytope for \(P_{\tau}\) by \(\mathcal{O}(\tau)\) and \(\mathcal{C}(\tau)\) respectively. For the inductive procedure, we define intermediate chain-order polytopes. **Definition 3.1** (\(k\)-Decomposition).: Let \(\tau\in\mathbb{Z}_{>0}^{\ell}\) be a tuple and \(0\leq k\leq\ell\). Then we define \(k\)_-Decomposition_ of \(P_{\tau}\) to be the decomposition \(P_{\tau}=U_{2}\sqcup U_{1}\) with \[U_{2}=\bigcup_{i=1}^{k}Y^{i},\quad U_{1}=\bigcup_{i=k+1}^{\ell}Y^{i},\] where \(Y^{i}\) refers to the elements of rank \(i\). We call the decompositions for \(k=0\) and \(k=\ell\) the _trivial decompositions_. Otherwise we call the \(k\)-Decomposition _non-trivial_. Consequently, we denote \(\mathcal{O}_{U_{2},U_{1}}(P_{\tau})\) by \(\mathcal{O}_{U_{2},U_{1}}(\tau)\) Our aim is now to describe the face lattice of \(\mathcal{O}_{U_{2},U_{1}}(\tau)\). To do so, we will rephrase the description of Definition 2.2 for our class of posets \(P_{\tau}\). **Lemma 3.2**.: _Let \(\tau\in\mathbb{Z}_{>0}^{\ell}\) and let \(P_{\tau}=U_{2}\sqcup U_{1}\) be a \(k\)-Decomposition. Then the chain-order polytope \(\mathcal{O}_{U_{2},U_{1}}(\tau)\) has the following facet-defining description_ 1. \(x_{\hat{0}}=0\) _and_ \(x_{\hat{1}}=1\)__ 2. _for each_ \(p\in U_{2}\) _an inequality_ \(0\leq x_{p}\)__ 3. _for each_ \(a,b\in U_{1}\cup\{\hat{1}\}\) _with_ \(a\prec b\) _an inequality_ \(x_{a}\leq x_{b}\)__ 4. _for any_ \(p_{i}\in Y^{i}\) _and_ \(q_{k+1}\in Y^{k+1}\)__ \[x_{p_{1}}+\cdots+x_{p_{k}}\leq x_{q_{k+1}},\] _where in the case of \(k=\ell\), we set \(Y^{\ell+1}:=\{\hat{1}\}\). We are going to refer to the third and last class of inequalities as the order and the chain inequalities._ Proof.: The first two classes are just the same as in Definition 2.2. The last two classes from the lemma together form the third class in the beginning definition. The order inequalities arise in the case when \(r=0\) and the chain inequalities arise when \(r>0\) this can only happen if the minimum of the corresponding chain is exactly \(\hat{0}\). The inequalities being facet-defining results from Proposition 2.3. We are going to construct our normal form for the face lattice via the following steps 1. Check what happens if we only consider order inequalities and how to interpret those faces. This is Proposition 3.3. 2. Check what happens if we only consider chain inequalities. This is Propositions 3.6 and 3.7. 3. See how they can both be combined, which is Theorem 3.8. **Proposition 3.3**.: _Consider a \(k\)-Decomposition of \(P_{\tau}\). Let \(F\subseteq\mathcal{O}_{U_{2},U_{1}}(\tau)\) be a face given only by order inequalities. 
Then \(F\) is given by a face partition of \(U_{1}\cup\{\hat{1}\}\)._ Proof.: We are going to project downwards onto the order polytope \(\mathcal{O}(\tilde{P})\), where \(\tilde{P}\) is the induced subposet \[\tilde{P}=\{\hat{1}\}\cup\{\hat{0}\}\cup U_{1}.\] I.e. the poset where we only consider the order part. Now consider the projection \[\phi:\mathcal{O}_{U_{1},U_{2}}(\tau) \to\mathcal{O}(\tilde{P})\] \[(x_{p})_{p\in P_{\tau}} \mapsto(x_{p})_{p\in U_{1}}\] The set \(\phi(F)\) is again a face of \(\mathcal{O}(\tilde{P})\), and the equations satisfied are precisely the same as for \(F\). Since \(F\) does not satisfy any chain inequality, we also have that it does not satisfy \(x_{q}=0\) for any \(q\in U_{1}\). This implies that \(\phi(F)\) only satisfies equalities of the form \(x_{p}=x_{q}\) with \(p,q\in U_{1}\) for \(p\leq q\) or \(x_{p}=1\) for some \(p\in Y^{n}\). Using Theorem 2.1, we see that \(\phi(F)\) is given by a face partition of \(\tilde{P}\). But since not all \(x\in\phi(F)\) satisfy \(x_{p}=0\) for any \(q\in U_{2}\), the element \(\hat{0}\) is in a singleton block. Thus, the face \(F\) given only by order inequalities is entirely determined by a face partition of \(U_{1}\cup\{\hat{1}\}\). **Proposition 3.4**.: _Consider a \(k\)-Decomposition of \(P_{\tau}\) and a face \(F\subseteq\mathcal{O}_{U_{2},U_{1}}(\tau)\) given only by order inequalities and let \(\varphi\) be the face partition of \(U_{1}\cup\{\hat{1}\}\) and let \(\pi\) be the extended partition from \(\varphi\) by adding singletons. Then we have that_ \[\mathcal{O}_{U_{2},\varphi}(P_{\tau}/\pi)\cong F\] _In particular, the codimension of \(F\) is exactly \(|U_{1}|+1-|\varphi|\)_ Proof.: Checking that \(\pi\) is a face partition will be left out as it is a straight-forward verification. Then according to Theorem 2.1, using the projection onto the blocks \(q:P_{\tau}\to P_{\tau}/\pi\), we get the affine isomorphism \[\mathcal{O}_{U_{2},\varphi}(P_{\tau}/\pi) \to F\] \[(x_{p^{\prime}})_{p^{\prime}\in(P_{\tau}/\pi)} \mapsto(x_{q(p)})_{p\in P_{\tau}}\] The dimension of \(F\) is exactly \(|\pi|-2\), where we account for the blocks containing \(\hat{0}\) and \(\hat{1}\). However, \(\hat{0}\) and all elements of \(U_{2}\) are in singleton blocks, which means that we have \[|\pi|-2=|U_{2}|+|\varphi|-1\] where we account for the block in \(\varphi\) containing \(\hat{1}\). Since the chain-order polytope is of dimension \(|U_{1}|+|U_{2}|\), we get as a codimension \[|U_{1}|+|U_{2}|-(|U_{2}|+|\varphi|-1)=|U_{1}|+1-|\varphi|\] Lastly, we need to understand what the quotient poset \(P_{\tau}/\pi\) would look like. **Lemma 3.5**.: _Fix a \(k\)-Decomposition of \(P_{\tau}\) for some \(\tau\in\mathbb{Z}_{>0}^{\ell}\). Let \(F\subseteq\mathcal{O}_{U_{2},U_{1}}(\tau)\) be a face given only by order inequalities and let \(\pi\) be the extended partition on \(P_{\tau}\). Then the quotient poset \(P_{\tau}/\pi\) is of the form \(P_{\tau^{\prime}}\) for some other tuple \(\tau^{\prime}\)._ Proof.: Instead of inspecting the whole partition, we will only consider what happens when turning an edge of the Hasse diagram into an equality. The rest follows by induction. Consider the edge \(\{a,b\}\) for \(a\in Y^{i}\), \(b\in Y^{i+1}\) and \(i<\ell\). 
Then for all \(x\in Y^{t}\) with \(t<i+1\) and \(x\neq b\) we have \[x<b\] Similarly, we get for all \(y\in Y^{s}\) with \(s>i\) and \(y\neq a\) that \[y>a\] So, all blocks satisfy the relations of \(P_{\tau^{\prime}}\) with \[\tau^{\prime}=(\tau,\dots,\tau_{i-1},\tau_{i}-1,1,\tau_{i+1}-1,\dots,\tau_{ \ell})\] and thus the quotient poset is just \(P_{\tau^{\prime}}\). In the case of \(i=\ell\), we get with the same arguments that the quotient poset is \(P_{\tau^{\prime}}\) with \[\tau^{\prime}=(\tau,\dots,\tau_{\ell}-1)\] Now, if it happens that we get a zero entry in the tuple, we will just skip that. Now we are at a point, where we need to devote ourselves to the chain inequalities and setting chain elements equal to \(0\). **Proposition 3.6**.: _Let \(F\subseteq\mathcal{O}_{U_{2},U_{1}}(\tau)\) be a face given by \(x_{b}=0\) for some \(b\in U_{2}\) from a \(k\)-Decomposition \(P_{\tau}=U_{2}\sqcup U_{1}\). Assume that \(b\in Y^{i}\), then we have_ \[F\cong\mathcal{O}_{\tilde{U_{2}},U_{1}}(\tilde{\tau})\] _where_ \[\tilde{\tau}=(\tau_{1},\dots,\tau_{i-1},\tau_{i}-1,\tau_{i+1},\dots,\tau_{ \ell})\] _and_ \[\tilde{U_{2}}=U_{2}\setminus\{b\},\] _If \(\tilde{\tau}_{i}=0\), then we just delete \(\tilde{\tau}_{i}\) from \(\tilde{\tau}\)._ Proof.: The chain-inequalities using \(b\) are of the form \[\sum_{j=1}^{k}x_{p_{j}}\leq x_{a}\] with \(b=p_{i}\). If \(x_{b}=0\), then these simplify to \[\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{k}x_{p_{i}}\leq x_{a}\] If there exists \(c\in Y^{i}\setminus\{b\}\), then these inequalities are redundant by instead considering \[x_{c}+\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{k}x_{p_{i}}\leq x_{a}\] Either way, we get that the defining inequalities of the face are exactly the defining inequalities of \(\mathcal{O}_{\tilde{U}_{2}U_{1}}(\tilde{\tau})\). Thus, the linear projection \[F \to\mathcal{O}_{\tilde{U}_{2},U_{1}}(\tilde{\tau})\] \[(x_{p})_{p\in P_{\tau}} \to(x_{p})_{p\in P_{\tau}}\] is an affine isomorphism. **Proposition 3.7**.: _Let \(F\subseteq\mathcal{O}_{U_{2},U_{1}}(\tau)\) be the face for a \(k\)-Decomposition given by the equalities from the maximal chains_ \[\mathbf{p}=\{\hat{0},p_{1},\ldots,p_{k},a\},\quad\mathbf{p}^{\prime}=\{\hat{0 },p_{1}^{\prime},\ldots,p_{k}^{\prime},a^{\prime}\}\] _where \(a\) and \(a^{\prime}\) belong to the minimal antichain of \(U_{1}\). We define \(p_{k+1}:=a\) and \(p_{k+1}^{\prime}:=a^{\prime}\). Let \(I\subseteq\{1,\ldots,k+1\}\) be the set of indices, where we have \(p_{i}\neq p_{i}^{\prime}\) for \(i\in I\). By projecting the coordinates of \(\mathbf{p}^{\prime}\setminus\mathbf{p}\) away, we get the affine isomorphism_ \[\nu:F \to F^{\prime}\] \[(x_{p})_{p\in P_{\tau}} \mapsto(x_{p^{\prime}})_{p^{\prime}\in P_{\tau^{\prime}}}\] _where \(\tau^{\prime}=(\tau_{1}^{\prime},\ldots,\tau_{\ell}^{\prime})\) with_ \[\tau_{j}^{\prime}=\begin{cases}\tau_{j}-1&\text{ if }j\in I\\ \tau_{j}&\text{ otherwise}\end{cases}\] _and the face \(F^{\prime}\) is given by the equation of \(\mathbf{p}\). Each coordinate that gets projected away yields us exactly one codimension extra._ Proof.: Fix an \(i\in I\) with \(i<k+1\). 
Then we can form the two maximal chains \[\mathbf{q}=(\mathbf{p}\setminus\{p_{i}\})\cup\{p_{i}^{\prime}\},\quad\mathbf{ q}^{\prime}=(\mathbf{p}^{\prime}\setminus\{p_{i}^{\prime}\})\cup\{p_{i}\}\] Then, we get the inequalities \[x_{a}+x_{p_{i}^{\prime}}-x_{p_{i}} =\sum_{q\in\mathbf{q}^{\prime}\cap U_{2}}x_{q}\leq x_{a}\] \[x_{a^{\prime}}+x_{p_{i}}-x_{p_{i}^{\prime}} =\sum_{q\in\mathbf{q}^{\prime}\cap U_{2}}x_{q}\leq x_{a^{\prime}}\] In particular, these imply that \(x_{p_{i}^{\prime}}\leq x_{p_{i}}\) as well as \(x_{p_{i}}\leq x_{p_{i}^{\prime}}\). I.e., we have \(x_{p_{i}}=x_{p_{i}^{\prime}}\). If we also have \(a\neq a^{\prime}\), then using the chains \[\mathbf{r}=(\mathbf{p}\setminus\{a\})\cup\{a^{\prime}\},\quad\mathbf{r}^{ \prime}=(\mathbf{p}^{\prime}\setminus\{a^{\prime}\})\cup\{a\}\] we get \[x_{a} =\sum_{q\in\mathbf{p}^{\prime}\cap U_{2}}x_{q}=\sum_{q\in\mathbf{r} ^{\prime}\cap U_{2}}x_{q}\leq x_{a^{\prime}}\] \[x_{a^{\prime}} =\sum_{q\in\mathbf{p}^{\prime}\cap U_{2}}x_{q}=\sum_{q\in\mathbf{ r}^{\prime}\cap U_{2}}x_{q}\leq x_{a}\] and with that \(x_{a}=x_{a^{\prime}}\). So the face \(F\) satisfies the equation given by \(\mathbf{p}\) and the equations \[x_{p_{i}}=x_{p_{i}^{\prime}}\text{ for all }i\in I \tag{3}\] And the subset of \(F\) given by the equation by \(\mathbf{p}\) and the equalities (3) satisfies the equation given by \(\mathbf{p}^{\prime}\), i.e. the equations of (3) and \(\mathbf{p}\) satisfy \(F\). We can thus project the coordinates from \(\mathbf{p}^{\prime}\setminus\mathbf{p}\) away and get the affine isomorphism as stated in the proposition. With all the preliminary work done, we can now prove the normal form for the faces of the chain-order polytope. **Theorem 3.8**.: _Consider a \(k\)-Decomposition \(P_{\tau}=U_{2}\sqcup U_{1}\). Let \(F\subseteq\mathcal{O}_{U_{2},U_{1}}(\tau)\) be a face, then it is given by:_ 1. _A face partition of_ \(U_{1}\cup\{\hat{1}\}\)_, call it_ \(\pi\)__ 2. _A set of variables_ \(F_{0,i}\subseteq Y^{i}\) _we set equal to_ \(0\) _for all_ \(1\leq i\leq k\)__ 3. _If we use a chain, then we also have for each_ \(1\leq i\leq k\) _a set_ \(\emptyset\neq F_{\mathrm{eq},i}\subseteq Y^{i}\setminus F_{0,i}\) _whenever_ \(Y^{i}\setminus F_{0,i}\neq\emptyset\) _and a set_ \(F_{\mathrm{eq},k+1}\subseteq Y^{k+1}\) _consisting of singleton blocks of_ \(\pi\) _in_ \(Y^{k+1}\) _whenever these exist._ _The codimension then can be computed as follows_ _Without In this case, we get a codimension of_ \[|U_{1}\cup\{\hat{1}\}|-|\pi|+\sum_{i=1}^{n}|F_{0,i}|\] _With_ _In this case, we get a codimension of_ \[1+\sum_{i\in I}\left(|F_{\mathrm{eq},i}|-1\right)+|U_{1}\cup\{\hat{1}\}|-| \pi|+\sum_{i=1}^{n}|F_{0,i}|\] _where_ \(I\subseteq\{1,\ldots,k+1\}\) _is the set of indices for which we have defined_ \(F_{\mathrm{eq},i}\)_._ Proof.: The face partition results from Proposition 3.3. We now fix the set of variables equal to \(0\), which we call \(F_{0,i}\) for all \(1\leq i\leq n\). These can be projected away according to Proposition 3.6. If we now want to add chains, then each chain has to contain an element \(y\in Y^{i}\setminus F_{0,i}\) for all \(1\leq i\leq k\) whenever \(Y^{i}\setminus F_{0,i}\neq\emptyset\). Otherwise, there is no choice we can make for \(Y^{i}\). Then, all chains have to end in the minimal elements of \(P_{\tau}/\pi\) (rigorously speaking, we must put the remaining elements in singleton blocks to form the quotient). These minimal elements are the singletons blocks of \(Y^{k+1}\) if they exist. 
If all elements of \(Y^{k+1}\) are in blocks containing elements greater than them, and thus in the same block, there is also no choice for us to make. The codimension results from Propositions 3.4, 3.6 and 3.7. By considering the decomposition \(P_{\tau}=P_{\tau}\sqcup\emptyset\), we get

**Corollary 3.9**.: _Let \(\tau=(\tau_{1},\dots,\tau_{\ell})\in\mathbb{Z}_{>0}^{\ell}\) and \(F\subseteq\mathcal{C}(\tau)\) be a face. Then \(F\) is uniquely given by:_ 1. _a set of variables_ \(F_{0,i}\subseteq Y^{i}\) _which are set equal to_ \(0\)_, for all_ \(1\leq i\leq n\)_;_ 2. _if we use a chain, additionally for each_ \(1\leq i\leq n\) _a set_ \(\emptyset\neq F_{\mathrm{eq},i}\subseteq Y^{i}\setminus F_{0,i}\)_, whenever_ \(Y^{i}\setminus F_{0,i}\neq\emptyset\)_._ _The codimension can then be computed as follows. Without a chain, the codimension is_ \[\sum_{i=1}^{n}|F_{0,i}|\] _With a chain, the codimension is_ \[1+\sum_{i\in I}\left(|F_{\mathrm{eq},i}|-1\right)+\sum_{i=1}^{n}|F_{0,i}|\] _where_ \(I\subseteq\{1,\dots,n\}\) _is the set of indices for which_ \(F_{\mathrm{eq},i}\) _is defined._

## 4. Proof of the main theorem

In this section we utilize our normal form from Theorem 3.8 to prove our main theorem. We proceed as follows: 1. Prove an embedding between the \(i\)-codimensional normal forms given by the two non-trivial \(k\)- and \((k+1)\)-Decompositions of \(P_{\tau}\). This is Claim 4.1. 2. Use an induction argument going from the order polytope with the \(0\)-Decomposition to the chain polytope with the \(n\)-Decomposition.

Figure 1. Normal form of a face of \(\mathcal{O}_{U_{2},U_{1}}((5,2,1,4,2,3))\) with \(k=3\) of codimension \(5\). Elements in \(U_{1}\) not marked are in singletons.

**Claim 4.1**.: _Let \(\tau\in\mathbb{Z}_{>0}^{\ell}\), let \(0\leq k\leq n-1\), and consider the \(k\)-Decomposition \(P_{\tau}=U_{2}\sqcup U_{1}\) as well as the \((k+1)\)-Decomposition \(P_{\tau}=V_{2}\sqcup V_{1}\)._ _Then there exists an injective map from the set of faces of codimension \(i\) of \(\mathcal{O}_{U_{2},U_{1}}(\tau)\) into the set of faces of codimension \(i\) of \(\mathcal{O}_{V_{2},V_{1}}(\tau)\) for all \(2\leq i\leq|\tau|\)._

The proof of the main theorem follows from establishing the claim.

**Theorem 4.2**.: _Let \(\tau\in\mathbb{Z}_{>0}^{\ell}\). Fix any \(0\leq k\leq n\) and the \(k\)-Decomposition \(P_{\tau}=U_{2}\sqcup U_{1}\). Then for all \(1\leq i\leq|\tau|\) we have that_ \[f_{i}(\mathcal{O}(\tau))\leq f_{i}(\mathcal{O}_{U_{2},U_{1}}(\tau))\]

We prove Claim 4.1 by constructing the map on the sets of normal forms. The crucial insight is that the normal forms of the \(k\)- and \((k+1)\)-decompositions only differ on the set \(Y^{k+1}\cup Y^{k+2}\). Based on this, we construct the map via a case distinction on the face partition of the \(k\)-Decomposition.

### Construction: Preliminary Choice

As the normal forms only differ locally, we can already provide a construction for most of the embedding, which will only be tweaked slightly in the case distinction. In order to provide a more uniform proof we will, similar to Lemma 3.2, set \(Y^{n+1}:=\{\hat{1}\}\). We fix a codimension \(t\) face \(F\) of \(\mathcal{O}_{U_{2},U_{1}}(\tau)\) and we want to construct a codimension \(t\) face \(F^{\prime}\) of \(\mathcal{O}_{V_{2},V_{1}}(\tau)\).

#### 4.1.1. Face partition

The idea is to 'cut off' the elements \(Y^{k+1}\) from all blocks \(\pi\) of the face partition of \(U_{1}\cup\{\hat{1}\}\).
We acquire this via \[\pi^{\prime\prime}:=\{K\cap V_{1}\mid K\in\pi,K\not\subseteq Y^{k+1}\cup Y^{k +2}\}\] and the remaining elements of \(V_{1}\) are put into singletons: Let \(Z\subseteq V_{1}\) be the set of variables not in a block of \(\pi^{\prime\prime}\). We then set let \(\pi^{\prime}\) as the extended partition from \(\pi^{\prime\prime}\) by adding the singletons from \(Z\). To see that \(\pi^{\prime\prime}\), and thus \(\pi^{\prime}\), is connected, note the following. If \(K\cap V_{1}=K\) for \(K\in\pi\), then this block is, of course, connected. If \(K\cap V_{1}\neq K\), then \(K\) contains elements from \(Y^{k+1}\). However, by construction, we require that \(K\) also contains elements greater than \(Y^{k+2}\) and thus \(K\) contains \(Y^{k+2}\) (\(K\) is convex [2, Remark 3.8]), which allows it to be connected by forming the fence \(a>y<b\) for \(a,b\in K\cap V_{1}\) and some \(y\in Y^{k}\). Thus \(K\cap V_{1}\) is connected. Hence, we have that \(\pi^{\prime}\) is connected, and of course, it is again compatible and a face partition. One might ask why we cannot just intersect with \(V_{1}\) and leave out the condition on \(K\). The reason is that a block \(K\in\pi\) of height \(1\) would result in a disconnected block if we intersect it with \(V_{1}\). Hence, we must leave out the minimal blocks of height \(1\). Therefore, we need to put the restriction on the block \(K\). #### 4.1.2. Zero elements: Here, we just copy the zero elements. We set \[F^{\prime}_{0,i}=F_{0,i}\] for all \(1\leq i\leq k\). We consider the variables for \(i=k+1\) in the individual cases. #### 4.1.3. Chain elements: Similarly, as before, we just copy the chain elements. If \(F\) uses already chains, then we set \[F^{\prime}_{\mathrm{eq},i}=F_{\mathrm{eq},i}\] for all \(1\leq i\leq k+1\). The chain variables for \(i=k+2\) will be considered in the individual cases. ### Construction: Case distinction - Generic cases Now we are set up for the case distinction. We are first of all going to restrict ourselves to the case where \(F\) already uses chains as we can provide a uniform approach to this case. We have to consider which elements of \(Y^{k+1}\) are in singletons. As we will need to make a standard choice for the chain elements in \(Y^{k+2}\), we fix \(1\leq r\leq\tau_{k+2}\) that is minimal with \(y_{r}^{k+2}\) being in a singleton if it exists. We have thus two cases #### 4.2.1. All elements of \(Y^{k+1}\) are in singletons in \(\pi\) Here, we do not cut off any block. Thus we only have to naturally continue the chains and define \[F^{\prime}_{0,k+1}=\emptyset,\quad F^{\prime}_{\mathrm{eq},k+2}=\{y_{r}^{k+2}\}\] Of course in case of \(k=\ell-1\), the set \(F^{\prime}_{\mathrm{eq},k+2}\) cannot be defined and we only set the zero variables. The codimension also match as we did not alter the non-singleton blocks in \(F\) and didn't remove nor add chain or zero variables. #### 4.2.2. There are elements of \(Y^{k+1}\) not in singletons in \(\pi\) In this case, let \(\{a_{1},\ldots,a_{l}\}\subseteq Y^{k+1}\) be the set of elements contained in blocks with elements greater than them. Then these have to be in the same block, calling it \(K\in\pi\), as \(K\) is convex. [12, Remark 3.8] We set \[A:=K\cap Y^{k+1},\quad B:=K\cap Y^{k+2} \tag{4}\] We then set \[F^{\prime}_{0,k+1}=A,\] and in case we can also define \(F^{\prime}_{\mathrm{eq},k+2}\), i.e. 
\(K\) is of height \(1\), we set \[F^{\prime}_{\mathrm{eq},k+2}=B\] The codimension match again as we are setting the appropriate number of variables equal to \(0\) as well as setting the appropriate number of chain variables if also necessary. ### Construction: Case distinction - Degenerate cases Now we can come to the degenerate cases where the face \(F\) does not use any chains. We will see that the initial construction in the generic case would fail as they would yield us a face codimension \(+1\). Hence, we have to alter the techniques and rely more on the chain variables. We again distinguish the elements of \(Y^{k+1}\). #### 4.3.1. All elements of \(Y^{k+1}\) are in singletons in \(\pi\) Here, we only set \[F^{\prime}_{0,k+1}=\emptyset\] Again as we did not change anything about the normal forms the codimension match again. #### 4.3.2. There are elements of \(Y^{k+1}\) not in singletons in \(\pi\) Again, we refer to \(K\in\pi\) as the block containing the set \(A\subseteq Y^{k+1}\). Now, we have to be a bit more delicate with our construction based on \(K\). We distinguish now via its height and then its maximal elements. #### 4.3.2.1. \(K\) is of height at least \(2\) Here, we can set \[F^{\prime}_{0,k+1}=A\] And again the codimension match. #### 4.3.2.2. \(K\) is of height exactly \(1\) Here, we have \[K=A\cup B\] with \(A\) and \(B\) from (4). Now we have to consider the set \(B\) for our next construction. #### 4.3.2.2.1. \(B\neq\{y_{r}^{k+2}\}\) Here, we set \[F^{\prime}_{0,k+1}=\emptyset\] And we use a chain to identify \(K\): For each \(1\leq i\leq k\) choose \(c_{i}\in Y^{i}\setminus F_{0,i}\) if the set is non-empty and put it into \(F^{\prime}_{\text{eq},i}\). Then also set \[F^{\prime}_{\text{eq},k+1}=A,\quad F^{\prime}_{\text{eq},k+2}=B\] Using a chain yields us a codimension of \(1\), and using the sets \(A\) and \(B\) for equalities, we get the added codimension \[1+|A|-1+|B|-1=|A|+|B|-1=|K|-1\] Thus, \(F^{\prime}\) has the proper codimension. #### 4.3.2.2.2. \(B=\{y_{r}^{k+2}\}\) This case also covers when \(k=\ell-1\) and \(B=\{\hat{1}\}\). If we used the same approach as before, we would run into the problem that this would collide with the standard choice in Case 4.2.1. Thus, we set \[F^{\prime}_{0,k+1}=A\] Since \(|B|=1\), we get the correct codimension for \(F^{\prime}\). Figure 2. Example of construction in Case 4.2.2 when \(F\) already uses a chain and \(K\) is of height \(1\). ### Injectivity of the map Now that we managed to construct the mapping, we now need to verify that it is indeed an embedding. Let \(G^{\prime}\) for that be a face of codimension \(t\) in the image of the mapping. Let \(G\) be a face that maps to \(G^{\prime}\). Let \(\pi^{\prime}\) be the face partition of \(G^{\prime}\) and \(\pi\) be the face partition of \(G\). Then we can immediately reconstruct all blocks of \(G\) contained in \(V_{1}\). Now, we distinguish the set of variables set to \(0\). _1st case:_\(G^{\prime}_{0,k+1}=\emptyset\). In this case, only \(2\) things can occur 1. All elements of \(Y^{k+1}\) are in singletons in \(\pi\). (Cases 4.2.1 or 4.3.1) Thus, we can reconstruct \(\pi\), the sets \(G_{0,i}\) for \(1\leq i\leq k\), and if we use chains, we can fully reconstruct them as well. 2. 
\(G\) uses no chain and \(G\) has a block \(K\subseteq Y^{k+1}\cup Y^{k+2}\) of height \(1\), and the set of maximal elements of \(K\) does not coincide with \(\{y^{k+2}_{r^{\prime}}\}\), where \(r^{\prime}\) is the minimal index such that \(y^{k+2}_{r^{\prime}}\) lies in a singleton block of \(\pi\). (Case 4.3.2.2.1) This implies we can reconstruct \(K\) via \[K=G^{\prime}_{\mathrm{eq},k+1}\cup G^{\prime}_{\mathrm{eq},k+2}\] and thus the whole partition \(\pi\). Furthermore, we can also reconstruct \(G_{0,i}\) for \(1\leq i\leq k\). We can distinguish both cases by considering \(G^{\prime}_{\mathrm{eq},k+2}\). The first case occurs only if \(G^{\prime}_{\mathrm{eq},k+2}=\{y^{k+2}_{r^{\prime}}\}\) or the set is not defined at all; the second case occurs otherwise. _2nd case:_\(G^{\prime}_{0,k+1}\neq\emptyset\). We have \(\{a_{1},\dots,a_{l}\}\subseteq Y^{k+1}\) contained in a block \(K\in\pi\) containing elements greater than them. We either have 1. \(K\) has height at least \(2\). (Cases 4.2.2 and 4.3.2.1) We can determine this by checking if all elements of \(Y^{k+2}\) are contained in the same block in \(\pi^{\prime}\). In this instance, \(G^{\prime}_{0,k+1}\) provides us exactly with the minimal elements of \(K\). Thus, we can reconstruct \(K\) via \[K=T\cup G^{\prime}_{0,k+1}\] where \(T\in\pi^{\prime}\) is the block containing \(Y^{k+2}\). That way, we can retrieve the whole partition \(\pi\). Similarly, we can reconstruct all chain elements and also all variables set equal to \(0\). 2. \(K\) is of height exactly \(1\) and \(G\) already uses a chain. (Case 4.2.2) This can be checked by excluding the first case and checking if a chain is used. In this instance, we can reconstruct \[K=G^{\prime}_{0,k+1}\cup G^{\prime}_{\text{eq},k+2}\] and thus, again, the whole partition \(\pi\). We recover the chains and the remaining variables set to \(0\). 3. \(K\) is of height exactly \(1\) and \(G\) uses no chain or \(k=\ell-1\). (Case 4.3.2.2) This can be determined by excluding the first case and checking if \(G^{\prime}\) uses no chains. Then we get that \[K=G^{\prime}_{0,k+1}\cup\{y^{k+2}_{r^{\prime}}\}\] where \(r^{\prime}\) is the minimal index with \(y^{k+2}_{r^{\prime}}\) being in a singleton in \(\pi^{\prime}\), or we have \[K=G^{\prime}_{0,k+1}\cup\{\hat{1}\}\] if \(k=\ell-1\). In either case, we can reconstruct all blocks and also the variables set equal to \(0\).

Figure 3. Example of construction in Case 4.3.2.2.1.

## 5. Computations Here we discuss methods of computing the face numbers of the order and chain polytopes of an arbitrary finite poset and their implementation. Order and chain polytopes of posets are special in that both a vertex and a facet description can be read off the poset by combinatorial means. Such a pair of a point set and a set of linear inequalities is also known as the _double description_ of a polytope. This means that convex hull computations, which can be tedious or even forbiddingly expensive, are unnecessary; see [10] for a survey and [1, 11] for computational experiments. The incidences between vertices and facets are directly available from a double description. Since the face lattice of a polytope is both atomic and coatomic [11, §2.2], those vertex-facet incidences determine the entire face lattice. Kaibel and Pfetsch found an output-sensitive algorithm for computing the entire Hasse diagram of the face lattice from the vertex-facet incidences of a polytope [10]. This algorithm is implemented in polymake [11], which we used for our experiments.
Kliem and Stump recently described a slightly faster method to compute the faces without the covering relations, i.e., the nodes of the Hasse diagram but not the edges [12]. Yet, for our purposes, we found the Kaibel-Pfetsch algorithm to be fast enough. This leaves the question of how to obtain the vertices and the facets of the order and chain polytopes in the first place. All individual steps in the procedure are standard graph algorithms. The initialization phase is the same for the order and the chain polytope. The input is the Hasse diagram of the extended poset \(\hat{P}\) as a directed graph; we choose the upward orientation of the arcs. Starting at the bottom element \(\hat{0}\) we perform a graph traversal by depth first search to obtain all maximal chains. This gives the comparability graph \(\operatorname{Comp}(P)\), which is undirected, by enumerating all pairs of nodes in each maximal chain. The most interesting subtask is to enumerate the maximal antichains. These are precisely the maximal independent sets of \(\operatorname{Comp}(P)\), which in turn is the same as the set of maximal cliques in the complement of \(\operatorname{Comp}(P)\). Computing maximal independent sets or, equivalently, maximal cliques is a key algorithmic problem in combinatorial optimization. The decision problem CLIQUE, which asks if a given graph has a clique (or independent set) of size at least \(k\), is known to be NP-complete [1, 1]. This means that there is no hope for a fast algorithm. Makino and Uno gave an algorithm for enumerating all maximal cliques which performs well in practice [13], and this is implemented in polymake. Equipped with this information, writing down the double descriptions of \(\mathcal{O}(P)\) and \(\mathcal{C}(P)\) is straightforward. The details are spelled out in Algorithms A and B.

```
Input: Hasse diagram of extended poset \(\hat{P}\)
Output: Double description of \(\mathcal{O}(P)\)
1 compute maximal chains of \(P\)
2 compute comparability graph \(\operatorname{Comp}(P)\)
3 compute maximal antichains = maximal independent sets in \(\operatorname{Comp}(P)\)
4 \(V\leftarrow\emptyset\), \(F\leftarrow\emptyset\)
5 foreach maximal antichain \(A\) do
6   foreach subset \(S\subseteq A\) do
7     compute order filter \(O\) generated by \(S\)
8     add characteristic vector \(\chi_{O}\) to \(V\)
9 foreach arc \((p,q)\) of Hasse diagram do
10   add inequality \(x_{p}\leq x_{q}\) to \(F\)
11 return \((V,F)\)
```
**Algorithm A:** Order polytope

Of course, the maximal ranked posets \(P_{\tau}\) allow for considerable simplifications of the above methods. In particular, the maximal antichains are precisely the sets of elements of \(P_{\tau}\) of fixed rank. The polymake implementation of Algorithms A and B is available in version 4.11. With this, computing the entire Table 4 takes less than a minute on a standard laptop. Within about 45 minutes we can also find (see Table 5) the \(f\)-vectors of our running example \(\tau=(5,2,1,4,2,3)\) from Figure 1.
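As a complement to the polymake implementation referenced above, the double description of Algorithm A can be reproduced for a small example with a brute-force Python sketch; enumerating all order filters directly replaces the pass over maximal antichains, which is equivalent for tiny inputs (the toy poset below is an arbitrary choice, not one of the paper's examples).

```python
from itertools import combinations

# Toy poset on {0, 1, 2, 3}: covering relations (p, q) mean p is covered by q.
hasse_arcs = [(0, 2), (1, 2), (2, 3)]
elements = [0, 1, 2, 3]

def is_filter(subset):
    """A subset S is an order filter if p in S and p covered by q imply q in S."""
    return all(q in subset for (p, q) in hasse_arcs if p in subset)

# Vertices of O(P): characteristic vectors of all order filters of P.
vertices = []
for r in range(len(elements) + 1):
    for s in combinations(elements, r):
        if is_filter(set(s)):
            vertices.append(tuple(1 if e in s else 0 for e in elements))

# Facet-defining inequalities: x_p <= x_q per Hasse arc, plus the bounds coming
# from the artificial bottom and top elements of the extended poset.
minimal = [e for e in elements if all(q != e for (_, q) in hasse_arcs)]
maximal = [e for e in elements if all(p != e for (p, _) in hasse_arcs)]
facets = [f"x_{p} <= x_{q}" for (p, q) in hasse_arcs]
facets += [f"0 <= x_{m}" for m in minimal] + [f"x_{m} <= 1" for m in maximal]

print(len(vertices), "vertices:", vertices)
print(len(facets), "facet inequalities:", facets)
```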
2308.16073
3D-Var Data Assimilation using a Variational Autoencoder
Data assimilation of atmospheric observations traditionally relies on variational and Kalman filter methods. Here, an alternative neural-network data assimilation (NNDA) with variational autoencoder (VAE) is proposed. The three-dimensional variational (3D-Var) data assimilation cost function is utilised to determine the analysis that optimally fuses simulated observations and the encoded short-range persistence forecast (background), accounting for their errors. The minimisation is performed in the reduced-order latent space, discovered by the VAE. The variational problem is auto-differentiable, simplifying the computation of the cost function gradient necessary for efficient minimisation. We demonstrate that the background-error covariance ($\mathbf{B}$) matrix measured and represented in the latent space is quasi-diagonal. The background-error covariances in the grid-point space are flow-dependent, evolving seasonally and depending on the current state of the atmosphere. Data assimilation experiments with a single temperature observation in the lower troposphere indicate that the $\mathbf{B}$-matrix simultaneously describes both tropical and extratropical background-error covariances.
Boštjan Melinc, Žiga Zaplotnik
2023-08-30T14:50:49Z
http://arxiv.org/abs/2308.16073v3
# Neural-Network Data Assimilation using Variational Autoencoder ###### Abstract In numerical weather prediction, data assimilation of atmospheric observations traditionally relies on variational and Kalman filter methods. Here, we propose an alternative full neural-network data assimilation (NNDA) in the latent space with variational autoencoder (VAE). The 3D variational data assimilation (3D-Var) cost function is applied to find the latent space vector which optimally fuses simulated observations and the encoded short-range persistence forecast (background), accounting for their errors. We demonstrate that the background-error covariance matrix, measured and represented in the latent space, is quasi-diagonal. Data assimilation experiments with a single temperature observation in the lower troposphere indicate that the same set of neural-network-derived basis functions is able to describe both tropical and extratropical background-error covariances. The background-error covariances evolve seasonally and also depend on the current state of the atmosphere. Our method mimics the 3D variational data assimilation (3D-Var), however, it can be further extended to resemble 4D-Var by including the neural network forecast model. data assimilation, neural network, variational autoencoder, machine learning, background errors, 3D-Var, analysis increments ## 1 Introduction In recent years, significant progress has been made in the use of machine learning in weather prediction. These developments encompass various aspects of the numerical weather prediction (NWP) workflow, including the use of neural-network (NN) emulators for specific components, such as radiation schemes [e.g. Meyer et al., 2022], convection schemes [e.g. Yuval and O'Gorman, 2020], bias-correction and forecast postprocessing [e.g. Kim et al., 2021, Frmda et al., 2022], uncertainty estimation [e.g. Clare et al., 2021] and ensemble spread estimation [e.g. Brecht and Bihlo, 2023]. The availability of ERA5 reanalyses [Hersbach et al., 2020] through Copernicus Climate Data Store boosted an immense advance in full NN weather prediction models [e.g. Keisler, 2022, Pathak et al., 2022, Bi et al., 2023, Lam et al., 2022, Nguyen et al., 2023, Chen et al., 2023a,b]. Most of these weather prediction tools have demonstrated comparable root-mean-square errors (RMSEs) for various atmospheric variables at medium-range lead times to the high-resolution deterministic forecasts of the world-leading NWP system, the Integrated Forecast System (IFS) of the European Centre for Medium-range Weather Forecasts (ECMWF). Despite these achievements, it is important to note that these NN models suffer from field smoothing and vanishing power spectra with increasing forecasts lead time [Ben Bouallegue et al., 2023]. Consequently, they cannot be considered as true weather emulators, and should be more fairly compared against the ensemble mean of the ensemble prediction system (EPS) [as in Chen et al., 2023c]. Moreover, these models inevitably inherit the biases from the ERA5 ground truth on which they are trained. Another limitation of pure NN models is their inability to independently produce an operational forecast, as they rely on initial conditions provided by the operational NWP centers through the data assimilation process. 
Data assimilation is a methodology that optimally fuses information from millions of recent Earth-system observations and short-range model simulations (Kalnay, 2002; Lahoz and Schneider, 2014) to obtain the most accurate estimate of the current state of the Earth-system, known as _the analysis_, which serves as the initial state for the weather forecasts. The main motivation for such a merging approach is that the observations are accurate, but also relatively sparse and typically biased. They do not cover the entire globe or capture all vertical slices of the atmosphere at each time instant. Furthermore, not all atmospheric, land, or ocean characteristics can be directly observed. For instance, instead of directly measuring profiles of temperature, humidity, clouds, and precipitation, satellites measure radiances that provide implicit information on these variables (Geer et al., 2018). While the observations have irregular sampling, the forecast models provide a complete representation of the Earth-system state. However, forecast models are often less accurate than observations as the forecast error grows with time. Therefore, the observations and the short-range model forecast are objectively merged based on Bayesian inference by considering their respective error statistics. In contrast to the NN forecast models, NNs have seen limited application in NWP data assimilation despite (or because of) significant mathematical similarities between machine learning and data assimilation (Cheng et al., 2023). So far, NNs have been mostly used for specific tasks of the NWP data assimilation workflow. For instance, studies by Bonavita and Laloyaux (2020) and Laloyaux et al. (2022) have demonstrated that NNs are similarly effective in estimating model biases as the weak-constraint formulation of 4D-Var data assimilation. NNs were also used to derive the tangent-linear and adjoint models used in 4D-Var data assimilation (Hatfield et al., 2021), leveraging the advantageous property of easy auto-differentiation in NNs, a characteristic that we also exploit in this study. A full NN data assimilation in an NWP setting was recently demonstrated by de Almeida et al. (2022). They employed a dense NN and trained it to emulate analysis increments in the Weather Research and Forecasting (WRF) model derived from the 3D-Var data assimilation of surface observations and atmospheric sounding data. In another study, Andrychowicz et al. (2023) performed _implicit_ data assimilation. They trained the convolutional-vision transformer NN model (MetNet-3) to predict the surface variables (temperature, dew point, wind speed and direction) up to 24 hours ahead by minimizing the difference between the model forecast and real surface observations. The training data used for forecast initialization included ground-based weather stations, GOES satellite observations, radar precipitation as well as the analysis from the High Resolution Rapid Refresh (HRRR) NWP model. The latter provided a complete initial representation of the state of the atmosphere (first guess), which was then further corrected by the observations. Their approach reaffirms the necessity of combining observations with complete prior information on the Earth-system state even in the case of neural-network data assimilation (NNDA).
NNDA has also been performed in a simplified model setting using an augmented approach to determine both the model state and the model parameters (Brajard et al., 2020; Fablet et al., 2021; Legler and Janjic, 2022) or to correct the model dynamics (Farchi et al., 2021). In this study, we adhere to Bayesian fundamentals, and employ a variational autoencoder (VAE) to demonstrate NNDA within a reduced-order latent space. Our method resembles the three-dimensional variational (3D-Var) data assimilation, but can be extended to four-dimensional variational (4D-Var) data assimilation when coupled with the NN forecast model. While this represents the first example of employing the variational approach for neural-network latent space DA in NWP, prior attempts at variational DA have been conducted in reduced-order latent spaces, using linear empirical orthogonal functions (Robert et al., 2005) or singular vectors (Chai et al., 2007; Cheng et al., 2010). Latent space DA has also been applied using Kalman filter methods with the Lorenz 96 model (Peyron et al., 2021) and air quality models (Amendola et al., 2021). The theoretical algorithm on variational latent space DA (although in a slightly different form) with a standard neural-network autoencoder has been described by Mack et al. (2020). The article is structured as follows. Section 2 outlines the methodology of data assimilation using VAE, along with the background-error estimation and minimisation algorithm. Section 3 presents the results of observing system simulation experiments (OSSEs) involving a single temperature observation at various locations, as well as an experiment with simulated global temperature observations. Discussion, conclusions and outlook are given in Section 4. ## 2 Data assimilation with variational autoencoder ### Data For training and evaluating our neural-network-based data assimilation, we used temperature data on the 850 hPa pressure level (\(T_{850}\)) from ERA5 reanalysis (Hersbach et al., 2020). Daily-mean data were derived from hourly data on a regular latitude-longitude grid with 0.25\({}^{\circ}\) resolution. The data were additionally latitudinally regridded to exclude the poles. To produce a standardised input for the neural network training, we normalised the data by subtracting the 1981-2010 climatological mean and dividing it by the climatological standard deviation for each grid point and day of the year. The data were split as follows: the period 1979-2014 was used for NN training, 2015-2018 for validation, and 2019-2022 for testing. ### Representation of global 2D atmospheric field with variational autoencoder To perform efficient neural network data assimilation, we first learn the representation of the \(T_{850}\) atmospheric field in the latent space using a convolutional VAE. Let us explain the VAE by first describing a standard autoencoder (AE). An AE is a type of neural network which is trained to recreate its input as closely as possible. It consists of two main parts: the encoder \(E\), which maps the input state \(\mathbf{x}^{\text{in}}\) to a state \(\mathbf{z}\) in the latent space of reduced dimension, i.e. \(\mathbf{z}=E(\mathbf{x}^{\text{in}})\), and the decoder \(D\), which mirrors the encoder and transforms the state in the latent space back to the input space, i.e. \(\mathbf{x}=D(E(\mathbf{x}^{\text{in}}))\). Therefore, an output of a _perfect_ AE is always identical to its input (\(\mathbf{x}=\mathbf{x}^{\text{in}}\)).
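As a toy illustration of the encoder-decoder pair just described, a minimal dense autoencoder can be written in a few lines of Keras; the layer sizes and the flattened toy grid below are placeholders and do not correspond to the convolutional architecture used in this study.

```python
import tensorflow as tf

n_grid = 32 * 64     # flattened toy grid; the actual fields are 720 x 1440 and convolutional
n_latent = 100       # size of the latent vector z

# Encoder E: grid-point state x -> latent vector z
x_in = tf.keras.Input(shape=(n_grid,))
h = tf.keras.layers.Dense(256, activation="elu")(x_in)
z = tf.keras.layers.Dense(n_latent, name="latent")(h)
encoder = tf.keras.Model(x_in, z, name="encoder")

# Decoder D: latent vector z -> reconstructed grid-point state x
z_in = tf.keras.Input(shape=(n_latent,))
h = tf.keras.layers.Dense(256, activation="elu")(z_in)
x_out = tf.keras.layers.Dense(n_grid)(h)
decoder = tf.keras.Model(z_in, x_out, name="decoder")

# Autoencoder x -> D(E(x)); a perfect AE would reproduce its input exactly
autoencoder = tf.keras.Model(x_in, decoder(encoder(x_in)), name="autoencoder")
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.summary()
```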
In the case of meteorological fields, the input and output fields are typically described in a grid point space. As standard AE, the VAE includes the encoder-decoder network architecture, but it includes an extra layer, which randomly samples the latent space vector \(\mathbf{z}\). Consequently, the same input state results in different outputs, that are perturbed analogues of the input. If the VAE is _representative_, 1) each of the outputs should be physically plausible, and 2) the entire set of these outputs should represent the climatological variance of \(T_{850}\). Due to the stochastic nature of VAE, the loss function \(\mathcal{L}\) that is minimised during the training contains two parts (Kingma and Welling, 2019): the _reconstruction loss_\(\mathcal{L}^{\text{rec}}\), and the _regularisation loss_\(\mathcal{L}^{\text{reg}}\), so that the final loss function is \[\mathcal{L}=\mathcal{L}^{\text{rec}}+\mathcal{L}^{\text{reg}}\,. \tag{1}\] \(\mathcal{L}^{\text{rec}}\) measures how well the VAE reconstructs the input data from the learned latent space representation \(\mathbf{z}\). Learning the reconstruction is an optimisation problem, where we are searching for model parameters \(\boldsymbol{\theta}\), which maximise the probability \(p_{\boldsymbol{\theta}}(\mathbf{x})\) of the state \(\mathbf{x}\) matching the input data \(\mathbf{x}^{\text{in}}\), given the distribution \(p(\mathbf{z})\) of the latent space vector \(\mathbf{z}\), and a generative decoder model \(p_{\boldsymbol{\theta}}(\mathbf{x}|\mathbf{z})\) parametrised by \(\boldsymbol{\theta}\), i.e. \[\max_{\boldsymbol{\theta}}p_{\boldsymbol{\theta}}(\mathbf{x})=\max_{ \boldsymbol{\theta}}\int_{\mathbf{z}}p_{\boldsymbol{\theta}}(\mathbf{x}| \mathbf{z})p(\mathbf{z})d\mathbf{z}\,. \tag{2}\] To simplify the optimisation problem, a recognition encoder model \(q_{\boldsymbol{\phi}}(\mathbf{z}|\mathbf{x})\) is introduced, with parameters \(\boldsymbol{\phi}\) determined such that \(q_{\boldsymbol{\phi}}(\mathbf{z}|\mathbf{x})\approx p_{\boldsymbol{\theta}}( \mathbf{z}|\mathbf{x})=p_{\boldsymbol{\theta}}(\mathbf{x}|\mathbf{z})p( \mathbf{z})/p_{\boldsymbol{\theta}}(\mathbf{x})\). After some derivation and assuming gaussian multivariate distribution of \(q_{\boldsymbol{\phi}}(\mathbf{z}|\mathbf{x})\) with zero covariance, i.e. \(q_{\boldsymbol{\phi}}(\mathbf{z}|\mathbf{x})=\mathcal{N}\left(\boldsymbol{ \mu},\boldsymbol{\Sigma}\right)\), \(\boldsymbol{\Sigma}=\operatorname{diag}\left(\boldsymbol{\sigma^{2}}\right)\), and \(p(\mathbf{z})=\mathcal{N}(\mathbf{0},\mathbf{I})\), the reconstruction loss can be written as (Kingma and Welling, 2019): \[\mathcal{L}^{\text{rec}}_{\boldsymbol{\theta}}(\mathbf{x})=-\log p_{ \boldsymbol{\theta}}(\mathbf{x}|\mathbf{z})\,. \tag{3}\] For this term, we chose Huber loss function, multiplied by the deterministic multiplier \(A>0\), i.e. \[\mathcal{L}^{\text{rec}}_{\boldsymbol{\theta}}(\mathbf{x})=A\left\langle L_{ \delta}(\mathbf{x},\mathbf{x}^{\text{in}})\right\rangle;\qquad L_{\delta}( \zeta,\xi)=\begin{cases}\frac{1}{2}(\zeta-\xi)^{2},&\text{if}\left|\zeta-\xi \right|\leq\delta,\\ \delta\left(\left|\zeta-\xi\right|-\frac{1}{2}\delta\right),&\text{otherwise}, \end{cases} \tag{4}\] where we used \(\delta=1\) and \(\left\langle\cdot\right\rangle\) denotes averaging over all grid points. The deterministic multiplier controls the balance of the two loss terms and therefore the amount of stochasticity. 
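A direct transcription of the reconstruction term (4), with \(\delta=1\) and a deterministic multiplier \(A\), could look as follows; the value of \(A\) below is an arbitrary placeholder, since the optimal multiplier is not quoted here.

```python
import numpy as np

def huber(zeta, xi, delta=1.0):
    """Elementwise Huber loss L_delta(zeta, xi) from Eq. (4)."""
    diff = np.abs(zeta - xi)
    return np.where(diff <= delta, 0.5 * diff**2, delta * (diff - 0.5 * delta))

def reconstruction_loss(x, x_in, A=100.0):
    """L_rec = A * <L_delta(x, x_in)>, i.e. the Huber loss averaged over all grid points."""
    return A * np.mean(huber(x, x_in))

rng = np.random.default_rng(0)
x_in = rng.standard_normal((720, 1440))            # standardised input field
x = x_in + 0.1 * rng.standard_normal(x_in.shape)   # imperfect VAE reconstruction
print(reconstruction_loss(x, x_in))
```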
The second part of the loss function, \(\mathcal{L}^{\text{reg}}\), enforces a generation of the latent vector with a desired (\(p(\mathbf{z})=\mathcal{N}(\mathbf{0},\mathbf{I})\)) distribution (Goodfellow et al., 2016; Kingma and Welling, 2019): \[\mathcal{L}^{\text{reg}}_{\boldsymbol{\theta}\boldsymbol{\phi}}=-\log p( \mathbf{z})+\log q_{\boldsymbol{\phi}}(\mathbf{z}|\mathbf{x})\,. \tag{5}\] During the NN training, the encoder \(q_{\boldsymbol{\phi}}(\mathbf{z}|\mathbf{x})\) produces vectors of means \(\boldsymbol{\mu}\) and log-variances \(\log\boldsymbol{\sigma}^{2}\) in the last stack of neurons (Fig. 1), representing the multivariate Gaussian distribution. Each latent vector element \(z_{i}\) is then randomly sampled from this distribution (Kingma and Welling, 2022) as \[z_{i}=\mu_{i}+\hat{z}_{i}\sigma_{i},\quad\hat{z}_{i}\sim\mathcal{N}(0,1)\,. \tag{6}\] Therefore the size of the encoder's output layer is twice the size of the latent space vector (compare the left and right column of neurons in the middle of VAE in Fig. 1). The latent vector \(\mathbf{z}\) then enters the decoder to produce a reconstructed vector \(\mathbf{x}\). During the loss function minimisation, the first term in (5) drives the encoder in a way that the statistics of the final sampled \(\mathbf{z}\) will resemble \(p(\mathbf{z})=\mathcal{N}(\mathbf{0},\mathbf{I})\). On the other hand, the second term in (5) serves for enlarging the elements of \(\boldsymbol{\Sigma}\) and therefore stimulates the VAE's stochasticity. After training, the decoder can be used to generate instances of \(\mathbf{x}\) by sampling \(\mathbf{z}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\). The regularisation terms in the loss function broaden the probability density function for random sampling and also ensure a smooth latent space. Consequently, when two latent vectors are in proximity, their respective decoded output fields typically exhibit close resemblance. This property is essential in our approach, enabling us to: 1) generate new physically plausible fields by randomly sampling latent vectors from a standard normal distribution, 2) perform data assimilation, and 3) generate ensembles of fields from a single ensemble member as demonstrated in Grooms (2021). The reason we opted not to use standard autoencoders for our purposes is that their design does not explicitly enforce the smoothness of their latent space. Consequently, even a minor correction in their latent space, e.g. due to data assimilation, could result in large changes in the decoded field (see e.g. Grooms, 2021). #### 2.2.1 VAE setup and training The architecture of our VAE network is described in Figure 1 and follows Brohan (2022) with some additional fine-tuning of the training parameters and network architecture. The size of the latent vector was \(N=100\) elements. The encoder and decoder employed 2D convolutional layers with \(3\times 3\) kernels and either \(2\times 2\) or \(1\times 2\) strides for field reshaping. In 2D convolutional layers, we applied periodic padding in the zonal direction and padding over the poles. The output layers of both the encoder and the decoder utilised the linear activation function. In the decoder's dense layer, a ReLU activation function was used, while the ELU activation function was applied elsewhere. As discussed in Section 2.1, the input fields of daily mean \(T_{850}\) were standardised by subtracting the climatological mean of the day-of-year and dividing by the climatological standard deviation of the day-of-year. 
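To make the sampling step of Eq. (6) concrete, a small NumPy sketch is given below; the closed-form regularisation used in it is the Kullback-Leibler divergence between the diagonal Gaussian \(q_{\boldsymbol{\phi}}(\mathbf{z}|\mathbf{x})\) and the prior \(\mathcal{N}(\mathbf{0},\mathbf{I})\), which is the expectation of Eq. (5) under the stated Gaussian assumptions (our assumption for this sketch, not a formula quoted from the text).

```python
import numpy as np

rng = np.random.default_rng(0)
n_latent = 100

# Encoder outputs for one input field: means and log-variances of q_phi(z|x)
mu = 0.5 * rng.standard_normal(n_latent)
log_var = 0.1 * rng.standard_normal(n_latent)

# Reparameterised sampling, Eq. (6): z_i = mu_i + z_hat_i * sigma_i, with z_hat_i ~ N(0, 1)
z_hat = rng.standard_normal(n_latent)
z = mu + z_hat * np.exp(0.5 * log_var)

# Regularisation: KL( q_phi(z|x) || N(0, I) ) in closed form for diagonal Gaussians
reg = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

print(z[:5], reg)
```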
Figure 1: Structure of our VAE model. The input field (gray colour) with \(720\times 1440\) grid points in the meridional and zonal direction enters the encoder. Through a series of 2D convolutional layers, the size of the field gradually decreases, while the number of channels increases. The intermediate fields (filters) are depicted in blue, with accompanying numbers indicating the feature size and the channel count. The final intermediate field then enters (indicated by the black arrow) a dense layer with neurons (represented by gray circles) which denote the mean (\(\mu\)) and the logarithm of variance (\(\log\sigma^{2}\)) for each element of the latent vector \(\mathbf{z}\). Each pair of parameters determines the Gaussian distribution of a single latent vector element, from which a value for a corresponding neuron in the decoder’s input layer is sampled (as indicated by the green dashed arrows). The input to the decoder is further transformed (indicated by the black arrow) by another dense layer into a field with dimensions of \(45\times 45\) and 80 channels. The intermediate fields in the decoder are shown in red (with numbers above them having the same meaning as in the encoder). As the fields pass through 2D transposed convolutional layers, their size gradually increases while the number of channels decreases. The output field from the decoder (pink colour) matches the shape of the input field to the encoder.

We examined several setups with different deterministic multipliers \(A\), and trained the NN for 100 epochs per setup. When selecting the optimal \(A\) and corresponding weights we also assessed the mean world \(T_{850}\) anomaly (i.e. the output \(T_{850}\) minus the climatological mean of the day-of-year) for a reconstructed state \(\mathbf{x}\), which was generated from a latent vector with elements sampled from a standard normal distribution (\(\mathbf{z}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\)), and standardised following climatological statistics for four different dates, each representing its own season. We observed that a large deterministic multiplier \(A\) resulted in a relatively small Huber norm in comparison to smaller deterministic multipliers. However, the convergence rate of \(\log p(\mathbf{z})\) towards the values corresponding to a desired standard normal distribution was much slower than the rate at which overfitting to the training set increased. Additionally, the mean world \(T_{850}\) anomaly for 150 ensemble members from \(\mathbf{z}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) exhibited values ranging between \(0.15\,^{\circ}\mathrm{C}\) and \(0.20\,^{\circ}\mathrm{C}\), which was a consequence of the decoder's imperfect expectation of its input \(\mathbf{z}\). Conversely, a very small \(A\) resulted in a minimal decrease in the Huber norm during training, making the VAE ineffective at reconstructing the input fields. For the optimal deterministic multiplier, the mean world \(T_{850}\) anomaly for 150 ensemble members of \(\mathbf{z}\) consistently ranged between \(0.00\,^{\circ}\mathrm{C}\) and \(0.05\,^{\circ}\mathrm{C}\) after approximately 10 epochs of training. The \(p(\mathbf{z})\) approached \(\mathcal{N}(\mathbf{0},\mathbf{I})\) closely after 20 epochs, with convergence slowing thereafter. Although the loss function score for the validation set already reached its global minimum after the 11th epoch, we chose to use the weights from the 20th epoch for our subsequent experiments.
At this point, the network had only slightly overfit to the training set, with the loss function score for the validation set increasing by only 0.8\(\%\). #### 2.2.2 Properties of VAE output In order to emphasise the details of the temperature fields, we have chosen to plot the temperature anomalies (\(\Delta T_{850}\)) rather than the raw temperature fields throughout the paper. Temperature anomalies are obtained by subtracting the climatological values from the destandardised decoder output. To simplify the descriptions, we will refer to these anomalies as _decoded fields_. When presenting these fields, we shall even omit "decoded" as it is obvious that the gridded fields are not in the latent space. A notable characteristic of a VAE is that it produces different outputs when presented with the same input multiple times. In Figure 2, we illustrate the mean and the standard deviation of an ensemble of VAE outputs. Similar to many machine learning approaches [e.g. Weyn et al., 2020], our VAE tends to smooth the reconstructed input field and does not capture all its features, particularly those at small spatial scales. Additionally, a keen observer may notice rectangular features in both the mean and standard deviation of the output fields at very small scales. We identified that the patterns correspond to 45 rectangles in both zonal and meridional directions (Figure 2b), and these numbers correspond to the dimension of the smallest intermediate filters in our convolutional VAE (Figure 1). The smaller rectangular features in the output fields align with the shapes of the larger intermediate filters. Compared to the range of temperature anomalies for a given day of the year, the standard deviations of the output fields from our VAE were relatively small. To amplify them, a VAE should be trained using a smaller deterministic multiplier \(A\). #### 2.2.3 VAE basis functions The VAE basis functions determine the mapping between the latent space and the grid point space in the encoder and decoder. Figure 3 showcases two global basis functions that correspond to the first two elements of the latent vector. These functions can be inspected by altering the original value of only one (say the \(j\)-th) element of the latent vector, \(\mathbf{z}[j]\), to some different value, for example \(\mathbf{z}[j]\mapsto\mathbf{z}[j]+1\). The original latent vector and the modified latent vector are then mapped to the grid point space using the decoder, and the difference in the resulting grid point states describes the shape of the basis function. These global basis functions are rather complex and encapsulate both large-scale and small-scale features. Note that the mappings \(\mathbf{z}[j]\mapsto\mathbf{z}[j]+1\) and \(\mathbf{z}[j]\mapsto\mathbf{z}[j]-1\) are not exactly opposite in the grid point space due to the nonlinearity of the decoder. The basis functions vary on a daily basis. Most of these variations can be explained by the seasonal variations of the climatological standard deviation, which is used in the destandardisation of the decoded field. A smaller part of their variation is due to the dependence on the state vector itself. For example, an equal perturbation of some latent vector element yields different responses for different values of the element. A more thorough description of the basis functions is given in Appendix 4.

Figure 2: (a) The ground truth climatological temperature anomaly on April 15, 2019. (b) The ensemble mean of VAE output fields, generated by feeding the truth from (a) to the VAE 150 times. (c) The standard deviation of the VAE output fields, calculated from the same set of fields as in (b).
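The basis-function probing of Section 2.2.3 (perturb a single latent element, decode, and take the difference) can be sketched as follows; the linear random decoder below is only a stand-in for the trained nonlinear decoder.

```python
import numpy as np

rng = np.random.default_rng(1)
n_latent, n_lat, n_lon = 100, 72, 144   # coarse toy grid

# Stand-in decoder: a fixed random linear map instead of the trained network
W = 0.05 * rng.standard_normal((n_latent, n_lat * n_lon))
def decode(z):
    return (z @ W).reshape(n_lat, n_lon)

z = rng.standard_normal(n_latent)       # e.g. the encoded state for a chosen date
j = 0                                   # probe the basis function of the first latent element

z_plus, z_minus = z.copy(), z.copy()
z_plus[j] += 1.0
z_minus[j] -= 1.0

basis_plus = decode(z_plus) - decode(z)    # response to z[j] -> z[j] + 1
basis_minus = decode(z_minus) - decode(z)  # response to z[j] -> z[j] - 1; for the real
                                           # (nonlinear) decoder this is not exactly -basis_plus
print(basis_plus.shape, float(np.abs(basis_plus + basis_minus).max()))
```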
### Data assimilation methodology #### 2.3.1 3D-Var-like data assimilation in the latent space 3D-variational assimilation (3D-Var) seeks the state of the atmosphere (represented by vector \(\mathbf{x}\)), which optimally combines the prior knowledge of the state of the atmosphere, denoted as the _background_ (\(\mathbf{x}_{b}\)), and the _observations_ (\(\mathbf{y}\)) by minimising the cost function \(\mathcal{J}(\mathbf{x})\), which measures the distance of the state \(\mathbf{x}\) to the background with the \(\mathcal{J}_{b}\) term and to the observations with the \(\mathcal{J}_{o}\) term (Lorenc, 1986; Kalnay, 2002): \[\begin{split}\mathcal{J}(\mathbf{x})&=\mathcal{J}_{b}+\mathcal{J}_{o}=\\ &=(\mathbf{x}-\mathbf{x}_{b})^{\mathrm{T}}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_{b})+\{\mathbf{y}-H(\mathbf{x})\}^{\mathrm{T}}\mathbf{R}^{-1}\{\mathbf{y}-H(\mathbf{x})\}\,.\end{split} \tag{7}\] \(\mathbf{B}\) is the background-error covariance matrix, \(\mathbf{R}\) is the observation-error covariance matrix, and \(H\) is the observation operator, which produces an equivalent of the atmospheric state \(\mathbf{x}\) in the observation space. The state which minimises the cost function (7) is denoted the _analysis_, \(\mathbf{x}_{a}=\arg\,\min_{\mathbf{x}}\mathcal{J}(\mathbf{x})\). In operational NWP variational data assimilation, the main challenge for minimisation is the large dimension (\(\sim\)\(10^{9}\)) of the state vector, which would result in a \(\mathbf{B}\)-matrix with \(\sim\)\(10^{18}\) elements. The computation of the cost function \(\mathcal{J}\) and its gradient \(\nabla_{\mathbf{x}}\mathcal{J}\), and evaluating the inverse of \(\mathbf{B}\) along the way, would make the minimisation problem intractable. The issue is solved using a series of assumptions on the structure and properties of the \(\mathbf{B}\)-matrix, such as 1) spatial relations of background errors, such as spatial homogeneity of their correlation length-scale, and 2) physical relations (balances) of background errors (Bannister, 2008b, 2021), which can be described by simplified diagnostic equations of the atmospheric flow such as the nonlinear balance equation, the quasi-geostrophic omega equation, and thermodynamic balances (ECMWF, 2023). These effectively reduce the number of degrees of freedom in the minimisation and also decrease the condition number of the Hessian matrix \(\partial^{2}\mathcal{J}/\partial\mathbf{x}^{2}\), and consequently improve the convergence rate of minimisation. This transformed phase space for minimisation is commonly referred to as the _control space_.

Figure 3: (a) Decoded latent-vector-mean \(D(\mathbf{z})\) of the encoded ground truth for April 15, 2019. (b) Difference to (a) if the first element of the latent vector is increased by 1. (c) Same as (b) but with the first element decreased by 1. (d,e) Same as (b,c), but with the second element of the latent vector being increased or decreased by 1. Mind the different colour scales in (a) and (b-e).

The 3D-Var cost function (7) can be analytically derived under the reasonable assumptions that the background and the observations are independent and that each of them possesses Gaussian errors. None of these assumptions is violated if the background state is defined in the latent space instead of the grid point space. Thus, we have defined a cost function \(\mathcal{J}_{z}\) in the reduced-dimension latent space that measures the distance of the latent state vector \(\mathbf{z}\) to the background latent
vector \(\mathbf{z}_{b}\) (term \(\mathcal{J}_{bz}\)) and the distance of the observations \(\mathbf{y}\) to \(\mathbf{z}\), transformed into the observation space (term \(\mathcal{J}_{oz}\)): \[\begin{split}\mathcal{J}_{z}(\mathbf{z})&=\mathcal{J}_{bz}+\mathcal{J}_{oz}=\\ &=(\mathbf{z}-\mathbf{z}_{b})^{\mathrm{T}}\mathbf{B}_{z}^{-1}(\mathbf{z}-\mathbf{z}_{b})+[\mathbf{y}-H\{D(\mathbf{z})\}]^{\mathrm{T}}\mathbf{R}^{-1}[\mathbf{y}-H\{D(\mathbf{z})\}]\,,\end{split} \tag{8}\] where \(\mathbf{B}_{z}\) is the background-error covariance matrix in the latent space (see Sec. 2.3.3 for details), \(D\) stands for the decoder, and the observation operator \(H\) bilinearly interpolates the decoded field into the observation space. The optimal latent space analysis \(\mathbf{z}_{a}\) is then obtained by minimising the cost function, such that \(\mathbf{z}_{a}=\arg\,\min_{\mathbf{z}}\mathcal{J}_{z}(\mathbf{z})\). The minimisation algorithm, described in the next section, automatically computes the gradient of the cost function (8) \[\nabla_{z}\mathcal{J}_{z}=\mathbf{B}_{z}^{-1}(\mathbf{z}-\mathbf{z}_{b})+\mathbf{G}\,\mathbf{R}^{-1}[\mathbf{y}-H\{D(\mathbf{z})\}]\,, \tag{9}\] where \(\mathbf{G}=(\partial H/\partial D)\,\partial D/\partial\mathbf{z}\) and the differentiation is done automatically. By transforming the minimisation problem to the latent space, we have decreased the expense of minimisation enormously. For example, the state vector \(\mathbf{x}\) has \(720\times 1440=1\,036\,800\) elements, which results in more than \(10^{12}\) elements in \(\mathbf{B}\). On the other hand, the latent vector \(\mathbf{z}\) has 100 elements, with the full \(\mathbf{B}_{z}\)-matrix containing only \(10^{4}\) elements. Our latent space therefore resembles the control space in NWP variational data assimilation, with the distinction that our latent space is not constructed from purely mathematical functions, such as spherical harmonics or wavelets [e.g. Fisher, 2003]. Instead, it is defined by empirical neural-network-learned basis functions (recall Figure 3) that capture the relationships between temperature data at distant grid points on the 850 hPa pressure level in the Earth's atmosphere. Similar to normal modes, which are the eigensolutions of the linearised atmospheric governing equations [e.g. Zagar et al., 2015, Vasylevych and Zagar, 2021], our basis functions incorporate key features of the atmospheric flow, including midlatitude Rossby waves and equatorial waves. On top of that, our neural-network-learned basis functions also account for the influences of orography and land-sea distribution on atmospheric dynamics. #### 2.3.2 Minimisation algorithm To minimise the cost function (8), we used the Adam optimizer [Kingma and Ba, 2017]. For single observation experiments, the initial learning rate was set to \(0.01\), while for other experiments, the learning rate was set to \(0.1\). The other properties of the Adam optimizer were set to their default settings as implemented in TensorFlow. Note that the weights of the decoder and encoder were not allowed to change during the minimisation of the DA cost function (8). During the minimisation process, if the value of the cost function did not improve within the last three steps, the learning rate was reduced by a factor of \(2\), but not allowed to drop below \(10^{-4}\).
The minimisation was terminated if the criterion \[1-\frac{\mathcal{J}_{z}(\mathbf{z}_{i})}{\mathcal{J}_{z}(\mathbf{z}_{i-1})}<\varepsilon \tag{10}\] was fulfilled for 10 consecutive steps. Here \(\mathbf{z}_{i}\) represents the latent vector at the \(i\)-th iteration, and the threshold \(\varepsilon\) was set to \(0.01\). Alternatively, the algorithm would automatically terminate if it reached 100 computational steps. The \(\mathbf{z}\) corresponding to the lowest \(\mathcal{J}_{z}\) value was always chosen as the analysis. Typically, the minimisation converged in 10 to 30 iterations, while a single minimisation process lasted a couple of seconds on a single CPU. #### 2.3.3 Background-error covariance modeling One of the key elements for using the VAE for data assimilation is the set of basis functions, which allow a reduced-order representation in the latent space. In accordance with the forward model described in Section 2.3.4, the latent space background-error covariance matrix was computed from differences between pairs of unperturbed (\(z_{i}=\mu_{i}\) in Eq. 6) encoded ground truth states on two consecutive days (\(\mathbf{z}_{t}^{d}\) and \(\mathbf{z}_{t}^{d-1}\)): \[\mathbf{B}_{z}=\left\langle\left(\mathbf{z}_{t}-\mathbf{z}_{b}\right)\,\left(\mathbf{z}_{t}-\mathbf{z}_{b}\right)^{\mathrm{T}}\right\rangle=\left\langle\left(\mathbf{z}_{t}^{d}-\mathbf{z}_{t}^{d-1}\right)\,\left(\mathbf{z}_{t}^{d}-\mathbf{z}_{t}^{d-1}\right)^{\mathrm{T}}\right\rangle\,. \tag{11}\] Latent vector \(\mathbf{z}_{t}\) is the unperturbed encoded ground truth, \(\mathbf{z}_{b}\) stands for the encoded background, and \(\left\langle\cdot\right\rangle\) denotes averaging over multiple (\(\mathbf{z}_{t}^{d}\), \(\mathbf{z}_{t}^{d-1}\)) pairs. The diagnosed \(\mathbf{B}_{z}\) is shown in Figure 4 for both the validation set and the test set. Both matrices are quasi-diagonal, with diagonal elements several orders of magnitude larger than the off-diagonal ones. Although the matrices for both periods appear similar, we opted to use the \(\mathbf{B}_{z}\) diagnosed from the validation set for our assimilation experiments to avoid using any information derived from the data of the test set. Computing an inverse of a full \(\mathbf{B}_{z}\) matrix with \(100\times 100\) elements is cheap in our case, but we opted for an approach required in the case of a large latent space (as in operational data assimilation), and only retained the (large) diagonal elements of \(\mathbf{B}_{z}\), which makes the matrix easy to invert. The term \(\mathbf{B}_{z}^{-1}\left(\mathbf{z}-\mathbf{z}_{b}\right)\) in (9) is then replaced by a simple elementwise division of \(\left(\mathbf{z}-\mathbf{z}_{b}\right)\) by the vector of background-error variances in the latent space, which further accelerates the minimisation. In the experiments we conducted, the \(\mathbf{B}\)-matrix was assumed diagonal, except if stated otherwise. The validity of using only diagonal elements is discussed in Section 3.1.1. Although \(\mathbf{B}_{z}\) was constant in the latent space, the background-error standard deviation in the grid point space varied daily, as shown in Figure 5. This variability can be attributed to 1) variations in the climatological standard deviation of each day of the year, which is used for the destandardisation of the decoded field (recall the description in Section 2.1), and 2) differences in the state of the background latent vector \(\mathbf{z}_{b}\) that we perturb, and the nonlinearity of the decoder.
Because of 1), the background-error standard deviations differ between seasons (Figures 5a,d-f). As expected, they are greater in the winter hemisphere and over the continents. Because of 2), the background-error standard deviations differ in a flow-dependent way, i.e. based on the current state of the latent vector background. For example, compare the background-error standard deviations over North America in panels (a,b) of Figure 5. In Appendix A, we prove that the differences between the panels are not a consequence of a finite ensemble.

Figure 4: Background-error covariance matrix \(\mathbf{B}_{z}\) represented in the latent space for (a) validation set and (b) test set. Absolute values of \(\mathbf{B}_{z}\)-matrix elements are shown. The diagonal elements represent the background-error variances, and off-diagonal elements represent the covariances of the errors of the background latent vector.

Figure 5: Background-error standard deviation for different dates and the climatological standard deviation for April 15. The colourbar in the centre (unit \({}^{\circ}\)C) applies to all panels. The background-error standard deviations were computed from 150 decoded ensemble members.

#### 2.3.4 Setup of observing system simulation experiments We have performed a series of observing-system simulation experiments (OSSEs). For simplicity, we have generated both the background and the observations from the ERA5 ground truth as depicted in Figure 6, and computed the analysis following minimisation of the cost function (8). The observations on day \(d\) were simulated by adding random Gaussian perturbations with zero mean and standard deviation \(\sigma_{o}\), i.e. \(\mathbf{\varepsilon}\sim\mathcal{N}(0,\sigma_{o})\), to the ground truth values \(\mathbf{x}_{t}^{d}\), bilinearly interpolated with \(H\) to the location of the observations, \(H\left(\mathbf{x}_{t}^{d}\right)\), which yields the observation vector: \[\mathbf{y}=H\left(\mathbf{x}_{t}^{d}\right)+\mathbf{\varepsilon}\,.\] The observations were assumed uncorrelated and their error standard deviation was set to 1 K in all experiments. For simplicity, we chose the background latent space vector to remain constant in time, so that \(\mathbf{z}_{b}^{d}\) on day \(d\) is equal to the unperturbed encoded ground truth of the previous day, \(d-1\), i.e., \[\mathbf{z}_{b}^{d}=\mathbf{z}_{t}^{d-1}=E\left(\mathbf{x}_{t}^{d-1}\right)\,.\] This type of model is commonly referred to as a persistence model. The background-error covariance model was as described in Section 2.3.3. Due to inexpensive computations, we conducted all experiments using a setup that resembles the ensemble of data assimilations (EDA) approach (Bonavita et al., 2012), with 150 perturbed sets of observations and VAE-generated backgrounds based on the background-error statistics described in Section 2.3.3. As in Eq. 6, we perturbed \(\mathbf{z}_{b}^{d}\) by adding Gaussian noise with standard deviation equal to the background-error standard deviation, i.e., the square root of the diagonal elements of \(\mathbf{B}_{z}\): \[z_{i}=\mu_{i}+\hat{z}_{i}\,\sigma_{bi}\,. \tag{12}\] Here, \(\sigma_{bi}\) represents the background-error standard deviation of the \(i\)-th element of the latent vector. ## 3 Results ### Single observation experiments The background-error model differs significantly between the tropics and midlatitudes due to variations in atmospheric dynamics, primarily attributed to the latitude-varying Coriolis force.
In the midlatitudes at synoptic scales, the prevailing thermal wind balance is continuously restored. Conversely, in equatorial areas, the main balance is between the vertical motions and the condensational heating. The latter leads to the excitation of equatorial waves, which propagate within the equatorial wave channel (Matsuno, 1966), and effectively couple the remote tropical areas.

Figure 6: Setup of the observing system simulation experiments with the neural-network data assimilation system. Both observations and background are generated from the ERA5 ground truth \(\mathbf{x}_{t}\). The observations \(\mathbf{y}\) on day \(d\) are fused with the latent-space background \(\mathbf{z}_{b}^{d}\), which is a 1-day persistence-based forecast issued on day \(d-1\), to form the latent space analysis \(\mathbf{z}_{a}\).

Previous studies have explored different approaches to represent background-error covariances in these regions. For instance, Zagar et al. (2004, 2005) utilised equatorial waves as basis functions in a spectral representation of tropical background-error covariances, and Kornich and Kallen (2008) combined the midlatitude and equatorial balances using Hough modes at a certain characteristic depth of atmospheric features and by extending the control vector for minimisation. Notably, all these attempts were carried out within a shallow-water model framework. Subsequently, Zagar et al. (2013) used normal modes of linearised atmospheric motions (Kasahara and Puri, 1980) to diagnose short-range forecast covariances from the ECMWF ensemble forecasts. However, in the operational data assimilation at ECMWF, there is no dedicated \(\mathbf{B}\)-model for the tropics, and the assimilation of dynamic variables is univariate. In the midlatitudes, on the other hand, the balance is defined by the linearised version of the nonlinear balance equation and the quasi-geostrophic \(\omega\) equation (ECMWF, 2023). In this study, we present a unified model for the background-error covariances in the tropics and midlatitudes. We employ a single set of quasi-orthogonal basis functions derived by the neural network, which effectively capture the spatial autocovariances of the \(T_{850}\) background errors. The background-error model is demonstrated in the following sections through several single observation experiments. #### 3.1.1 Observation in midlatitudes The first experiment demonstrates the impact of a single temperature observation with a 3 K observation departure \(\mathbf{d}=\mathbf{y}-H(E(\mathbf{x}_{0}^{d}))\) and a standard deviation \(\sigma_{o}\) of 1 K. The observation is located above Ljubljana, Slovenia, at \(46.1\,^{\circ}\)N and \(14.5\,^{\circ}\)E. Figure 7a shows that the mean analysis increment (i.e. the mean analysis minus background over 150 ensemble members) has its expected structure with respect to the typical atmospheric flow. Firstly, the increment peaks at the observation point, while it is also large over the continental part (the Alps, the Balkans and central Europe), where the background-error standard deviation is relatively larger than over the Mediterranean (Figure 7c). This is expected, as the climatological temperature variance over the Mediterranean is smaller than over the continent, due to the damping effect of sea surface temperature on \(T_{850}\) variability. Secondly, the shape of the positive increment is elongated in the south-west to north-east direction, which complies with the mean south-westerly winds in the region.
Thirdly, the area of positive increment is surrounded by a shallower negative increment. Again, this is an expected result for the applied climatological background-error covariance model (Fisher, 2003), as the negative temperature perturbation can be associated with a spatial translation of a synoptic Rossby wave in the background with respect to reality. The amplitude of the increment becomes much smaller further away from the observation location. Panel 7d shows the resulting analysis-error standard deviation \(\sigma_{a}\) after performing an ensemble of data assimilations. \(\sigma_{a}\) is significantly reduced with respect to \(\sigma_{b}\) (panel 7c) only in the proximity of the observation location, with a few hundred-kilometre elongation towards the southwest in compliance with the typical south-westerlies (panel 7e). Consistent with the small changes in the grid point space, the changes in the latent space vector are also barely visible (panel 7b). We justify the use of the diagonal \(\mathbf{B}_{z}\) for data assimilation by comparing the analysis increment with the diagonal \(\mathbf{B}_{z}\) (Figure 7a) to the analysis increment with the full \(\mathbf{B}_{z}\), shown in Figure 8a. The observation has a slightly larger impact in the case of the diagonal \(\mathbf{B}_{z}\) (Figure 8b). Correspondingly, the analysis-error standard deviation in the case of the diagonal \(\mathbf{B}_{z}\) is also somewhat smaller in the proximity of the observation (the ratio in Figure 8c is everywhere between 0.88 and 1.03). Nevertheless, the differences in the analyses and their standard deviations are significantly smaller than the impacts of the observation on the analyses and their standard deviations. Thus we can conclude that using only the diagonal elements of \(\mathbf{B}_{z}\) instead of the full \(\mathbf{B}_{z}\) does not significantly harm the quality of the data assimilation. Another important property of data assimilation is that the analysis increment does not only depend on the observation departure but also on the structure and magnitude of the background-error covariances, which are in our case mildly dependent on \(\mathbf{z}_{b}\), i.e. flow-dependent, as described in Section 2.3.4. Figure 9 shows the analysis increments for the single observation experiments with the same temperature departure at the same location as in Figure 7, but for different dates, and hence different backgrounds and background-error covariances. The differences between the increments are small but clearly visible. Therefore, a change in the observation location should not only result in a spatial translation of the increment, but should also be sensitive to \(\mathbf{z}_{b}\) and the associated patterns of \(\sigma_{b}\). Figure 10 shows an example of assimilation of a single observation with a 3 K observation departure and \(\sigma_{o}\) of 1 K. The observation is located near the centre of the area with high \(\sigma_{b}\) in the southwestern Indian Ocean at \(50\,^{\circ}\)S, \(50\,^{\circ}\)E (Figure 10b). Due to a large background-error standard deviation, the information coming from the observation is weighted more heavily than the background information, resulting in an analysis increment of large magnitude and a strong reduction of \(\sigma_{a}/\sigma_{b}\) (Figure 10c). The shape of the increment is anisotropic and elongated towards the northwest, upstream of the temperature advection.
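For completeness, the single-observation assimilation discussed in this subsection can be condensed into a short TensorFlow sketch of the latent-space cost function (8) with a diagonal \(\mathbf{B}_{z}\); the decoder and the interpolation to the observation location are crude stand-ins, and the plain Adam loop below does not reproduce the learning-rate schedule or stopping rule of Section 2.3.2.

```python
import tensorflow as tf

tf.random.set_seed(0)
n_latent, n_grid = 100, 72 * 144

# Stand-ins: a fixed linear "decoder" and an observation operator that picks one grid point
W = tf.random.normal((n_latent, n_grid), stddev=0.05)
def decode(z):
    return tf.matmul(tf.reshape(z, (1, -1)), W)[0]
def obs_operator(x):
    return x[5000:5001]   # crude replacement for bilinear interpolation to the obs location

z_b = tf.random.normal((n_latent,))          # encoded background
sigma_b2 = 0.1 * tf.ones(n_latent)           # diagonal of B_z (background-error variances)
y = obs_operator(decode(z_b)) + 3.0          # single observation with a 3 K departure
sigma_o2 = tf.constant([1.0])                # observation-error variance (sigma_o = 1 K)

z = tf.Variable(z_b)                         # start the minimisation from the background
opt = tf.keras.optimizers.Adam(learning_rate=0.01)

for _ in range(30):
    with tf.GradientTape() as tape:
        d_b = z - z_b
        d_o = y - obs_operator(decode(z))
        J = tf.reduce_sum(d_b**2 / sigma_b2) + tf.reduce_sum(d_o**2 / sigma_o2)  # Eq. (8)
    grads = tape.gradient(J, [z])            # gradient of Eq. (8), cf. Eq. (9), via autodiff
    opt.apply_gradients(zip(grads, [z]))

z_a = z.numpy()                              # latent-space analysis
print(float(J))
```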
#### 3.1.2 Observation in tropics As noted in Section 3.1, distinct atmospheric dynamics between the tropics and midlatitudes lead to contrasting background-error covariances. The correlation lengthscale is extended in the tropics, as the equatorial waves couple the remote areas, affecting the shapes of the analysis increments. The settings for the assimilation experiments with a single temperature observation were the same as for the midlatitudes. The observation, positioned above Singapore at \(1.3\,^{\circ}\)N, \(103.9\,^{\circ}\)E, has a temperature departure of 3 K and \(\sigma_{o}\) of 1 K. The resulting analysis increment is shown in Figure 11a. We observe several notable properties. Firstly, the magnitude of the analysis increment at the observation location is small, as \(\sigma_{o}\gg\sigma_{b}\), with \(\sigma_{b}\approx 0.19\,\mathrm{K}\). A small analysis increment in the grid point space is thus expected. Secondly, there are significant temperature increments in the midlatitudes. These stem from spatially extensive background error correlations in the tropics, excessive \(\sigma_{b}\) in the midlatitudes, and underestimated \(\sigma_{b}\) in the tropics, as commonly observed in climatology-based estimation of background-error covariances (Bannister, 2008a). The midlatitudes are thus only weakly constrained by the background. This increment pattern shares similarities with the temperature perturbation response over the Maritime continent (e.g. Trenberth et al., 1998; Kosovelj et al., 2019), where condensational heating generates a divergent pattern exciting a Rossby wave train along the great circle, particularly evident over eastern Asia towards Japan and North America. The effect is, as in our case, amplified in the Northern Hemisphere due to the low-latitude subtropical jet in the vicinity of the Himalayas. This means there is a solid physical reasoning for such a pattern. Outside the tropics, the analysis increment is an order of magnitude smaller than \(\sigma_{b}\), and the assimilation only leads to a slight reduction of \(\sigma_{a}/\sigma_{b}\) (Figure 11b,c). Additionally, the analysis increment elongates eastwards from the observation location (also Figure 11a,c), corresponding to the mean easterly winds in the tropical lower troposphere.

Figure 7: Single observation experiment above Ljubljana with a 3 K departure and \(\sigma_{o}\) of 1 K, utilising the background temperature field for April 15, 2019. (a) Mean analysis increment (analysis mean minus background mean). (b) Latent vector distribution for background (red) and analysis (blue) with boxes spanning the 25th and 75th percentiles and whiskers from the 5th to 95th percentile. Outliers are not shown. (c) Background-error standard deviation, \(\sigma_{b}\). (d) Analysis-error standard deviation, \(\sigma_{a}\). (e) Ratio \(\sigma_{a}/\sigma_{b}\). The observation location is marked with a golden star in panels (a,c-e).

Figure 8: Assimilation of a single temperature observation above Ljubljana, as in Figure 7, but with the full \(\mathbf{B}_{z}\) used for the assimilation. (a) The analysis increment. (b) The difference between the analyses here and in Figure 7. (c) The ratio between the standard deviations of the analyses from Figure 7 and the analysis for this experiment.

Figure 11: As Figure 10, but for an observation above Singapore. Note that the colour scales in (a,c) are different to those in previous experiments.
Figure 10: Single observation experiment for a temperature observation above the southwestern Indian Ocean with a 3 K departure and \(\sigma_{o}\) of 1 K, with the background temperature field for April 15, 2019. (a) The mean analysis increment. (b) Standard deviation of the background, \(\sigma_{b}\). (c) The ratio between the standard deviations of the analysis and of the background, \(\sigma_{a}/\sigma_{b}\). The golden star marks the observation location.

Figure 9: Analysis increments for single observation experiments for an observation above Ljubljana (as in Figure 7) on different dates: a) July 15, 2019; b) October 15, 2019; c) April 15, 2020.

Figure 12 provides another example featuring an observation in equatorial Africa at \(7.0\,^{\circ}\mathrm{N}\), \(21.0\,^{\circ}\mathrm{E}\), with \(\sigma_{b}\approx\sigma_{o}\). The magnitude of the analysis increment at the observation location is larger than in the previous case. In the tropics and subtropics, the shape of the increment resembles the Equatorial Rossby wave pattern with symmetric positive increments to the north and south of the Equator, west of the observation location. The increment's magnitude in these regions could also be attributed to large \(\sigma_{b}\) (Figure 12b). Again, the analysis increment includes the Rossby wave train, which can be observed over the Caspian Sea (Figure 12a). Having performed a plethora of single observation experiments, we have also tested whether the results satisfy an important theoretical property of single-observation DA, namely that the analysis increment at the observation location \(\delta T_{850}^{a}\) and its standard deviation \(\sigma_{a}\) are defined by: \[\delta T_{850}^{a}=\frac{\delta T_{850}^{o}/\sigma_{o}^{2}}{1/\sigma_{b}^{2}+1/\sigma_{o}^{2}}\qquad\text{and}\qquad\sigma_{a}=\sqrt{\frac{1}{1/\sigma_{b}^{2}+1/\sigma_{o}^{2}}}, \tag{13}\] where \(\delta T_{850}^{o}\) signifies the observation departure. The outcomes of this evaluation for the presented experiments are listed in Table 1. Generally, the experimental values closely align with the theoretical expectations. Minor discrepancies likely stem from the finite number of ensemble members and an imperfect minimisation algorithm. ### Global data assimilation In our final experiment, we analyse the performance of NNDA in the case of many observations, evenly distributed across a global latitude-longitude grid (Figure 13a). These observations were simulated from the ground truth for day \(d\), while the background was based on the encoded truth from day \(d-1\), as detailed in Section 2.3.4. The observations were spaced at \(4^{\circ}\) in both the meridional and zonal directions, and \(\sigma_{o}\) was set to 1 K. Panels 13b,c depict the VAE's reconstruction of the truth and the background, while panel 13d illustrates the resulting analysis following the assimilation of global observations. As expected, the analysis aligns much more closely with the truth than the background (Figure 13g,h). Notably, the analysis distribution shifts toward the encoded truth for most latent vector elements (Figure 13e). Moreover, the analysis-error standard deviation is significantly reduced compared with the background-error standard deviation, except in the tropics, where \(\sigma_{o}>\sigma_{b}\) (Figure 13i,j,k). This feature is also evident in the latent space, with each latent vector's element distribution being narrower compared to the background (Figure 13e).
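As a quick arithmetic check of Eq. (13), the theoretical values in the Ljubljana row of Table 1 below can be reproduced directly from the listed departure and error standard deviations:

```python
# Ljubljana single-observation experiment (values from Table 1, in K)
dT_o, sigma_o, sigma_b = 3.03, 1.07, 1.91

w = 1.0 / sigma_b**2 + 1.0 / sigma_o**2
dT_a = (dT_o / sigma_o**2) / w            # theoretical analysis increment, Eq. (13)
sigma_a = (1.0 / w) ** 0.5                # theoretical analysis-error standard deviation

print(round(dT_a, 2), round(sigma_a, 2))  # 2.31 and 0.93, matching the "Theo." columns
```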
\begin{table} \begin{tabular}{l l l l l l l l} \hline \hline Location & \(\mathbf{\delta T_{850}^{o}}\) & \(\mathbf{\sigma_{o}}\) & \(\mathbf{\sigma_{b}}\) & Theo. \(\mathbf{\delta T_{850}^{a}}\) & Ex. \(\mathbf{\delta T_{850}^{a}}\) & Theo. \(\mathbf{\sigma_{a}}\) & Ex. \(\mathbf{\sigma_{a}}\) \\ \hline Ljubljana & 3.03 & 1.07 & 1.91 & 2.31 & 2.19 & 0.93 & 0.94 \\ SW Indian Ocean & 3.14 & 0.95 & 3.86 & 2.96 & 2.95 & 0.92 & 0.95 \\ Singapore & 3.11 & 0.99 & 0.19 & 0.11 & 0.08 & 0.18 & 0.18 \\ Equatorial Africa & 3.14 & 1.10 & 0.61 & 0.75 & 0.59 & 0.54 & 0.54 \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of the results of the single observation experiments presented above (150 ensemble members, preset \(\delta T_{850}^{o}=3\,\mathrm{K}\) and \(\sigma_{o}=1\,\mathrm{K}\)) with the theoretical expectations based on Equation (13). All values are in K. The first two numerical columns display deviations from the preset values caused by the finite ensemble size. Theoretically predicted values of \(\delta T_{850}^{a}\) and \(\sigma_{a}\) are labelled as _Theo._, while experimental values carry the label _Ex_. The values for \(\sigma_{b}\), \(\delta T_{850}^{a}\), and \(\sigma_{a}\) were obtained by bilinear interpolation from the grid to the observation location.

Figure 12: As Figure 10, but for an observation above central equatorial Africa. Note that the colour scales in (a,c) are different to those in previous experiments.

The convergence of the cost function \(\mathcal{J}_{z}\) (8) during the minimisation process is demonstrated in Figure 14. The minimisation stopping criterion (10) was met after 26 iterations in this case.

Figure 13: Single cycle data assimilation experiment for April 15, 2019, with 4050 observations with a longitudinal and latitudinal spacing of \(4^{\circ}\) scattered around the globe. The observations were sampled as described in Section 2.3.4 and their \(\sigma_{o}\) was set to 1 K. (a) The ground truth \(\mathbf{x}_{t}^{d}\) and the locations of the observations. (b) The mean of the VAE-reconstructed truth \(D\left(E\left(\mathbf{x}_{t}^{d}\right)\right)\). (c) The mean background, represented in the grid point space. (d) As (c), but for the analysis. (e) The distributions in the latent space for the encoded truth (orange), background (red) and analysis (blue). See Figure 7 for the box and whisker plot properties. (f) The analysis increment (i.e. panel (d) minus panel (c)). (g) The analysis minus the VAE reconstruction of the truth (i.e. panel (d) minus panel (b)). (h) The background minus the VAE reconstruction of the truth (i.e. panel (c) minus panel (b)). (i) The standard deviation of the background. (j) The standard deviation of the analysis. (k) The ratio between the standard deviations of the analysis and the background, \(\sigma_{a}/\sigma_{b}\) (values \(<1\) indicate a reduction of uncertainty).

## 4 Discussion, Conclusions and Outlook

The primary objective of this study was to present a proof-of-concept for neural network data assimilation (NNDA) in numerical weather prediction. To achieve this, we trained a convolutional variational autoencoder (VAE) to represent global temperature fields at the 850 hPa pressure level (\(T_{850}\)) from the ERA5 reanalysis in a reduced-dimension latent space (Figs. 1, 2) using neural-network-learned basis functions (Fig. 3). This enabled the construction of a three-dimensional variational (3D-Var) cost function (8) in the latent space, which measures the distance of the latent vector to the observations and to the background latent vector.
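For concreteness, the following is a minimal sketch of what such a latent-space 3D-Var minimisation can look like with a gradient-based optimiser. It is an illustrative approximation rather than the code used in this study: `decoder`, `z_b`, `B_z_inv_diag`, `obs_idx`, `y_obs` and `sigma_o` are assumed placeholders, the observation operator is reduced to simple grid-point selection, and \(\mathbf{B}_{z}\) is kept diagonal.

```python
# Sketch of a latent-space 3D-Var analysis (assumed interfaces, simplified
# observation operator): minimise J_z(z) = J_b + J_o with Adam, using
# automatic differentiation through the decoder.
import torch

def latent_3dvar(decoder, z_b, B_z_inv_diag, obs_idx, y_obs, sigma_o,
                 n_iter=100, lr=0.05, grad_tol=1e-3):
    z = z_b.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(n_iter):
        opt.zero_grad()
        dz = z - z_b
        j_b = 0.5 * torch.sum(B_z_inv_diag * dz**2)        # background term
        innov = y_obs - decoder(z).flatten()[obs_idx]      # y - H(D(z))
        j_o = 0.5 * torch.sum(innov**2) / sigma_o**2       # observation term
        cost = j_b + j_o
        cost.backward()                                    # gradients through the decoder
        if z.grad.norm() < grad_tol:                       # crude stopping criterion
            break
        opt.step()
    return z.detach()                                      # analysis latent vector
```

An ensemble of analyses in the spirit of the EDA approach described below can then be obtained by repeating the minimisation with perturbed `z_b` and `y_obs`.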
We employed the Adam optimiser, a stochastic gradient descent method, for minimising the cost function to derive the optimal analysis, utilising auto-differentiation of the decoder (Eq. 9) in the process. The background-error covariance model was represented in the latent space from the 24-hour differences of encoded ground truth states. We have shown that the \(\mathbf{B}_{z}\)-matrix in the neural-network derived space is quasi-diagonal, with diagonal elements an order of magnitude larger than off-diagonal elements (Fig. 4), which further accelerates the minimisation. While the resulting \(\mathbf{B}_{z}\)-matrix is static in the latent space, both background-error variances and covariances exhibit seasonal variations and a mild dependence on the current latent-state representation of the atmosphere, commonly referred to as flow-dependent behaviour (Fig. 5). We conducted a number of data assimilation experiments with a single assimilation cycle (Fig. 6), in which the persistence-based background temperature fields were corrected by assimilating simulated temperature observations. Each experiment consisted of an ensemble of data assimilations with perturbed background state and observations, resulting in an ensemble of analyses. Such an approach mimics the EDA method employed at ECMWF (Bonavita et al., 2012). Single observation experiments revealed the structure of the background-error covariances, which differs between the midlatitudes and the tropics (Figs. 7, 10-12). We also verified that the magnitude of the increments following the NNDA minimisation matched the theoretical expectations (Table 1), affirming the correctness of our method and its implementation. Finally, we performed a global data assimilation experiment, where we monitored the reduction of the analysis error with respect to the background error in both grid point and latent spaces (Fig. 13).

Our approach represents the first example of a variational data assimilation approach in a neural-network-derived reduced latent space for numerical weather prediction (NWP). While previous applications of neural-network DA in latent spaces were confined to Kalman filter methods (Mack et al., 2020; Amendola et al., 2021; Peyron et al., 2021), and reduced-space variational DA relied on linear transformations like PCA or SVD (Robert et al., 2005; Chai et al., 2007; Cheng et al., 2010), our method employs nonlinear neural-network-derived basis functions, yielding a notably efficient reduced field representation. For instance, Peyron et al. (2021) highlighted that the reduced space obtained with neural networks is of much smaller dimension than that obtained with linear model-reduction techniques, resulting in a reduced condition number and faster convergence. In most NNDA studies, authors evaluated the observational component of the cost function (\(\mathcal{J}_{zo}\)) in the latent space. This required a multistep transformation from the observation space to the grid point space and then to the latent space with the encoder network (e.g. Mack et al., 2020; Amendola et al., 2021). Learning such a reconstruction is not trivial in operational NWP data assimilation, given the complex relations between observed and prognostic variables, along with evolving biases in observation systems over time (Dee, 2004). Furthermore, the dimensionality of the phase space (total variable count) greatly surpasses the number of observations by more than an order of magnitude, which would result in a highly underdetermined problem. Additionally, such a transformation disseminates observational information throughout the domain, leading to correlated observation errors. Based on that, we believe that observation operators, whether physical or derived via neural networks, will remain indispensable until a continuous, spatially and temporally comprehensive observation network is achieved.

Figure 14: Cost function convergence of a single ensemble member during the minimisation procedure when performing global DA (Figure 13). (a) The total value of the cost function \(\mathcal{J}\) (Equation (8)), (b) the value of the observation term \(\mathcal{J}_{o}\), (c) the value of the background term \(\mathcal{J}_{b}\), and (d) the Euclidean norm of the cost function gradient \(\nabla\mathcal{J}_{z}\). Note the difference in the order of magnitude between (a,b), (c) and (d).
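A minimal sketch of the climatological \(\mathbf{B}_{z}\) estimation described above (sample covariance of 24-hour differences of encoded states) might look as follows. It is an assumed reconstruction of the procedure, not the authors' code: `encode` (the VAE encoder returning the latent mean) and `daily_fields` are placeholders, and the overall rescaling of the covariance is left as a free parameter.

```python
# Sketch: estimate a static latent-space background-error covariance from
# 24-hour differences of encoded states, and report how diagonal it is.
import numpy as np

def estimate_B_z(encode, daily_fields, scale=1.0):
    z = np.stack([encode(x) for x in daily_fields])   # (n_days, n_z) latent means
    dz = z[1:] - z[:-1]                               # 24-hour latent differences
    dz -= dz.mean(axis=0)                             # remove the mean difference
    B_z = scale * np.cov(dz, rowvar=False)            # (n_z, n_z) covariance
    diag = np.abs(np.diag(B_z)).mean()
    off = (np.abs(B_z).sum() - np.abs(np.diag(B_z)).sum()) / (B_z.size - B_z.shape[0])
    print(f"mean |diag| / mean |off-diag| = {diag / off:.1f}")   # quasi-diagonality check
    return B_z
```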
The results of Andrychowicz et al. (2023) further indicate that relying solely on observations for determining the analysis increments is unfeasible, underscoring the need to combine observations with a complete prior representation of the atmospheric state.

In this study, we selected a VAE over a standard AE to ensure that the latent state vector follows a Gaussian distribution and to ensure the smoothness of the latent space. The Gaussian nature of the latent-space variables in the VAE proves highly advantageous for NNDA, similar to how the assumption of Gaussianity of the control variables is central to the derivation of variational data assimilation (Lorenc, 1986). While a standard AE might not guarantee smoothness, our sensitivity experiments using a convolutional AE with 24 variables across 4 atmospheric levels and the surface indicated that smoothness is achieved in practice in both the latent and grid point spaces (Perkan and Zaplotnik, 2023). An important limitation of using a VAE instead of a standard AE is that most neural-network forecast models use standard autoencoders. Employing data assimilation and forecast-model time integration within the same space would reduce the number of \(E(\mathbf{x})\) and \(D(\mathbf{z})\) transformations, thereby reducing the complexity of a single data assimilation cycle. Another drawback of the VAE is its somewhat increased computational complexity in comparison to the AE, due to the sampling layer.

In operational NWP DA, background-error covariance models presently predominantly capture large-scale atmospheric features characterised by low Rossby numbers, such as extratropical flows, and do not explicitly account for mesoscale and tropical balances (Bannister, 2021). The main novelty of our work is a unified representation of background-error covariances in the tropics and the midlatitudes. The background-error covariances in the midlatitudes demonstrate enhanced correlations in the south-west to north-east direction, in alignment with the prevailing south-westerlies in the Northern Hemisphere and north-westerlies in the Southern Hemisphere. Negative correlations at a distance of around 2000 km from the observation site align with previous studies of the climatological \(\mathbf{B}\) (e.g. Fisher, 2003). In the tropics, background-error correlations exhibit larger correlation lengthscales in the zonal direction, in accordance with the mean easterlies in the equatorial lower troposphere and disturbances confined within the equatorial waveguide.
The shapes of the analysis increments resemble the Gill-like response to diabatic heating (Gill, 1980), showing an Equatorial Rossby wave to the west of the observation location and a Kelvin wave to the east (Fig. 12). This temperature perturbation also induces the excitation of Rossby wave trains, in line with experiments on the adjustment of the global atmospheric flow to a heating perturbation in the tropics (e.g. Trenberth et al., 1998; Kosovelj et al., 2019). Further single observation experiments, such as the one with a temperature observation over the eastern equatorial Pacific (not shown), resulted in increments extending into the central Pacific, resembling the ENSO pattern. While these teleconnecting patterns are physically meaningful, it remains uncertain whether such large-scale increments would benefit operational NWP DA. The impact of long-distance correlations was amplified in our study due to the excessive \(\sigma_{b}\) in the midlatitudes and the underestimated \(\sigma_{b}\) in the tropics, a common issue in climatology-based estimation of background-error covariances (Bannister, 2008a). To mitigate this issue, we attempted to construct a flow-dependent (from the latent vector's perspective) \(\mathbf{B}_{z}\) from ERA5 ensemble members. However, the ERA5 dataset only provides 9 ensemble members and a control run, which is insufficient for obtaining adequate statistics. Additionally, the resolution constraints of our VAE hindered the representation of the finely structured flow-dependent background-error variances from the model. Background-error covariance modelling typically combines the static climatological part with a flow-dependent component derived from an ensemble of backgrounds used in the ensemble of data assimilations. The presented framework for background-error covariance estimation in the space discovered by a neural network could be tested in a more realistic environment like the ECMWF variational assimilation system, where the climatological component of the \(\mathbf{B}\) matrix is represented in wavelet space (Fisher, 2003). However, further improvements of the autoencoder are needed before relying on such a \(\mathbf{B}\)-model in operations: it should include more variables, an improved horizontal resolution, and additional vertical levels.

Our NNDA method emulates 3D variational data assimilation (3D-Var), but we plan to extend it to resemble 4D-Var by including the convolutional-autoencoder-based forecast model of Perkan and Zaplotnik (2023), using the model trajectory as a strong constraint. While NN forecast models particularly excel at short forecast lead times, we will explore the prospect of extending the assimilation window. The 4D-Var approach is the way forward, as it allows full use of tracer data through the 4D-Var tracing effect and of mass data (e.g. temperature) through the internal geostrophic adjustment process (Zaplotnik et al., 2018). The initialisation of the model forecast through data assimilation is the last missing link needed to perform standalone neural-network weather prediction and to generate new reanalysis training data for a new generation of neural-network models. In view of the looming climate crisis and the substantial energy savings offered by NN prediction and training relative to traditional NWP models, a concerted effort should be made to develop a reliable NNDA system as soon as possible.

## Acknowledgements

Boštjan Melinc is supported by ARRS Programme P1-0188.
The authors would like to thank Gregor Skok (UL FMF) for consistent support in acquiring the PhD funding. The authors acknowledge the work of Philip Brohan (UKMO) on Machine Learning for Data assimilation ([http://brohan.org/Proxy_20CR/](http://brohan.org/Proxy_20CR/)), who generously shared the initial code for this research ([https://github.com/philip-brohan/Proxy_20CR](https://github.com/philip-brohan/Proxy_20CR)). The authors would also like to thank Massimo Bonavita (ECMWF) for providing thoughtful suggestions on the early manuscript and Mariana Clare (ECMWF) for carefully reading the manuscript and fruitful discussions on the topic. ## Conflict of interest The authors declare no conflict of interest.
2304.08198
The Algebraic and Analytic Compactifications of the Hitchin Moduli Space
Following the work of Mazzeo-Swoboda-Weiss-Witt and Mochizuki, there is a map $\overline{\Xi}$ between the algebraic compactification of the Dolbeault moduli space of $\mathsf{SL}(2,\mathbb{C})$ Higgs bundles on a smooth projective curve coming from the $\mathbb{C}^\ast$ action, and the analytic compactification of Hitchin's moduli space of solutions to the $\mathsf{SU}(2)$ self-duality equations on a Riemann surface obtained by adding solutions to the decoupled equations, known as ``limiting configurations''. This map extends the classical Kobayashi-Hitchin correspondence. The main result of this paper is that $\overline{\Xi}$ fails to be continuous at the boundary over a certain subset of the discriminant locus of the Hitchin fibration. This suggests the possibility of a third, refined compactification which dominates both.
Siqi He, Rafe Mazzeo, Xuesen Na, Richard Wentworth
2023-04-17T12:19:09Z
http://arxiv.org/abs/2304.08198v2
# The algebraic and analytic compactifications ###### Abstract. Following the work of Mazzeo-Swoboda-Weiss-Witt [30] and Mochizuki [33], there is a map \(\overline{\Xi}\) between the algebraic compactification of the Dolbeault moduli space of \(\operatorname{SL}(2,\mathbb{C})\) Higgs bundles on a smooth projective curve coming from the \(\mathbb{C}^{*}\) action, and the analytic compactification of Hitchin's moduli space of solutions to the \(\mathsf{SU}(2)\) self-duality equations on a Riemann surface obtained by adding solutions to the decoupled equations, known as "limiting configurations". This map extends the classical Kobayashi-Hitchin correspondence. The main result of this paper is that \(\overline{\Xi}\) fails to be continuous at the boundary over a certain subset of the discriminant locus of the Hitchin fibration. This suggests the possibility of a third, refined compactification which dominates both. ## 1. Introduction Let \(\Sigma\) be a closed Riemann surface of genus \(g\geq 2\). The coarse Dolbeault moduli space of \(\operatorname{SL}(2,\mathbb{C})\) semistable Higgs bundles on \(\Sigma\), denoted by \(\mathcal{M}_{\operatorname{Dol}}\), and Hitchin's moduli space of solutions to the \(\mathsf{SU}(2)\) self-duality equations on \(\Sigma\), denoted by \(\mathcal{M}_{\operatorname{Hit}}\), have been extensively studied since their introduction over 35 years ago. The Kobayashi-Hitchin correspondence, proved in [21], gives a homeomorphism between these two moduli spaces: \[\Xi:\mathcal{M}_{\operatorname{Dol}}\xrightarrow{\ \sim\ }\mathcal{M}_{ \operatorname{Hit}}\.\] The space \(\mathcal{M}_{\operatorname{Dol}}\) is naturally a quasiprojective variety [35, 45], and similarly to monopole moduli spaces, \(\mathcal{M}_{\operatorname{Hit}}\) fails to be compact. Recently, there has been interest from several directions on natural compactifications of these two spaces. A key feature on the Dolbeault side is the existence of a \(\mathbb{C}^{*}\) action with the Bialynicki-Birula property, and this may be used to define a completion of \(\mathcal{M}_{\operatorname{Dol}}\) as a projective variety [19, 7, 11]. The ideal points are identified with the \(\mathbb{C}^{*}\) orbits in the complement of the nilpotent cone of \(\mathcal{M}_{\operatorname{Dol}}\). The Hitchin moduli space also admits a more recently introduced compactification, \(\overline{\mathcal{M}}_{\operatorname{Hit}}\), based on the work of several authors (see [30, 33, 47]). The boundary of \(\overline{\mathcal{M}}_{\operatorname{Hit}}\) is given by gauge equivalence classes of _limiting configurations_. This compactification is relevant to many aspects of Hitchin's moduli space. For more details, we refer the reader to [10, 29, 14, 15, 36, 27, 3], and the references therein. By the work of [30, 33], there is a natural extension \[\overline{\Xi}:\overline{\mathcal{M}}_{\operatorname{Dol}}\longrightarrow \overline{\mathcal{M}}_{\operatorname{Hit}}\] of the Kobayashi-Hitchin correspondence to the two compactifications described above, and it is of interest to study the geometry of this map. This involves another key feature of Hitchin's moduli space; namely, spectral curves. Spectral curves and spectral data [22] play a central role in the realization of the Dolbeault moduli space as an algebraically complete integrable system \(\mathcal{H}:\mathcal{M}_{\operatorname{Dol}}\to\mathcal{B}\). 
In the case of \(\operatorname{SL}(2,\mathbb{C})\), the base \(\mathcal{B}\) is the space of holomorphic quadratic differentials on \(\Sigma\). Given \(q\in H^{0}(K^{2})\), one obtains a (scheme theoretic) spectral curve \(S_{q}\). This curve is reduced if \(q\neq 0\), irreducible if \(q\) is not the square of an abelian differential, and smooth if \(q\) has simple zeros. We let \(\mathcal{B}^{\operatorname{reg}}\subset\mathcal{B}\) denote the open cone of quadratic differentials with simple zeros. The ideal points of both compactifications \(\overline{\mathcal{M}}_{\mathrm{Dol}}\) and \(\overline{\mathcal{M}}_{\mathrm{Hit}}\) have associated nonzero quadratic differentials, and therefore spectral curves. We write \(\overline{\mathcal{M}}_{\mathrm{Dol}}^{\mathrm{reg}}\) for the elements in \(\overline{\mathcal{M}}_{\mathrm{Dol}}\) with smooth spectral curves, and \(\overline{\mathcal{M}}_{\mathrm{Dol}}^{\mathrm{sing}}=\overline{\mathcal{M}}_ {\mathrm{Dol}}\setminus\overline{\mathcal{M}}_{\mathrm{Dol}}^{\mathrm{reg}}\) for those with singular spectral curves; similarly for \(\overline{\mathcal{M}}_{\mathrm{Hit}}^{\mathrm{reg}}\) and \(\overline{\mathcal{M}}_{\mathrm{Hit}}^{\mathrm{sing}}\). We then have the following result. **Theorem 1.1**.: _The restriction of the compactified Kobayashi-Hitchin map \(\overline{\Xi}:\overline{\mathcal{M}}_{\mathrm{Dol}}\to\overline{\mathcal{M}}_ {\mathrm{Hit}}\) to the locus with smooth associated spectral curves defines a homeomorphism \(\overline{\mathcal{M}}_{\mathrm{Dol}}^{\mathrm{reg}}\simeq\overline{\mathcal{M }}_{\mathrm{Hit}}^{\mathrm{reg}}\). On the singular spectral curve locus, however, \(\overline{\Xi}^{\mathrm{sing}}:\overline{\mathcal{M}}_{\mathrm{Dol}}^{ \mathrm{sing}}\to\overline{\mathcal{M}}_{\mathrm{Hit}}^{\mathrm{sing}}\) is neither surjective nor injective._ It is convenient to analyze the behavior along rays in \(\mathcal{B}\), where the spectral curve is simply rescaled. Let \(q\neq 0\) be a quadratic differential and \(\overline{\mathcal{M}}_{\mathrm{Dol},q}\) (resp. \(\overline{\mathcal{M}}_{\mathrm{Hit},q}\)) be the points in \(\overline{\mathcal{M}}_{\mathrm{Dol}}\) (resp. \(\overline{\mathcal{M}}_{\mathrm{Hit}}\)) with spectral curves \(S_{tq}\), \(t\in\mathbb{C}^{*}\). The restriction of \(\overline{\Xi}\) gives us the map \(\overline{\Xi}_{q}:\overline{\mathcal{M}}_{\mathrm{Dol},q}\to\overline{ \mathcal{M}}_{\mathrm{Hit},q}\). We shall study the continuous behavior of \(\overline{\Xi}_{q}\) for points in the fiber of \(tq\) as \(t\to\infty\). For convenience, we set \(\mathcal{M}_{q^{*}}:=\overline{\mathcal{M}}_{\mathrm{Dol},q}\cap\mathcal{M}_{ \mathrm{Dol}}\). When \(q\) is irreducible, i.e. not a square, all elements in \(\mathcal{M}_{q}\) are stable. Via the Hitchin [23] and Beauville-Narasimhan-Ramanan (BNR) correspondence [1], this reduces the description of the fiber \(\mathcal{H}^{-1}(q)\) to the characterization of rank 1 torsion free sheaves on the integral curve \(S_{q}\). In [38], parameter spaces for rank 1 torsion free sheaves on algebraic curves with Gorenstein singularities were studied in the context of compactified Jacobians, and the crucial notion of a parabolic module was introduced. This was extensively investigated by Cook in [5, 4], partially following ideas of Bhosle [2]. For simple plane curve singularities of the type appearing in spectral curves, one makes use of the local classification of torsion free modules of Greuel-Knorrer [17]. 
These methods were applied to study the Hitchin fibration by Gothen-Oliveira in [16] (see also [28] for recent study). In parallel, Horn [24] defines a stratification of \(\mathcal{M}_{q}=\bigcup_{D}\mathcal{M}_{q,D}\) by certain effective divisors contained in the divisor of \(q\) (see Section 5.5, and also [26] for the more general situation). Using the results from these references, we have a reinterpretation of Mochizuki's construction [33]. This leads to the following result. **Theorem 1.2**.: _Let \(q\neq 0\) be an irreducible quadratic differential._ 1. _If_ \(q\) _has only zeros of odd order, then_ \(\overline{\Xi}_{q}\) _is continuous._ 2. _If_ \(q\) _has at least one zero of even order, then for each_ \(D\neq 0\) _there exists an integer_ \(n_{D}\geq 1\) _so that for any Higgs bundle_ \((\mathcal{F},\psi)\in\mathcal{M}_{q,D}\)_, there exist_ \(n_{D}\) _sequences of Higgs bundles_ \((\mathcal{E}_{i}^{k},\varphi_{i}^{k})\) _with_ \(k=1,\ldots,n_{D}\) _satisfying_ * \(\lim_{i\to\infty}(\mathcal{E}_{i}^{k},\varphi_{i}^{k})=(\mathcal{F},\psi)\) _for_ \(k=1,\ldots,n_{D}\)_,_ * _and if we write_ \[\eta^{k}:=\lim_{i\to\infty}\overline{\Xi}_{q}(\mathcal{E}_{i}^{k},\varphi_{i}^{ k})\quad,\quad\xi:=\lim_{i\to\infty}\overline{\Xi}_{q}(\mathcal{F},t_{i}\psi)\,\] _for some sequence_ \(t_{i}\in\mathbb{R}^{+}\)_,_ \(t_{i}\to+\infty\)_, then_ \(\xi,\eta^{1},\ldots,\eta^{n_{D}}\) _are_ \(n_{D}+1\) _different limiting configurations._ When \(q\) is reducible, the description of Higgs bundles in \(\mathcal{M}_{q}\) becomes more complicated because, among other things, of the existence of strictly semistable objects. To understand this, we use the local descriptions of Gothen-Oliveira and Mochizuki (see [16, 33]). Our result, which focuses on the stable locus, is the following. **Theorem 1.3**.: _Suppose \(q\neq 0\) is reducible. Let \(\overline{\mathcal{M}}_{\mathrm{Dol},q}^{\mathrm{st}}\) denote the stable locus of \(\overline{\mathcal{M}}_{\mathrm{Dol},q}\). If \(g\geq 3\), then the restriction map \(\overline{\Xi}_{q}|_{\overline{\mathcal{M}}_{\mathrm{Dol},q}^{\mathrm{st}}}\) is discontinuous. However, if \(g=2\), the map \(\overline{\Xi}_{q}|_{\overline{\mathcal{M}}_{\mathrm{Dol},q}^{\mathrm{st}}}\) is continuous._ We also note that there is recent work of Mochizuki and Szabo [34] on the asymptotic behavior for families of Higgs bundles in the higher rank case. This paper is organized as follows: in Section 2, we provide an overview of Higgs bundles and BNR correspondence. In Section 3, we delve into the concepts of filtered bundles and their compactness properties. Section 4 defines the algebraic and analytic compactifications. Section 5 introduces parabolic modules and examines their connection to spectral curves. The main results for Hitchin fibers with irreducible singular spectral curves are established in Section 6. In Section 7, the results for the reducible case are proven. Finally, in Section 8, we construct the compactified Kobayashi-Hitchin map and prove the main results. The Appendix, based on the work of Greuel-Knorrer, calculates some invariants of rank 1 torsion free sheaves on the spectral curves we consider. **Acknowledgements.** We extend our sincere gratitude to Takuro Mochizuki for his valuable insights and stimulating discussions during the BIRS conference in 2021. The authors also wish to express their gratitude to a great many people for their interest and helpful comments. 
Among them are Mark de Cataldo, Ron Donagi, Simon Donaldson, Laura Fredrickson, Johannes Horn, Laura Schaposnik, Shizhang Li, Jie Liu, Tony Pantev, Thomas Walpuski, Daxin Xu. S.H is supported by NSFC grant No.12288201. R.W.'s research is supported by NSF grants DMS-1906403 and DMS-2204346, and he thanks the Max Planck Institute for Mathematics in Bonn for its hospitality and financial support. The authors also gratefully acknowledge the support of the MSRI under NSF grant DMS-1928930 during the program "Analytic and Geometric Aspects of Gauge Theory", Fall 2022. ## 2. Background on Higgs bundles This section gives a very brief overview of the Dolbeault and Hitchin moduli spaces, spectral curve descriptions, and the nonabelian Hodge correspondence. For more details on these topics, see [21, 23, 44, 48]. ### Higgs bundles As in the Introduction, throughout this paper \(\Sigma\) will denote a closed Riemann surface of genus \(g\geq 2\) with structure sheaf \(\mathcal{O}=\mathcal{O}_{\Sigma}\) and canonical bundle \(K=K_{\Sigma}\). Let \(E\to\Sigma\) be a complex vector bundle. A Higgs bundle consists of a pair \((\mathcal{E},\varphi)\), where \(\mathcal{E}\) is a holomorphic bundle and \(\varphi\in H^{0}(\operatorname{End}(\mathcal{E})\otimes K)\). If \(\operatorname{rank}(\mathcal{E})=1\), then a Higgs field is just an abelian differential \(\omega\). The pair \((\mathcal{E},\varphi)\) is called an \(\operatorname{SL}(2,\mathbb{C})\) Higgs bundle if \(\operatorname{rank}(E)=2\), \(\det(\mathcal{E})\) has a fixed isomorphism with the trivial bundle, and \(\operatorname{Tr}(\varphi)=0\). In this paper we will focus mainly on \(\operatorname{SL}(2,\mathbb{C})\) Higgs bundles, but the rank 1 case will also be important. Let \((\mathcal{E},\varphi)\) be an \(\operatorname{SL}(2,\mathbb{C})\) Higgs bundle. A (proper) Higgs subbundle of \((\mathcal{E},\varphi)\) is a holomorphic line bundle \(\mathcal{L}\subset\mathcal{E}\) that is \(\varphi\)-invariant, i.e. \(\varphi:\mathcal{L}\to\mathcal{L}\otimes K\). In this case, the restriction \(\varphi_{\mathcal{L}}:=\varphi\big{|}_{\mathcal{L}}\), makes \((\mathcal{L},\varphi_{\mathcal{L}})\) a rank 1 Higgs bundle. Moreover, \(\varphi\) induces a Higgs bundle structure on the quotient \(\mathcal{E}/\mathcal{L}\). We say \((\mathcal{E},\varphi)\) is stable (resp. semistable) if for all Higgs subbundles \(\mathcal{L}\), \(\deg\mathcal{L}<0\) (resp. \(\deg\mathcal{L}\leq 0\)). We say \((\mathcal{E},\varphi)\) is polystable if \((\mathcal{E},\varphi)\simeq(\mathcal{L},\omega)\oplus(\mathcal{L}^{-1},-\omega)\), where \(\mathcal{L}\) is a degree zero holomorphic line bundle and \(\omega\in H^{0}(K)\). If \((\mathcal{E},\varphi)\) is strictly semistable, i.e. semistable but not stable, the Seshadri filtration [40] gives a unique Higgs subbundle \(0\subset(\mathcal{L},\omega)\subset(\mathcal{E},\varphi)\) with \(\deg(\mathcal{L})=\frac{1}{2}\deg(\mathcal{E})=0\). Write \((\mathcal{L}^{\prime},\omega^{\prime}):=(\mathcal{E},\varphi)/(\mathcal{L},\omega)\), then we have \(\omega^{\prime}=-\omega\) and \(\mathcal{L}^{\prime}=\mathcal{L}^{-1}\). The associated graded bundle \(\operatorname{Gr}(\mathcal{E},\varphi)=(\mathcal{L},\omega)\oplus(\mathcal{L }^{-1},-\omega)\) of this filtration is a polystable \(\operatorname{SL}(2,\mathbb{C})\) Higgs bundle. We say that \((\mathcal{E},\varphi)\) is S-equivalent to \(\operatorname{Gr}(\mathcal{E},\varphi)\). 
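A standard example, added here purely as an illustration (it is not taken from this paper's text), shows how stability of the pair \((\mathcal{E},\varphi)\) can differ from stability of the underlying bundle; it assumes a fixed theta characteristic \(K^{1/2}\).

```latex
% Illustration (standard example, stated under the assumption of a chosen
% square root K^{1/2} of K): a stable SL(2,C) Higgs bundle whose underlying
% holomorphic bundle is unstable.
\[
  \mathcal{E} \;=\; K^{1/2}\oplus K^{-1/2},
  \qquad
  \varphi \;=\; \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}
  \;\in\; H^{0}\!\bigl(\operatorname{End}(\mathcal{E})\otimes K\bigr),
\]
% where "1" denotes the canonical isomorphism K^{1/2} -> K^{-1/2} \otimes K.
% Any \varphi-invariant line subbundle must lie in K^{-1/2}: for a local
% section (s,t) with s not identically zero, \varphi(s,t) = (0,s) cannot be
% a multiple of (s,t) by a section of K.  Hence the only invariant subbundle
% is K^{-1/2}, of degree 1-g < 0, so (E,\varphi) is stable -- even though E
% itself is destabilised by K^{1/2}, which has degree g-1 > 0 but is not
% \varphi-invariant.
```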
Holomorphic bundles \(\mathcal{E}\) with underlying \(C^{\infty}\) bundle \(E\) are in 1-1 correspondence with \(\bar{\partial}\)-operators \(\bar{\partial}_{E}:\Omega^{0}(E)\to\Omega^{0,1}(E)\). We use the notation \(\mathcal{E}:=(E,\bar{\partial}_{E})\). Let \(\mathcal{C}\) denote the space of pairs \((\bar{\partial}_{E},\varphi)\), \(\bar{\partial}_{E}\varphi=0\). Let \(\mathcal{C}^{s}\) and \(\mathcal{C}^{ss}\) denote the subspaces of \(\mathcal{C}\) where the Higgs bundles are stable (resp. semistable). The complex gauge transformation group \(\mathcal{G}_{\mathbb{C}}:=\mathrm{Aut}(E)\) has a right action on \(\mathcal{C}\) by defining for \(g\in\mathcal{G}_{\mathbb{C}}\), \((\bar{\partial}_{E},\varphi)g:=(g^{-1}\circ\bar{\partial}\circ g,g^{-1}\circ \varphi\circ g)\). There is a quasiprojective scheme \(\mathcal{M}_{\mathrm{Dol}}\) whose closed points are in 1-1 correspondence with polystable Higgs bundles constructed via (finite dimensional) Geometric Invariant Theory (see [35, 45]). In [12] it was shown that the infinite dimensional quotient \(\mathcal{C}^{\mathrm{ss}}\mathbin{/\!\!/}\ \mathcal{G}_{\mathbb{C}}\), where the double slash indicates that S-equivalent orbits are identified, admits the structure of a complex analytic space that is biholomorphic to the analytification \(\mathcal{M}_{\mathrm{Dol}}^{\mathrm{an}}\) of \(\mathcal{M}_{\mathrm{Dol}}\). Henceforth, we shall work in the complex analytic category, identify the algebro-geometric and gauge theoretic moduli spaces, and simply denote them both by \(\mathcal{M}_{\mathrm{Dol}}\). We note that the set of stable Higgs bundles modulo gauge transformations, \(\mathcal{M}_{\mathrm{Dol}}^{s}:=\mathcal{C}^{\mathrm{s}}\mathbin{/\!\!/}\ \mathcal{G}_{\mathbb{C}}\), is a geometric quotient and an open subset of \(\mathcal{M}_{\mathrm{Dol}}\). Finally, notice that the a pair \((\mathcal{E},\varphi)\) is stable (resp. semistable) if and only if the same is true for \((\mathcal{E},\lambda\varphi)\), \(\lambda\in\mathbb{C}^{*}\). Hence, \(\mathcal{M}_{\mathrm{Dol}}\) admits an action of \(\mathbb{C}^{*}\) that preserves \(\mathcal{M}_{\mathrm{Dol}}^{s}\). Though \(\mathcal{M}_{\mathrm{Dol}}\) is only quasiprojective, the \(\mathbb{C}^{*}\) action satisfies the Bialynicki-Birula property: **Theorem 2.1** ([21, 44]).: _For any \([(\mathcal{E},\varphi)]\in\mathcal{M}_{\mathrm{Dol}}\),_ \[\lim_{\lambda\to 0}\lambda\cdot[(\mathcal{E},\varphi)]:=\lim_{\lambda\to 0}[( \mathcal{E},\lambda\varphi)]\] _exists in \(\mathcal{M}_{\mathrm{Dol}}\)._ ### Spectral curves and the Hitchin fibration The Hitchin map is defined as \[\mathcal{H}:\mathcal{M}_{\mathrm{Dol}}\longrightarrow H^{0}(K^{2})\,\ [( \mathcal{E},\varphi)]\mapsto\det(\varphi)\,\] where \(H^{0}(K^{2})=:\mathcal{B}\) is known as the Hitchin base. Hitchin [21, 23] showed that \(\mathcal{H}\) is a proper map and a fibration by abelian varieties over the open cone \(\mathcal{B}^{\mathrm{reg}}\subset\mathcal{B}\) consisting of nonzero quadratic differentials with only simple zeros. The discriminant locus \(\mathcal{B}^{\mathrm{sing}}:=\mathcal{B}\setminus\mathcal{B}^{\mathrm{reg}}\) consists of quadratic differentials that are either identically zero or have at least one zero with multiplicity. For \(q\in\mathcal{B}\), let \(\mathcal{M}_{q}:=\mathcal{H}^{-1}(q)\). The "most singular fiber" \(\mathcal{M}_{0}\) is called the _nilpotent cone_. Consider the total space \(\mathrm{Tot}(K)\) of \(K\), along with its projection \(\pi:\mathrm{Tot}(K)\to\Sigma\). 
The pullback bundle \(\pi^{*}K\) has a tautological section, which we denote by \(\lambda\in H^{0}(\mathrm{Tot}(K),\pi^{*}K)\). Given any \(q\neq 0\in H^{0}(K^{2})\), the _spectral curve_\(S_{q}\) associated with \(q\) is the zero scheme of the section \(\lambda^{2}-\pi^{*}q\in H^{0}(\mathrm{Tot}(K),\pi^{*}K)\). This is a reduced, but possibly reducible, projective algebraic curve. The restriction of \(\pi\) to \(S_{q}\), also denoted by \(\pi:S_{q}\to\Sigma\), is a double covering branched along the zeros of \(q\). The spectral curve \(S_{q}\) is smooth if and only if \(q\) has only simple zeros. It is reducible if and only if \(q=-\omega\otimes\omega\) for some \(\omega\in H^{0}(K)\). We shall refer to such quadratic differentials as _reducible_, and _irreducible_ otherwise. There is a noteworthy observation regarding irreducible spectral curves. **Proposition 2.2** (cf. [23]).: _Let \((\mathcal{E},\varphi)\) be a Higgs bundle with \(q=\det(\varphi)\), and suppose \(q\) is irreducible. Then \((\mathcal{E},\varphi)\) has no proper invariant subbundles. In particular, \((\mathcal{E},\varphi)\) is stable._ Proof.: Suppose \(\mathcal{L}\subset\mathcal{E}\) is \(\varphi\)-invariant, and let \(\varphi_{\mathcal{L}}\) be the restriction. Then \[\det\varphi=-\frac{1}{2}\mathrm{Tr}(\varphi^{2})=-(\varphi_{\mathcal{L}})^{2}\,\] contradicting the assumption. Let us emphasize that being reducible is not the same as having only even zeros. To see this, suppose that \(\mathrm{Div}(q)=2\Lambda^{\prime}\). Then \(K\simeq\mathcal{O}(\Lambda^{\prime})\otimes\mathcal{I}\), where \(\mathcal{I}\) is a 2-torsion point in the Jacobian. The spectral curve \(S_{q}\) is reducible if and only if \(\mathcal{I}\) is trivial. ### Rank 1 torsion free sheaves and the BNR correspondence In this subsection, we provide some background on rank 1 torsion free sheaf theory over spectral curves in the context of the Hitchin and BNR correspondence, as developed in [23, 1]. Let \(S\) be a reduced and irreducible complex projective curve and \(\mathcal{O}_{S}\) its structure sheaf. The moduli space of invertible sheaves on \(S\) is denoted by \(\operatorname{Pic}(S)\), and \(\operatorname{Pic}^{d}(S)\subset\operatorname{Pic}(S)\) is the degree \(d\) component. If \(\mathcal{F}\) is a coherent analytic sheaf on \(S\), we can define its cohomology groups \(H^{i}(X,\mathcal{F})\). Since \(\dim S=1\), \(H^{i}(X,\mathcal{F})=0\) for \(i\geq 2\). The Euler characteristic is defined as \(\chi(\mathcal{F})=\dim H^{0}(X,\mathcal{F})-\dim H^{1}(X,\mathcal{F})\). The degree of a torsion free sheaf \(\mathcal{F}\) is given by \(\deg(\mathcal{F})=\chi(\mathcal{F})-\operatorname{rank}(\mathcal{F})\chi( \mathcal{O}_{S})\). If \(\mathcal{F}\) is locally free, then \(\deg(\mathcal{F})\) coincides with the degree of the invertible sheaf \(\det(\mathcal{F})\). Let \(\overline{\operatorname{Pic}}^{d}(S)\) be the moduli space of degree \(d\) rank 1 torsion free sheaves on \(S\), and \(\overline{\operatorname{Pic}}(S)=\prod_{d\in\mathbb{Z}}\overline{ \operatorname{Pic}}^{d}(S)\)[9]. Then \(\overline{\operatorname{Pic}}^{d}(S)\) is an irreducible projective scheme containing \(\operatorname{Pic}^{d}(S)\) as an open subscheme. When \(S\) is smooth, we have \(\overline{\operatorname{Pic}}^{d}(S)=\operatorname{Pic}^{d}(S)\). The relationship to Higgs bundles is given by the following. **Theorem 2.3** ([23, 1]).: _Let \(q\in H^{0}(K^{2})\) be an irreducible quadratic differential with spectral curve \(S_{q}\). 
There is a bijective correspondence between points in \(\overline{\operatorname{Pic}}(S_{q})\) and isomorphism classes of rank 2 Higgs pairs \((\mathcal{E},\varphi)\) with \(\operatorname{Tr}(\varphi)=0\) and \(\det(\varphi)=q\). Explicitly: if \(\mathcal{L}\in\overline{\operatorname{Pic}}(S_{q})\), then \(\mathcal{E}:=\pi_{*}(\mathcal{L})\) is a rank 2 vector bundle, and the homomorphism \(\pi_{*}\mathcal{L}\to\pi_{*}\mathcal{L}\otimes K\cong\pi_{*}(\mathcal{L} \otimes\pi^{*}K)\) given by multiplication by the canonical section \(\lambda\) defines the Higgs field \(\varphi\)._ This correspondence gives the very useful exact sequence \[0\to\mathcal{L}(-\Delta)\to\pi^{*}\mathcal{E}\xrightarrow{\pi^{*}\varphi- \lambda}\pi^{*}\mathcal{E}\otimes\pi^{*}K\to\mathcal{L}\otimes\pi^{*}K\to 0\, \tag{1}\] where \(\mathcal{O}_{S}(\Delta):=K_{S}\otimes\pi^{*}K_{\Sigma}^{-1}\) is the ramification divisor. This will be used below in Section 6. Let \(q\) be a quadratic differential with only simple zeros, and define a divisor on \(S\) by \(\Lambda=\operatorname{Div}(\lambda)\). Then \(\Lambda\) is the ramification divisor of the map \(\pi:S\to\Sigma\). By the Riemann-Hurwitz formula, the genus of \(S\) is \(g(S)=4g-3\). Furthermore, for any \(\mathcal{L}\in\operatorname{Pic}(S)\), Riemann-Roch gives \(\deg(\pi_{*}\mathcal{L})=\deg(\mathcal{L})-(2g-2).\) The \(\operatorname{SL}(2,\mathbb{C})\) Higgs bundles are characterized by \[\mathcal{T}:=\{\mathcal{L}\in\operatorname{Pic}^{2g-2}(S)\mid\det(\pi_{*} \mathcal{L})=\mathcal{O}_{\Sigma}\}. \tag{2}\] By the Hitchin-BNR correspondence (Theorem 2.3), the map \(\chi_{BNR}:\mathcal{T}\to\mathcal{M}_{q}\) is a bijection. The branched double cover \(\pi:S\to\Sigma\) is given by an involution \(\sigma:S\to S\). We have the norm map \(\operatorname{Nm}_{S/\Sigma}:\operatorname{Jac}(S)\to\operatorname{Jac}(\Sigma)\), where \(\operatorname{Nm}_{S/\Sigma}(\mathcal{O}_{S}(D)):=\mathcal{O}_{\Sigma}(\pi(D))\). The Prym variety is defined as \[\operatorname{Prym}(S/\Sigma):=\ker(\operatorname{Nm}_{S/\Sigma})=\{\mathcal{ L}\in\operatorname{Pic}(S)\mid\mathcal{L}\otimes\sigma^{*}\mathcal{L}= \mathcal{O}_{S}\}\.\] Also, we have \(\det(\pi_{*}\mathcal{L})\cong\operatorname{Nm}_{S/\Sigma}(\mathcal{L})\otimes K ^{-1}\). Thus, \(\mathcal{T}\) can be expressed as \[\mathcal{T}=\{\mathcal{L}\in\operatorname{Pic}^{2g-2}(S)\mid\operatorname{Nm} _{S/\Sigma}(\mathcal{L})\cong K\}\.\] Hence, \(\mathcal{T}\) is a torsor over \(\operatorname{Prym}(S,\Sigma)\). Explicitly, by choosing \(\mathcal{L}_{0}\in\mathcal{T}\), we obtain an isomorphism \(\mathcal{T}\xrightarrow{\sim}\operatorname{Prym}(S,\Sigma)\) given by \(\mathcal{L}\to\mathcal{L}\otimes\mathcal{L}_{0}^{-1}\). To summarize, we have the following: **Proposition 2.4**.: _Let \(q\) be a quadratic differential with simple zeros. Then \(\mathcal{M}_{q}\cong\mathcal{T}\cong\operatorname{Prym}(S,\Sigma)\)._ If \(q\neq 0\) is irreducible but nongeneric, the spectral curve \(S\) is singular and irreducible. We may still define the set \(\overline{\mathcal{T}}\subset\overline{\mathrm{Pic}}^{2g-2}(S)\) as follows: \[\overline{\mathcal{T}}:=\{\mathcal{L}\in\overline{\mathrm{Pic}}^{2g-2}(S)\mid \det(\pi_{*}\mathcal{L})\cong\mathcal{O}_{\Sigma}\}\,\] We also set \(\mathcal{T}:=\overline{\mathcal{T}}\cap\mathrm{Pic}^{2g-2}\).Then \(\overline{\mathcal{T}}\) is the natural compactification of \(\mathcal{T}\) induced by the inclusion \(\mathrm{Pic}^{2g-2}(S)\subset\overline{\mathrm{Pic}}(S)\). 
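For the reader's convenience, the degree formula \(\deg(\pi_{*}\mathcal{L})=\deg(\mathcal{L})-(2g-2)\) quoted above follows from a short Riemann-Roch comparison; the derivation below is a worked note added for clarity and uses only facts already stated (that \(\pi\) is finite and \(g(S)=4g-3\)).

```latex
% Since \pi : S -> \Sigma is finite, Euler characteristics agree:
%   \chi(S, L) = \chi(\Sigma, \pi_* L).
\[
  \chi(S,\mathcal{L}) \;=\; \deg\mathcal{L} + 1 - g(S) \;=\; \deg\mathcal{L} + 4 - 4g,
  \qquad
  \chi(\Sigma,\pi_{*}\mathcal{L}) \;=\; \deg(\pi_{*}\mathcal{L}) + 2(1-g).
\]
% Equating the two expressions gives
\[
  \deg(\pi_{*}\mathcal{L}) \;=\; \deg\mathcal{L} + 4 - 4g - 2(1-g) \;=\; \deg\mathcal{L} - (2g-2).
\]
% In particular, L in Pic^{2g-2}(S) yields deg(\pi_* L) = 0, consistent with
% the SL(2,C) condition det(\pi_* L) = O_\Sigma in the definition of T.
```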
The BNR correspondence, as stated in Theorem 2.3, implies that \(\chi_{\mathrm{BNR}}:\overline{\mathcal{T}}\to\mathcal{M}_{q}\) is an isomorphism. ### The Hitchin moduli space and the nonabelian Hodge correspondence We now recall the well-known nonabelian Hodge correspondence (NAH), which relates the space of flat \(\mathrm{SL}(2,\mathbb{C})\) connections, Higgs bundles, and solutions to the Hitchin equations. This result was developed in the work of Hitchin [21], Simpson [42], Corlette [6], and Donaldson [8]. As above, let \(E\) be a trivial, smooth, rank \(2\) vector bundle over a Riemann surface \(\Sigma\), and let \(H_{0}\) be a fixed Hermitian metric on \(E\). We denote by \(\mathfrak{sl}(E)\) (resp. \(\mathfrak{su}(E)\)) the bundle of traceless (resp. traceless skew-hermitian) endomorphisms of \(E\). Let \(A\) be a unitary (with respect to \(H_{0}\)) connection on \(E\) that induces the trivial connection on \(\det E\), and let \(\phi\in\Omega^{1}(\mathfrak{isu}(E))\). We will sometimes also refer to \(\phi\) as a Higgs field. The Hitchin equations for the pair \((A,\phi)\) are given by: \[F_{A}+\phi\wedge\phi=0\,\ d_{A}\phi=d_{A}^{*}\phi=0. \tag{3}\] If we split the Higgs field into type: \(\phi=\varphi+\varphi^{\dagger}\), with \(\varphi\in\Omega^{1,0}(\mathfrak{sl}(E))\), then (3) is equivalent to: \[F_{A}+[\varphi,\varphi^{\dagger}]=0\,\ \bar{\partial}_{A}\varphi=0. \tag{4}\] Notice that \((\bar{\partial}_{E},\varphi)\) then defines an \(\mathrm{SL}(2,\mathbb{C})\) Higgs bundle. The Hitchin moduli space, denoted by \(\mathcal{M}_{\mathrm{Hit}}\), is the moduli space of solutions to the Hitchin equation, given by \[\mathcal{M}_{\mathrm{Hit}}:=\{(A,\phi)\ |\ (A,\phi)\ \mathrm{satisfies}\ (\ref{eq:Hitchin})\}/ \mathcal{G},\] where \(\mathcal{G}\) is the gauge group of unitary automorphisms of \(E\). Recall that a flat connection \(\mathcal{D}\) is called completely reducible if and only if it is a direct sum of irreducible flat connections. The NAH can be summarized as follows: **Theorem 2.5** ([21, 43, 6, 8]).: _A Higgs bundle \((\mathcal{E},\varphi)\) is polystable if and only if there exists a Hermitian metric \(H\) such that the corresponding Chern connection \(A\) and Higgs field \(\phi=\varphi+\varphi^{\dagger}\) solve the Hitchin equations (3). Moreover, the connection \(\mathcal{D}\) defined by \(\mathcal{D}=\nabla_{A}+\phi\) is a completely reducible flat connection, and it is irreducible if and only if \((\mathcal{E},\varphi)\) is stable._ _Conversely, a flat connection \(\mathcal{D}\) is completely reducible if and only if there exists a Hermitian metric \(H\) on \(E\) such that when we express \(\mathcal{D}=\nabla_{A}+\varphi+\varphi^{\dagger}\), we have \(\bar{\partial}_{\mathcal{E}}\varphi=0\). Moreover, the corresponding Higgs bundle \((\mathcal{E},\varphi)\) is polystable, and it is stable if and only if \(\mathcal{D}\) is irreducible._ The nonabelian Hodge correspondence defines the following Kobayashi-Hitchin homeomorphism \[\Xi:\mathcal{M}_{\mathrm{Dol}}\to\mathcal{M}_{\mathrm{Hit}}\,\] which is a diffeomorphism to irreducible solutions of (3) when restricted to the stable locus. Finally, we note that there is an action of \(S^{1}\) on \(\mathcal{M}_{\mathrm{Hit}}\) defined by \((A,\phi)\to(A,e^{i\theta}\cdot\phi)\), where \(e^{i\theta}\cdot\phi=e^{i\theta}\varphi+e^{-i\theta}\varphi^{\dagger}\). With respect to this and the \(S^{1}\subset\mathbb{C}^{*}\) action on \(\mathcal{M}_{\mathrm{Dol}}\), the map \(\Xi\) is \(S^{1}\)-equivariant. ## 3. 
Filtered bundles and compactness Filtered (or parabolic) bundles are described, for example, in [43]. They play a key role in the analytic compactification. This section provides a brief overview of filtered line bundles and demonstrates a compactness result. ### Filtered line bundles and nonabelian Hodge Let \(Z\) be a finite collection of distinct points on a closed Riemann surface \(\Sigma\), and let \(\Sigma^{\prime}=\Sigma\setminus Z\). Viewing \(\Sigma\) as a projective algebraic curve, an algebraic line bundle \(L\) over \(\Sigma^{\prime}\) is a line bundle defined by regular transition functions on Zariski open sets over \(\Sigma^{\prime}\). The sheaf of sections of \(L\) can be extended in infinitely many different ways over \(Z\) to obtain coherent (invertible) sheaves on \(\Sigma\). The sections of \(L\) are then realized as meromorphic sections of an extension, regular on \(\Sigma^{\prime}\). A _filtered line bundle_\(\mathcal{F}_{*}(L)\) is an algebraic line bundle \(L\) with a collection of coherent extensions \(L_{\alpha}\) across the punctures \(Z\) such that \(L_{\alpha}\subset L_{\beta}\) for \(\alpha\geq\beta\), for a fixed sufficiently small \(\epsilon\), \(L_{\alpha-\epsilon}=L_{\alpha}\), and \(L_{\alpha}=L_{\alpha+1}\otimes\mathcal{O}_{\Sigma}(Z)\). Let \(\operatorname{Gr}_{\alpha}=L_{\alpha+\epsilon}/L_{\alpha}\) denote the quotient (torsion) sheaf. A value \(\alpha\) where \(\operatorname{Gr}_{\alpha}\neq 0\) is called a jump. Since we are considering line bundles, for each \(p\) in the support of \(\operatorname{Gr}_{\alpha_{p}}\), there is exactly one jump \(\alpha_{p}\) in the interval \([0,1)\). The collection of jumps \(\{\alpha_{p}\}\) fully determines the filtered bundle structure. If we denote by \(\mathcal{L}:=L_{0}\), the degree of a filtered line bundle is defined as \[\deg(\mathcal{F}_{*}(L))=\deg(\mathcal{L})+\sum_{p\in Z}\alpha_{p}\.\] Alternatively, a _weighted line bundle_ is a pair \((\mathcal{L},\chi)\) where \(\mathcal{L}\to\Sigma\) is a holomorphic line bundle and \(\chi:Z\to\mathbb{R}\) is a weight function. The degree of a weighted bundle is defined as \[\deg(\mathcal{L},\chi)=\deg(\mathcal{L})+\sum_{p\in Z}\chi_{p}\.\] The concepts of filtered line bundles and weighted line bundles are nearly equivalent. Namely, given a filtered line bundle \(\mathcal{F}_{*}(L)\), we define \(\mathcal{L}:=\mathcal{L}_{0}\) and \(\chi_{p}=\alpha_{p}\). Conversely, given a weighted line bundle \((\mathcal{L},\chi)\), let \(\alpha_{p}=\chi_{p}+n_{p}\), where \(n_{p}\in\mathbb{Z}\) is the unique integer so that \(0\leq\chi_{p}+n_{p}<1\). A filtered bundle \(\mathcal{F}_{*}(L)\) is then determined by setting \(L_{0}=\mathcal{L}(-\sum_{p\in Z}n_{p}p)\) with jumps \(\alpha_{p}\). Clearly, \(\deg(\mathcal{F}(L))=\deg(\mathcal{L},\chi)\). We shall use the notation \(\mathcal{F}_{*}(\mathcal{L},\chi)\) for the filtered bundle associated to a weighted bundle \((\mathcal{L},\chi)\) in this way. Different weighted bundles can give rise to the same filtered bundle. The following is a fact that will be frequently used in this paper. If \(D=\sum_{x\in Z}d_{x}x\) is a divisor supported on \(Z\), let \[\chi_{D}(x):=\begin{cases}d_{x}\,&x\in Z\ ;\\ 0\,&x\in\Sigma\setminus Z\.\end{cases}\] Then for any weighted bundle \((\mathcal{L},\chi)\) we have \(\mathcal{F}_{*}(\mathcal{L}(D),\chi-\chi_{D})=\mathcal{F}_{*}(\mathcal{L},\chi)\). Let \((\mathcal{L}_{1},\chi_{1})\) and \((\mathcal{L}_{2},\chi_{2})\) be two weighted lines bundles. 
We define the tensor product \[(\mathcal{L}_{1},\chi_{1})\otimes(\mathcal{L}_{2},\chi_{2}):=(\mathcal{L}_{1} \otimes\mathcal{L}_{2},\chi_{1}+\chi_{2})\.\] Then the degree is additive on tensor products. For filtered bundles, we _define_ \[\mathcal{F}_{*}(\mathcal{L}_{1},\chi_{1})\otimes\mathcal{F}_{*}(\mathcal{L}_{2},\chi_{2}):=\mathcal{F}_{*}(\mathcal{L}_{1}\otimes\mathcal{L}_{2},\chi_{1}+ \chi_{2}). \tag{5}\] The degree is again additive for the tensor product of filtered bundles. This agrees with the usual definition of tensor product for parabolic bundles. ### Harmonic metrics for weighted line bundles **Proposition 3.1**.: _Let \((\mathcal{L},\chi)\) be a degree 0 weighted bundle. Then there exists a Hermitian metric \(h\) on \(\mathcal{L}_{\Sigma^{\prime}}\) such that:_ 1. _the Chern connection_ \(A_{h}\) _of_ \((\mathcal{L},h)\) _is flat:_ \(F_{A_{h}}=0\)_;_ 2. _for_ \(p\in Z\)_, and_ \((U_{p},z)\) _a holomorphic coordinate centered at_ \(p\)_,_ \(|z|^{-2\chi_{p}}h\) _extends to a_ \(\mathcal{C}^{\infty}\) _Hermitian metric on_ \(\mathcal{L}|_{U_{p}}\)_;_ 3. \(h\) _is uniquely determined up to a multiplication by a nonzero constant._ Proof.: We first choose a background Hermitian metric \(h_{0}\) such that \(|z|^{-2\chi_{p}}h_{0}\) defines a \(\mathcal{C}^{\infty}\) Hermitian metric defined on \(U_{p}\). Let \(A_{h_{0}}\) be the Chern connection, and \(F_{A_{0}}\) the curvature. Note that \(F_{A_{0}}\) is smooth on \(\Sigma\). By the Poincare-Lelong formula, we have \(\frac{\sqrt{-1}}{2\pi}\int_{\Sigma}F_{A_{0}}=\deg(\mathcal{L},\chi)=0\). Therefore, there exists a \(\mathcal{C}^{\infty}\) function \(\rho\) such that \(\Delta\rho+\frac{\sqrt{-1}}{2\pi}\Lambda F_{A_{0}}=0\). We define \(h=h_{0}e^{\rho}\). For the corresponding Chern connection \(A_{h}\), we have \(F_{A_{h}}=0\), which implies (i). (ii) follows from the property for \(h_{0}\), since \(\rho\) is a smooth function on \(\Sigma\). As \(\rho\) is well-defined up to a constant, \(h\) is well-defined up to a constant, which implies (iii). The metric obtained above is called the _harmonic metric_. For a weighted bundle \((\mathcal{L},\chi)\), the holomorphic bundle \(\mathcal{L}\) and the harmonic metric \(h\) define a filtration as follows. For \(\epsilon>0\) sufficiently small, let \[L_{\alpha}:=\{s\in\mathcal{L}(*Z)\mid|s|_{h}\leq Cr^{\alpha-\epsilon}\text{ for some }C\}\.\] Here, \(r\) denotes the distance to \(Z\) in any smooth conformal metric on \(\Sigma\). It is straightforward to check that this defines a filtered bundle that matches \(\mathcal{F}_{*}(\mathcal{L},\chi)\) under the correspondence in the previous section. Even though the harmonic metric is only well-defined up to a constant, the Chern connection \(A=(\mathcal{L},h)\) is independent of this choice. The \((1,0)\) part of \(A\), denoted \(\nabla_{h}\), then defines logarithmic connections \(\nabla_{h}:L_{\alpha}\to L_{\alpha}\otimes K(Z)\). ### Convergence of weighted line bundles In this subsection, we will consider the convergence of weighted line bundles. The main result we prove here is a consequence of [34, Theorem 1.8]. For the reader's convenience, we present a short proof here for our situation. Let \((\Sigma_{0},g_{0})\) be a metrized Riemann surface (i.e. a Riemann surface \(\Sigma_{0}\) with conformal metric \(g_{0}\)). We view \(\Sigma_{0}\) as given by an underlying surface \(C\) with almost complex structure \(J_{0}\). 
Consider a neighborhood \(U_{1}\) of \(J_{0}\) in the moduli space of holomorphic structures and a neighborhood \(U_{2}\) of \(g_{0}\) in the space of smooth metrics. We denote the product of these neighborhoods by \(U=U_{1}\times U_{2}\). We can define the fiber bundle \(\operatorname{Pic}_{U}\to U\), where each fiber is the Picard group defined by the holomorphic structure. Let \((\Sigma_{t}=(C,J_{t}),g_{t})\) be a family of metrized Riemann surfaces that converge smoothly to \((\Sigma_{0},g_{0})\) as \(t\to 0\). Let \(Z_{t}\subset\Sigma_{t}\) be a collection of a finite number of points that converge to \(Z_{0}\) in suitable symmetric products of \(C\). For each \(p\in Z_{0}\), we can write \(Z_{t}=\cup_{p\in Z_{0}}Z_{t,p}\) such that all points in \(Z_{t,p}\) converge to \(p\). We define the convergence of weighted line bundles as follows: **Definition 3.2**.: _A family of weighted line bundles \((\mathcal{L}_{t},\chi_{t})\) over \(\Sigma_{t}\) with weights \(\chi_{t}:Z_{t}\to\mathbb{R}\) converges to \((\mathcal{L}_{0},\chi_{0})\) if_ 1. \(\mathcal{L}_{t}\) _converges to_ \(\mathcal{L}_{0}\) _in_ \(\operatorname{Pic}_{U}\)_,_ 2. _for all_ \(p\in Z_{0}\) _and_ \(t\) _sufficiently small,_ \(\sum_{q\in Z_{t,p}}\chi_{t}(q)=\chi_{0}(p)\)_._ A sequence of filtered bundles \(\mathcal{F}_{*}(\mathcal{L}_{t})\) converges to \(\mathcal{F}_{*}(\mathcal{L}_{0})\) if the corresponding weighted bundles converge. The following theorem provides insight into the compactness of a sequence of weighted line bundles: **Theorem 3.3**.: _Let \((\mathcal{L}_{t},\chi_{t})\) be a sequence of weighted line bundles over \(\Sigma_{t}\setminus Z_{t}\), where \(\deg(\mathcal{L}_{t},\chi_{t})=0\), and let \(h_{t}\) be the corresponding harmonic metrics. If \(Z_{t}\) converges to \(Z_{0}\), we write \(Z_{t}=\cup_{p\in Z_{0}}Z_{t,p}\). Then there exists a weighted line bundle \((\mathcal{L}_{0},\chi_{0})\) over \(Z_{0}\) with a harmonic metric \(h_{0}\) such that:_ 1. _After being rescaled by_ \(c_{t}>0\)_,_ \(c_{t}h_{t}\) _converges to_ \(h_{0}\) _over_ \(\Sigma_{0}\setminus Z_{0}\) _in the_ \(\mathcal{C}^{\infty}_{\mathrm{loc}}\) _sense._ 2. _Let_ \(\nabla_{t}\) _be the unitary connection of_ \(h_{t}\)_. Then on_ \(\Sigma_{0}\setminus Z_{0}\)_,_ \(\lim_{t\to 0}\nabla_{t}=\nabla_{0}\) _in_ \(\mathcal{C}^{\infty}_{\mathrm{loc}}\)_._ Proof.: By the assumptions on weights, \(\deg(\mathcal{L}_{t})\) is a fixed, \(t\)-independent constant. Let \(\gamma_{t}=(J_{t},g_{t})\) be a path in \(U\). Then \(\mathrm{Pic}_{U}|_{\gamma_{t}}\) is compact, and there exists an \(\mathcal{L}_{0}\in\mathrm{Pic}(\Sigma_{0})\) such that \(\mathcal{L}_{t}\) converges to \(\mathcal{L}_{0}\). For \(p\in Z_{0}\), define \(\chi_{0}(p)=\sum_{q\in Z_{t,p}}\chi_{t}(q)\), and thus obtain a weighted line bundle \((\mathcal{L}_{0},\chi_{0})\). We can choose a family of approximate harmonic metrics \(h^{\mathrm{app}}_{t}\), such that \(|z|^{-2\chi_{p}}h^{\mathrm{app}}_{t}\) extends to a smooth metric in a neighborhood of \(p\) and \(h^{\mathrm{app}}_{t}\) converges to \(h^{\mathrm{app}}_{0}\) in \(\mathcal{C}^{\infty}_{\mathrm{loc}}(\Sigma_{0}\setminus Z_{0})\). Moreover, we write \(h_{t}=h^{\mathrm{app}}_{t}e^{s_{t}}\). After a suitable rescale of \(h_{t}\), we can assume \(\|s_{t}\|_{L^{2}}=1\). Let \(\rho_{t}:=\Delta_{t}h^{\mathrm{app}}_{t}\) be the curvature defined by the metric \(h^{\mathrm{app}}_{t}\). Then \(s_{t}\) satisfies the equation \(\Delta_{t}s_{t}=\rho_{t}\) over \(\Sigma\). 
As \(\rho_{t}\) converges to \(\rho_{0}\in\mathcal{C}^{\infty}_{\mathrm{loc}}(\Sigma\setminus Z_{0})\), and \(g_{t}\) is a family with bounded geometry, we obtain the estimate \[\|s_{t}\|_{\mathcal{C}^{k+2,\alpha}(\Sigma)}\leq C_{k,\alpha}(\|\rho_{t}\|_{ \mathcal{C}^{k,\alpha}(\Sigma)}+1)\,\] where \(C_{k,\alpha}\) is a \(t\)-independent constant. Therefore, passing to a subsequence, \(s_{t}\) converges to \(s_{0}\) in \(\mathcal{C}^{\infty}(\Sigma)\), which implies (i). The assertion (ii) follows from (i). ## 4. The algebraic and analytic compactifications In this section, we introduce compactifications of the Dolbeault and Hitchin moduli spaces. ### The algebraic compactification of the Dolbeault moduli space In this subsection, we present the algebraic method for compactifying the Dolbeault moduli space. This technique is based on the \(\mathbb{C}^{*}\) action on \(\mathcal{M}_{\mathrm{Dol}}\), and was introduced in [41, 39, 19, 7, 27]. The gauge theoretic approach can be found in [11]. **Theorem 4.1** ([41, Thm. 11.2],[7]).: _Let \(V\) be an algebraic variety with \(\mathbb{C}^{*}\) action. Suppose_ 1. _the fixed point set of the_ \(\mathbb{C}^{*}\) _action is proper,_ 2. _for every_ \(t\in\mathbb{C}^{*},\;v\in V\)_, the limit_ \(\lim_{t\to 0}t\cdot v\) _exists._ _Then the space \(U:=\{v\in V\mid\lim_{t\to\infty}t\cdot v\text{ does not exist}\}\) is open in \(V\), and the quotient \(U/\mathbb{C}^{*}\) is separated and proper._ We apply this to the Dolbeault moduli space. The first step is to note that the possible isotropy subgroups are limited. **Lemma 4.2**.: _[_19_, Thm. 6.2]_ _Let \(\xi=[(\mathcal{E},\varphi)]\) be a Higgs bundle equivalence class with \(\mathcal{H}(\xi)\neq 0\). Then the stabilizer \(\Gamma_{\xi}\) of \(\xi\) for the \(\mathbb{C}^{*}\) action is either trivial or \(\mathbb{Z}/2\). The latter case holds if and only if \((\mathcal{E},\varphi)\) and \((\mathcal{E},-\varphi)\) are complex gauge equivalent._ Proof.: For \(t\in\Gamma_{\xi}\), \(\mathcal{H}(t\cdot\xi)=t^{2}\mathcal{H}(\xi)\), hence \(t^{2}=1\) if \(\mathcal{H}(\xi)\neq 0\). By this Lemma, the space \((\mathcal{M}_{\mathrm{Dol}}\setminus\mathcal{H}^{-1}(0))/\mathbb{C}^{*}\) has an orbifold structure. In passing, we note that the fixed points of the \(\mathbb{Z}/2\) action correspond to real representations under the nonabelian Hodge correspondence [21, Sec. 10]. By the properness of the Hitchin map \(\mathcal{H}\) (see Theorem 2.1), it follows that \(\lim_{t\to\infty}t\cdot\xi\) exists if and only if \(\mathcal{H}(\xi)=0\). Now define \[\overline{\mathcal{M}}_{\mathrm{Dol}}=\left\{(\mathcal{M}_{\mathrm{Dol}} \times\mathbb{C}^{*})\coprod(\mathcal{M}_{\mathrm{Dol}}\setminus\mathcal{H}^{- 1}(0))\right\}/\mathbb{C}^{*}. \tag{6}\] The analytic topology on the disjoint union is generated by open sets \(U\times W_{1}\) and \(V\times(W_{2}\cap\mathbb{C}^{*})\amalg V\cap(\mathcal{M}_{\mathrm{Dol}}\setminus \mathcal{H}^{-1}(0))\), where \(U,V\subset\mathcal{M}_{\mathrm{Dol}}\), \(W_{1},W_{2}\subset\mathbb{C}\) are open, and \(0\not\in W_{1}\), \(0\in W_{2}\). The topology on \(\overline{\mathcal{M}}_{\mathrm{Dol}}\) is then the quotient topology, and it is straightforward to see that with this topology, it is compact. 
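As an orienting toy model (an added illustration, not taken from the paper), applying the same recipe to a vector space with the standard weight-one scaling action recovers the familiar projective compactification of affine space, which is exactly the shape of the compactified Hitchin base introduced below.

```latex
% Toy model for definition (6): V = C^N with the weight-one C^* action.
% The fixed-point set {0} is proper, every t -> 0 limit exists, and no
% nonzero vector has a limit as t -> infinity, so U = V \ {0}.
% Using the C^*-equivariant decomposition
%   V x C \ {(0,0)} = (V x C^*) sqcup (V \ {0}),
\[
  \overline{V}
  \;=\; \bigl\{(V\times\mathbb{C}^{*})\ \textstyle\coprod\ (V\setminus\{0\})\bigr\}/\mathbb{C}^{*}
  \;\cong\; \bigl(V\times\mathbb{C}\setminus\{(0,0)\}\bigr)/\mathbb{C}^{*}
  \;=\; \mathbb{P}(V\oplus\mathbb{C}),
\]
% i.e. projective space compactifying V, with boundary P(V) -- the same
% pattern as \overline{B} = P(H^0(K^2) \oplus C) for the Hitchin base.
```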
Since \((\mathcal{M}_{\mathrm{Dol}}\times\mathbb{C}^{*})/\mathbb{C}^{*}=\mathcal{M}_{\mathrm{Dol}}\), there is a natural inclusion \[\iota:\mathcal{M}_{\mathrm{Dol}}\to\overline{\mathcal{M}}_{\mathrm{Dol}},\;\iota(\xi)=[(\xi,1)]\,\] where brackets denote the equivalence class under the \(\mathbb{C}^{*}\) action. The boundary of \(\overline{\mathcal{M}}_{\mathrm{Dol}}\) is \[\partial\overline{\mathcal{M}}_{\mathrm{Dol}}=\overline{\mathcal{M}}_{\mathrm{Dol}}\setminus\iota(\mathcal{M}_{\mathrm{Dol}})=(\mathcal{M}_{\mathrm{Dol}}\setminus\mathcal{H}^{-1}(0))/\mathbb{C}^{*}\.\] There is a _boundary map_ \[\iota_{\partial}:\mathcal{M}_{\mathrm{Dol}}\setminus\mathcal{H}^{-1}(0)\longrightarrow\partial\overline{\mathcal{M}}_{\mathrm{Dol}},\;\xi\mapsto[(\xi,0)]\,\] which is invariant under the \(\mathbb{C}^{*}\) action, i.e., \(\iota_{\partial}(\lambda\xi)=\iota_{\partial}(\xi)\) for \(\lambda\in\mathbb{C}^{*}\). The \(\mathbb{C}^{*}\) action on \(\mathcal{M}_{\mathrm{Dol}}\) covers the square of the action on \(\mathcal{B}\). Hence, it is natural to compactify \(\mathcal{B}\) by projectivizing: \[\overline{\mathcal{B}}:=\mathbb{P}(H^{0}(K^{2})\oplus\mathbb{C})\.\] The inclusion is given, as usual, by \[\iota_{0}:\mathcal{B}\to\overline{\mathcal{B}},\;\iota_{0}(q)=[q\times\{1\}]\,\] where \(q\times\{1\}\in H^{0}(K^{2})\oplus\mathbb{C}\). We also define \(\partial\overline{\mathcal{B}}=\overline{\mathcal{B}}\setminus\iota_{0}(\mathcal{B})\simeq\mathbb{P}(H^{0}(K^{2}))\), with boundary projection map \[\iota_{0,\partial}:\mathcal{B}\setminus\{0\}\to\partial\overline{\mathcal{B}}\,\;\iota_{0,\partial}(q)=[q\times\{0\}]\.\] The Hitchin map \(\mathcal{H}:\mathcal{M}_{\mathrm{Dol}}\to\mathcal{B}\) extends to \(\overline{\mathcal{H}}:\overline{\mathcal{M}}_{\mathrm{Dol}}\to\overline{\mathcal{B}}\), where \(\overline{\mathcal{H}}|_{\mathcal{M}_{\mathrm{Dol}}}:=\iota_{0}\circ\mathcal{H}\), and for every \([(\mathcal{E},\varphi)]/\mathbb{C}^{*}\in\partial\overline{\mathcal{M}}_{\mathrm{Dol}}\), \[\overline{\mathcal{H}}([(\mathcal{E},\varphi)]/\mathbb{C}^{*}):=[(\mathcal{H}(\varphi),0)]\subset\overline{\mathcal{B}}\.\] This is well defined, since \(\det(\varphi)\neq 0\) if \([(\mathcal{E},\varphi)]/\mathbb{C}^{*}\in\partial\overline{\mathcal{M}}_{\mathrm{Dol}}\). Moreover, the diagram formed by \(\mathcal{H}\), \(\overline{\mathcal{H}}\), and the inclusion and boundary maps commutes; in particular, \(\overline{\mathcal{H}}\circ\iota=\iota_{0}\circ\mathcal{H}\) and \(\overline{\mathcal{H}}\circ\iota_{\partial}=\iota_{0,\partial}\circ\mathcal{H}\) on \(\mathcal{M}_{\mathrm{Dol}}\setminus\mathcal{H}^{-1}(0)\). There is a good algebraic structure on this algebraic compactification: **Theorem 4.3** ([41, 39, 19, 7, 11]).: _The compactified space \(\overline{\mathcal{M}}_{\mathrm{Dol}}\) is a normal projective variety, and \(\partial\overline{\mathcal{M}}_{\mathrm{Dol}}\) is a Cartier divisor of \(\overline{\mathcal{M}}_{\mathrm{Dol}}\)._ The following characterization of sequential convergence is useful. **Proposition 4.4**.: _Let \([(\mathcal{E}_{i},\varphi_{i})]\in\mathcal{M}_{\mathrm{Dol}}\) be a sequence of Higgs bundles, and write \(q_{i}=\det(\varphi_{i})\) and \(r_{i}=\left\|q_{i}\right\|_{L^{2}}^{\frac{1}{2}}\). Suppose \(\limsup r_{i}=\infty\). Then up to subsequence:_ 1. _there exists a Higgs bundle_ \([(\widehat{\mathcal{E}}_{\infty},\hat{\varphi}_{\infty})]\) _with_ \(\hat{q}_{\infty}=\det(\hat{\varphi}_{\infty})\) _and_ \(\|\hat{q}_{\infty}\|_{L^{2}}=1\) _such that_ \(\lim_{i\to\infty}[(\mathcal{E}_{i},r_{i}^{-1}\varphi_{i})]=[(\widehat{\mathcal{E}}_{\infty},\hat{\varphi}_{\infty})]\) _in_ \(\mathcal{M}_{\mathrm{Dol}}\) _and_ \(\lim_{i\to\infty}r_{i}^{-1}q_{i}=\hat{q}_{\infty}\) _in_ \(H^{0}(K^{2})\)_;_ 2. 
\[\lim_{i\to\infty}\iota[(\mathcal{E}_{i},\varphi_{i})] =\iota_{\partial}[(\widehat{\mathcal{E}}_{\infty},\hat{\varphi}_{\infty})]\,\ \text{on}\ \overline{\mathcal{M}}_{\mathrm{Dol}}\,\] \[\lim_{i\to\infty}\iota_{0}(q_{i}) =\iota_{0,\partial}(\hat{q}_{\infty})\,\ \text{on}\ \overline{\mathcal{B}}\.\] Proof.: The first point follows since the Hitchin map \(\mathcal{H}\) is proper and \(\mathcal{H}(r_{i}^{-1}\varphi_{i})\) is bounded. The second follows directly from the definition. ### The analytic compactification of the Hitchin moduli space We next describe the compactification of the Hitchin moduli space, as developed in [29, 33, 46]. #### 4.2.1. Decoupled Hitchin equations We begin by defining the decoupled Hitchin equations. Recall the notation from Section 2.4: let \(E\) be a trivial, smooth, rank \(2\) vector bundle over a Riemann surface \(\Sigma\), and let \(H_{0}\) be a background Hermitian metric on \(E\). Let \(Z\) be a finite set of distinct points in \(\Sigma\). For a smooth unitary connection \(A\) on \(E|_{\Sigma\setminus Z}\) and smooth \(\phi=\varphi+\varphi^{\dagger}\in\Omega^{1}(i\mathfrak{su}(E))|_{\Sigma\setminus Z}\), the _decoupled Hitchin equations_ on \(\Sigma\setminus Z\) are: \[F_{A}=0\,\ [\varphi,\varphi^{\dagger}]=0\,\ \bar{\partial}_{A}\varphi=0. \tag{7}\] Solutions to (7) may be quite singular near \(Z\), so we make the following restriction: **Definition 4.5**.: _A solution \((A,\phi)\) to the decoupled Hitchin equations over \(\Sigma\setminus Z\) is called admissible if \(\phi\neq 0\), and \(|\phi|\) extends to a continuous function on \(\Sigma\) with \(|\phi|^{-1}(0)=Z\)._ By a _limiting configuration_ we always mean an admissible solution to the decoupled Hitchin equations. Clearly, \(Z\) is determined by \((A,\phi)\). Admissibility guarantees that \(\det(\varphi)\) extends to a holomorphic quadratic differential \(q=\det(\varphi)\) on \(\Sigma\), with \(Z=q^{-1}(0)\) the zero locus. Hence, the spectral curve \(S_{q}\) is well-defined. We emphasize that \(Z\) may vary for different admissible solutions, but one always has \(\#Z\leq 4g-4\). The equivalence relation on limiting configurations is that \((A_{1},\phi_{1})\sim(A_{2},\phi_{2})\) if \(Z_{1}=Z_{2}\) and \((A_{1},\phi_{1})g=(A_{2},\phi_{2})\) for a smooth unitary gauge transformation \(g\) on \(\Sigma\setminus Z_{1}\). The moduli space of decoupled Hitchin equations is then \[\mathcal{M}_{\mathrm{Hit}}^{\mathrm{Lim}}=\{\text{admissible solutions to (7)}\}/\sim\.\] We denote by \(\mathcal{M}_{\mathrm{Hit},q}^{\mathrm{Lim}}\) the elements in \(\mathcal{M}_{\mathrm{Hit}}^{\mathrm{Lim}}\) with a fixed quadratic differential \(q\). In this case, the equivalence relation is induced by the action of the unitary gauge group over \(\Sigma\setminus Z\), \(Z=q^{-1}(0)\). There is a natural \(\mathbb{C}^{*}\) action on the moduli space \(\mathcal{M}_{\mathrm{Hit}}^{\mathrm{Lim}}\): given \((A,\phi=\varphi+\varphi^{\dagger})\in\mathcal{M}_{\mathrm{Hit}}^{\mathrm{Lim}}\) and \(t\in\mathbb{C}^{*}\), we set \(t\cdot[(A,\phi)]=[(A,t\varphi+\bar{t}\varphi^{\dagger})]\), which is also a solution to (7). #### 4.2.2. Compactification of the Hitchin moduli space The following compactness result is due to Taubes [47] and Mochizuki [33] (see also [20]). **Proposition 4.6**.: _Let \((A_{i},\varphi_{i})\) be a sequence of solutions to (3), with \(q_{i}=\det(\varphi_{i})\in H^{0}(K^{2})\). Then_ 1. 
_if_ \(\limsup\|q_{i}\|_{L^{2}(\Sigma)}<\infty\)_, then there is a subsequence (also denoted_ \(\{i\}\)_), a smooth solution_ \((A_{\infty},\phi_{\infty})\) _to (_3_), and a sequence_ \(g_{i}\) _of smooth unitary gauge transformations on_ \(\Sigma\)_, such that_ \((A_{i},\phi_{i})g_{i}\) _converges smoothly to_ \((A_{\infty},\phi_{\infty})\) _on_ \(\Sigma\)_;_ 2. _if_ \(\lim\|q_{i}\|_{L^{2}(\Sigma)}=\infty\)_, then there is a subsequence (also denoted_ \(\{i\}\)_), and_ \(q_{\infty}\in H^{0}(K^{2})\) _so that_ \[\frac{q_{i}}{\|q_{i}\|_{L^{2}}}\longrightarrow q_{\infty}\] _over_ \(\Sigma\)_, and an admissible solution_ \((A_{\infty},\phi_{\infty}=\varphi_{\infty}+\varphi_{\infty}^{\dagger})\) _to (_7_), with_ \(Z_{\infty}:=q_{\infty}^{-1}(0)\)_, and smooth unitary gauge transformations_ \(g_{i}\) _on_ \(\Sigma\setminus Z_{\infty}\)_, such that over any open set_ \(\Omega\Subset\Sigma\setminus Z_{\infty}\)_,_ \((A_{i})g_{i}\to A_{\infty}\)_, and_ \[\frac{g_{i}^{-1}\phi_{i}g_{i}}{\|\phi\|_{L^{2}}}\longrightarrow\phi_{\infty}\] _smoothly on_ \(\Omega\)_._ The norm on \(H^{0}(\Sigma,K^{2})\) can be chosen arbitrarily, since it is a finite dimensional space. There is also a compactness result for sequences of solutions in \(\mathcal{M}^{\mathrm{Lim}}_{\mathrm{Hit}}\). **Proposition 4.7**.: _Let \([(A_{i},\phi_{i}=\varphi_{i}+\varphi_{i}^{\dagger})]\in\mathcal{M}^{\mathrm{Lim}} _{\mathrm{Hit}}\) be a sequence of admissible solutions to (7), and let \(q_{i}=\det(\varphi_{i})\) be the corresponding quadratic differentials. Then after passing to a subsequence, there are \(t_{i}\in\mathbb{C}^{*}\), a limiting configuration \((A_{\infty},\phi_{\infty}=\varphi_{\infty}+\varphi_{\infty}^{\dagger})\) with quadratic differential \(q_{\infty}=\det(\varphi_{\infty})\neq 0\), and a sequence \(g_{i}\) of smooth gauge transformations on \(\Sigma\setminus Z_{\infty}\), such that:_ 1. \(t_{i}^{2}q_{i}\) _converges smoothly to_ \(q_{\infty}\)_,_ 2. _over any open set_ \(\Omega\Subset X\setminus Z_{\infty}\)_,_ \((A_{i},t_{i}\cdot\phi_{i})g_{i}\) _converges smoothly to_ \((A_{\infty},\phi_{\infty})\)_._ Proof.: Write \(q_{i}=\det(\varphi_{i})\in H^{0}(K^{2})\). Adjusting by \(t_{i}\) if necessary, we may assume \(q_{i}\) converges to \(q_{\infty}\) over \(\Sigma\). Also, since \(F_{A_{i}}=0\) over \(\Sigma\setminus Z_{i}\) and \(Z_{i}\) converges to \(Z_{\infty}\), we can apply both Uhlenbeck compactness and the classical bootstrapping method to obtain \(A_{\infty}\) such that up to gauge \(A_{i}\) converges smoothly to \(A_{\infty}\) over \(\Sigma\setminus Z_{\infty}\). Finally, the convergence of \(\varphi_{i}\) follows by the bound on \(q_{i}\)'s. #### 4.2.3. The topology on the compactified space We now carefully define the topology on the space \(\mathcal{M}_{\mathrm{Hit}}\coprod\mathcal{M}^{\mathrm{Lim}}_{\mathrm{Hit}}/ \mathbb{C}^{*}\). Choose a metric in the conformal class on \(\Sigma\). Let \(W^{k,2}\) denote the Sobolev spaces on \(\Sigma\) of distributional sections with at least \(k\) derivatives in \(L^{2}\). For a finite set of points \(Z\subset\Sigma\) (or indeed any closed subset), \[W^{k,2}_{\mathrm{loc}}(\Sigma\setminus Z):=\{f\mid f\in W^{k,2}(K),\ K\subset \Sigma\setminus Z,\ K\ \mathrm{compact}\}.\] These definitions extend easily to the space of connections and \(\Omega^{1}(\textit{i}\mathfrak{su}(E))\) for a Hermitian vector bundle \((E,H_{0})\) over \(\Sigma\) with a fixed smooth background connection. 
Let \(\omega_{n}\) be a nested collection of open sets with \(\omega_{n}\subset\overline{\omega_{n}}\subset\omega_{n+1}\), with \(\bigcup_{n}\omega_{n}=\Sigma\setminus Z\). We then define the seminorms \(\|f\|_{n}:=\|f\|_{W^{k,2}(\omega_{n})}\); in terms of these, \(W^{k,2}_{\mathrm{loc}}(\Sigma\setminus Z)\) is a Frechet space. For any \(q\in H^{0}(K^{2})\setminus\{0\}\), set \(Z_{q}:=q^{-1}(0)\), and consider the moduli space \[\mathbb{M}_{q}=\{(A,\phi)\in\mathcal{M}_{\mathrm{Hit},q}\cap W^{k,2}(\Sigma)\}\cup\{(A,\phi)\in\mathcal{M}^{\mathrm{Lim}}_{\mathrm{Hit},q}\cap W^{k,2}_{\mathrm{loc}}(\Sigma\setminus Z_{q})\}/\mathbb{C}^{*}\.\] By classical bootstrapping of the gauge-theoretic elliptic equations, \(\mathbb{M}_{q}\) is independent of \(k\geq 2\). Next define \(\mathbb{M}:=\mathcal{M}_{0}\cup\bigcup_{q\in H^{0}(K^{2})\setminus\{0\}}\mathbb{M}_{q}\). Its topology is generated by two types of open sets. For interior points \(\xi=[(A,\phi)]\in\mathcal{M}_{\mathrm{Hit}}\subset\mathbb{M}\) we use the open sets \[V_{\xi,\epsilon}:=\{[(A^{\prime},\phi^{\prime})]\in\mathcal{M}_{\mathrm{Hit}}\mid\|A^{\prime}-A\|_{W^{k,2}(\Sigma)}+\|\phi^{\prime}-\phi\|_{W^{k,2}(\Sigma)}<\epsilon\}\] from the topology of \(\mathcal{M}_{\mathrm{Hit}}\). For any boundary point \(\xi_{0}\in\mathcal{M}^{\mathrm{Lim}}_{\mathrm{Hit}}/\mathbb{C}^{*}\), choose a representative \((A_{0},e^{i\theta}\phi_{0})\) for some constant \(\theta\). Let \(q=\det(\phi_{0})\), and fix any open set \(\omega\Subset\Sigma\setminus Z_{q}\). Then, setting \(\mathcal{M}^{*}_{\mathrm{Hit}}=\mathcal{M}_{\mathrm{Hit}}\setminus\mathcal{H}^{-1}(0)\), \[U_{\xi_{0},\omega,\epsilon}:= \{(A,\phi)\in\mathcal{M}^{*}_{\mathrm{Hit}}\mid\|A-A_{0}\|_{W^{k,2}(\omega)}+\inf_{\theta\in\mathbb{S}^{1}}\|\|\phi\|_{L^{2}}^{-\frac{1}{2}}\phi-e^{i\theta}\phi_{0}\|_{W^{k,2}(\omega)}<\epsilon,\ \|\phi\|_{L^{2}}>\epsilon\}\] \[\bigcup\{(A,\phi)\in\mathcal{M}^{\mathrm{Lim}}_{\mathrm{Hit}}\mid\|A-A_{0}\|_{W^{k,2}(\omega)}+\|\phi-\phi_{0}\|_{W^{k,2}(\omega)}<\epsilon\}\] defines an open set around \(\xi_{0}\). The sets \(U_{\xi_{0},\omega,\epsilon}\) and \(V_{\xi,\epsilon}\) generate the topology on \(\mathbb{M}\). **Theorem 4.8**.: _The space \(\mathbb{M}\) is Hausdorff and compact._ Proof.: The Hausdorff property follows from the definition of the topology. By Propositions 4.6 and 4.7, \(\mathbb{M}\) is sequentially compact. Moreover, using this explicit base for the topology, \(\mathbb{M}\) is first countable, and hence compact. We may now define the compactification of the Hitchin moduli space as the closure \(\overline{\mathcal{M}}_{\mathrm{Hit}}\subset\mathbb{M}\); we write \(\partial\overline{\mathcal{M}}_{\mathrm{Hit}}\) for the boundary of the closure, and \(\overline{\mathcal{M}}_{\mathrm{Hit},q}:=\overline{\mathcal{M}}_{\mathrm{Hit}}\cap\mathbb{M}_{q}\) for the subset of elements with a fixed quadratic differential. The following result is described in [30, 36, 31]. **Theorem 4.9** ([31, Prop. 3.3]).: _If \(q\) has only simple zeros, then \(\overline{\mathcal{M}}_{\mathrm{Hit},q}=\mathbb{M}_{q}\)._ In other words, the compactification of any slice where \(q\) does not lie in the discriminant locus is "the obvious one". ## 5. Parabolic modules and stratification of BNR data In this section, we review the notion of a parabolic module, as described in [38, 5, 4, 16]. This concept leads to a partial normalization of the generalized Jacobian and Prym varieties of the spectral curve. 
### Normalization of the spectral curve Let \(q\neq 0\) be a quadratic differential with an irreducible, singular spectral curve \(S=S_{q}\). The zeros of \(q\) define a divisor \(\mathrm{Div}(q)=\sum_{i=1}^{r_{1}}m_{i}p_{i}+\sum_{j=1}^{r_{2}}n_{j}p_{j}^{\prime}\), where the \(m_{i}\) and \(n_{j}\) are even and odd integers, respectively, and hence \(r_{1}\) and \(r_{2}\) are the numbers of even and odd zeros, respectively, counted without multiplicity. Write \(Z_{\mathrm{even}}=\{p_{1},\ldots,p_{r_{1}}\}\), \(Z_{\mathrm{odd}}=\{p_{1}^{\prime},\ldots,p_{r_{2}}^{\prime}\}\), and \(Z=Z_{\mathrm{even}}\cup Z_{\mathrm{odd}}\), so \(\#Z=r=r_{1}+r_{2}\). The map \(\pi:S\to\Sigma\) is a double covering branched along \(Z\); hence, we may view \(p_{i}\) and \(p_{i}^{\prime}\) as points in \(S\). For \(x\in S\), let \(\mathcal{O}_{x}\) be the local ring, \(\mathcal{O}_{x}^{*}\) its group of units, and \(R_{x}\) the completion. We say that \(S\) has an \(A_{n}\) singularity at \(x\) if \(R_{x}\cong\mathbb{C}[[r,s]]/(r^{2}-s^{n+1})\), where \(n\geq 1\). If \(S\) has an \(A_{1}\) singularity at \(x\), we call it a _nodal_ singularity, and if \(S\) has an \(A_{2}\) singularity at \(x\), we call it a _cusp_ singularity. Let \(p:\widetilde{S}\to S\) be the normalization of \(S\), and let \(\tilde{\pi}:=\pi\circ p\): For even zeros \(p_{i}\) we write \(p^{-1}(p_{i})=\{\tilde{p}_{i}^{+},\tilde{p}_{i}^{-}\}\), and for odd zeros \(p_{i}^{\prime}\) we write \(p^{-1}(p_{i}^{\prime})=\tilde{p}_{i}^{\prime}\). Since \(\pi:S\to\Sigma\) is a branched double cover, the involution \(\sigma\) on \(S\) extends to an involution of \(\widetilde{S}\) which we also denote by \(\sigma\). Note that \(\sigma(\tilde{p}_{i}^{\prime})=\tilde{p}_{i}^{\prime}\) while \(\sigma(\tilde{p}_{i}^{\pm})=\tilde{p}_{i}^{\mp}\). The ramification divisor \(\Lambda^{\prime}=\frac{1}{2}\sum_{i=1}^{r_{1}}m_{i}p_{i}+\sum_{j=1}^{r_{2}}(n _{j}-1)p_{j}^{\prime\prime}\), is a divisor on \(S\), and there is an exact sequence: \[0\longrightarrow\mathcal{O}_{S}\longrightarrow p_{*}\mathcal{O}_{\widetilde{ S}}\longrightarrow\mathcal{O}_{\Lambda^{\prime}}\longrightarrow 0. \tag{8}\] The genus of \(\widetilde{S}\) is \(g(\widetilde{S})=4g-3-\deg(\Lambda^{\prime})=2g-1+r_{2}/2\). ### Jacobian under the pull-back of the normalization We now recall some facts about the Jacobian under the pull-back of the normalization, cf. [16]. Let \(x\in Z\subset S\) be a singular point, i.e. either \(x\in Z_{\mathrm{even}}\) or \(x=p_{j}^{\prime}\) with \(n_{j}\geq 3\). Let \(\widetilde{\mathcal{O}}_{x}\) be the integral closure of \(\mathcal{O}_{x}\). We take \(V:=\prod_{x\in Z}\widetilde{\mathcal{O}}_{x}^{*}/\mathcal{O}_{x}^{*}\). Then we have the following well-known short exact sequence induced by the pull-back of the normalization. \[0\longrightarrow V\longrightarrow\mathrm{Jac}(S)\xrightarrow{p^{*}}\mathrm{ Jac}(\widetilde{S})\longrightarrow 0. \tag{9}\] This will play an important role later on. #### 5.2.1. Hitchin fiber Now we examine the locally free part \(\mathcal{T}\) of the Hitchin fiber under the pull-back. Here, \(\mathcal{T}\) is defined to be the set of \(L\in\operatorname{Pic}^{2g-2}(S)\) such that \(\det(\pi_{*}L)=\mathcal{O}_{\Sigma}\). For any \(L\in\operatorname{Pic}(S)\), from (8) we see that \(\det(\tilde{\pi}_{*}p^{*}L)\cong\det(\pi_{*}L)\otimes\mathcal{O}_{\Sigma}( \Lambda^{\prime})\). 
We define a new set, \(\widetilde{\mathcal{T}}\), as follows: \[\widetilde{\mathcal{T}}:=\{\widetilde{L}\in\operatorname{Pic}^{2g-2}(\widetilde{S})\mid\det(\tilde{\pi}_{*}\widetilde{L})\cong\mathcal{O}(\Lambda^{\prime})\}.\] It follows that \(p^{*}\) maps \(\mathcal{T}\) to \(\widetilde{\mathcal{T}}\). Furthermore, if \(L_{1},L_{2}\in\operatorname{Pic}(S)\) satisfy \(p^{*}L_{1}\cong p^{*}L_{2}\), then we have \(\pi_{*}L_{1}\cong\pi_{*}L_{2}\). This means that the fiber of \(p^{*}:\operatorname{Jac}(S)\to\operatorname{Jac}(\widetilde{S})\) is the same as that of \(p^{*}:\mathcal{T}\to\widetilde{\mathcal{T}}\), resulting in the following exact sequence: \[0\longrightarrow V\longrightarrow\mathcal{T}\xrightarrow{\ p^{*}}\widetilde{\mathcal{T}}\longrightarrow 0. \tag{10}\] ### Torsion free sheaves Now we present Cook's parametrization of rank 1 torsion free sheaves on curves with Gorenstein singularities (see [4, p. 40] and also [5, 38]). An explicit computation of the invariants used in this paper is provided in Appendix A. Let \(x\in Z\) be a singular point of \(S\), and let \(L\to S\) be a rank 1 torsion free sheaf. We again let \(\mathcal{O}_{x}\) denote the local ring at \(x\), \(\widetilde{\mathcal{O}}_{x}\) its integral closure, \(\mathcal{K}_{x}\) its fraction field, and \(\delta_{x}=\dim_{\mathbb{C}}(\widetilde{\mathcal{O}}_{x}/\mathcal{O}_{x})\). According to [18, Lemma 1.1], there exists a fractional ideal \(I_{x}\) that is isomorphic to \(L_{x}\), uniquely defined up to multiplication by a unit of \(\widetilde{\mathcal{O}}_{x}\), such that \(\mathcal{O}_{x}\subset I_{x}\subset\widetilde{\mathcal{O}}_{x}\). We define \(\ell_{x}:=\dim_{\mathbb{C}}(I_{x}/\mathcal{O}_{x})\) and \(b_{x}:=\dim_{\mathbb{C}}(\operatorname{Tor}(I_{x}\otimes_{\mathcal{O}_{x}}\widetilde{\mathcal{O}}_{x}))\). Then, \(\ell_{x}\) and \(b_{x}\) are invariants of \(L\). The _conductor_ of \(I_{x}\subset\widetilde{\mathcal{O}}_{x}\) is defined to be \[C(I_{x})=\{u\in\mathcal{K}_{x}\mid u\cdot\widetilde{\mathcal{O}}_{x}\subset I_{x}\}\.\] The singularity is characterized by the following dimensions: \[\underbrace{C(\mathcal{O}_{x})\subset\overbrace{C(I_{x})\subset\mathcal{O}_{x}\subset\underbrace{I_{x}\subset\widetilde{\mathcal{O}}_{x}}_{\delta_{x}-\ell_{x}}}^{2\delta_{x}-2\ell_{x}}}_{2\delta_{x}}\.\] For \(x=p_{i}\in Z_{\text{even}}\), we have \(\delta_{p_{i}}=m_{i}/2\), and there are two maximal ideals \(\mathfrak{m}_{\pm}\) in \(\widetilde{\mathcal{O}}_{x}\) corresponding to the points \(\tilde{p}_{i}^{\pm}\). We let \((\widetilde{\mathcal{O}}_{p_{i}}/C(I_{p_{i}}))_{\mathfrak{m}_{\pm}}\) be the localization by the ideals \(\mathfrak{m}_{\pm}\), and define \(a_{\tilde{p}_{i}^{\pm}}:=\dim_{\mathbb{C}}(\widetilde{\mathcal{O}}_{p_{i}}/C(I_{p_{i}}))_{\mathfrak{m}_{\pm}}\). Moreover, we have \(\dim_{\mathbb{C}}(\widetilde{\mathcal{O}}_{p_{i}}/C(\mathcal{O}_{p_{i}}))_{\mathfrak{m}_{\pm}}=m_{i}/2=\delta_{p_{i}}\). By Appendix A, \(a_{\tilde{p}_{i}^{\pm}}=(m_{i}/2)-\ell_{p_{i}}\), and therefore \(a_{\tilde{p}_{i}^{+}}+a_{\tilde{p}_{i}^{-}}=2\delta_{p_{i}}-2\ell_{p_{i}}\), and also \(b_{p_{i}}=\ell_{p_{i}}\). Define \[V(L_{p_{i}})=\{(c_{i}^{+},c_{i}^{-})\mid c_{i}^{\pm}\in\mathbb{Z}_{\geq 0}\,\ c_{i}^{+}+c_{i}^{-}=\ell_{p_{i}}\}\.\] For \(x=p_{i}^{\prime}\in Z_{\text{odd}}\), we have \(\delta_{p_{i}^{\prime}}=(n_{i}-1)/2\), and the maximal ideal \(\mathfrak{m}\) of \(\widetilde{\mathcal{O}}_{x}\) is unique. 
Define \(a_{\tilde{p}_{i}^{\prime}}:=\dim_{\mathbb{C}}(\widetilde{\mathcal{O}}_{p_{i}^{\prime}}/C(I_{p_{i}^{\prime}}))_{\mathfrak{m}}\). By Appendix A, we have \(a_{\tilde{p}_{i}^{\prime}}=2\delta_{p_{i}^{\prime}}-2\ell_{p_{i}^{\prime}}\) and \(b_{p_{i}^{\prime}}=\ell_{p_{i}^{\prime}}\). Moreover, \(\dim_{\mathbb{C}}(\widetilde{\mathcal{O}}_{p_{i}^{\prime}}/C(\mathcal{O}_{p_{i}^{\prime}}))_{\mathfrak{m}}=n_{i}-1=2\delta_{p_{i}^{\prime}}\). In this case we set \(V(L_{p_{i}^{\prime}})=\{\ell_{p_{i}^{\prime}}\}\). Now consider modules compatible with \(L_{x}\). Let \(\eta:\widetilde{\mathcal{O}}_{x}\to\widetilde{\mathcal{O}}_{x}/C(\mathcal{O}_{x})\) be the quotient map. Define \[S(L_{x}):=\{\mathcal{O}_{x}\text{-submodules }F_{x}\subset\widetilde{\mathcal{O}}_{x}/C(\mathcal{O}_{x})\mid\dim_{\mathbb{C}}(F_{x})=\delta_{x}\,\ \eta^{-1}(F_{x})\cong L_{x}\}\.\] Hence, if \(J_{x}=\eta^{-1}(F_{x})\) with \(F_{x}\in S(L_{x})\), there exists an ideal \(\mathfrak{t}_{x}\) such that \(J_{x}=\mathfrak{t}_{x}\cdot L_{x}\). For \(x=p_{i}\in Z_{\text{even}}\), we obtain two integers \(c_{i}^{\pm}=\dim_{\mathbb{C}}(\widetilde{\mathcal{O}}_{x}/(\mathfrak{t}_{x}\cdot\widetilde{\mathcal{O}}_{x}))_{\mathfrak{m}_{\pm}}\). By [4, Lemma 6], \((c_{i}^{+},c_{i}^{-})\in V(L_{p_{i}})\); for \(x=p_{i}^{\prime}\in Z_{\mathrm{odd}}\), \(\dim_{\mathbb{C}}(\widetilde{\mathcal{O}}_{x}/(\mathfrak{t}_{x}\cdot\widetilde{\mathcal{O}}_{x}))=\ell_{p_{i}^{\prime}}\in V(L_{p_{i}^{\prime}})\), and these only depend on \(F_{x}\). Hence, there is a well-defined map: \[\kappa_{x}:S(L_{x})\longrightarrow V(L_{x})\ :\ \begin{cases}F_{x}\mapsto(c_{i}^{+},c_{i}^{-})&\text{when }x=p_{i},\\ F_{x}\mapsto\ell_{p_{i}^{\prime}}&\text{when }x=p_{i}^{\prime}\.\end{cases}\] **Lemma 5.1** ([4, Lemma 6]).: _For \(x\in Z\), the components of \(S(L_{x})\) are parameterized by elements in \(V(L_{x})\)._ Set \(V(L):=\prod_{x\in Z}V(L_{x})\) and \(S(L):=\prod_{x\in Z}S(L_{x})\). Write \(N(L):=|V(L)|\) for the number of points in \(V(L)\). There is a map \[\kappa:=\prod_{x\in Z}\kappa_{x}:S(L)\longrightarrow V(L)\.\] For any \(\mathbf{c}\in V(L)\), write \(\mathbf{c}=(c_{1}^{\pm},\dots,c_{r_{1}}^{\pm},\ell_{p_{1}^{\prime}},\dots,\ell_{p_{r_{2}}^{\prime}})\). Associate to this the divisor \[D_{\mathbf{c}}=\sum_{i=1}^{r_{1}}(c_{i}^{+}\tilde{p}_{i}^{+}+c_{i}^{-}\tilde{p}_{i}^{-})+\sum_{i=1}^{r_{2}}\ell_{p_{i}^{\prime}}\tilde{p}_{i}^{\prime}\] on \(\widetilde{S}\). Composing \(\kappa\) with the map above, we define \[\varkappa:S(L)\longrightarrow\operatorname{Div}(\widetilde{S})\ :\ \prod_{x\in Z}F_{x}\mapsto\mathbf{c}\mapsto D_{\mathbf{c}}. \tag{11}\] The following result is straightforward but important: **Proposition 5.2**.: \(L\) _is locally free if and only if \(\varkappa=0\) on \(S(L)\)._ Proof.: \(L\) is locally free if and only if \(\ell_{x}=0\) for \(x\in Z\). The claim then follows directly from the definition of \(D_{\mathbf{c}}\). ### Parabolic modules In this subsection, we define the notion of a parabolic module, following [38, 5, 4]. First note that \(\dim_{\mathbb{C}}(\widetilde{\mathcal{O}}_{x}/C(\mathcal{O}_{x}))=2\delta_{x}\). Let \(\operatorname{Gr}(\delta_{x},\widetilde{\mathcal{O}}_{x}/C(\mathcal{O}_{x}))\) be the Grassmannian of \(\delta_{x}\)-dimensional subspaces of the vector space \(\widetilde{\mathcal{O}}_{x}/C(\mathcal{O}_{x})\). 
Then \(\widetilde{\mathcal{O}}_{x}^{*}\) acts on \(\operatorname{Gr}(\delta_{x},\widetilde{\mathcal{O}}_{x}/C(\mathcal{O}_{x}))\) by multiplication, with fixed points corresponding to \(\delta_{x}\)-dimensional \(\mathcal{O}_{x}\) submodules of \(\widetilde{\mathcal{O}}_{x}/C(\mathcal{O}_{x})\). We write \(\mathcal{P}(x)\) for the (reduced) variety of fixed points. This is a closed subvariety of \(\operatorname{Gr}(\delta_{x},\widetilde{\mathcal{O}}_{x}/C(\mathcal{O}_{x}))\). Suppose \(x\) is an \(A_{n}\) singularity. For notational convenience, we write \(\mathcal{P}(A_{n}):=\mathcal{P}(x)\). We have the following: **Proposition 5.3** ([4, Prop. 2]).: _The following holds:_ 1. \(\mathcal{P}(A_{n})\) _is connected and depends only on_ \(\delta_{x}\)_. Also,_ \(\dim_{\mathbb{C}}\mathcal{P}(A_{2n})=n\)_, and we have isomorphisms_ \(\mathcal{P}(A_{2n-1})\cong\mathcal{P}(A_{2n})\)_._ 2. _If_ \(\mathcal{P}(A_{0})\) _is defined to be a point, then the inclusions_ \(\mathcal{P}(A_{0})\subset\mathcal{P}(A_{2})\subset\dots\subset\mathcal{P}(A_{2n})\) _give a cell decomposition of_ \(\mathcal{P}(A_{2n})\)_._ 3. _The singular locus_ \(\operatorname{Sing}(\mathcal{P}(A_{2n}))\cong\mathcal{P}(A_{2n-4})\)_. In particular, it has codimension_ \(\geq 2\)_. Moreover,_ \(\mathcal{P}(A_{1})=\mathcal{P}(A_{2})\cong\mathbb{C}P^{1}\)_, and_ \(\mathcal{P}(A_{4})\) _is a quadric cone._ Define \(\mathscr{P}(S)=\prod_{x\in Z}\mathcal{P}(x)\). This depends only on the curve singularities of \(S\). Let \(J\in\operatorname{Pic}(\widetilde{S})\). We have an isomorphism \(J_{x}\otimes\mathcal{O}_{\Lambda^{\prime},x}\cong\widetilde{\mathcal{O}}_{x}/C(\mathcal{O}_{x})\) as \(\mathcal{O}_{x}\)-modules. More explicitly, as vector spaces, \[J_{\tilde{p}_{i}^{+}}^{\oplus\frac{m_{i}}{2}}\oplus J_{\tilde{p}_{i}^{-}}^{\oplus\frac{m_{i}}{2}}\cong\widetilde{\mathcal{O}}_{p_{i}}/C(\mathcal{O}_{p_{i}})\,\ J_{\tilde{p}_{i}^{\prime}}^{\oplus(n_{i}-1)}\cong\widetilde{\mathcal{O}}_{p_{i}^{\prime}}/C(\mathcal{O}_{p_{i}^{\prime}})\.\] **Definition 5.4**.: _A parabolic module \(\mathrm{PMod}(\widetilde{S})\) consists of pairs \((J,v)\), where \(J\in\mathrm{Jac}(\widetilde{S})\) and \(v=\prod_{x\in Z}v_{x}\), with \(v_{x}\in J_{x}\otimes\mathcal{O}_{\Lambda^{\prime},x}\)._ By [4, p. 41], \(\mathrm{PMod}(\widetilde{S})\) has a natural algebraic structure. Let \(\mathrm{pr}:\mathrm{PMod}(\widetilde{S})\to\mathrm{Jac}(\widetilde{S})\) be the projection to the first component. Then \(\mathrm{pr}\) defines a fibration of \(\mathrm{PMod}(\widetilde{S})\) with fiber \(\mathscr{P}(S)\). Moreover, there is a finite morphism \(\tau:\mathrm{PMod}(\widetilde{S})\to\overline{\mathrm{Jac}}(S)\) defined by sending \((J,v)\mapsto L\), where \(L\) is the kernel of the restriction map \(p_{*}J\to(J\otimes\mathcal{O}_{\Lambda})/v\): \[0\longrightarrow L\longrightarrow p_{*}J\longrightarrow(J\otimes\mathcal{O}_{\Lambda})/v\longrightarrow 0\.\] The map \(\tau\) may be regarded as the compactification of the pull-back normalization map \(p^{*}\) in (9). **Theorem 5.5** ([4, Thm. 1]).: _For the map \(\tau:\mathrm{PMod}(\widetilde{S})\to\overline{\mathrm{Jac}}(S)\) defined above,_ * \(\tau\) _is a finite morphism, where the fiber over_ \(L\) _consists of_ \(N(L)\) _points,_ * _The restriction_ \(\tau:\tau^{-1}\mathrm{Jac}(S)\to\mathrm{Jac}(S)\) _is an isomorphism. Moreover, for_ \(L\in\mathrm{Jac}(S)\)_, we have_ \(\mathrm{pr}\circ\tau^{-1}(L)=p^{*}(L)\)_._ * _Suppose_ \(\tau(J,v)=L\)_. For_ \(x\in Z\)_, we have_ \(v\in S(L)\)_. 
Let_ \(D_{v}=\varkappa(v)\) _be the divisor defined in (_11_). Then_ \[0\longrightarrow p^{*}L/\mathrm{Tor}(p^{*}L)\longrightarrow J\longrightarrow\mathcal{O}_{D_{v}}\longrightarrow 0\.\] _In particular,_ \(p^{*}L/\mathrm{Tor}(p^{*}L)=J(-D_{v})\)_._ Suppose all of the zeros of the quadratic differential \(q\) have odd order. Then for \(L\in\overline{\mathrm{Jac}}(S)\), \(N(L)=1\), and we can deduce the following. **Corollary 5.6**.: _If \(q^{-1}(0)=\{p^{\prime}_{1},\ldots,p^{\prime}_{r}\}\) and all zeroes have odd multiplicity, then \(\tau:\mathrm{PMod}(\widetilde{S})\to\overline{\mathrm{Jac}}(S)\) is a bijection. Moreover, for \(L\in\overline{\mathrm{Jac}}(S)\) with \(\tau(J,v)=L\), we have_ \[p^{*}L/\mathrm{Tor}(p^{*}L)=J(-\sum\ell_{p^{\prime}_{i}}\tilde{p}^{\prime}_{i})\.\] We will now present an example of a parabolic module. **Example 5.7** ([4, Ex. 2]).: _Suppose \(q\) has \(4g-6\) simple zeros and one zero \(x\) of order \(2\). Then the spectral curve \(S\) has one nodal singularity at \(x\). Denote by \(p:\widetilde{S}\to S\) the normalization, with \(p^{-1}(x)=\{\tilde{x}_{+},\tilde{x}_{-}\}\). Then \(\mathscr{P}(S)=\mathbb{C}P^{1}\), and we obtain a fibration \(\mathbb{C}P^{1}\to\mathrm{PMod}(\widetilde{S})\to\mathrm{Jac}(\widetilde{S})\). Let \(L\in\overline{\mathrm{Jac}}(S)\). If we write \(\widetilde{L}:=p^{*}L/\mathrm{Tor}(p^{*}L)\), then \(\tau^{-1}(L)=\{(\widetilde{L}\otimes\mathcal{O}(\tilde{x}_{+}),v_{+}),(\widetilde{L}\otimes\mathcal{O}(\tilde{x}_{-}),v_{-})\}\). We can define two sections:_ \[s_{\pm}:\overline{\mathrm{Jac}}(S)\longrightarrow\mathrm{PMod}(\widetilde{S}),\;s_{\pm}:L\longmapsto(\widetilde{L}\otimes\mathcal{O}(\tilde{x}_{\pm}),v_{\pm})\.\] _Then \(\overline{\mathrm{Jac}}(S)\) is the quotient of \(\mathrm{PMod}(\widetilde{S})\) given by the identification_ \[\overline{\mathrm{Jac}}(S)\cong\mathrm{PMod}(\widetilde{S})/(s_{+}\sim\mathcal{O}(\tilde{x}_{-}-\tilde{x}_{+})s_{-})\.\] _In particular, \(\mathrm{PMod}(\widetilde{S})\) is not a fibration over \(\overline{\mathrm{Jac}}(S)\)._ **Proposition 5.8**.: _The singular set of \(\mathrm{PMod}(\widetilde{S})\) has codimension at least \(2\). Moreover, if the spectral curve \(S\) contains only cusp or nodal singularities, then \(\mathrm{PMod}(\widetilde{S})\) is smooth._ Proof.: As the singularities of \(\operatorname{PMod}(\widetilde{S})\) come from the space \(\mathscr{P}(S)\), the claim follows from Proposition 5.3. Since we focus on \(\operatorname{SL}(2,\mathbb{C})\) Higgs bundles, we must consider the parabolic module compactification of the fibration \[0\longrightarrow V\longrightarrow\mathcal{P}\xrightarrow{\ \ \ p^{*}\ }\operatorname{Prym}(\widetilde{S}/\Sigma)\longrightarrow 0\.\] Setting \(\widehat{\operatorname{PMod}}(\widetilde{S}):=\tau^{-1}(\overline{\mathcal{P}})\), there is a commutative diagram (12) from [16, p. 17] relating \(\widehat{\operatorname{PMod}}(\widetilde{S})\), \(\overline{\mathcal{P}}\), and \(\operatorname{Prym}(\widetilde{S}/\Sigma)\) via \(\tau\), \(\operatorname{pr}\), and \(p^{*}\). Theorem 5.5 proves that \(\operatorname{pr}\circ\tau^{-1}|_{\mathcal{P}}=p^{*}\). ### Stratifications of the BNR data Parabolic modules define a stratification of \(\overline{\mathcal{P}}\) and \(\overline{\mathcal{T}}\). In the following, \(\pi:S\to\Sigma\) is a branched double cover, \(\sigma\) the associated involution on \(S\), and by \(\sigma\) we also denote its extension to an involution on the normalization \(\widetilde{S}\) of \(S\). 
For a rank \(1\) torsion free sheaf \(L\in\overline{\operatorname{Pic}}(S)\), consider the map \[p^{*}_{\operatorname{tf}}:\overline{\operatorname{Pic}}(S)\longrightarrow\operatorname{Pic}(\widetilde{S})\,\ p^{*}_{\operatorname{tf}}(L):=p^{*}L/\mathrm{Tor}(p^{*}L)\,\] i.e. the torsion free part of the pull-back to the normalization. By [37], \(p^{*}_{\operatorname{tf}}(L)=p^{*}L\) at \(x\in\widetilde{S}\) if and only if \(L\) is locally free at \(p(x)\in S\). **Definition 5.9** ([24]).: _An effective divisor \(D\in\operatorname{Div}(\widetilde{S})\) is called a \(\sigma\)-divisor if_ 1. \(D\leq\Lambda\) _and_ \(\sigma^{*}D=D\)_;_ 2. _for any_ \(x\in\operatorname{Fix}(\sigma)\)_,_ \(D|_{x}=d_{x}x\)_, where_ \(d_{x}\equiv 0\mod 2\)_._ The \(\sigma\)-divisors play an important role in describing the singular Hitchin fibers. **Proposition 5.10** ([24, 33]).: _Let \(L\in\overline{\mathcal{P}}\) and write \(\widetilde{L}:=p^{*}_{\operatorname{tf}}L\). Then we have \(\widetilde{L}\otimes\sigma^{*}\widetilde{L}=\mathcal{O}(\Lambda-D)\) for \(D\) a \(\sigma\)-divisor._ For a \(\sigma\)-divisor \(D\), define \[\begin{split}\widetilde{\mathcal{T}}_{D}&=\{J\in\operatorname{Pic}(\widetilde{S})\mid J\otimes\sigma^{*}J=\mathcal{O}(\Lambda-D)\}\ ;\\ \widetilde{\mathcal{P}}_{D}&=\{J\in\operatorname{Pic}(\widetilde{S})\mid J\otimes\sigma^{*}J=\mathcal{O}(-D)\}\.\end{split} \tag{13}\] Then by [24, Prop. 5.6], \(\widetilde{\mathcal{T}}_{D}\) and \(\widetilde{\mathcal{P}}_{D}\) are abelian torsors over \(\operatorname{Prym}(\widetilde{S}/\Sigma)\) with dimension \(g(\widetilde{S})-g(\Sigma)=g-1+\frac{1}{2}r_{2}\). In addition, we define \[\begin{split}\overline{\mathcal{T}}_{D}&=\{L\in\overline{\mathcal{T}}\mid p^{*}_{\operatorname{tf}}L\in\widetilde{\mathcal{T}}_{D}\}\ ;\\ \overline{\mathcal{P}}_{D}&=\{L\in\overline{\mathcal{P}}\mid p^{*}_{\operatorname{tf}}L\in\widetilde{\mathcal{P}}_{D}\}\.\end{split} \tag{14}\] Then the partial order on divisors defines a stratification of \(\overline{\mathcal{T}}\) (resp. \(\overline{\mathcal{P}}\)) by \(\cup_{D^{\prime}\leq D}\overline{\mathcal{T}}_{D^{\prime}}\) (resp. \(\cup_{D^{\prime}\leq D}\overline{\mathcal{P}}_{D^{\prime}}\)). The top strata are \(\overline{\mathcal{T}}_{D=0}\) (resp. \(\overline{\mathcal{P}}_{D=0}\)), and these consist of the locally free sheaves. From the definition, \(\mathcal{T}=\overline{\mathcal{T}}_{D=0}\) and \(\mathcal{P}=\overline{\mathcal{P}}_{D=0}\). **Theorem 5.11** ([24, Thm. 6.2]).: _(i) Suppose \(q\) contains at least one zero of odd order. Then for each stratum indexed by a \(\sigma\)-divisor \(D\), if we let \(n_{ss}\) be the number of \(p\) such that \(D|_{p}=\Lambda|_{p}\), then there are holomorphic fiber bundles_ \[(\mathbb{C}^{*})^{k_{1}}\times\mathbb{C}^{k_{2}}\longrightarrow\overline{\mathcal{T}}_{D}\overset{p_{\mathrm{tf}}^{*}}{\longrightarrow}\widetilde{\mathcal{T}}_{D}\ ; \tag{15}\] \[(\mathbb{C}^{*})^{k_{1}}\times\mathbb{C}^{k_{2}}\longrightarrow\overline{\mathcal{P}}_{D}\overset{p_{\mathrm{tf}}^{*}}{\longrightarrow}\widetilde{\mathcal{P}}_{D}\,\] _where \(k_{1}=r_{1}-n_{ss}\), \(k_{2}=2g-2-\frac{1}{2}\deg(D)-r_{1}+n_{ss}-\frac{r_{2}}{2}\), and \(r_{1},r_{2}\) are the number of even and odd zeros. (ii) Suppose \(q\) is irreducible but all zeros are of even order. 
Then there exists an analytic space \(\overline{\mathcal{T}}_{D}^{\prime}\) and a double branched covering \(p:\overline{\mathcal{T}}_{D}\rightarrow\overline{\mathcal{T}}_{D}^{\prime}\), with \(\overline{\mathcal{T}}_{D}^{\prime}\) a holomorphic fibration_ \[(\mathbb{C}^{*})^{k_{1}}\times\mathbb{C}^{k_{2}}\longrightarrow\overline{\mathcal{T}}_{D}^{\prime}\overset{p_{\mathrm{tf}}^{*}}{\longrightarrow}\widetilde{\mathcal{T}}_{D}\.\] _In particular, \(\dim(\overline{\mathcal{P}}_{D})=\dim(\overline{\mathcal{T}}_{D})=3g-3-\frac{1}{2}\deg(D)\)._ As explained in [24], via the BNR correspondence the stratification above translates into a stratification of the Hitchin fiber. Let \(\chi_{\mathrm{BNR}}:\overline{\mathcal{T}}\xrightarrow{\sim}\mathcal{M}_{q}\) be the bijection in Theorem 2.3. Let \(D\) be a \(\sigma\)-divisor. Define \(\mathcal{M}_{q,D}:=\chi_{\mathrm{BNR}}(\overline{\mathcal{T}}_{D})\). Then the stratification of \(\overline{\mathcal{T}}\) induces a stratification on \(\mathcal{M}_{q}=\bigcup_{D}\mathcal{M}_{q,D}\). For each \(\sigma\)-divisor \(D\), since \(\sigma^{*}D=D\), we can write \(D^{\prime}:=\frac{1}{2}\tilde{\pi}_{*}(D)\). Then \(D^{\prime}\) is an effective divisor with \(\mathrm{supp}\ D^{\prime}\subset Z\). Moreover, for \(x\in q^{-1}(0)\), \(D^{\prime}_{x}\leq\frac{1}{2}[\operatorname{ord}_{x}(q)]\). Therefore, \(\mathcal{M}_{q}\) may be regarded as also being stratified by divisors \(D^{\prime}\) defined over \(\Sigma\). ### The structure of the parabolic module projection We now explain the relationship between the divisor \(D_{v}\) in Theorem 5.5 and the \(\sigma\)-divisor. Given \(L\in\overline{\mathcal{P}}\), define \[\mathscr{N}_{L}:=\{(J,v)\in\widehat{\mathrm{PMod}}(\widetilde{S})\mid\tau(J,v)=L\}\ ;\] \[\mathscr{D}_{L}:=\{D_{v}\mid(J,v)\in\mathscr{N}_{L}\}\, \tag{16}\] where \(\mathscr{N}_{L}\) is \(\tau^{-1}(L)\), and \(\mathscr{D}_{L}\) is the collection of divisors \(D_{v}\) such that \(J(-D_{v})=p_{\mathrm{tf}}^{\star}(L)\). If \(L\) is locally free, then \(J=p^{*}L\), and \(\mathscr{D}_{L}\) is empty. Moreover, if \(\tau(J,v)=\tau(J^{\prime},v^{\prime})\), then \(J^{\prime}=J(D_{v^{\prime}}-D_{v})\). The divisor \(D_{v}\) satisfies the following symmetry property: **Proposition 5.12**.: _Let \(D\) be a \(\sigma\)-invariant divisor and \(L\in\overline{\mathcal{P}}_{D}\). For any \(D_{v}\in\mathscr{D}_{L}\), we have \(D_{v}+\sigma^{*}D_{v}=D\)._ Proof.: Let \(\tau(J,v)=L\). Then by Theorem 5.5, we have \(\widetilde{L}=J(-D_{v})\), where \(\widetilde{L}=p_{\mathrm{tf}}^{\star}(L)\). As \(L\in\overline{\mathcal{P}}_{D}\) and \(J\in\mathrm{Prym}(\widetilde{S}/\Sigma)\), we have \(\widetilde{L}\otimes\sigma^{*}\widetilde{L}=\mathcal{O}(-D)\) and \(J\otimes\sigma^{*}J=\mathcal{O}_{\widetilde{S}}\), which implies \(D_{v}+\sigma^{*}D_{v}=D\). As a consequence, we have the following: **Corollary 5.13**.: _Suppose \(q\) has only zeroes of odd order. Then for \(L\in\overline{\mathcal{P}}_{D}\) and \(D_{v}\in\mathscr{D}_{L}\), we have \(\sigma^{*}D_{v}=D_{v}\) and \(D_{v}=\frac{1}{2}D\). In addition, \(\tau:\widehat{\mathrm{PMod}}(\widetilde{S})\rightarrow\overline{\mathcal{P}}\) is a bijection._ Proof.: Since each zero has odd order, \(\mathrm{supp}\ (D_{v})\subset\mathrm{Fix}(\sigma)\), which implies \(D_{v}=\sigma^{*}D_{v}\). By Proposition 5.12, we must have \(D_{v}=\frac{1}{2}D\). 
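Returning to the dimension count in Theorem 5.11, a quick arithmetic consistency check (our own verification, not taken from [24]): the fiber of (15) has dimension \(k_{1}+k_{2}\) and the base \(\widetilde{\mathcal{T}}_{D}\) has dimension \(g-1+\tfrac{1}{2}r_{2}\), so \[\dim(\overline{\mathcal{T}}_{D})=(r_{1}-n_{ss})+\Big(2g-2-\tfrac{1}{2}\deg(D)-r_{1}+n_{ss}-\tfrac{r_{2}}{2}\Big)+\Big(g-1+\tfrac{r_{2}}{2}\Big)=3g-3-\tfrac{1}{2}\deg(D)\,\] in agreement with the last assertion of Theorem 5.11. In particular, for the top stratum \(D=0\) this recovers the expected dimension \(3g-3\) of the Hitchin fiber \(\mathcal{M}_{q}\).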
There are relationships between the integers appearing in the construction of the parabolic module: **Lemma 5.14** ([17]).: _Let \(D=\sum_{i=1}^{r_{1}}d_{i}(\tilde{p}_{i}^{+}+\tilde{p}_{i}^{-})+\sum_{i=1}^{r_{2 }}d_{i}^{\prime}\tilde{p}_{i}^{\prime}\) be a \(\sigma\)-divisor, and let \(L\in\overline{\mathcal{P}}_{D}\). Then we have_ * \(\ell_{p_{i}}=d_{i}\) _and_ \(\ell_{p_{i}^{\prime}}=d_{i}^{\prime}/2\)_;_ * \(a_{\tilde{p}_{i}^{+}}=a_{\tilde{p}_{i}^{-}}=(m_{i}/2)-d_{i}\) _and_ \(a_{\tilde{p}_{i}^{\prime}}=n_{i}-1-d_{i}^{\prime}\)_._ Proof.: Since \(L\in\overline{\mathcal{P}}_{D}\), we have \(\dim\operatorname{Tor}(p^{*}L_{p_{i}})=d_{i}\) and \(\dim\operatorname{Tor}(p^{*}L_{p^{\prime}_{i}})=d^{\prime}_{i}/2\). The claim then follows from Proposition A.1. The elements in \(\mathscr{D}_{L}\) can be explicitly computed. **Proposition 5.15**.: _Let \(D=\sum_{i=1}^{r_{1}}d_{i}(\tilde{p}_{i}^{+}+\tilde{p}_{i}^{-})+\sum_{i=1}^{r_{2 }}d^{\prime}_{i}\tilde{p}^{\prime}_{i}\) be a \(\sigma\)-divisor, and let \(L\in\overline{\mathcal{P}}_{D}\). Then \(N_{L}=\prod_{i=1}^{r_{1}}(d_{i}+1)\). The number \(N_{L}\) depends only on the \(\sigma\)-divisor \(D\)._ Proof.: By Lemma 5.14, \(V(L)\) can be rewritten as \[V(L)=\{(c_{1}^{\pm},\dots,c_{r_{1}}^{\pm},c_{1}^{\prime}=l_{p^{\prime}_{1}}, \dots,c_{r_{2}}^{\prime}=l_{p^{\prime}_{r_{2}}})\mid c_{i}^{+}+c_{i}^{-}=d_{i}, c_{i}^{\pm}\in\mathbb{Z}_{\geq 0}\}\,\] which implies the claim. The condition \(D_{v}+\sigma^{*}D_{v}=D\) is automatically satisfied. If we define \(n_{L}\) to be the number of \(D_{v}\in\mathscr{D}_{L}\) such that \(\sigma^{*}D_{v}\neq D_{v}\), then we have the following: **Proposition 5.16**.: 1. \(n_{L}\) _is even;_ 2. _if_ \(L\in\overline{\mathcal{P}}_{D}\) _with_ \[D=\sum_{i=1}^{r_{1}}d_{i}(\tilde{p}_{i}^{+}+\tilde{p}_{i}^{-})+\sum_{i=1}^{r_ {2}}d^{\prime}_{i}\tilde{p}^{\prime}_{i}\,\] _and if there exists_ \(i_{0}\in\{1,\dots,r_{1}\}\) _such that_ \(d_{i_{0}}\) _is not even, then_ \(n_{L}=N_{L}\)_; otherwise,_ \(n_{L}=N_{L}-1\)_._ Proof.: To prove (i), note that if \(\sigma^{*}D_{v}\neq D_{v}\), then \(\sigma^{*}(\sigma^{*}D_{v})\neq\sigma^{*}D_{v}\), which means that \(n_{L}\) is even. For (ii), by Proposition 5.15, \(D_{v}=\sigma^{*}D_{v}\) for \(D_{v}\in\mathscr{D}_{L}\) if and only if \(c_{i}^{+}=c_{i}^{-}=d_{i}/2\). Therefore, \(n_{L}\neq N_{L}\) if and only if all \(d_{i}\) are even, which implies (ii). ## 6. Irreducible singular fibers and the Mochizuki map In this section, we provide a reinterpretation of the limiting configuration construction of a Higgs bundle on an irreducible fiber, as introduced by Mochizuki in [33] (see also [24]). We also investigate the relationship between limiting configurations and the stratification. ### Abelianization of a Higgs bundle Let \(q\) be a fixed irreducible quadratic differential with spectral curve \(S\), with normalization \(p:\widetilde{S}\to S\). We define \(\widetilde{K}:=\widetilde{\pi}^{*}K\) (but note that \(\widetilde{K}\neq K_{\widetilde{S}}\)) and \(\widetilde{q}:=\widetilde{\pi}^{*}q\in H^{0}(\widetilde{K}^{2})\), where \(\widetilde{\pi}\) is the double branched covering of \(\widetilde{S}\to\Sigma\) associated to the branching set \(Z\) of \(q\). Choose a square root \(\omega\in H^{0}(\widetilde{K})\) such that \(\widetilde{q}=-\omega\otimes\omega\). Let \(\Lambda:=\operatorname{Div}(\omega)\) and \(\widetilde{Z}:=\operatorname{supp}(\Lambda)\). 
We can then write \[\Lambda=\sum_{i=1}^{r_{1}}\frac{m_{i}}{2}(\tilde{p}_{i}^{+}+\tilde{p}_{i}^{-}) +\sum_{j=1}^{r_{2}}n_{j}\tilde{p}^{\prime}_{j}\.\] If \(\sigma:\widetilde{S}\to\widetilde{S}\) is the involution, then \(\sigma^{*}\omega=-\omega\). Let \((\mathcal{E},\varphi)\) be a Higgs bundle with \(\det\varphi=q\). Consider the pullback \((\widetilde{\mathcal{E}},\tilde{\varphi}):=(\widetilde{\pi}^{*}\mathcal{E}, \widetilde{\pi}^{*}\varphi)\). We have \(\tilde{\varphi}\in H^{0}(\operatorname{End}(\widetilde{\mathcal{E}})\otimes \widetilde{K})\) and \(\widetilde{q}=\det(\tilde{\varphi})\). Since \(\widetilde{q}=-\omega\otimes\omega\), \(\pm\omega\) are well-defined eigenvalues of \(\tilde{\varphi}\) over \(\widetilde{S}\). Let \(\tilde{\lambda}\) be the canonical section of \(\operatorname{Tot}(\widetilde{K})\). The spectral curve for \((\widetilde{\mathcal{E}},\tilde{\varphi})\) is defined by the equation \[\widetilde{S}^{\prime}:=\{\tilde{\lambda}^{2}-\tilde{q}=0\}.\] The set \(\widetilde{S}^{\prime}=\operatorname{Im}(\omega)\cup\operatorname{Im}(-\omega) \subset\operatorname{Tot}(\widetilde{K})\) decomposes into two irreducible pieces. Having fixed a choice of \(\omega\), the eigenvalues of \(\tilde{\varphi}\) are globally well-defined, and we can define the line bundle \(\widetilde{L}_{+}\subset\mathcal{E}\) as \(\widetilde{L}_{+}:=\ker(\tilde{\varphi}-\omega)\). Since \(\sigma^{*}\omega=-\omega\), \(\widetilde{L}_{-}=\sigma^{*}\widetilde{L}_{+}=\ker(\tilde{\varphi}+\omega)\), and there is an isomorphism \(\widetilde{\mathcal{E}}\big{|}_{\widetilde{S}\setminus\widetilde{Z}}\cong \widetilde{L}_{+}\oplus\widetilde{L}_{-}\big{|}_{\widetilde{S}\setminus\widetilde{Z}}\). There is a local description of \((\widetilde{\mathcal{E}},\widetilde{\varphi})\): **Lemma 6.1** ([24, Lemma 5.1, Thm. 5.3],[33, Lemma 4.2]).: _Let \(x\in\widetilde{Z}\) and write \(\Lambda|_{x}=m_{x}x\). Let \(U\) be a holomorphic coordinate neighborhood of \(x\). Then there exists a frame \(\mathfrak{c}\in H^{0}(U,\widetilde{K})\) such that, under a suitable trivialization of \(\mathcal{E}|_{U}\cong U\times\mathbb{C}^{2}\), we can write_ \[\widetilde{\varphi}=z^{d_{x}}\begin{pmatrix}0&1\\ z^{2m_{x}-2d_{x}}&0\end{pmatrix}\otimes\mathfrak{c}. \tag{17}\] _Moreover, if we define \(D:=\sum_{x\in\widetilde{Z}}d_{x}x\), then \(D\) is a \(\sigma\)-divisor._ **Lemma 6.2** ([33, Sec. 4.1]).: _For the \(\widetilde{L}_{\pm}\) defined above, we have \(\widetilde{L}_{+}\otimes\widetilde{L}_{-}=\mathcal{O}(D-\Lambda)\). Moreover, if we denote \(\widetilde{L}_{0}:=\widetilde{L}_{+}(\Lambda-D)\) and \(\widetilde{L}_{1}:=\sigma^{*}\widetilde{L}_{0}\), then \(\widetilde{L}_{+}=\widetilde{\mathcal{E}}\cap\widetilde{L}_{0}\), \(\widetilde{L}_{-}=\widetilde{\mathcal{E}}\cap\widetilde{L}_{1}\), and we have the exact sequences_ \[0\longrightarrow\widetilde{L}_{+}\longrightarrow\widetilde{ \mathcal{E}}\longrightarrow\widetilde{L}_{1}\longrightarrow 0\ ;\] \[0\longrightarrow\widetilde{L}_{-}\longrightarrow\widetilde{ \mathcal{E}}\longrightarrow\widetilde{L}_{0}\longrightarrow 0\.\] Proof.: The inclusion of \(\widetilde{L}_{\pm}\to\mathcal{E}\) defines an exact sequence of sheaves \[0\longrightarrow\mathcal{O}(\widetilde{L}_{+})\oplus\mathcal{O}(\widetilde{L }_{-})\longrightarrow\mathcal{O}(\mathcal{E})\longrightarrow\mathcal{T} \longrightarrow 0\,\] where \(\mathcal{T}\) is a torsion sheaf with \(\operatorname{supp}\mathcal{T}\subset\widetilde{Z}\). 
From the local description in (17), in the same trivialization, \(\widetilde{L}_{\pm}\) are spanned by the bases \(s_{\pm}=\begin{pmatrix}1\\ \pm z^{m_{x}-d_{x}}\end{pmatrix}.\) Therefore, as \(\det(\mathcal{E})=\mathcal{O}_{\Sigma}\), we obtain \(\widetilde{L}_{+}\otimes\widetilde{L}_{-}=\mathcal{O}(D-\Lambda)\). Since \(s_{+},s_{-}\) are linear independent away from \(z\), \(\widetilde{\mathcal{E}}/\widetilde{L}_{+}\) is locally generated by the section \(z^{d_{x}-m_{x}}s_{-}\). Therefore, \(\widetilde{\mathcal{E}}/\widetilde{L}_{+}\cong\widetilde{L}_{-}(\Lambda-D)= \widetilde{L}_{1}\). Using the involution, we obtain the other exact sequence. Therefore, if \(\widetilde{L}\otimes\sigma^{*}\widetilde{L}=\mathcal{O}(D-\Lambda)\), we have \(\widetilde{L}_{0}=\widetilde{L}(\Lambda-D)\in\widetilde{\mathcal{T}}_{D}\). In summary, the construction above leads us to consider the composition of the following maps: \[\delta:\mathcal{M}_{q}\longrightarrow\operatorname{Pic}(\widetilde{S}) \xrightarrow{\otimes\mathcal{O}(\Lambda-D)}\widetilde{\mathcal{T}}_{D}\ ;\] \[(\mathcal{E},\varphi)\mapsto\widetilde{L}_{+}\mapsto\widetilde{L}_{+}(\Lambda -D)\,\] where the first map is given by taking the kernel of \((\tilde{\pi}^{*}\varphi-\omega)|_{\tilde{\pi}^{*}\mathcal{E}}\). This construction is directly related to the torsion free pull-back. Recall that \(\chi_{\operatorname{BNR}}:\overline{\mathcal{T}}\to\mathcal{M}_{q}\) is the BNR correspondence map, and \(p^{\star}_{\operatorname{tf}}:\overline{\operatorname{Pic}}(S)\to \operatorname{Pic}(\widetilde{S})\) is the torsion free pull-back. Then we have **Proposition 6.3**.: \(\delta\circ\chi_{\operatorname{BNR}}=p^{\star}_{\operatorname{tf}}\)_. In particular, if \(J\in\overline{\mathcal{T}}_{D}\), then \(\delta\circ\chi_{\operatorname{BNR}}(J)\in\widetilde{\mathcal{T}}_{D}\)._ Proof.: Let \(J\in\overline{\mathcal{T}}\), and write \((\mathcal{E},\varphi)=\chi_{\operatorname{BNR}}(J)\), \((\widetilde{\mathcal{E}},\widetilde{\varphi}):=\tilde{\pi}^{*}(\mathcal{E},\varphi)\). Recall the BNR exact sequence on \(S\) (see (1)): \[0\longrightarrow J(-\Delta)\longrightarrow\pi^{*}\mathcal{E}\xrightarrow{ \pi^{*}\varphi-\lambda}\pi^{*}\mathcal{E}\otimes\pi^{*}K\longrightarrow J \otimes\pi^{*}K\longrightarrow 0\.\] As \(p^{*}\) is right exact, we obtain \[\widetilde{\mathcal{E}}\xrightarrow{\tilde{\varphi}-\tilde{\lambda}} \widetilde{\mathcal{E}}\otimes\tilde{K}\longrightarrow p^{*}J\otimes\tilde{K} \longrightarrow 0\.\] Since the spectral curve is \(\widetilde{S}^{\prime}=\operatorname{Im}(\omega)\cup\operatorname{Im}(-\omega)\), we can consider the restriction to the component \(\operatorname{Im}(\omega)\) and write \(\tilde{\lambda}=\omega,\ \widetilde{L}_{\pm}:=\ker(\tilde{\varphi}\mp\omega)\). 
We obtain an exact sequence \[0\longrightarrow\widetilde{L}_{+}\longrightarrow\widetilde{\mathcal{E}}\xrightarrow{\tilde{\varphi}-\omega}\widetilde{\mathcal{E}}\otimes\tilde{K}\longrightarrow p^{*}J\otimes\tilde{K}\longrightarrow 0\,\] which breaks into short exact sequences \[0\longrightarrow\widetilde{L}_{+}\longrightarrow\widetilde{\mathcal{E}}\longrightarrow\operatorname{Im}(\tilde{\varphi}-\omega)\longrightarrow 0\ ;\] \[0\longrightarrow\operatorname{Im}(\tilde{\varphi}-\omega)\longrightarrow\widetilde{\mathcal{E}}\otimes\tilde{K}\longrightarrow p^{*}J\otimes\tilde{K}\longrightarrow 0\.\] Using the local trivialization in Lemma 6.1, \(\operatorname{Im}(\tilde{\varphi}-\omega)\) is locally spanned by \(\begin{pmatrix}z^{d_{x}}\\ -z^{m_{x}}\end{pmatrix}\mathfrak{c}\). From Lemma 6.2, if we write \(\widetilde{L}_{0}:=\widetilde{L}_{+}(\Lambda-D)\) and \(\widetilde{L}_{1}:=\sigma^{*}\widetilde{L}_{0}\), then \[\delta\circ\chi_{BNR}(J)=\widetilde{L}_{+}(\Lambda-D)\.\] Moreover, there is an isomorphism \(\operatorname{Im}(\tilde{\varphi}-\omega)\cong\widetilde{L}_{1}\). Letting \(\widetilde{L}_{1}^{\prime}\) be the saturation of \(\widetilde{L}_{1}\), we obtain a commutative diagram comparing the two short exact sequences above, in which \(i:\widetilde{L}_{1}\to\widetilde{L}_{1}^{\prime}\) is the natural inclusion. Moreover, in the same trivialization, \(\widetilde{L}_{1}^{\prime}\) is spanned by the section \(\begin{pmatrix}1\\ -z^{m_{x}-d_{x}}\end{pmatrix}\mathfrak{c}\). Therefore, \(\widetilde{L}_{1}^{\prime}\cong\widetilde{L}_{-}\otimes\tilde{K}\) and from Lemma 6.2, \(p_{\mathrm{tf}}^{\star}J=\delta\circ\chi_{\mathrm{BNR}}(J)\). If \((\mathcal{E},\varphi)\) is a Higgs bundle with \((\mathcal{E},\varphi)=\chi_{\mathrm{BNR}}(L)\), and \(\widetilde{L}_{0}=\delta\circ\chi_{BNR}(L)\), then by Proposition 6.3, \(\widetilde{L}_{0}=p_{\mathrm{tf}}^{\star}(L)\). We define a Higgs bundle \((\widetilde{\mathcal{E}}_{0},\tilde{\varphi}_{0})\) as follows \[\widetilde{\mathcal{E}}_{0}=\widetilde{L}_{0}\oplus\sigma^{*}\widetilde{L}_{0},\ \tilde{\varphi}_{0}=\begin{pmatrix}\omega&0\\ 0&-\omega\end{pmatrix}.\] Moreover, \(\widetilde{\mathcal{E}}\) is an \(\mathcal{O}_{\widetilde{S}}\) submodule of \(\widetilde{\mathcal{E}}_{0}\) with a natural inclusion \(\iota:\widetilde{\mathcal{E}}\to\widetilde{\mathcal{E}}_{0}\) satisfying the following: 1. the induced morphisms \(\widetilde{\mathcal{E}}\to\widetilde{L}_{0},\ \widetilde{\mathcal{E}}\to\sigma^{*}\widetilde{L}_{0}\) are surjective, 2. the restriction of \(\iota|_{\widetilde{S}\setminus\widetilde{Z}}\) is an isomorphism, 3. \(\tilde{\varphi}_{0}\circ\iota=\iota\circ\tilde{\varphi}\). Following [33, Sec. 4.1], we call \((\widetilde{\mathcal{E}}_{0},\tilde{\varphi}_{0})\) the _abelianization of the Higgs bundle_ \((\mathcal{E},\varphi)\). ### The construction of the algebraic Mochizuki map In this subsection, we define the algebraic Mochizuki map, as introduced in [33]. Recall that for any divisor \(D=\sum_{x\in Z}d_{x}x\), there is a canonical weight function \[\chi_{D}(x):=\begin{cases}d_{x}&x\in\operatorname{supp}\,D\ ;\\ 0&x\notin\operatorname{supp}\,D\.\end{cases}\] We also have the stratification \(\overline{\mathcal{T}}=\cup_{D}\overline{\mathcal{T}}_{D}\), for \(\sigma\)-divisors \(D\). Let \(\mathscr{F}(\widetilde{S})\) be the space of all degree zero filtered line bundles over \(\widetilde{S}\). 
The _algebraic Mochizuki map_ \(\Theta^{\mathrm{Moc}}\) is defined as \[\Theta^{\mathrm{Moc}}:\overline{\mathcal{T}}\longrightarrow\mathscr{F}(\widetilde{S})\,\ L\mapsto\mathcal{F}_{*}(p_{\mathrm{tf}}^{\star}(L),\tfrac{1}{2}\chi_{D-\Lambda})\.\] **Example 6.4**.: _This construction generalizes that of [30] (see also [13]), which applies when \(q\) has only simple zeroes. In the case of a quadratic differential with simple zeros, the spectral curve \(S\) is smooth, and every torsion free sheaf is locally free, so that \(\mathcal{T}=\overline{\mathcal{T}}\). If \(Z=\{p_{1},\ldots,p_{4g-4}\}\) are the branch points of \(S\), and \(\Lambda=\sum_{i=1}^{4g-4}p_{i}\), then the weight function \(\frac{1}{2}\chi_{-\Lambda}\) assigns a weight of \(-\frac{1}{2}\) at each \(p_{i}\). For \(L\in\mathcal{T}\), \(\Theta^{\mathrm{Moc}}(L)=\mathcal{F}_{*}(L,\frac{1}{2}\chi_{-\Lambda})\)._ Below are some additional properties of \(\Theta^{\mathrm{Moc}}\). **Proposition 6.5**.: \(\Theta^{\mathrm{Moc}}|_{\overline{\mathcal{T}}_{D}}\) _is a continuous map._ Proof.: This follows directly from the definition of \(\Theta^{\mathrm{Moc}}\) and Theorem 3.3. From Theorem 5.11, we know that for a \(\sigma\)-divisor \(D\), the fibers of the map \(p_{\mathrm{tf}}^{\star}:\overline{\mathcal{T}}_{D}\to\widetilde{\mathcal{T}}_{D}\) have dimension \(2g-2-\frac{1}{2}\deg(D)-r_{2}/2\), where \(r_{2}\) is the number of odd zeros of \(q\). Even for the top stratum \(D=0\), \(p_{\mathrm{tf}}^{\star}\) is not injective if the spectral curve is not smooth. Indeed, if \(L_{1},L_{2}\in\overline{\mathcal{T}}_{D}\) with \(p_{\mathrm{tf}}^{\star}(L_{1})=p_{\mathrm{tf}}^{\star}(L_{2})\), then based on the construction we have \(\Theta^{\mathrm{Moc}}(L_{1})=\Theta^{\mathrm{Moc}}(L_{2})\). In summary, we have the following result: **Proposition 6.6**.: _If \(q\in H^{0}(K^{2})\) is irreducible, then \(\Theta^{\mathrm{Moc}}\) is injective if and only if \(q\) has simple zeros._ ### Convergence of subsequences Fix a locally free \(L_{0}\in\mathcal{T}\). Using the isomorphism \(\psi_{L_{0}}:\overline{\mathcal{T}}\to\overline{\mathcal{P}}\) defined by \(\psi_{L_{0}}(L)=LL_{0}^{-1}\), we can extend the Mochizuki map \(\Theta^{\mathrm{Moc}}\) to \(\overline{\mathcal{P}}\). For \(J\in\overline{\mathcal{P}}_{D}\), we write \(\widetilde{J}:=p_{\mathrm{tf}}^{\star}(J)\) and choose the weight function \(\frac{1}{2}\chi_{D}\). We then define: \[\Theta^{\mathrm{Moc}}_{0}:\overline{\mathcal{P}}\longrightarrow\mathscr{F}(\widetilde{S})\,\ J\mapsto\mathcal{F}_{*}(\widetilde{J},\tfrac{1}{2}\chi_{D})\.\] **Proposition 6.7**.: _The map \(\Theta^{\mathrm{Moc}}_{0}\) satisfies the following properties:_ 1. _Let_ \(J\in\overline{\mathcal{P}}\) _and_ \(L:=L_{0}J\)_, then_ \[\Theta^{\mathrm{Moc}}_{0}(J)=\Theta^{\mathrm{Moc}}(L)\otimes\Theta^{\mathrm{Moc}}(L_{0})^{-1}\,\] _where_ \(\otimes\) _is the tensor product for filtered line bundles (_5_)._ 2. _Suppose_ \(L=\tau(I,v)\) _with_ \((I,v)\in\widehat{\mathrm{PMod}}(\widetilde{S})\) _and_ \(L\in\overline{\mathcal{P}}_{D}\)_, then_ \[\Theta^{\mathrm{Moc}}_{0}\circ\tau(I,v)=\mathcal{F}_{*}(I(-D_{v}),\tfrac{1}{2}\chi_{D_{v}+\sigma^{*}D_{v}})\,\] _where_ \(D_{v}\) _is the corresponding divisor defined in Theorem_ 5.5_._ 3. 
_If_ \(\sigma^{*}D_{v}=D_{v}\)_, then_ \(\Theta^{\mathrm{Moc}}_{0}\circ\tau(\mathcal{I},v)=\mathcal{F}_{*}(\mathcal{I},0)\)_, where_ \(0\) _means all parabolic weights are zero._ Proof.: As \(L_{0}\) is locally free, we have \(p_{\mathrm{tf}}^{\star}J=(p^{*}L_{0})^{-1}\otimes p_{\mathrm{tf}}^{\star}L\). By definition, \[\Theta^{\mathrm{Moc}}_{0}(J)=\mathcal{F}_{*}(p_{\mathrm{tf}}^{\star}J,\tfrac{ 1}{2}\chi_{D})\,\ \Theta^{\mathrm{Moc}}(L)=\mathcal{F}_{*}(p_{\mathrm{tf}}^{\star}L,\tfrac{1}{2} \chi_{D-\Lambda})\,\ \Theta^{\mathrm{Moc}}(L_{0})=\mathcal{F}_{*}(p_{\mathrm{tf}}^{\star}L_{0}, \tfrac{1}{2}\chi_{-\Lambda})\,\] which implies (i). For (ii), by Theorem 5.5, \(p_{\mathrm{tf}}^{\star}L=I(-D_{v})\), and from Proposition 5.12, we have \(D=D_{v}+\sigma^{*}D_{v}\), which implies (ii). When \(\sigma^{*}D_{v}=D_{v}\), we compute \[\mathcal{F}_{*}(I(-D_{v})\,\ \tfrac{1}{2}\chi_{D_{v}+\sigma^{*}D_{v}})=\mathcal{F}_{* }(I(-D_{v})\,\ \chi_{D_{v}})=\mathcal{F}_{*}(I,0)\,\] which implies (iii). We now give a criterion for the continuity of the map \(\Theta^{\mathrm{Moc}}\). By Proposition 6.7, it is sufficient to study the map \(\Theta^{\mathrm{Moc}}_{0}\). Recall that for \(L\in\overline{\mathcal{P}}\), we have \[\mathscr{N}_{L}:=\{(J,v)\in\widehat{\mathrm{PMod}}(\widetilde{S})\mid\tau(J,v )=L\},\quad\mathscr{D}_{L}:=\{D_{v}\ |\ (J,v)\in\mathscr{N}_{L}\},\] and the number \(n_{L}\) is defined to be the number of divisors \(D_{v}\in\mathscr{D}_{L}\) such that \(\sigma^{*}D_{v}\neq D_{v}\). **Proposition 6.8**.: _Let \(D\) be a \(\sigma\)-divisor, \(L\in\overline{\mathcal{P}}_{D}\), and assume that \(\Theta^{\mathrm{Moc}}_{0}\) is continuous at \(L\). Then, for \((J,v)\in\mathscr{N}_{L}\) and \(D_{v}\in\mathscr{D}_{L}\), we have \(\sigma^{*}D_{v}=D_{v}\), i.e., \(n_{L}=0\)._ Proof.: As the top stratum \(\mathcal{P}\) is dense in \(\overline{\mathcal{P}}\), there exists a family of \(L_{i}\in\mathcal{P}\) such that \(\lim_{i\to\infty}L_{i}=L\). Let \((J_{i},v_{i})\in\widehat{\mathrm{PMod}}(\widetilde{S})\) be such that \(\tau(J_{i},v_{i})=L_{i}\). Then, \(\lim_{i\to\infty}(J_{i},v_{i})=(J_{\infty},v_{\infty})\), and \(\tau(J_{\infty},v_{\infty})=L\). As \(L_{i}\) is locally free, we have \(D_{v_{i}}=0\). Moreover, by Theorem 5.5, we have \(p_{\text{tr}}^{\star}L=J_{\infty}(-D_{v_{\infty}})\), and from Proposition 5.12, we have \(D=D_{v_{\infty}}+\sigma^{\star}D_{v_{\infty}}\). By Proposition 6.7, we have \[\Theta_{0}^{\text{Moc}}(L_{i})=\Theta_{0}^{\text{Moc}}\circ\tau(J_{i},v_{i})= \mathcal{F}_{\ast}(J_{i},0)\,\] and we compute \[\lim_{i\to\infty}\Theta_{0}^{\text{Moc}}(L_{i})=\mathcal{F}_{\ast}(J_{\infty}, 0)=\mathcal{F}_{\ast}(J_{\infty}(-D_{v_{\infty}}),\chi_{D_{v_{\infty}}})\.\] Moreover, by Proposition 6.7, we have \[\Theta_{0}^{\text{Moc}}(L)=\mathcal{F}_{\ast}(J_{\infty}(-D_{v_{\infty}}), \frac{1}{2}(\chi_{D_{v_{\infty}}}+\chi_{\sigma^{\star}D_{v_{\infty}}})).\] Since \(\Theta_{0}^{\text{Moc}}\) is continuous on \(L\), we have \(\lim_{i\to\infty}\Theta_{0}^{\text{Moc}}(L_{i})=\Theta^{\text{Moc}}(L)\), which implies that \(\chi_{D_{v_{\infty}}}=\chi_{\sigma^{\star}D_{v_{\infty}}}\). By Proposition 5.16, \(n_{L}>0\) if and only if \(q\) has at least one zero of even order. Hence, the following is immediate. **Corollary 6.9**.: _Suppose \(q\) is irreducible and has a zero of even order, then \(\Theta_{0}^{\text{Moc}}\) is not continuous._ By contrast, we have the following. 
**Proposition 6.10**.: _If \(q\) is irreducible with all zeroes of odd order, then \(\Theta_{0}^{\text{Moc}}\) is continuous._ Proof.: Since all zeroes of \(q\) are odd, for any \(L\in\overline{\mathcal{P}}\), we have \(n_{L}=0\). Let \(L_{\infty}\in\overline{\mathcal{P}}\) be fixed and let \(L_{i}\in\overline{\mathcal{P}}\) be any sequence such that \(\lim_{i\to\infty}L_{i}=L_{\infty}\). Since \(\tau:\widehat{\text{PMod}}(\widetilde{S})\to\overline{\mathcal{P}}\) is bijective, we take \((J_{i},v_{i})\in\widehat{\text{PMod}}(\widetilde{S})\) with \(\tau(J_{i},v_{i})=L_{i}\). Moreover, we assume \(\lim_{i\to\infty}(J_{i},v_{i})=(J_{\infty},v_{\infty})\) with \(\tau(J_{\infty},v_{\infty})=L_{\infty}\). Since \(q\) only contains odd zeros, it follows that \(\text{supp }D_{v}\subset\text{Fix}(\sigma)\). By Proposition 6.7, we have \(\Theta_{0}^{\text{Moc}}(L_{i})=\mathcal{F}_{\ast}(J_{i},0)\). Therefore, we have: \[\lim_{i\to\infty}\Theta_{0}^{\text{Moc}}(L_{i})=\lim_{i\to\infty}\mathcal{F}_ {\ast}(J_{i},0)=\mathcal{F}_{\ast}(J_{\infty},0)=\Theta_{0}^{\text{Moc}}(L_{ \infty}).\] This concludes the proof. **Theorem 6.11**.: _Suppose \(q\) is irreducible. For the map \(\Theta^{\text{Moc}}:\mathcal{M}_{q}\to\mathscr{F}(\widetilde{S})\), we have:_ * \(\Theta^{\text{Moc}}\) _is injective if and only if_ \(q\) _only has only simple zeros;_ * _if_ \(q\) _has only zeroes of odd order,_ \(\Theta^{\text{Moc}}\) _is continuous;_ * _if_ \(q\) _contains a zero of even order,_ \(\Theta^{\text{Moc}}\) _is not continuous._ Proof.: (i) follows from Proposition 6.6. (ii) follows from Proposition 6.10. (iii) follows from Corollary 6.9. **Proposition 6.12**.: _Suppose \(n_{L}>0\). Then for \(k=1,\dots,n_{L}\), there exist sequences \(L_{i}^{k}\) with \(\lim_{i\to\infty}L_{i}^{k}=L\) such that if we denote \(\mathcal{F}_{\ast}^{k}:=\lim_{i\to\infty}\Theta_{0}^{\text{Moc}}(L_{i}^{k})\), \(\mathcal{F}_{\ast}^{0}:=\Theta_{0}^{\text{Moc}}(L)\), then \(\mathcal{F}_{\ast}^{k_{1}}\neq\mathcal{F}_{\ast}^{k_{2}}\) for \(k_{1}\neq k_{2}\). Moreover, there exist \(\{D_{1},\dots,D_{n_{L}}\}\subset\mathscr{D}_{L}\) such that \(\mathcal{F}_{\ast}^{k}=\mathcal{F}_{\ast}(p_{\text{tr}}^{\star}L,\chi_{D_{k}})\)._ Proof.: By the definition of \(n_{L}\), we can find \((J^{k},v^{k})\) with \(\tau(J^{k},v^{k})=L\). If we define \(D_{k}:=D_{v^{k}}\), then \(\sigma^{\star}D_{k}\neq D_{k}\). Moreover, by Theorem 5.5, we have \(p_{\text{tr}}^{\star}L=J^{k}(-D_{k})\). As \(\tau^{-1}(\mathcal{P})\) is dense in \(\widehat{\text{PMod}}(\widetilde{S})\), for each \((J^{k},v^{k})\), we can find a sequence \((J_{i}^{k},v_{i}^{k})\in\tau^{-1}(\mathcal{P})\) such that \(\lim_{i\to\infty}(J_{i}^{k},v_{i}^{k})=(J^{k},v^{k})\) and we define \(L_{i}^{k}:=\tau(J_{i}^{k},v_{i}^{k})\). Since \(L_{i}^{k}\) is locally free, \(D_{v_{i}^{k}}=0\), and thus \(\Theta_{0}^{\text{Moc}}(L_{i}^{k})=\mathcal{F}_{\ast}(J_{i}^{k},0)\). We compute \[\lim_{i\to\infty}\Theta_{0}^{\text{Moc}}(L_{i}^{k})=\mathcal{F}_{\ast}(J^{k},0)= \mathcal{F}_{\ast}(p_{\text{tr}}^{\star}L,\chi_{D_{k}})\] and \(\Theta^{\mathrm{Moc}}_{0}(L)=\mathcal{F}_{*}(p_{\mathrm{tf}}^{\star}L,\frac{1}{2} \chi_{D})\). Based on our assumptions, we have \(D_{k_{1}}\neq D_{k_{2}}\) for \(k_{1}\neq k_{2}\) and \(\sigma^{*}D_{k}\neq D_{k}\), which implies that \(\chi_{D_{k_{1}}}\neq\chi_{D_{k_{2}}}\) for \(k_{1}\neq k_{2}\) and \(\chi_{D_{k}}\neq\frac{1}{2}\chi_{D}\). We now present a computation for the case of a simple nodal curve. 
**Example 6.13**.: _Let \(q\) be a quadratic differential with \(2g-4\) simple zeros, and let \(x\) be an even zero of \(q\) of order two. Then \(S\) has a singular point, which we also denote by \(x\). Let \(p:\widetilde{S}\to S\) be the normalization map and \(p^{-1}(x)=\{x_{1},x_{2}\}\). Consider the \(\sigma\)-divisor \(D=x_{1}+x_{2}\), and let \(L\in\overline{\mathcal{P}}_{D}\). Then \(n_{L}=2\), and we can write \(\mathscr{N}_{L}=(J_{1},v_{1}),(J_{2},v_{2})\), where \(D_{v_{1}}=x_{1}\) and \(D_{v_{2}}=x_{2}\). Moreover, we have \(p_{\mathrm{tf}}^{\star}L=J_{1}\otimes\mathcal{O}(-x_{1})=J_{2}\otimes \mathcal{O}(-x_{2})\). Let \((\alpha,\beta)\) denote the parabolic weight that is equal to \(\alpha\) at \(x_{1}\), \(\beta\) at \(x_{2}\), and \(\frac{1}{2}\) at all other zeros. Then the filtered bundles obtained in Proposition 6.12 are_ \[\mathcal{F}_{*}(p_{\mathrm{tf}}^{\star}L,(1,0))\,\ \mathcal{F}_{*}(p_{ \mathrm{tf}}^{\star}L,(0,1))\,\ \mathcal{F}_{*}(p_{\mathrm{tf}}^{\star}L,(\tfrac{1}{2}, \tfrac{1}{2}))\.\] ### Mochizuki's convergence theorem for irreducible fibers In this subsection, we recall Mochizuki's construction of the limiting configuration metric [33, Section 4.2.1, 4.3.2] and the convergence theorem. #### 6.4.1. Limiting configuration metric Let \(q\) be an irreducible quadratic differential and \((\mathcal{E},\varphi)\in\mathcal{M}_{q}\) a Higgs bundle with \((\mathcal{E},\varphi)=\chi_{\mathrm{BNR}}(L)\). We write \(\widetilde{L}_{0}=p_{\mathrm{tf}}^{\star}L\) and \((\widetilde{\mathcal{E}},\tilde{\varphi}):=p^{*}(\mathcal{E},\varphi)\). Then the abelianization of \((\mathcal{E},\varphi)\), which is a Higgs bundle over \(\widetilde{S}\), can be written as \(\widetilde{\mathcal{E}}_{0}=\widetilde{L}_{0}\oplus\sigma^{*}\widetilde{L}_{ 0},\ \tilde{\varphi}_{0}=\mathrm{diag}(\omega,\ -\omega)\). The natural inclusion \(\iota:(\widetilde{\mathcal{E}},\tilde{\varphi})\to(\widetilde{\mathcal{E}}_{0},\tilde{\varphi}_{0})\) is an isomorphism over \(\widetilde{S}\setminus\widetilde{Z}\). Moreover, we write \(D\) be the \(\sigma\)-divisor of \((\mathcal{E},\varphi)\). From the construction of \(\Theta^{\mathrm{Moc}}(L)\) and Proposition 6.12, we have \(n_{L}\) different divisors \(D_{k}\) for \(k=1,\cdots,n_{L}\) with \(\sigma^{*}D_{k}\neq D_{k}\) and \(D_{k}+\sigma^{*}D_{k}=D\). Moreover, we can find \(n_{L}+1\) different filtered bundles with \(\deg 0\). Define \[\mathcal{F}_{*,0}:=\Theta^{\mathrm{Moc}}(L)=\mathcal{F}_{*}(\widetilde{L}_{0},\chi_{\frac{1}{2}(D-\Lambda)}),\ \mathcal{F}_{*,k}:=\mathcal{F}_{*}(\widetilde{L}_{0},\chi_{D_{k}-\frac{1}{2} \Lambda})),\] which are all degree zero filtered bundles with different level of filtrations. Now, we will introduce the construction in [33, Section 4.2.1, 4.3.2]. For \(k=0,\cdots,n_{L}\), we define \(\tilde{h}_{k}\) to be the harmonic metric for the filtered bundle \(\mathcal{F}_{*,k}\); this is well-defined up to a multiplicative constant. To fix this constant, assume that \(\sigma^{*}\tilde{h}_{k}\otimes\tilde{h}_{k}=1\). This gives a unique choice of \(\tilde{h}_{0}\). We then define the metric \(\widetilde{H}_{k}=\mathrm{diag}(\tilde{h}_{k},\ \sigma^{*}\tilde{h}_{k})\) on \(\widetilde{\mathcal{E}}_{0}\), with \(\det(\widetilde{H}_{k})=1\). For the resulting harmonic bundle \((\widetilde{\mathcal{E}}_{0},\varphi_{0},\widetilde{H}_{k})\), we define \(\widetilde{\nabla}_{k}\) to be the unitary connection determined by \(\widetilde{H}_{k}\). 
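As a quick sanity check on the claim that the \(\mathcal{F}_{*,k}\) all have filtered degree zero, note that the weight functions \(\chi_{\frac{1}{2}(D-\Lambda)}\) and \(\chi_{D_{k}-\frac{1}{2}\Lambda}\) have the same total weight whenever \(\deg D_{k}=\frac{1}{2}\deg D\), which holds because \(D=D_{k}+\sigma^{*}D_{k}\). The toy bookkeeping sketch below illustrates this; the point names, the involution, and the multiplicities are invented, and we use the usual convention that the filtered degree is the degree of the underlying bundle plus the sum of the weights.

```python
# Toy check: the weight functions chi_{(D - Lambda)/2} and chi_{D_k - Lambda/2}
# have equal total weight when D = D_k + sigma^* D_k (so deg D_k = deg(D)/2).
# Divisors are dicts {point: multiplicity}; all data below is invented.

sigma = {"x1": "x2", "x2": "x1", "z1": "z2", "z2": "z1"}   # toy involution on points
Lam   = {"x1": 1, "x2": 1, "z1": 1, "z2": 1}               # toy divisor Lambda
D_k   = {"x1": 1, "z2": 1}                                  # toy D_k with sigma^* D_k != D_k

sigma_Dk = {sigma[p]: m for p, m in D_k.items()}
D = {p: D_k.get(p, 0) + sigma_Dk.get(p, 0) for p in Lam}    # D = D_k + sigma^* D_k

w0 = {p: 0.5 * (D[p] - Lam[p]) for p in Lam}                # weights of F_{*,0}
wk = {p: D_k.get(p, 0) - 0.5 * Lam[p] for p in Lam}         # weights of F_{*,k}

print(sum(w0.values()), sum(wk.values()))                   # equal totals, here 0.0 and 0.0
```

With the same underlying bundle \(\widetilde{L}_{0}\), equal total weight means equal filtered degree, so the \(\mathcal{F}_{*,k}\) differ only in how the weight is distributed among the points. We now return to the construction of the limiting metric.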
Since \(\widetilde{H}_{k}\) is diagonal, over \(\widetilde{S}\setminus\widetilde{Z}\), it follows that \(F_{\widetilde{\nabla}_{k}}=0\), and we have \([\varphi_{0},\varphi_{0}^{\dagger\,\tilde{H}_{k}}]=0\). Furthermore, as \(\iota\) is an isomorphism on \(\widetilde{S}\setminus\widetilde{Z}\), the metric \(\widetilde{H}_{k}\) also defines a metric on \((\widetilde{\mathcal{E}},\tilde{\varphi})\) over \(\widetilde{S}\setminus\widetilde{Z}\). For any \(\tilde{x}\in\widetilde{S}\setminus\widetilde{Z}\) with \(x:=p(\tilde{x})\), we have isomorphisms \[(\widetilde{\mathcal{E}}_{0},\tilde{\varphi}_{0})|_{\sigma(\tilde{x})}\cong( \widetilde{\mathcal{E}}_{0},\tilde{\varphi}_{0})|_{\tilde{x}}\cong(\widetilde{ \mathcal{E}},\tilde{\varphi})|_{\tilde{x}}\cong(\mathcal{E},\varphi)|_{x}.\] Therefore, \(\widetilde{H}_{k}\) induces a metric \(H_{k}^{\mathrm{Lim}}\) on \(\Sigma\setminus Z\), and we may consider \(H_{k}^{\mathrm{Lim}}\) as the push-forward of \(\tilde{h}_{k}\). In [25, Theorem 5.2], the push-forward metric of \(\Theta^{\mathrm{Moc}}(L)\) is explicitly written in local coordinates. Recall the notation from Section 2.4, let \(E\) be a trivial, smooth, rank \(2\) vector bundle over a Riemann surface \(\Sigma\), and let \(K\) be a background Hermitian metric on \(E\). Over \(\Sigma\setminus Z\), we write \(\nabla_{k}^{\mathrm{Lim}}\) for the Chern connection defined by \(H_{k}^{\mathrm{Lim}}\), which is unitary w.r.t. \(H_{0}\) and \(\phi_{k}^{\mathrm{Lim}}=\varphi_{k}^{\mathrm{Lim}}+\varphi_{k}^{\mathrm{Lim}}\) be the corresponding Higgs field in the unitary gauge. They satisfy the decoupled Hitchin equations over \(\Sigma\setminus Z\). Thus from any Higgs bundle \((\mathcal{E},\varphi)\), we obtain \(n_{L}+1\) limiting configurations \[(\nabla^{\mathrm{Lim}}_{k},\phi^{\mathrm{Lim}}_{k}=\varphi+\varphi^{\mathrm{I_{ Lim}}}_{k})\in\mathcal{M}^{\mathrm{Lim}}_{\mathrm{Hit}}.\] The flat connection, which is defined over \(\Sigma\setminus Z\), may be understood by using the nonabelian Hodge correspondence for filtered vector bundles [43]. Given the filtered line bundles \(\mathcal{F}_{*,k}\), define filtered vector bundles \(\widetilde{\mathcal{E}}_{*,k}:=\mathcal{F}_{*,k}\oplus\sigma^{*}\mathcal{F}_{*,k}\), which can be explicitly written as \[\begin{split}\widetilde{\mathcal{E}}_{*,0}:&= \mathcal{F}_{*}(\widetilde{L}_{0},\chi_{\frac{1}{2}(D-\Lambda)})\oplus \mathcal{F}_{*}(\sigma^{*}\widetilde{L}_{0},\chi_{\frac{1}{2}(D-\Lambda)})\ ;\\ \widetilde{\mathcal{E}}_{*,k}:&=\mathcal{F}_{*}( \widetilde{L}_{0},\chi_{D_{k}-\frac{1}{2}\Lambda})\oplus\mathcal{F}_{*}( \sigma^{*}\widetilde{L}_{0},\chi_{\sigma^{*}D_{k}-\frac{1}{2}\Lambda})\,\ k\neq 0\.\end{split} \tag{18}\] These are polystable filtered vector bundles over \(\widetilde{S}\setminus\widetilde{Z}\). As for each \(k=0,\cdots,n_{L}\), \(\sigma^{*}\widetilde{\mathcal{E}}_{*,k}=\widetilde{\mathcal{E}}_{*,k}\), the filtered bundles \(\widetilde{\mathcal{E}}_{*,k}\) induce a filtered vector bundles \(\mathcal{E}_{*,k}\) over \(\Sigma\setminus Z\). The flat connection \(\nabla^{\mathrm{Lim}}_{k}\) will be the unique harmonic unitary connection corresponding to the \(\mathcal{E}_{*,k}\). Moreover, for \(0\leq k_{1}\neq k_{2}\leq n_{L}\), based on the definition of \(D_{k_{1}}\) and \(D_{k_{2}}\), we can always find \(\tilde{x}\in\widetilde{Z}_{\mathrm{even}}\), a preimage of an even zero \(x\) of \(q\), such that \(\widetilde{\mathcal{E}}_{*,k_{1}}\) and \(\widetilde{\mathcal{E}}_{*,k_{2}}\) have different filtered structures near \(\tilde{x}\). 
Since over even zeros, \(\widetilde{S}\to\Sigma\) is not a branched covering, we conclude that near \(x\), \(\mathcal{E}_{*,k_{1}}\) and \(\mathcal{E}_{*,k_{2}}\) are different filtered bundles. By [43, Main theorem], the harmonic connections \(\nabla_{k_{1}}\) and \(\nabla_{k_{2}}\) are not gauge equivalent. We therefore conclude the following: **Proposition 6.14**.: _For \(0\leq k_{1}\neq k_{2}\leq n_{L}\), \((\nabla^{\mathrm{Lim}}_{k_{1}},\phi^{\mathrm{Lim}}_{k_{1}})\) and \((\nabla^{\mathrm{Lim}}_{k_{2}},\phi^{\mathrm{Lim}}_{k_{2}})\) are not gauge equivalent in \(\mathcal{M}^{\mathrm{Lim}}_{\mathrm{Hit}}\)._ This leads us to define the _analytic Mochizuki map_\(\Upsilon^{\mathrm{Moc}}\) as \[\Upsilon^{\mathrm{Moc}}:\mathcal{M}_{q}\longrightarrow\mathcal{M}^{\mathrm{ Lim}}_{\mathrm{Hit}}\ :\ [(\mathcal{E},\varphi)]\mapsto[(\nabla^{\mathrm{Lim}}_{0},\phi^{\mathrm{Lim}}_{0})], \tag{19}\] which we recall is the limiting configuration defined by \(\Theta^{\mathrm{Moc}}(L)\). #### 6.4.2. The continuity of the limiting configurations We now introduce the main result of Mochizuki [33]. Fix \((\mathcal{E},\varphi)=\chi_{\mathrm{BNR}}(L)\in\mathcal{M}_{q}\). For any real parameter \(t\), \((\mathcal{E},t\varphi)\) is a stable Higgs bundle. By the Kobayashi-Hitchin correspondence, there exists a unique metric \(H_{t}\) solving the Hitchin equation. Denote by \(\nabla_{t}\) the unitary connection defined by \(H_{t}\) and write \(\mathcal{D}_{t}=\nabla_{t}+t\phi_{t}\) for the full \(\mathrm{SL}(2,\mathbb{C})\) flat connection. We then have: **Theorem 6.15** ([33]).: _The family \((\mathcal{E},t\varphi)\) has a unique limiting configuration as limit for \(t\to\infty\). Moreover, for any compact set \(K\subset\Sigma\setminus Z\), let \(d:=\min_{x\in K}|q|(x)\). Then there exist \(t\)-independent constants \(C_{k}\) and \(C_{k}^{\prime}\) such that_ \[|(\nabla_{t},\phi_{t})-\Upsilon^{\mathrm{Moc}}(\mathcal{E},\varphi)|_{\mathcal{ C}^{k}}\leq C_{k}e^{-C_{k}^{\prime}td}\.\] As the map \(\Upsilon^{\mathrm{Moc}}\) is the composition of \(\Theta^{\mathrm{Moc}}\circ\chi^{-1}_{\mathrm{BNR}}\) with the nonabelian Hodge correspondence, the behavior of \(\Upsilon^{\mathrm{Moc}}\) is the same as \(\Theta^{\mathrm{Moc}}\). Recall the decomposition \(\mathcal{M}_{q}=\bigcup\mathcal{M}_{q,D}\) from the end of Section 5. By Theorem 6.11, Proposition 6.12 and Proposition 6.14, we obtain: **Theorem 6.16**.: _Let \(q\) be an irreducible quadratic differential. The map \(\Upsilon^{\mathrm{Moc}}:\mathcal{M}_{q}\to\mathcal{M}^{\mathrm{Lim}}_{\mathrm{Hit}}\) satisfies the following properties:_ 1. _if all the zeros of_ \(q\) _are odd, then_ \(\Upsilon^{\mathrm{Moc}}\) _is continuous;_ 2. _if at least one zero of_ \(q\) _is even, then for each_ \((\mathcal{E},\varphi)\in\mathcal{M}_{q,D}\)_, there exists an integer_ \(n_{D}\) _that only depends on_ \(D\)_, and_ \(n_{D}\) _sequences_ \(\{(\mathcal{E}^{k}_{i},\varphi^{k}_{i})\}\) _for_ \(k=1,\ldots,n_{D}\)_, such that_ \(\lim_{i\to\infty}(\mathcal{E}^{k}_{i},\varphi^{k}_{i})=(\mathcal{E}_{\infty}, \varphi_{\infty})\)_, and_ \[\lim_{i\to\infty}\Upsilon^{\mathrm{Moc}}(\mathcal{E}^{k_{1}}_{i},\varphi^{k_{1} }_{i})\neq\lim_{i\to\infty}\Upsilon^{\mathrm{Moc}}(\mathcal{E}^{k_{2}}_{i}, \varphi^{k_{2}}_{i})\neq\Upsilon^{\mathrm{Moc}}(\mathcal{E}_{\infty},\varphi_{ \infty})\,\ \text{for }k_{1}\neq k_{2}\.\] ## 7. 
Reducible singular fiber and the Mochizuki map We now investigate the properties of the Hitchin fiber associated with a reducible quadratic differential, as discussed in [16]. Additionally, we will provide an overview of Mochizuki's technique for constructing limiting configurations of Hitchin fibers for reducible quadratic differentials, as detailed in [33]. We also analyze the continuity of the Mochizuki map. ### Local description of a Higgs bundle Write \(q=-\omega\otimes\omega\) with \(\omega\in H^{0}(K)\), \(\Lambda=\operatorname{Div}(\omega)\), \(Z=\operatorname{supp}\,(\Lambda)\), and \(\mathcal{M}_{q}=\mathcal{H}^{-1}(q)\). Compared to the irreducible case, \(\mathcal{M}_{q}\) contains strictly semistable Higgs bundles, so we let \(\mathcal{M}_{q}^{\operatorname{st}}\) denote the stable locus. We point out that there is a sign ambiguity in the choice of \(\omega\), which actually plays an important role in the following. #### 7.1.1. Local description Given a Higgs bundle \((\mathcal{E},\varphi)\) with \(\det(\varphi)=q\), define line bundles \[L_{\pm}:=\ker(\varphi\pm\omega). \tag{20}\] Then the inclusion maps \(L_{\pm}\to\mathcal{E}\) are injective. Similarly, we may define an abelianization of \((\mathcal{E},\varphi)\) by \((\mathcal{E}_{0}=L_{+}\oplus L_{-},\varphi_{0}=\operatorname{diag}(\omega,- \omega))\). We then have a natural inclusion \(\iota:\mathcal{E}_{0}\to\mathcal{E}\), which is is an isomorphism on \(\Sigma\setminus Z\), and \(\varphi\circ\iota=\iota\circ\varphi_{0}\). It follows from [16, Prop. 7.10] that \(L_{\pm}\) are the only \(\varphi\)-invariant subbundles of \(\mathcal{E}\). If we write \(d_{\pm}:=\deg(L_{\pm})\), then \((\mathcal{E},\varphi)\) is stable (resp. semistable) if and only if \(d_{\pm}<0\) (resp. \(\leq 0\)). As \(\det(\mathcal{E})=\mathcal{O}\), the map \(\det(\iota):L_{+}\otimes L_{-}\to\mathcal{O}\) defines a divisor \(D=\operatorname{Div}(\det(\iota))\) such that \(L_{+}\otimes L_{-}=\mathcal{O}(-D).\) Therefore, we obtain \[d_{+}+d_{-}+\deg D=0\,\] and \(0\leq D\leq\Lambda\). The Higgs bundle \((\mathcal{E},\varphi)\) is semistable if and only if \(-\deg D\leq d_{+}\leq 0\) and stable if the equalities are strict. For the rest of this section, we always write \(D=\sum_{p\in Z}\ell_{p}p\). #### 7.1.2. Semistable settings As the fiber \(\mathcal{M}_{q}\) might contain strictly semistable Higgs bundles, we now explicitly enumerate all of the possible \(S\)-equivalence classes. When \(D=0\), then \(L_{-}=L_{+}^{-1}\) and \(\deg(L_{+})=0\). The corresponding Higgs bundle is polystable and can be explicitly written as \[\Big{(}L\oplus L^{-1},\begin{pmatrix}\omega&0\\ 0&-\omega\end{pmatrix}\Big{)}\,\] where \(L\in\operatorname{Jac}(\Sigma)\). When \(D\neq 0\), suppose \(\deg(L_{+})=-\deg(D)\). Then \(L_{-}=L_{+}^{-1}(-D)\) and \(\deg(L_{-})=0\). Under \(S\)-equivalence, the polystable Higgs bundle is \[\Big{(}L_{+}(D)\oplus L_{+}^{-1}(-D),\begin{pmatrix}\omega&0\\ 0&-\omega\end{pmatrix}\Big{)}\,\] where \(L_{+}\in\operatorname{Pic}^{-\deg(D)}(\Sigma)\). ### Reducible spectral curves In this subsection, we introduce the algebraic data in [16] which describes the singular fiber with a reducible spectral curve. This plays a similar role to the parabolic modules. See [16, Sec. 7.1] for more details. For any divisor \(D\), and line bundle \(L\), define the space \[H^{0}(D,L)=\bigoplus_{p\in\operatorname{supp}\,D}\mathcal{O}(L)_{p}/\sim\,\] where \(s_{1}\sim s_{2}\) if and only if \(\operatorname{ord}_{p}([s_{1}]-[s_{2}])\geq D_{p}\). 
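To make this quotient concrete: the summand at \(p\) only remembers the \(D_{p}\)-jet of a local section, so \(\dim_{\mathbb{C}}H^{0}(D,L)=\deg D\). Below is a minimal sketch of this jet-truncation picture in a chosen local coordinate; the point names and coefficients are invented, and the snippet is only meant to illustrate the equivalence relation, not to model the sheaf-theoretic object.

```python
# Toy model of H^0(D, L): at each p in supp(D), a class is determined by the
# first D_p Taylor coefficients of a local section in a chosen coordinate.
# All names and numbers below are invented for illustration.

def jet(coeffs, order):
    """Truncate a local Taylor expansion to its first `order` coefficients."""
    padded = list(coeffs) + [0] * order
    return tuple(padded[:order])

D = {"p1": 2, "p2": 1}                              # D = 2*p1 + p2, deg D = 3

s1 = {"p1": [1.0, 5.0, -3.0], "p2": [2.0, 7.0]}     # local expansions of one section
s2 = {"p1": [1.0, 5.0, 99.0], "p2": [2.0, -4.0]}    # differs only at order >= D_p

same_class = all(jet(s1[p], D[p]) == jet(s2[p], D[p]) for p in D)
print(same_class)                                    # True: ord_p(s1 - s2) >= D_p at each p
print(sum(D.values()))                               # 3 = deg D = dim_C H^0(D, L)
```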
Let \(L\in\operatorname{Pic}(\Sigma)\), define the following subspaces of \(H^{0}(\Lambda,L^{2}K)\): \[\mathcal{V}(D,L) :=\{s\in H^{0}(\Lambda,L^{2}K)\mid\operatorname{ord}_{p}(s)= \Lambda_{p}-D_{p},\text{ if }D_{p}>0;\ s(p)=0,\text{ if }D_{p}=0\},\] \[\mathcal{W}(D,L) =\{s\in H^{0}(\Lambda,L^{2}K)\mid s|_{\operatorname{supp}\,( \Lambda-D)}=0\}\.\] One checks that \(\mathcal{W}(D,L)=\cup_{D^{\prime}\leq D}\mathcal{V}(D^{\prime},L)\). Moreover, the space \(\mathcal{V}(D,L)\) is a linear subspace of \(H^{0}(\Lambda,L^{2}K)\) with a hyperplane removed. In addition, \(\mathbb{C}^{*}\) acts on \(\mathcal{V}(D,L)\) by multiplication, and \(\dim(\mathcal{V}(D,L)/\mathbb{C}^{*})=\deg(D)-1\). We define the fibrations \[p_{m}:\mathscr{V}(D,m)\longrightarrow\operatorname{Pic}^{m}(\Sigma),\;p_{m}: \mathscr{W}(D,m)\longrightarrow\operatorname{Pic}^{m}(\Sigma)\] such that for \(L\in\operatorname{Pic}^{m}(\Sigma)\), the fibers are \(\mathcal{V}(D,L)\) and \(\mathcal{W}(D,L)\). #### 7.2.1. Algebraic data from the extension The Higgs bundle \((\mathcal{E},\varphi)\) can be understood in terms of an extension of exact sequence. As \(\det(\mathcal{E})=\mathcal{O}\), we have the exact sequence \[0\longrightarrow L_{+}\longrightarrow\mathcal{E}\longrightarrow L_{+}^{-1} \longrightarrow 0\.\] For each \(p\in Z\), with \(U\subset\Sigma\) a neighborhood of \(p\), \((\mathcal{E},\varphi)\) can be written as the following splitting of \(\mathcal{C}^{\infty}\) bundles \[\mathcal{E}=L_{+}\oplus_{\mathcal{C}^{\infty}}L_{+}^{-1}\,\ \bar{\partial}_{ \mathcal{E}}=\begin{pmatrix}\bar{\partial}_{L_{+}}&b\\ 0&\bar{\partial}_{L_{+}^{-1}}\end{pmatrix}\,\ \varphi=\begin{pmatrix} \omega&c\\ 0&-\omega\end{pmatrix}\.\] We would like to consider the restriction of \(\varphi+\omega\cdot\operatorname{id}\) to \(\Lambda\). As \(\omega|_{\Lambda}=0\), \(\varphi+\omega\cdot\operatorname{id}|_{L_{+}}=0\) and the image of \(\varphi+\omega\operatorname{id}\subset(L_{+}\oplus 0)\otimes K\subset \mathcal{E}\otimes K\). Therefore, the restriction of \(\varphi+\omega\cdot\operatorname{id}\) to \(\Lambda\) defines a holomorphic map \(s:L_{+}^{-1}|_{\Lambda}\to L_{+}K|_{\Lambda}\), or equivalently a section \(s\in H^{0}(\Lambda,L_{+}^{2}K)\). Moreover, by [16, Lemma 7.12], \(\operatorname{Div}(s)=\Lambda-D\). Therefore, given any \((\mathcal{E},\varphi)\in\mathcal{M}_{q}\), we obtain an \(L\in\operatorname{Pic}^{m}(\Sigma)\) and an element in \(\mathcal{V}(D,L)\). Moreover, the stability condition implies that \(0\leq D\leq\Lambda\), we have \(-\deg D\leq\deg L\leq 0\). #### 7.2.2. Inverse construction The inverse of the construction above also holds; for further details, see [16, Sec. 7] and [24, Sec. 5]. Given \(L\in\operatorname{Pic}^{m}(\Sigma)\) and \(q\in\mathcal{V}(D,L)\), we define a Higgs bundle via extensions as follows. From \(q,L\), we have a short exact sequence of complexes of sheaves: where, for a section \(s\in\Gamma(L^{2})\), \(c(s):=\sqrt{-1}\omega s\), and \(\operatorname{res}(\Lambda)\) is the restriction map to the divisor \(\Lambda\). 
The long exact sequence in hypercohomology implies that \(\operatorname{res}(\Lambda)\) induces an isomorphism \[\operatorname{res}(\Lambda):\mathbf{H}^{1}(C_{2}^{*})\cong\mathbf{H}^{1}(C_{3 }^{*})=H^{0}(\Lambda,L^{2}K)\.\] Moreover, we have \(\mathbf{H}^{1}(C_{2}^{*})\cong H^{1}(\Sigma,L^{2})\), which parameterizes extensions \[0\longrightarrow L\longrightarrow\mathcal{E}\longrightarrow L^{-1} \longrightarrow 0\.\] From \(s\in H^{0}(\Lambda,L^{2}K)\) and \(\mathcal{E}\) above, we can find a section \(c\in\Gamma(L^{2}K)\), and construct a Higgs bundle \[E=L\oplus_{\mathcal{C}^{\infty}}L^{-1},\;\bar{\partial}_{E}=\begin{pmatrix} \bar{\partial}_{L}&b\\ 0&\bar{\partial}_{L^{-1}}\end{pmatrix},\;\varphi=\begin{pmatrix}\omega&c\\ 0&-\omega\end{pmatrix}, \tag{21}\] where \(\bar{\partial}c=2b\omega\) for \(b\in\Omega^{0,1}(L^{2})\) and \(c\) is an extension of \(q\). For \(0\leq D\leq\Lambda\) and \(-\deg D\leq m\leq 0\), the construction above defines a map \[\wp:\mathscr{V}(D,m)\longrightarrow\mathcal{M}_{q}\,\ s\in\mathcal{V}(D,L) \mapsto(\mathcal{E},\varphi)\,\] where \((\mathcal{E},\varphi)\) is the Higgs bundle constructed in (21). When \(D=0\), \(\mathcal{V}(\Lambda,L)=\{0\}\) and the image of \(\wp:\mathscr{V}(\Lambda,0)\to\mathcal{M}_{q}\) are the polystable Higgs bundles \(\mathcal{E}=L\oplus L^{-1},\ \varphi=\operatorname{diag}(\omega,-\omega)\) such that \(L^{2}\cong\mathcal{O}_{\Sigma}\). **Theorem 7.1** ([16, Thm. 7.7]).: _For \(0\leq D\leq\Lambda\) and \(-\deg(D)\leq m_{1}\leq 0\) and the map \(\wp:\mathscr{V}(D,m_{1})\to\mathcal{M}_{q}\), we have_ * _for_ \(m_{2}=-\deg(D)-m_{1}\)_, we have_ \(\wp(\mathscr{V}(D,m_{1}))=\wp(\mathscr{V}(D,m_{2}))\)_,_ * _for the_ \(\mathbb{C}^{*}\) _action on_ \(\mathscr{V}(D,m_{1})\) _by multiplication, for_ \(\xi\in\mathscr{V}(D,m_{1})\)_,_ \(\wp(\mathbb{C}^{*}\xi)=\wp(\xi)\)_,_ * _when_ \(m_{1}\neq-\frac{1}{2}\deg(D)\)_,_ \(\wp:\mathscr{V}(D,m_{1})/\mathbb{C}^{*}\to\mathcal{M}_{q}\) _is an isomorphism onto its image,_ * _when_ \(m_{1}=-\frac{1}{2}\deg(D)\)_,_ \(\wp:\mathscr{V}(D,m_{1})/\mathbb{C}^{*}\to\mathcal{M}_{q}\) _is a double branched covering, which branched along line bundles_ \(L\in\operatorname{Pic}^{m_{1}}(\Sigma)\) _such that_ \(L^{2}\cong\mathcal{O}(-D)\)_,_ * _when_ \(D=0\)_, then_ \(\wp:\mathscr{V}(\Lambda,0)\to\mathcal{M}_{q}\) _is a double branched covering, branched along_ \(L\in\operatorname{Pic}^{0}(\Sigma)\) _such that_ \(L^{2}\cong\mathcal{O}\)_._ **Example 7.2**.: _When \(g=2\), for \(q=-\omega\otimes\omega\), we can write \(\Lambda=p_{1}+p_{2}\) or \(\Lambda=2p\). In either case, the \(\mathcal{M}_{q}^{\mathrm{st}}=\wp(\mathscr{V}(D,m))\) for \(-\deg(D)<m<0\) and \(0\leq D\leq\Lambda\). Therefore, \(m=-1,\ D=\Lambda\) and \(\wp(\mathscr{V}(\Lambda,-1))=\mathcal{M}_{q}^{\mathrm{st}}\). Moreover, generically, the map \(\wp:(\mathscr{V}(\Lambda,-1))/\mathbb{C}^{*}\to\mathcal{M}_{q}^{\mathrm{st}}\) is two-to-one._ ### The stratification of the singular fiber We now present two stratifications of \(\mathcal{M}_{q}\). Recall that from any Higgs bundle \((\mathcal{E},\varphi)\) we obtain two line bundles \(L_{\pm}\) and a divisor \(D\). There are two different stratifications: one given by the divisor \(D\) and the other by the degree of \(L_{+}\). #### 7.3.1. Divisor stratification We first discuss the stratification defined by the divisor. Indeed, using \(D\), decompose into strata: \(\mathcal{M}_{q}=\bigcup_{0\leq D\leq\Lambda}\mathcal{M}_{D}\). 
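Before describing how these strata fit together, note that the index set \(\{D\mid 0\leq D\leq\Lambda\}\) is finite: writing \(\Lambda=\sum_{p}m_{p}p\), a stratum corresponds to a choice of \(0\leq\ell_{p}\leq m_{p}\) at each zero, so there are \(\prod_{p}(m_{p}+1)\) strata. A small enumeration sketch, with hypothetical zero names and multiplicities:

```python
from itertools import product

# Enumerate {D : 0 <= D <= Lambda} for a toy Lambda = sum_p m_p * p.
# The zero names and multiplicities are invented for illustration.
Lam = {"p1": 1, "p2": 1, "p3": 2}

divisors = [dict(zip(Lam, mults))
            for mults in product(*(range(m + 1) for m in Lam.values()))]

print(len(divisors))                           # prod_p (m_p + 1) = 2 * 2 * 3 = 12
for D in divisors:
    print(D, "deg =", sum(D.values()))         # each D labels one stratum M_D
```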
As the definition of \(L_{\pm}\) depends on the choice of the square root, there is no natural map from \(\mathcal{M}_{D}\) to \(\operatorname{Pic}(\Sigma)\). Consider the following space: \(\mathbb{V}_{D}=\bigcup_{-\deg(D)\leq m\leq 0}\mathscr{V}(D,m)\). This forms a fibration \[\tau:\mathbb{V}_{D}\longrightarrow\bigcup_{-\deg(D)\leq m\leq 0}\operatorname{Pic} ^{m}(\Sigma)\.\] Moreover, for \(L\in\operatorname{Pic}^{m}(\Sigma)\), we have \(\tau^{-1}(L)=\mathscr{V}(D,L)\) and \(\dim(\tau^{-1}(L)/\mathbb{C}^{*})=\deg(D)-1\). By Theorem 7.1, \(\wp|_{\mathbb{V}_{D}}:\mathbb{V}_{D}\to\mathcal{M}_{D}\) is surjective. Moreover, since \[\wp|_{\mathscr{V}(D,m)}=\wp|_{\mathscr{V}(D,-\deg(D)-m)}\] generically, \(\wp|_{\mathbb{V}_{D}}\) is a two-to-one map. In summary, we obtain the following map which characterizes the singular fiber. \[\wp:\mathbb{V}=\bigcup_{0\leq D\leq\Lambda}\mathbb{V}_{D}\to\mathcal{M}_{q}= \bigcup_{0\leq D\leq\Lambda}\mathcal{M}_{D}.\] The top stratum is given by \(D=\Lambda\). #### 7.3.2. Degree stratification We next introduce the stratification defined by degrees; this encodes how different divisor stratifications are pasted together. For \(-(2g-2)\leq m\leq 0\) and \(L\in\operatorname{Pic}^{m}(\Sigma)\), define \(\mathbb{W}(L):=\bigcup_{\deg D\geq-m}\mathcal{V}(D,L)\). This set is connected, based on the definition and [16, Lemma 7.14]. Moreover, if we define \[\mathbb{W}_{m}:=\bigcup_{-m\leq\deg D,\ 0\leq D\leq\Lambda}\mathscr{V}(D,m)\,\ \mathbb{W}:=\bigcup_{-(2g-2)\leq m\leq 0} \mathbb{W}_{m}\,\] then we have \(\wp(\mathbb{W})=\wp(\mathbb{V}).\) We should also note that though \(\mathbb{W}_{m}\cap\mathbb{W}_{n}=\emptyset\) for any \(m\neq n\), \(\mathbb{W}\) is connected. As \(L_{+},L_{-}\) are symmetric, by Theorem 7.1, we have \(\wp(\mathscr{V}(D,-\deg(D)-m))\), which implies that for any integer \(-(2g-2+m)\leq n\leq 0\), \(\wp\mathbb{W}_{m}\cap\wp\mathbb{W}_{n}\neq\emptyset\). We now give an example of the degree stratification when \(g=2\). **Example 7.3**.: _Suppose \(\omega\) has only one zero with order 2. Then \(\Lambda=2p\), and all possible divisors are \(D_{2}=2p,D_{1}=p,D_{0}=0\). The degree stratification is_ \[\mathbb{W}_{-2}=\mathscr{V}(D_{2},-2)\,\ \mathbb{W}_{-1}= \mathscr{V}(D_{2},-1)\cup\mathscr{V}(D_{1},-1)\] \[\mathbb{W}_{0}=\mathscr{V}(D_{0},0)\cup\mathscr{V}(D_{1},0)\cap \mathscr{V}(D_{2},0)\.\] _The image of \(\wp(\mathscr{V}(D_{2},-1))\) is stable, \(\wp(\mathscr{V}(D_{0},0))\) is poly-stable and \(\wp(\mathscr{W}\setminus(\mathscr{V}(D_{2},-1)\cup\mathscr{V}(D_{0},0)))\) is semistable._ _Moreover, we have \(\wp(\mathscr{V}(D_{2},-2))=\wp(\mathscr{V}(D_{2},0))\), \(\wp(\mathscr{V}(D_{1},-1))=\wp(\mathscr{V}(D_{1},0))\) and \(\wp|_{\mathscr{V}(D_{2},-1)}\) is a branched covering. Moreover, we have \(\wp(\mathscr{V}(D_{2},-1))\cap\wp(\mathscr{V}(D_{1},0))\neq 0\) and \(\wp(\mathscr{V}(D_{2},-1))\cap\wp(\mathscr{V}(D_{0},0))=0\)._ ### Algebraic Mochizuki map Based on the study of the local rescaling properties of Higgs bundles, Mochizuki introduced a weight for each \(p\in Z\) in [33, Sec. 3]. To be more specific, let \(c\) be a real number. For each \(p\in Z\), the weight we consider is given by \[\chi_{p}(c)=\min\{\ell_{p},(m_{p}+1)c+\ell_{p}/2\}\.\] By utilizing the global geometry of a Higgs bundle, we can uniquely determine the constant \(c\). We aim to choose the sign of \(\omega\) such that \(d_{+}\leq d_{-}\). 
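Lemma 7.4 below determines \(c\) from the balancing condition \(d_{+}+\sum_{p\in Z}\chi_{p}(c)=0\). Since each \(\chi_{p}\) is piecewise linear and non-decreasing in \(c\), the constant is also easy to locate numerically, for instance by bisection. The sketch below does exactly that; the data \((m_{p},\ell_{p})\) and \(d_{+}\) are invented toy values chosen to satisfy the stability constraints, not values taken from the text.

```python
# Sketch: locate the balancing constant c of Lemma 7.4 by bisection.
# chi_p(c) = min(ell_p, (m_p + 1)*c + ell_p/2); the data below is invented.

def chi(c, m, ell):
    return min(ell, (m + 1) * c + ell / 2)

def balancing_constant(points, d_plus, tol=1e-12):
    """Solve d_plus + sum_p chi_p(c) = 0 for c >= 0 by bisection."""
    f = lambda c: d_plus + sum(chi(c, m, ell) for m, ell in points)
    lo, hi = 0.0, 1.0
    while f(hi) < 0:            # f is non-decreasing and eventually equals d_plus + deg D > 0
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

points = [(1, 1), (1, 1), (2, 2)]   # toy (m_p, ell_p) at the points of Z_0, so deg D = 4
d_plus = -3                          # toy degree with -deg D < d_plus <= -deg(D)/2
c0 = balancing_constant(points, d_plus)
weights = [chi(c0, m, ell) for m, ell in points]
print(c0, weights, d_plus + sum(weights))   # last entry is ~ 0
```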
**Lemma 7.4** ([33, Lemma 4.3]).: _If \((\mathcal{E},\varphi)\) is stable, then there exists a unique constant \(c>0\) such that_ \[d_{+}+\sum_{p\in Z}\chi_{p}(c)=0\,\ d_{-}+\sum_{p\in Z}(\ell_{p}-\chi_{p}(c))=0\.\] Proof.: Since \((\mathcal{E},\varphi)\) is stable, we have \(-\sum\ell_{p}<d_{\pm}<0\). We define the function \[f(c)=d_{+}+\sum_{p}\chi_{p}(c)\, \tag{22}\] which is strictly increasing. Moreover, for \(c\) sufficiently large, \(\chi_{p}(c)=\ell_{p}\), and therefore \(f(c)=d_{+}+\sum_{p}\ell_{p}=-d_{-}>0\). Additionally, \(f(0)=d_{+}+\sum_{p}(\ell_{p}/2)\). Since \(d_{+}\leq d_{-}\) and \(d_{+}+d_{-}+\sum_{p}\ell_{p}=0\), we obtain \(f(0)\leq 0\). The monotonicity of \(f\) implies the existence of \(c_{0}\) such that \(f(c_{0})=0\). From the construction, if \(d_{+}\leq d_{-}\), two weighted bundles \((L_{+},\chi_{p}(c_{0}))\) and \((L_{-},\ell_{p}-\chi_{p}(c_{0}))\) are obtained with weights \(\chi_{p}(c_{0})\) and \(\ell_{p}-\chi_{p}(c_{0})\) at each \(p\in Z\), respectively. On the other hand, if \(d_{+}\geq d_{-}\), by symmetry, weighted bundles \((L_{+},\ell_{p}-\chi_{p}(c_{0}))\) and \((L_{-},\chi_{p}(c_{0}))\) are obtained. When \((\mathcal{E},\varphi)\) is strictly semistable, S-equivalent to \((L,\omega)\oplus(L^{-1},-\omega)\), then we would like to consider the weighted bundles \((L,0)\oplus(L^{-1},0)\) with weight zero. Next, we define the algebraic Mochizuki map. Let \(\mathscr{F}_{\pm}(\Sigma)\) be the space of rank 1 degree zero filtered bundles on \(\Sigma\), and let \(\mathscr{F}_{2}(\Sigma):=\mathscr{F}_{+}(\Sigma)\oplus\mathscr{F}_{-}(\Sigma)\) be the direct sum. Fix a choice of \(\omega\). Then from any Higgs bundle \((\mathcal{E},\varphi)\), we obtain the subbundles \(L_{\pm}\) with degree \(d_{\pm}\) and define the algebraic Mochizuki map \[\Theta^{\text{\rm{Moc}}}:\mathcal{M}_{q} \longrightarrow\mathscr{F}_{2}(\Sigma),\] \[\Theta^{\text{\rm{Moc}}}(\mathcal{E},\varphi) :=\begin{cases}\mathcal{F}_{*}(L_{+},\chi_{p}(c_{0}))\oplus\mathcal{ F}_{*}(L_{-},\ell_{p}-\chi_{p}(c_{0})),\ \text{if}\ d_{+}\leq d_{-}\\ \mathcal{F}_{*}(L_{+},\ell_{p}-\chi_{p}(c_{0}))\oplus\mathcal{F}_{*}(L_{-}, \chi_{p}(c_{0})),\ \text{if}\ d_{-}\leq d_{+}\,\ (\mathcal{E},\varphi)\ \text{stable},\end{cases}\] \[\Theta^{\text{\rm{Moc}}}(\mathcal{E},\varphi) :=\mathcal{F}_{*}(L,0)\oplus\mathcal{F}_{*}(L^{-1},0),\ (\mathcal{E},\varphi)\ \text{semistable}.\] We list some properties of this map. **Proposition 7.5**.: _For \(\Theta^{\rm{Moc}}\), we have:_ 1. _for each_ \(\mathscr{V}(D,m)\) _with_ \(0\leq D\leq\Lambda\)_,_ \(-\deg(D)\leq m\leq 0\)_,_ \(\Theta^{\rm{Moc}}|_{\wp(\mathscr{V}(D,m))}\) _is continuous,_ 2. _for i=1,2 and_ \(s_{i}\in\mathbb{V}_{D}\) _with_ \((\mathcal{E}_{i},\varphi_{i}):=\wp(s_{i})\)_, suppose_ \(\tau(s_{1})=\tau(s_{2})\)_, then_ \(\Theta^{\rm{Moc}}(\mathcal{E}_{1},\varphi_{1})=\Theta^{\rm{Moc}}(\mathcal{E}_ {2},\varphi_{2})\)_. In particular,_ \(\Theta^{\rm{Moc}}\) _is not injective._ Proof.: The proof follows directly from the definition. ### Exotic phenomena A Higgs bundle \((\mathcal{E},\varphi)\in\mathcal{M}_{q}\) is called "exotic" if the constant \(c\) in Lemma 7.4 satisfies \(c\neq 0\). This new behavior only appears in the Hitchin fiber with reducible spectral curve. In this subsection, we aim to understand the exotic phenomenon of a Higgs bundle. We first provide another expression for the weight in Lemma 7.4. 
**Proposition 7.6**.: _Suppose \(-\deg D<d_{+}\leq-\frac{1}{2}\deg D\) and let \(Z_{0}:=\operatorname{supp}\,(D)\), then the constant \(c\) in Lemma 7.4 is given by_ \[c=\frac{d_{+}+\frac{1}{2}\deg D}{\deg(\Lambda|_{Z_{0}})+|Z_{0}|}\,\] _where \(\Lambda|_{Z_{0}}\) means the restriction of the divisor to \(Z_{0}\), and \(|Z_{0}|\) means the number of points in \(Z_{0}\) without multiplicity._ Proof.: By the definition of \(\chi_{p}(c)\), if \(p\in Z_{1}\), \(\chi_{p}(c)=0\) for any \(c\geq 0\). Therefore, the choice of \(c\) is determined by the equation \(d_{+}+\sum_{p\in Z_{0}}\min\{\ell_{p},(m_{p}+1)c+\frac{\ell_{p}}{2}\}=0\). Define the function \(F(c):=d_{+}+\sum_{p\in Z_{0}}((m_{p}+1)c+\ell_{p}/2)\), and let \(f(c)\) be the function defined in (22). Then \(F(c)\geq f(c)\), \(F(0)=0\) and for \(c\) sufficiently large, \(F(c)=f(c)\). We compute \[\sum_{p\in Z_{0}}(m_{p}+1)c=(\deg(\Lambda|_{Z_{0}})+|Z_{0}|))c\,\ \sum_{p\in Z_{0}}(\ell_{p}/2)=\frac{1}{2}\deg D\.\] Then for \[c_{0}=\frac{d_{+}+\frac{1}{2}\deg D}{\deg(\Lambda|_{Z_{0}})+|Z_{0}|}\] we have \(F(c_{0})=0\). Therefore, \(f(c_{0})=0\), which determines the choice of the constant \(c\) in Lemma 7.4. **Corollary 7.7**.: _A Higgs bundle \((\mathcal{E},\varphi)\) is exotic if and only if its corresponding degrees \(d_{\pm}\) satisfy \(d_{+}\neq d_{-}\). Additionally, there are only a finite number of possible choices for \(c\) over \(\mathcal{M}_{q}\)._ Proof.: The result follows directly from the formulas in Proposition 7.6. We will compute some examples of possible weights in some special cases. First consider the generic case. **Example 7.8**.: _Suppose \(\omega\) has only simple zeros, therefore \(Z=\{p_{1},\ldots,p_{2g-2}\}\) and \(\Lambda=p_{1}+\cdots+p_{2g-2}\). Given a stable Higgs bundle \((\mathcal{E},\varphi)\) with corresponding \(L_{+},L_{-},D,d_{\pm}\), recall that from the stability condition, these degrees satisfy the followings:_ \[d_{+}+d_{-}+\deg D=0\,\ d_{+}\leq d_{-}\,\ d_{+}<0,\ d_{-}<0\,\ -\deg D<d_{+}\leq-\frac{1}{2}\deg D\.\] _If we write \(Z_{0}:=\operatorname{supp}\,D\), then \(\chi_{p}=0\) for \(p\in Z\setminus Z_{0}\) and \(\chi_{p}=2c+1/2\) for \(p\in Z_{0}\). Moreover, we have \(d_{+}+\deg(D)(2c+1/2)=0\). Therefore, \(c=-\frac{d_{+}}{2\deg(D)}+\frac{1}{4}\), and two weighted bundles we obtained are_ \[(L_{+},(-\frac{d_{+}}{\deg D}|_{Z_{0}},0|_{Z\setminus Z_{0}}))\,\ (L_{-},(1+\frac{d_{+}}{\deg D}|_{Z_{0}},1|_{Z \setminus Z_{0}}))\.\] Next, we consider the most nongeneric case. **Example 7.9**.: _Suppose \(\omega\) only contains one zero. Write \(\operatorname{Div}(\omega)=(2g-2)p\). Then the possible divisors are \(D=\ell p\), for \(0\leq\ell\leq 2g-2\). Let \(L_{+}\) be a line bundle with \(d_{+}:=\deg(L_{+})\) and \(-\ell<d_{+}\leq\ell/2\). Then the choice of \(c\) is determined by the equation_ \[d_{+}+\min\{\ell,(2g-1)c+\ell/2\}=0\.\] _As \(d_{+}\neq-\ell\), we must have \(c=-\frac{\ell}{2(2g-1)}\) and the corresponding weighted bundles are_ \[(L_{+},-d_{+}),\;(L_{+}^{-1}\otimes\mathcal{O}(-\ell p),\ell-d_{+})\.\] Finally, we explicitly compute the limiting configurations when \(g=2\) for the strictly semistable locus of the stratification in Example 7.3. **Example 7.10**.: _When \(g=2\), we consider \(\Lambda=2p\) and \(D_{i}=ip\) for \(i=0,1,2\). Let \(p_{m}:\mathscr{V}(D,m)\to\operatorname{Pic}^{m}(\Sigma)\) be the projection. 
Over \(\wp(\mathscr{V}(D_{i},0))\), for \(L\in\operatorname{Pic}^{0}(\Sigma)\), the S-equivalence class for \(\wp(p_{0}^{-1}(L))\) is \((L\oplus L^{-1},\begin{pmatrix}\omega&0\\ 0&-\omega\end{pmatrix})\) and the corresponding weighted bundles are \((L,0)\oplus(L^{-1},0)\). Over \(\wp(\mathscr{V}(D_{2},-1))\), for \(L\in\operatorname{Pic}^{-1}(\Sigma)\), let \((\mathcal{E},\varphi)=\wp(L,s\in\mathcal{V}(D_{2},L))\), then_ \[\Theta^{\operatorname{Moc}}(\mathcal{E},\varphi)=\mathcal{F}_{*}(L,1)\oplus \mathcal{F}_{*}(L^{-1}(-D_{2}),1)=\mathcal{F}_{*}(L(p),0)\oplus\mathcal{F}_{*} (L^{-1}(p-D_{2}),0)\.\] ### Discontinuous behavior In this subsection, we study the discontinuous behavior of \(\Theta^{\operatorname{Moc}}\). Consider a sequence of algebraic data \((L_{i},q_{i})\in\mathbb{W}_{m}\), where \(L_{i}\in\operatorname{Pic}^{m}\) and \(q_{i}\in\mathscr{V}(D,L_{i})\). We assume that \(\lim_{i\to\infty}L_{i}=L_{\infty}\) in \(\operatorname{Pic}^{m}\) and \(\lim_{i\to\infty}q_{i}=q_{\infty}\in\mathscr{V}(D_{\infty},L)\), for \(D_{\infty}\neq D\). As the space \(\bigcup_{\deg D^{\prime}\geq-m}\mathscr{V}(D^{\prime},m)\) is connected, we can always find such a sequence. Let \(L_{+}^{i}:=L_{i}\) and \(L_{-}^{i}:=L_{i}^{-1}\otimes\mathcal{O}(-D)\). By Lemma 7.4, the weight function, which we denote by \(\chi_{\pm}\), is independent of \(i\). In addition, we have \[\lim_{i\to\infty}\Theta^{\operatorname{Moc}}\circ\wp(L_{i},q_{i})=\mathcal{F} _{*}(L_{\infty},\chi_{+})\oplus\mathcal{F}_{*}(L_{\infty}^{-1}(-D),\chi_{-})\.\] For \((L_{\infty},q_{\infty}\in\mathscr{V}(D_{\infty},L))\), let \(\chi_{\pm}^{\infty}\) be the corresponding weights. These depend on \(D_{\infty}\) and \(m\). Then \[\Theta^{\operatorname{Moc}}\circ\wp(L_{\infty},q_{\infty})=\mathcal{F}_{*}(L _{\infty},\chi_{+}^{\infty})\oplus\mathcal{F}_{*}(L_{\infty}^{-1}\otimes \mathcal{O}(-D_{\infty}),\chi_{-}^{\infty})\.\] Therefore, we obtain \[\lim_{i\to\infty}\Theta^{\operatorname{Moc}}\circ\wp(L_{i},q_{i})\] \[= \Theta^{\operatorname{Moc}}\circ\wp(L_{\infty},q_{\infty})\otimes \left(\mathcal{F}_{*}(\mathcal{O},\chi_{+}-\chi_{+}^{\infty})\oplus\mathcal{F} _{*}(\mathcal{O}(D_{\infty}-D),\chi_{-}-\chi_{-}^{\infty})\right)\,. \tag{23}\] **Proposition 7.11**.: _When \(g\geq 3\), there exists a sequence \((\mathcal{E}_{i},\varphi_{i})\in\mathcal{M}_{q}\) of stable Higgs bundles with stable limit \((\mathcal{E}_{\infty},\varphi_{\infty})=\lim_{i\to\infty}(\mathcal{E}_{i}, \varphi_{i})\) such that_ \[\lim_{i\to\infty}\Theta^{\operatorname{Moc}}(\mathcal{E}_{i},\varphi_{i})\neq \Theta^{\operatorname{Moc}}(\mathcal{E}_{\infty},\varphi_{\infty})\.\] Proof.: Choose \(D=\Lambda\) and \(d_{+}=-(g-1)\) with \(L_{i}=L\in\operatorname{Pic}^{d_{+}}(\Sigma)\), and study the degenerate behavior for a family \(q_{i}\in\mathcal{V}(\Lambda,L)\) which converges to \(q_{\infty}\in\mathscr{V}(D_{\infty},L)\). Here, \(D_{\infty}\) satisfies \(D_{\infty}\leq D\) and \(\deg(D_{\infty})=\deg(D)-1\). As \(q_{i}\) lies in the top stratum, we can always find such a family. Take \((\mathcal{E}_{i},\varphi_{i})=\wp(L_{i},q_{i})\) and \((\mathcal{E}_{\infty},\varphi_{\infty})=\wp(L,q_{\infty})\). When \(g\geq 3\), we have \(-\deg(D_{\infty})<d_{+}\leq-\frac{1}{2}\deg(D_{\infty})\), which implies \((\mathcal{E}_{\infty},\varphi_{\infty})\) is a stable Higgs bundle. Write \(D=\sum_{p}\ell_{p}\). As \((\mathcal{E}_{i},\varphi_{i})\) is nonexotic, the weights will be \(\chi_{+}(p)=\chi_{-}(p)=\ell_{p}/2\). 
However, as \(\deg(D_{\infty})\neq 2d_{+}\), \((\mathcal{E}_{\infty},\varphi_{\infty})\) is exotic. By Proposition 7.6, if we write \(\chi_{\pm}^{\infty}(p)\) for the weight functions with corresponding constant \(c\), then \(c>0\). Therefore, for \(p\neq p_{0}\) we have \(\chi_{+}^{\infty}(p)=(m_{p}+1)c+m_{p}/2>m_{p}/2=\chi_{+}(p)\). By (23), \(\lim_{i\to\infty}\Theta^{\mathrm{Moc}}(\mathcal{E}_{i},\varphi_{i})\neq\Theta^{ \mathrm{Moc}}(\mathcal{E}_{\infty},\varphi_{\infty})\). When \(g=2\), the stratification is simpler, and we have the following. **Proposition 7.12**.: _When \(g=2\), the following holds:_ 1. _Suppose_ \(\Lambda=p_{1}+p_{2}\) _for_ \(p_{1}\neq p_{2}\)_, then_ \(\Theta^{\mathrm{Moc}}|_{\mathcal{M}^{\mathrm{st}}_{q}}\) _is continuous. Moreover, there exists a sequence of stable Higgs bundles_ \((\mathcal{E}_{i},\varphi_{i})\in\mathcal{M}_{q}\) _where the limit_ \((\mathcal{E}_{\infty},\varphi_{\infty})=\lim_{i\to\infty}(\mathcal{E}_{i}, \varphi_{i})\) _is semistable, and_ \(\gamma(0)\) _is also semistable and furthermore_ \[\lim_{i\to\infty}\Theta^{\mathrm{Moc}}(\mathcal{E}_{i},\varphi_{i})\neq\Theta ^{\mathrm{Moc}}(\mathcal{E}_{\infty},\varphi_{\infty})\.\] 2. _Suppose_ \(\Lambda=2p\)_. Then_ \(\Theta^{\mathrm{Moc}}|_{\mathcal{M}^{\mathrm{st}}_{q}}\) _is continuous._ Proof.: For (i), suppose \(\Lambda=p_{1}+p_{2}\), then by Example 7.2, we have \(\mathcal{M}^{\mathrm{st}}_{q}=\wp(\mathscr{V}(\Lambda,-1))\). By Proposition 7.5, \(\Theta^{\mathrm{Moc}}_{q}|_{\mathcal{M}^{\mathrm{st}}_{q}}\) is continuous. However, for semistable elements other strata must be taken into consideration. Take \(L\in\mathrm{Pic}^{-1}(\Sigma)\) and \(q_{i}\in\mathcal{V}(\Lambda,L)\) such that \(q_{i}\) convergence to \(q_{\infty}\in\mathcal{V}(p_{1},L)\). We define \((\mathcal{E}_{i},\varphi_{i})=\wp(L,q_{i})\) and \((\mathcal{E}_{\infty},\varphi_{\infty})=\wp(L,q_{\infty})\). For each \(i\), \[\Theta^{\mathrm{Moc}}(\mathcal{E}_{i},\varphi_{i})=\mathcal{F}_{*}(L,(\tfrac{ 1}{2},\tfrac{1}{2}))\oplus\mathcal{F}_{*}(L^{-1}(-\Lambda),(\tfrac{1}{2}, \tfrac{1}{2})).\] Moreover, we have \[\Theta^{\mathrm{Moc}}(\mathcal{E}_{\infty},\varphi_{\infty})=\mathcal{F}_{*}(L (D),(0,0))\oplus\mathcal{F}_{*}(L^{-1}(-D),(0,0))\neq\lim_{i\to\infty}\Theta^ {\mathrm{Moc}}(\mathcal{E}_{i},\varphi_{i}).\] For (ii), by Example 7.3, \(\wp(\mathscr{V}(D_{2},-1))=\mathcal{M}^{\mathrm{st}}_{q}\) and by Proposition 7.5, \(\Theta^{\mathrm{Moc}}_{q}|_{\mathcal{M}^{\mathrm{st}}_{q}}\) is continuous. We now consider the behavior of the filtered bundle when crossing the divisors. ### The analytic Mochizuki map and limiting configurations In this subsection, we construct the analytic Mochizuki map for the Hitchin fiber with a reducible spectral curve. We also introduce the convergence theorem of Mochizuki as stated in [33] and examine the discontinuous behavior of the analytic Mochizuki map. For \((\mathcal{E},\varphi)\in\mathcal{M}_{q}\), we can express the abelianization as \((\mathcal{E}_{0},\varphi_{0})=(L_{+}\oplus L_{-},\begin{pmatrix}\omega&0\\ 0&-\omega\end{pmatrix})\), thus \(\Theta^{\mathrm{Moc}}(\mathcal{E},\varphi)=\mathcal{F}_{*}(L_{+},\chi_{+}) \oplus\mathcal{L}_{-}(L_{-},\chi_{-})\in\mathcal{F}_{2}(\Sigma)\). Via the nonabelian Hodge correspondence for filtered bundles, we obtain two Hermitian metrics \(h^{\mathrm{Lim}}_{\pm}\) with corresponding Chern connections \(A_{h^{\mathrm{Lim}}_{\pm}}\). These metrics satisfy the following proposition. 
**Proposition 7.13** ([32, Lemma 4.4]).: _The metrics \(h^{\mathrm{Lim}}_{\pm}\) over \(L_{\pm}\) satisfy_ 1. \(F_{A_{h^{\mathrm{Lim}}_{\pm}}}=0\) _and_ \(h^{\mathrm{Lim}}_{+}h^{\mathrm{Lim}}_{-}=1\)_,_ 2. _for every_ \(p\in\Sigma\)_, there exists an open neighborhood_ \((U,z)\) _with_ \(P=\{z=0\}\) _such that_ \(|z|^{-2\chi_{p}(c_{0})}h^{\mathrm{Lim}}_{+}\) _and_ \(|z|^{2\chi_{p}(c_{0})+2l_{P}}h^{\mathrm{Lim}}_{-}\) _extends smoothly to_ \(L_{\pm}|_{U}\)_._ Now, \(H^{\mathrm{Lim}}:=h^{\mathrm{Lim}}_{+}\oplus h^{\mathrm{Lim}}_{-}\) is a metric on \(\mathcal{E}_{0}\) which induces a metric on \((\mathcal{E},\varphi)|_{\Sigma\setminus Z}\) because \((\mathcal{E},\varphi)|_{\Sigma\setminus Z}\cong(\mathcal{E}_{0},\varphi_{0} )|_{\Sigma\setminus Z}\). Let \((A^{\mathrm{Lim}},\phi^{\mathrm{Lim}})\) be the Chern connection defined by \((\mathcal{E},\varphi,H^{\mathrm{Lim}})\) over \(\Sigma\setminus Z\). Then \((A^{\mathrm{Lim}},\phi^{\mathrm{Lim}})\) is a limiting configuration that satisfies the decoupled Hitchin equations (7). The analytic Mochizuki map \(\Upsilon^{\mathrm{Moc}}\) is defined as: \[\Upsilon^{\mathrm{Moc}}:\mathcal{M}_{q}\longrightarrow\mathcal{M}^{\mathrm{ Lim}}_{\mathrm{Hit}},\quad\Upsilon^{\mathrm{Moc}}(\mathcal{E},\varphi)=(A^{ \mathrm{Lim}},\phi^{\mathrm{Lim}}). \tag{24}\] Note that \(H^{\mathrm{Lim}}\) is not unique: for any constant \(c\), the metric \(ch^{\mathrm{Lim}}_{+}\oplus c^{-1}h^{\mathrm{Lim}}_{-}\) defines the same Chern connection as \(H^{\mathrm{Lim}}\). In any case, the map \(\Upsilon^{\mathrm{Moc}}\) is well-defined. Suppose \((\mathcal{E},\varphi)\) is an S-equivalence class of a semistable Higgs bundle. Let \(H_{t}\) be the harmonic metric for \((\mathcal{E},t\varphi)\). For each constant \(C>0\), define \(\mu_{C}\) to be the automorphism of \(L_{+}\oplus L_{-}\) given by \(\mu_{C}=C{\rm id}_{L_{+}}\oplus C^{-1}{\rm id}_{L_{-}}\). As \(\mathcal{E}\cong L_{+}\oplus L_{-}\) on \(\Sigma\setminus Z\), \(\mu_{C}^{*}H_{t}\) can be regarded as a metric on \(\mathcal{E}|_{\Sigma\setminus Z}\). Take any point \(x\in\Sigma\setminus Z\) and a frame \(e_{x}\) of \(L_{+}|x\), and define: \[C(x,t):=\left(\frac{h_{L_{+}}^{\rm Lim}(e_{x},e_{x})}{H_{t}(e_{x},e_{x})} \right)^{1/2}\,.\] Writing \(\nabla_{t}+t\phi_{t}\) as the corresponding flat connection of \((\mathcal{E},t\varphi)\) under the nonabelian Hodge correspondence, then **Theorem 7.14** ([33]).: _On any compact subset \(K\) of \(\Sigma\setminus Z\), \(\mu_{C(x,t)}^{*}H_{t}\) converges smoothly to \(H^{\rm Lim}\). Additionally, there exist \(t\)-independent constants \(C_{k}\) and \(C_{k}^{\prime}\) such that_ \[|(\nabla_{t},\phi_{t})-\Upsilon^{\rm{Moc}}(\mathcal{E},\varphi)|_{\mathcal{C} ^{k}}\leq C_{k}e^{-C_{k}^{\prime}d}.\] Propositions 7.11 and 7.12 now give **Theorem 7.15**.: _When \(g\geq 3\), \(\Upsilon^{\rm{Moc}}|_{\mathcal{M}_{q}^{\rm st}}\) is discontinuous, and when \(g=2\), \(\Upsilon^{\rm{Moc}}|_{\mathcal{M}_{q}^{\rm st}}\) is continuous._ ## 8. The Compactified Kobayashi-Hitchin map In this section, we define a compactified version of the Kobayashi-Hitchin map and prove the main theorem of our paper. The Kobayashi-Hitchin map \(\Xi\) is a homeomorphism between the Dolbeault moduli space \(\mathcal{M}_{\rm Dol}\) and the Hitchin moduli space \(\mathcal{M}_{\rm Hit}\). 
We wish to extend this to a map \(\overline{\Xi}\) from the compactified Dolbeault moduli space \(\overline{\mathcal{M}}_{\rm Dol}\) to the compactification \(\overline{\mathcal{M}}_{\rm Hit}\subset\mathcal{M}_{\rm Hit}\cup\mathcal{M}_ {\rm Hit}^{\rm Lim}\) of the Hitchin moduli space, and to study the properties of this extended map. ### The compactified Kobayashi-Hitchin map We first summarize the results obtained above. By the construction in Section 4, there is an identification \(\partial\overline{\mathcal{M}}_{\rm Dol}\cong(\mathcal{M}_{\rm Dol}\setminus \mathcal{H}^{-1}(0))/\mathbb{C}^{*}\). Moreover, through (19) and (24), we have constructed the analytic Mochizuki map \(\Upsilon^{\rm{Moc}}:\mathcal{M}_{\rm Dol}\setminus\mathcal{H}^{-1}(0)\to \mathcal{M}_{\rm Hit}^{\rm Lim}\). Writing \((A^{\rm Lim},\phi^{\rm Lim}=\varphi+\varphi^{\dagger_{\rm Lim}})=\Upsilon^{ \rm{Moc}}(\mathcal{E},\varphi)\), then for \(w\in\mathbb{C}^{*}\), we have \[\Upsilon^{\rm{Moc}}(\mathcal{E},w\varphi)=(A^{\rm Lim},\phi^{\rm Lim}=w\varphi +\bar{w}\varphi^{\dagger_{\rm Lim}})\.\] Hence \(\Upsilon^{\rm{Moc}}\) descends to a map \(\partial\overline{\Xi}\) between \(\mathbb{C}^{*}\) orbits. \[\partial\overline{\Xi}:\partial\overline{\mathcal{M}}_{\rm Dol}=(\mathcal{M}_ {\rm Dol}\setminus\mathcal{H}^{-1}(0))/\mathbb{C}^{*}\longrightarrow\mathcal{M }_{\rm Hit}^{\rm Lim}/\mathbb{C}^{*}\,\] Together with the initial Kobayashi-Hitchin map \(\Xi:\mathcal{M}_{\rm Dol}\to\mathcal{M}_{\rm Hit}\), we obtain \[\overline{\Xi}:\overline{\mathcal{M}}_{\rm Dol}=\mathcal{M}_{\rm Dol}\cup \partial\overline{\mathcal{M}}_{\rm Dol}\longrightarrow\mathcal{M}_{\rm Hit} \cup\mathcal{M}_{\rm Hit}^{\rm Lim}/\mathbb{C}^{*}.\] Theorems 6.15 and 7.14 show that for a Higgs bundle \((\mathcal{E},\varphi)\in\mathcal{M}_{\rm Dol}\setminus\mathcal{H}^{-1}(0)\) and real \(t\), \(\lim_{t\to\infty}\Xi(\mathcal{E},t\varphi)=\partial\overline{\Xi}[(\mathcal{E}, \varphi)/\mathbb{C}^{*}]\). Thus the image of \(\overline{\Xi}\) lies in \(\overline{\mathcal{M}}_{\rm Hit}\), the closure of \(\mathcal{M}_{\rm Hit}\) in \(\mathcal{M}_{\rm Hit}\cup\mathcal{M}_{\rm Dol}\setminus\mathcal{H}^{-1}(0)\). There are natural extensions \(\overline{\mathcal{H}}_{\rm Dol}:\overline{\mathcal{M}}_{\rm Dol}\to\overline{ \mathcal{B}}\) and \(\overline{\mathcal{H}}_{\rm Hit}:\overline{\mathcal{M}}_{\rm Hit}\to\overline{ \mathcal{B}}\) such that \(\overline{\mathcal{H}}_{\rm Hit}\circ\overline{\Xi}=\overline{\mathcal{H}}_{ \rm Dol}\). In summary, there are commutative diagrams We now turn to the analysis of some properties of the compactified Kobayashi-Hitchin map. Define \[\overline{\mathcal{B}}^{\rm{eg}}=\{[(q,w)]\in\overline{\mathcal{B}}\mid q\neq 0 \text{ has simple zeros}\}\.\] This represents the compactified space of quadratic differentials with simple zeros. Let \(\overline{\mathcal{B}}^{\rm sing}=\overline{\mathcal{B}}\setminus\overline{ \mathcal{B}}^{\rm reg}\). Additionally, define the open sets \(\overline{\mathcal{M}}_{\rm Dol}^{\rm reg}=\overline{\mathcal{H}}_{\rm Dol}^{-1 }(\overline{\mathcal{B}}^{\rm reg})\) and \(\overline{\mathcal{M}}_{\rm Hit}^{\rm reg}=\overline{\mathcal{H}}_{\rm Hit}^{-1 }(\overline{\mathcal{B}}^{\rm reg})\) as the collections of elements with regular spectral curves. 
Now set \(\overline{\mathcal{M}}_{\rm Dol}^{\rm sing}=\overline{\mathcal{H}}_{\rm Dol}^{ -1}(\overline{\mathcal{B}}^{\rm sing})\) and \(\overline{\mathcal{M}}_{\rm Hit}^{\rm sing}=\overline{\mathcal{H}}_{\rm Hit}^{ -1}(\overline{\mathcal{B}}^{\rm sing})\) to be the sets of singular fibers. We can write \(\overline{\Xi}=\overline{\Xi}^{\rm reg}\cup\overline{\Xi}^{\rm sing}\), where \[\overline{\Xi}^{\rm reg}:\overline{\mathcal{M}}_{\rm Dol}^{\rm reg}\longrightarrow \overline{\mathcal{M}}_{\rm Hit}^{\rm reg},\quad\overline{\Xi}^{\rm sing}: \overline{\mathcal{M}}_{\rm Dol}^{\rm sing}\longrightarrow\overline{\mathcal{M }}_{\rm Hit}^{\rm sing}\.\] **Proposition 8.1**.: _The map \(\overline{\Xi}^{\rm reg}:\overline{\mathcal{M}}_{\rm Dol}^{\rm reg}\to \overline{\mathcal{M}}_{\rm Hit}^{\rm reg}\) is bijective, whereas \(\overline{\Xi}^{\rm sing}:\overline{\mathcal{M}}_{\rm Dol}^{\rm sing}\to \overline{\mathcal{M}}_{\rm Hit}^{\rm sing}\) is neither surjective nor injective._ Proof.: The bijectivity of \(\overline{\Xi}^{\rm reg}\) is established by Theorem 4.9. The non-surjectivity and non-injectivity of \(\overline{\Xi}^{\rm sing}\) follow from Theorem 6.16 and Theorem 7.15. ### Continuity properties of the compactified Kobayashi-Hitchin map In this subsection, we prove that the continuity of the compactified Kobayashi-Hitchin map is fully determined by the continuity of the analytic Mochizuki map. Let \((\mathcal{E}_{i},t_{i}\varphi_{i})\) be a sequence of Higgs bundles with real numbers \(t_{i}\to+\infty\), \(\det(\varphi_{i})=q_{i}\), \(Z_{i}=q_{i}^{-1}(0)\), and \(\|q_{i}\|_{L^{2}}=1\). We denote \(\xi_{i}=[(\mathcal{E}_{i},t_{i}\varphi_{i})]\in\mathcal{M}_{\rm Dol}\). By the compactness of \(\overline{\mathcal{M}}_{\rm Dol}\), after passing to a subsequence, we may assume there is \(\xi_{\infty}\in\partial\overline{\mathcal{M}}_{\rm Dol}\) such that \(\lim_{i\to\infty}\xi_{i}=\xi_{\infty}\). Since \(\partial\overline{\mathcal{M}}_{\rm Dol}\cong(\mathcal{M}_{\rm Dol}\setminus \mathcal{H}^{-1}(0))/\mathbb{C}^{*}\), we can select a representative \((\mathcal{E}_{\infty},\varphi_{\infty})\) of \(\xi_{\infty}\). By Lemma 4.4, we have that \((\mathcal{E}_{i},\varphi_{i})\) converges to \((\mathcal{E}_{\infty},\varphi_{\infty})\) in \(\mathcal{M}_{\rm Dol}\), and \(q_{i}\) converges to \(q_{\infty}\). We write \(Z_{\infty}=q_{\infty}^{-1}(0)\). By Proposition 4.6, \(\lim_{i\to\infty}\overline{\Xi}(\mathcal{E}_{i},t_{i}\varphi_{i})\) exists. The following result establishes the continuity of this map with respect to the analytic Mochizuki map \(\Upsilon^{\rm{Moc}}\). **Proposition 8.2**.: _Under the previous convention, \(\lim_{i\to\infty}\overline{\Xi}(\xi_{i})=\overline{\Xi}(\xi_{\infty})\) if and only if \(\lim_{i\to\infty}\Upsilon^{\rm{Moc}}(\mathcal{E}_{i},\varphi_{i})=\Upsilon^{ \rm{Moc}}(\mathcal{E}_{\infty},\varphi_{\infty})\). In other words, \(\overline{\Xi}\) is continuous at \(\xi_{\infty}\) if and only if \(\Upsilon^{\rm{Moc}}\) is continuous at \(\xi_{\infty}\)._ Proof.: Set \[\overline{\Xi}(\mathcal{E}_{i},t_{i}\varphi_{i}) =\Xi(\mathcal{E}_{i},t_{i}\varphi_{i})=A_{i}+t_{i}\phi_{i}\,\] \[\Upsilon^{\rm{Moc}}(\mathcal{E}_{i},\varphi_{i}) =(A_{i}^{\rm{Lim}},\phi_{i}^{\rm{Lim}})\,\] \[\Upsilon^{\rm{Moc}}(\mathcal{E}_{\infty},\varphi_{\infty}) =(A_{\infty}^{\rm{Lim}},\phi_{\infty}^{\rm{Lim}})\.\] By Proposition 4.6, there exists a limiting configuration \((A_{\infty},\phi_{\infty}):=\lim_{i\to\infty}(A_{i},\phi_{i})\) over \(\Sigma\setminus Z_{\infty}\). 
Let \(K\) be any compact set in \(\Sigma\setminus Z_{\infty}\). Note that by the convergence assumption, there exists \(i_{0}\) such that \(Z_{i}\cap K=\emptyset\) for all \(i\geq i_{0}\). Moreover, from Theorems 6.15 and 7.14, for \(d_{i}=\min_{K}|q_{i}|\), there exist \(t\)-independent constants \(C,C^{\prime}>0\) such that (up to gauge transformations) \[|(A_{i},\phi_{i})-(A_{i}^{\rm{Lim}},\phi_{i}^{\rm{Lim}})|_{\mathcal{C}^{k}(K)} \leq Ce^{-C^{\prime}t_{i}d_{i}}.\] The convergence is uniform and exponential for fixed \(K\). Therefore, over \(K\), the size of \(|(A_{\infty},\phi_{\infty})-(A_{\infty}^{\rm{Lim}},\phi_{\infty}^{\rm{Lim}})|_{ \mathcal{C}^{k}(K)}\) is the same as the size of \(|(A_{i}^{\rm{Lim}},\phi_{i}^{\rm{Lim}})-(A_{\infty}^{\rm{Lim}},\phi_{\infty}^{ \rm{Lim}})|_{\mathcal{C}^{k}(K)}\). This proves the Proposition. #### 8.2.1. Continuity along rays We now investigate the behavior of the compactified Kobayashi-Hitchin map restricted to a singular fiber. Specifically, fix \(0\neq q\in H^{0}(K^{2})\), and denote by \([q]\) the \(\mathbb{C}^{*}\)-orbit of \(q\times 1\) in the compactified Hitchin base \(\overline{\mathcal{B}}\). Define \(\overline{\mathcal{M}}_{\rm{Dol},q}:=\overline{\mathcal{H}}_{\rm Dol}^{-1}([q])\), \(\overline{\mathcal{M}}_{\rm Hit,q}:=\overline{\mathcal{H}}_{\rm Hit}^{-1}([q])\). Then the restriction of \(\overline{\Xi}\) on \(\overline{\mathcal{M}}_{\rm{Dol},q}\) defines a map \(\overline{\Xi}_{q}:\overline{\mathcal{M}}_{\rm{Dol},q}\to\overline{\mathcal{M}}_{ \rm Hit,q}\). **Theorem 8.3**.: _Let \(q\) be an irreducible quadratic differential._ 1. _If_ \(q\) _contains only zeroes of odd order, then_ \(\overline{\Xi}_{q}\) _is continuous._ 2. _If_ \(q\) _contains a zero of even order, let_ \(\mathcal{M}_{q}=\cup_{D}\mathcal{M}_{q,D}\) _be the stratification defined earlier. Then for each_ \(D\neq 0\)_, there exists an integer_ \(n_{D}>0\) _such that for any Higgs bundle_ \((\mathcal{F},\psi)\in\mathcal{M}_{q,D}\)_, there exist_ \(n_{D}\) _sequences of Higgs bundles_ \((\mathcal{E}_{i}^{k},\varphi_{i}^{k})\) _with_ \(k=1,\ldots,n_{D}\) _such that_ 1. \(\lim_{i\to\infty}(\mathcal{E}_{i}^{k},\varphi_{i}^{k})=(\mathcal{F},\psi)\) _for_ \(k=1,\ldots,n_{D}\)_,_ 2. _if_ \(\lim_{i\to\infty}t_{i}=\infty\)_, and we write_ \[\eta^{k}:=\lim_{i\to\infty}\overline{\Xi}_{q}(\mathcal{E}_{i}^{k},\varphi_{i} ^{k}),\;\xi:=\lim_{i\to\infty}\overline{\Xi}_{q}(\mathcal{F},t_{i}\psi),\] _then_ \(\xi,\eta^{1},\ldots,\eta^{n_{D}}\) _are_ \(n_{D}+1\) _different limiting configurations._ Proof.: This follows from Theorem 6.16 and Proposition 8.2. **Theorem 8.4**.: _Suppose \(q\) is reducible, and let \(\overline{\mathcal{M}}_{\mathrm{Dol},q}^{\mathrm{st}}\) be the stable locus of \(\overline{\mathcal{M}}_{\mathrm{Dol},q}\). Then the restriction map \(\Theta_{q}^{\mathrm{Moc}}|_{\overline{\mathcal{M}}_{\mathrm{Dol},q}^{\mathrm{ st}}}:\overline{\mathcal{M}}_{\mathrm{Dol},q}^{\mathrm{st}}\to\overline{\mathcal{M}}_{ \mathrm{Hit},q}\) is discontinuous when \(g\geq 3\) and continuous when \(g=2\)._ Proof.: This follows from Propositions 7.11 and 8.2. #### 8.2.2. Varying fiber With the conventions above, suppose \((\mathcal{E}_{i},\varphi_{i})\) converges to \((\mathcal{E}_{\infty},\varphi_{\infty})\) with \(q_{\infty}\) having only simple zeros, and \(\xi_{i}=(\mathcal{E}_{i},t_{i}\varphi_{i})\) converges to \(\xi_{\infty}\) on \(\overline{\mathcal{M}}_{\mathrm{Dol}}\). 
Since the condition of having only simple zeros is open, the \(q_{i}\) also have simple zeros for \(i\) sufficiently large. **Proposition 8.5** (cf. [36, Thm. 2.12]).: _Suppose \(q_{\infty}\) has only simple zeros. Then, \(\lim_{i\to\infty}\overline{\Xi}(\xi_{i})=\overline{\Xi}(\xi_{\infty})\). In particular, the map \(\overline{\Xi}^{\mathrm{reg}}:\overline{\mathcal{M}}_{\mathrm{Dol}}^{\mathrm{ reg}}\to\overline{\mathcal{M}}_{\mathrm{Hit}}^{\mathrm{reg}}\) is continuous._ Proof.: Let \(S_{i}\) denote the spectral curve of \((\mathcal{E}_{i},\varphi_{i})\) with branching locus \(Z_{i}\). Also, let \(L_{i}:=\chi_{BNR}^{-1}(\mathcal{E}_{i},\varphi_{i})\) be the eigenline bundles. By the construction in Section 6, we have \(\Upsilon^{\mathrm{Moc}}(\xi_{i})=\mathcal{F}_{*}(L_{i},\chi_{i})\), where \(\chi_{i}=-\frac{1}{2}\chi_{Z_{i}}\). Our assumption implies that \(\mathcal{F}_{*}(L_{i},\chi_{i})\) converges to \(\mathcal{F}_{*}(L_{\infty},\chi_{\infty})\) in the sense of Definition 3.2. Thus, by Theorem 3.3, we obtain the convergence of the limiting configurations: \(\lim_{i\to\infty}\Upsilon^{\mathrm{Moc}}(\xi_{i})=\Upsilon^{\mathrm{Moc}}(\xi_ {\infty})\). The claim follows from Proposition 8.2. **Theorem 8.6**.: _The map \(\overline{\Xi}^{\mathrm{reg}}:\overline{\mathcal{M}}_{\mathrm{Dol}}^{\mathrm{ reg}}\to\overline{\mathcal{M}}_{\mathrm{Hit}}^{\mathrm{reg}}\) is a homeomorphism._ Proof.: By Theorem 4.9, \(\overline{\Xi}^{\mathrm{reg}}\) is a bijection. Moreover, by Proposition 8.5, \(\overline{\Xi}^{\mathrm{reg}}\) is continuous. Finally, that \((\overline{\Xi}^{\mathrm{reg}})^{-1}\) is continuous follows directly from the construction in [31]. ## Appendix A Classification of rank 1 torsion modules for \(A_{n}\) singularities In this appendix, we review the classification result for rank 1 torsion free modules at \(A_{n}\) singularities, as given in [17]. We compute the integer invariants defined in Subsection 5.3. Let \(S\) be the spectral curve of an \(\mathrm{SL}(2,\mathbb{C})\) Higgs bundle, and \(x\) a singular point with local defining equation given by \(r^{2}-s^{n+1}=0\); this is an \(A_{n}\) singularity. Let \(p:\widetilde{S}\to S\) be the normalization, where \(p^{-1}(x)=\{\tilde{x}_{+},\tilde{x}_{-}\}\) if \(n\) is odd and \(p^{-1}(x)=\tilde{x}\) if \(n\) is even. We use \(R\) to denote the completion of the local ring \(\mathcal{O}_{x}\), \(K\) its field of fractions, and \(\widetilde{R}\) its normalization. ### \(A_{2n}\) singularity The local equation is \(r^{2}-s^{2n+1}=0\). The normalization induces a map between coordinate rings, and we can write \[\psi:\mathbb{C}[r,s]/(r^{2}-s^{2n+1})\longrightarrow\mathbb{C}[t],\quad\psi(f(r,s ))=f(t^{2n},t^{2}),\] where \(\widetilde{R}=\mathbb{C}[[t]]\) and \(R=\mathbb{C}[[t^{2},t^{2n+1}]]\subset\widetilde{R}\). According to [17, Anh. (1.1)], any rank 1 torsion free \(R\)-module can be written as \[M_{k}=R+R\cdot t^{k}\subset\widetilde{R},\quad k=1,3,\ldots,2n+1.\] Here, \(M_{k}\) is a fractional ideal that satisfies \(R\subset M_{k}\subset\widetilde{R}\), with \(M_{1}=\widetilde{R}\) and \(M_{2n+1}=R\). We may express any \(f\in M_{k}\) as \(f=\sum_{i=0}^{\frac{k-1}{2}}f_{2i}t^{2i}+\sum_{i\geq k}f_{i}t^{i}\), where \(f_{i}\in\mathbb{C}\). We are interested in the integers \(\ell_{x}:=\dim_{\mathbb{C}}(M_{k}/R)\), \(a_{\tilde{x}}:=\dim_{\mathbb{C}}(\widetilde{R}/C(M_{k}))\) and \(b_{x}=\dim_{\mathbb{C}}(\operatorname{Tor}(M_{k}\otimes_{R}\widetilde{R}))\). 
Thus, as a \(\mathbb{C}\)-vector space, \(M_{k}/R\) is generated by \(t^{k},t^{k+2},\ldots,t^{2n-1}\), implying that \(\ell_{x}=\frac{2n+1-k}{2}\). The conductor of \(M_{k}\) is given by \(C(M_{k})=\{u\in K\mid u\cdot\widetilde{R}\subset M_{k}\}\). By the expression of \(M_{k}\) and a straightforward computation, we have \(C(M_{k})=(t^{k-1})\), where \((t^{k-1})\) is the ideal in \(\widetilde{R}\) generated by \(t^{k-1}\). Thus, \(1,t,\ldots,t^{k-2}\) will form a basis for \(\widetilde{R}/C(M_{k})\), and we have \(a_{\tilde{x}}=k-1\). Therefore, we have \(a_{\tilde{x}}=2n-2\ell_{x}\). For \(i=0,1,\ldots,\frac{2n-1-k}{2}\), we define \(s_{i}=t^{k+2i}\otimes_{R}1-1\otimes t^{k+2i}\in M_{k}\otimes_{R}\widetilde{R}\). As \(k\) is odd, \(t^{2n+1-k-2i}\in R\) and \(t^{2n+1-k-2i}s_{i}=t^{2n+1}\otimes_{R}1-1\otimes_{R}t^{2n+1}=0\), where the last equality is because \(t^{2n+1}\in R\). Moreover, \(\{s_{0},\ldots,s_{\frac{2n-1-k}{2}}\}\) form a basis of \(\operatorname{Tor}(M_{k}\otimes_{R}\widetilde{R})\), thus \(b_{x}=\frac{2n+1-k}{2}=\ell_{x}\). ### \(A_{2n-1}\) singularity The local equation is \(r^{2}-s^{2n}=0\). The normalization induces a map between the coordinate rings: \[\psi:\mathbb{C}[r,s]/(r^{2}-s^{2n})\longrightarrow\mathbb{C}[t]\oplus\mathbb{C}[t],\quad\psi(f(r,s))=(f(t^{n},t),f(-t^{n},t)),\] where \(\widetilde{R}=\mathbb{C}[[t]]\oplus\mathbb{C}[[t]]\) and \(R=\mathbb{C}[[(t,t),(t^{n},-t^{n})]]\cong\mathbb{C}[[(t,t),(t^{n},0)]]\). By [17, Anh. (2.1)], any rank 1 torsion-free \(R\)-module can be written as: \[M_{k}=R+R\cdot(t^{k},0)\subset\widetilde{R},\quad k=0,1,\ldots,n.\] Then, \(M_{k}\) is also a fractional ideal with \(R\subset M_{k}\subset\widetilde{R}\). Moreover, \(M_{n}=R\), and \(M_{0}=\widetilde{R}\). As \(p^{-1}(x)=\{\tilde{x}_{+},\tilde{x}_{-}\}\), \(\widetilde{R}\) contains two maximal ideals, \(\mathfrak{m}_{+}=((t,1))\), \(\mathfrak{m}_{-}=((1,t))\). For \(f\in M_{k}\), we can express \(f\) as: \[f=\sum_{i=0}^{k-1}f_{ii}(t^{i},t^{i})+\sum_{l\geq 0}f_{l0}(t^{k+l},0)+f_{0l}(0,t^{k+l}),\] where \(f_{ij}\in\mathbb{C}\). Therefore, \(\ell_{x}=\dim_{\mathbb{C}}(M_{k}/R)=n-k\). Moreover, using the expression, we can compute the conductor \(C(M_{k})=((t^{k},1))\cdot((1,t^{k}))\), which implies \(a_{\tilde{x}_{\pm}}=k\). Similarly, for \(i=k,\ldots,n-1\), we define \(s_{i}=(t^{i},0)\otimes_{R}(1,1)-(1,1)\otimes_{R}(t^{i},0)\), then \((t,t)^{n-i}\cdot s_{i}=0\) and \(\{s_{k},\ldots,s_{n-1}\}\) will be a basis for \(\operatorname{Tor}(M_{k}\otimes_{R}\widetilde{R})\) and \(b_{x}=\ell_{x}\). In summary, we have the following: **Proposition A.1**.: _For the integers defined above, we have:_ * _for the_ \(A_{2n}\) _singularity, we have_ \(a_{\tilde{x}}=2n-2\ell_{x}\) _and_ \(b_{x}=\ell_{x}\)_,_ * _for the_ \(A_{2n-1}\) _singularity, we have_ \(a_{\tilde{x}_{\pm}}=n-\ell_{x}\) _and_ \(b_{x}=\ell_{x}\)_._
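As a quick sanity check of Proposition A.1 (this worked example is ours and is not part of [17]), consider the cusp \(r^{2}-s^{3}=0\), i.e. the \(A_{2}\) singularity with \(n=1\). Here \(R=\mathbb{C}[[t^{2},t^{3}]]\subset\widetilde{R}=\mathbb{C}[[t]]\), and the only modules in the family are \(M_{3}=R\) and \(M_{1}=\widetilde{R}\). For \(M_{1}=\widetilde{R}\): the quotient \(M_{1}/R\) is spanned by \(t\), so \(\ell_{x}=1\); the conductor is \(C(M_{1})=\widetilde{R}\), so \(a_{\tilde{x}}=0=2n-2\ell_{x}\); and \(s_{0}=t\otimes_{R}1-1\otimes_{R}t\) spans \(\operatorname{Tor}(M_{1}\otimes_{R}\widetilde{R})\) (it is killed by \(t^{2}\in R\)), so \(b_{x}=1=\ell_{x}\). For \(M_{3}=R\): \(\ell_{x}=0\), \(C(M_{3})=(t^{2})\), so \(a_{\tilde{x}}=2=2n-2\ell_{x}\), and \(b_{x}=0=\ell_{x}\), in agreement with the formulas above.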
2310.15724
Variator: Accelerating Pre-trained Models with Plug-and-Play Compression Modules
Pre-trained language models (PLMs) have achieved remarkable results on NLP tasks but at the expense of huge parameter sizes and the consequent computational costs. In this paper, we propose Variator, a parameter-efficient acceleration method that enhances computational efficiency through plug-and-play compression plugins. Compression plugins are designed to reduce the sequence length via compressing multiple hidden vectors into one and trained with original PLMs frozen. Different from traditional model acceleration methods, which compress PLMs to smaller sizes, Variator offers two distinct advantages: (1) In real-world applications, the plug-and-play nature of our compression plugins enables dynamic selection of different compression plugins with varying acceleration ratios based on the current workload. (2) The compression plugin comprises a few compact neural network layers with minimal parameters, significantly saving storage and memory overhead, particularly in scenarios with a growing number of tasks. We validate the effectiveness of Variator on seven datasets. Experimental results show that Variator can save 53% computational costs using only 0.9% additional parameters with a performance drop of less than 2%. Moreover, when the model scales to billions of parameters, Variator matches the strong performance of uncompressed PLMs.
Chaojun Xiao, Yuqi Luo, Wenbin Zhang, Pengle Zhang, Xu Han, Yankai Lin, Zhengyan Zhang, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
2023-10-24T11:00:07Z
http://arxiv.org/abs/2310.15724v2
# Variator: Accelerating Pre-trained Models with Plug-and-Play Compression Modules ###### Abstract Pre-trained language models (PLMs) have achieved remarkable results on NLP tasks but at the expense of huge parameter sizes and the consequent computational costs. In this paper, we propose Variator, a parameter-efficient acceleration method that enhances computational efficiency through plug-and-play compression plugins. Compression plugins are designed to reduce the sequence length via compressing multiple hidden vectors into one and trained with original PLMs frozen. Different from traditional model acceleration methods, which compress PLMs to smaller sizes, Variator offers two distinct advantages: (1) In real-world applications, the plug-and-play nature of our compression plugins enables dynamic selection of different compression plugins with varying acceleration ratios based on the current workload. (2) The compression plugin comprises a few compact neural network layers with minimal parameters, significantly saving storage and memory overhead, particularly in scenarios with a growing number of tasks. We validate the effectiveness of Variator on seven datasets. Experimental results show that Variator can save \(53\%\) computational costs using only \(0.9\%\) additional parameters with a performance drop of less than \(2\%\). Moreover, when the model scales to billions of parameters, Variator matches the strong performance of uncompressed PLMs. Our code and checkpoints can be found in [https://github.com/thunlp/Compression-Plugin](https://github.com/thunlp/Compression-Plugin). ## 1 Introduction Large pre-trained language models (PLMs) have made significant advancements in natural language processing tasks Han et al. (2021); Brown et al. (2020); Qiu et al. (2020); Bommasani et al. (2021); OpenAI (2023). It is widely observed that amplifying the model scale correlates positively with enhanced downstream performance. Nevertheless, the expansive parameter scale intrinsic to PLMs demands significant computational and storage resources. Such formidable overheads necessitate investigating alternative strategies to maintain performance while reducing costs. Many efforts have been devoted to improving the training and inference efficiency of PLMs Sun et al. (2019); Liu et al. (2022); Fan et al. (2020); Xia et al. (2022); Stock et al. (2021). These methods compress PLMs into fixed smaller sizes, and cannot fulfill the following requirements: (1) Dynamic Workload. In real-world scenarios, the system workload varies dynamically over time, while the computational resources are fixed. This implies that we can use more resources for higher performance when the workload is low, and ensure response efficiency when the workload is high. (2) Storage Efficiency. These methods typically depend on a large number of additional parameters to construct compressed models, which require large amounts of memory space for model training and storage across various tasks and acceleration ratios. To address these issues, we propose a novel plug-and-play acceleration framework named Variator. As shown in Figure 1, different from compressing PLMs into smaller sizes, Variator enables PLM acceleration via devising compression plugins, which can be inserted into PLMs to enhance the inference speed. Figure 1: Illustration of model acceleration with compression plugins. Various plugins entail different acceleration ratios, and the system can dynamically choose the appropriate one to trade off response speed and model performance depending on the workload.
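As a toy illustration of this dynamic selection, a serving system could map workload levels to pre-loaded plugins. The thresholds and the ratio-1 "no compression" entry below are purely our own exposition; the paper does not prescribe a specific scheduling policy.

```python
import torch.nn as nn


def select_plugin(pending_requests: int, plugins: dict[int, nn.Module]) -> nn.Module:
    """Pick a compression plugin based on the current workload (illustrative policy).

    `plugins` maps a compression ratio to a pre-loaded plugin; a ratio of 1
    here stands for running the PLM without any compression plugin.
    """
    if pending_requests < 8:
        return plugins[1]    # light load: favor accuracy
    if pending_requests < 64:
        return plugins[4]    # moderate load: the paper's default ratio
    return plugins[32]       # heavy load: maximum speed-up
```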
Moreover, Variator only necessitates plugins with minimal parameters and freezes the original parameters of PLMs, which substantially lowers the memory and storage requirements. To achieve plug-and-play acceleration, there are two main challenges: (1) Plugin Architecture: Compression plugins do not modify the scale of PLMs, so how to devise plugins that reduce the inference time is a challenge. (2) Plugin Training: Compression plugins contain only limited parameters, and how to train them effectively so that they enhance the model speed while preserving downstream performance is a second challenge. As for plugin architecture, inspired by previous findings about redundancy in hidden vectors (Goyal et al., 2020; Ye et al., 2021), we design compression plugins for data compression rather than parameter compression. Specifically, compression plugins consist of hidden compression layers and hidden decompression layers. The goal of the hidden compression layers is to compress multiple hidden vectors into one, thereby diminishing the sequence length for PLMs and enabling model acceleration. Simultaneously, to preserve token-level information, we also devise decompression layers that recover the processed shorter sequence to the original length. Compression plugins can be applied in any layer of PLMs, enabling various levels of acceleration. As for plugin training, we adopt a two-step training strategy. Firstly, we train compression plugins on pre-trained PLMs with a pre-training corpus. Then the compression plugins trained in the first step are used as initialization for task-specific models. In both steps, we apply knowledge distillation objectives to train the compression plugins so that they do not alter the hidden vectors produced by PLMs. To verify the effectiveness of Variator, we conduct experiments with a widely-used pre-trained backbone, T5 (Raffel et al., 2020), on seven widely-used language understanding benchmarks. The experimental results show that Variator can save \(53\%\) computational costs using only \(0.9\%\) additional parameters with absolute average performance drops of \(<2\%\) compared to original downstream PLMs. When the model scales to billions of parameters, Variator can achieve nearly no performance drop. We also examine the effectiveness of Variator on a decoder-only LLM, LLaMA (Touvron et al., 2023). In addition, we conduct neuron-level analysis for compression plugins, and find that compression plugins can effectively store important information in the compressed vectors to achieve satisfactory performance with limited computational costs. ## 2 Related Work ### Model Acceleration Improving the computational efficiency of PLMs has been widely studied in recent years (Gupta and Agrawal, 2022; Zhang et al., 2022). The related work can be divided into four categories: knowledge distillation, which guides the training of compressed models with the output or middle states of original PLMs (Hinton et al., 2015; Sanh et al., 2019; Sun et al., 2019, 2020); model pruning, which removes unimportant parameters or layers from PLMs (Fan et al., 2020; Michel et al., 2019; Chen et al., 2020; Xia et al., 2022); model quantization, which converts model parameters into low-bit precision values, thus achieving acceleration on compatible devices (Stock et al., 2021; Xiao et al., 2022); and conditional computation, which only selects parts of parameters to compute outputs for each input (Zhang et al., 2022; Xin et al., 2020).
Some researchers have made preliminary explorations of dynamic acceleration, such as early exit (Xin et al., 2020; Matsubara et al., 2023), which attempts to skip layer computation based on instance complexity. But these works rely heavily on confidence judgment and are thus only applicable to specific tasks and model architectures. Our model, which focuses on dynamic acceleration based on system workload, parallels these works and can be intuitively combined with them to reduce computational costs further. Besides, within the realm of conditional computation, there is a line of research that identifies the redundancy of hidden vectors and focuses on discarding tokens at each layer of PLMs to accelerate model inference (Goyal et al., 2020; Ye et al., 2021; Kim and Cho, 2021; Kim et al., 2022; Dai et al., 2020; Murahari et al., 2022), which inspires the design of our compression and decompression layers. But these works require retraining the whole PLM to achieve acceleration, while Variator focuses on the parameter-efficient acceleration setting and thus enables dynamic selection of acceleration ratios with minimal additional memory requirements. In addition to merging tokens, the compression layer can also be designed to dynamically prune the parameters, which we leave for future work. ### Parameter-Efficient Learning The huge parameter scale of PLMs imposes substantial costs on model training and storage. To alleviate this problem, parameter-efficient learning, also known as delta tuning, is proposed to perform task adaptation via tuning a small portion of parameters while keeping other parameters frozen Liu et al. (2021); Ding et al. (2022); He et al. (2022). According to the operation of tunable parameters, delta tuning methods can be divided into: addition-based models, which introduce additional layers into PLMs Houlsby et al. (2019); Lester et al. (2021); specification-based models, which specify existing weights of PLMs as tunable Zaken et al. (2022); Guo et al. (2021); and reparameterization-based models, which rewrite the computation process of specific layers in a parameter-efficient manner Hu et al. (2021). In addition, some researchers attempt to construct plug-and-play modules for retrieval augmentation Shi et al. (2023), knowledge injection Wang et al. (2021), controllable text generation Pascual et al. (2021); Madotto et al. (2020), and model debiasing Lauscher et al. (2021). In this paper, we propose a parameter-efficient acceleration model with hidden vector compression, which can save memory and storage costs compared to traditional compression methods. ## 3 Methodology In this section, we first describe the paradigm and basic notations for our plug-and-play model acceleration. Then we present the framework and training recipes of Variator to accelerate model inference with minimal additional parameters. To showcase the efficiency of Variator, we also conduct an analysis of the computational and storage complexity. ### Preliminary Our primary goal is to design a plug-and-play acceleration framework, which can dynamically improve the computational efficiency with multiple compression plugins. Specifically, given a PLM \(\mathcal{M}\), and the fine-tuned downstream model \(\mathcal{M}_{\mathtt{T}}\) derived from \(\mathcal{M}\), Variator aims to construct a compression plugin \(\mathcal{P}\), which can be inserted into \(\mathcal{M}_{\mathtt{T}}\) to improve the computational efficiency.
That is, given an input sequence \(s\), the computation costs of \((\mathcal{M}_{\mathtt{T}}+\mathcal{P})(s)\) should be lower than \(\mathcal{M}_{\mathtt{T}}(s)\). Variator is designed for dynamic workloads, which means plugins with different acceleration ratios can be applied in the same downstream model \(\mathcal{M}_{\mathtt{T}}\). Therefore, the original \(\mathcal{M}_{\mathtt{T}}\) should be frozen during the training of the compression plugin \(\mathcal{P}\). ### Overall Framework Previous research has found redundancy in hidden vectors, which suggests that eliminating hidden vectors is a promising direction for acceleration Goyal et al. (2020); Ye et al. (2021). Inspired by these works, our compression plugins are designed to compress hidden vectors, and thus the sequence length is reduced to speed up inference. As shown in Figure 2, compression plugins consist of two layers: a hidden compression layer and a hidden decompression layer, which are inserted before and after a vanilla neural layer, respectively. In this way, the compression layer reduces computational overhead for the following neural layer, and the decompression layer aims to restore token-level information into the output vectors. Then we will introduce these two layers in detail. **Hidden Compression Layer.** Hidden compression layers aim to reduce the sequence length. Previous token pruning methods assign importance scores to each hidden vector and discard hidden vectors with low scores, which may suffer from loss of useful information when the required compression ratio is high. Different from directly dropping hidden vectors, our hidden compression layer is designed to merge multiple vectors into one. Specifically, given the input vector sequence with \(n\) tokens, \(\mathbf{H}=\{\mathbf{h}_{0},...,\mathbf{h}_{n-1}\}\), we first split the sequence into several groups with each group containing \(k\) vectors, \(g_{i}=\{\mathbf{h}_{ik},...,\mathbf{h}_{(i+1)k-1}\}\). Then the compressed vector is calculated as the weighted average of input vectors: \[\mathbf{a}=\text{Softmax}(\mathbf{W}_{c}\text{Concat}(g_{i})+\mathbf{b}_{c}),\] \[\mathbf{g}_{i}=\sum_{j=0}^{k-1}\mathbf{a}_{j}\mathbf{h}_{ik+j},\] where \(\mathbf{W}_{c}\in\mathbb{R}^{k\times kd}(k\ll d)\) and \(\mathbf{b}_{c}\in\mathbb{R}^{k}\) are trainable parameters, and \(d\) is the dimension of hidden vectors. Then the compressed vectors are fed into the original neural layers. **Hidden Decompression Layer.** Compression layers merge multiple hidden vectors into a global vector with information for all tokens in the corresponding group. To preserve the ability to solve token-level tasks, we design hidden decompression layers, which are inserted after the original neural layer, to restore token-level information into the output vectors. Given the output of the original neural layer \(\mathbf{g}_{i}^{o}\) generated from the compressed vector \(\mathbf{g}_{i}\), we need to compute the output vectors for all \(k\) vectors in \(g_{i}\). We first concatenate the original vector \(\mathbf{h}_{ik+j}\) and the compressed output vector \(\mathbf{g}_{i}^{o}\) to combine the token-level and group-level information.
Then, instead of applying a linear projection layer with high computation complexity, we adopt an Adapter (Houlsby et al., 2019) layer and a residual layer to project the concatenated vector to the output vector \(\mathbf{o}_{ik+j}\): \[\mathbf{o}_{ik+j}^{\Delta}=\mathbf{W}_{u}^{2}(\mathbf{W}_{u}^{1}\text{Concat}(\mathbf{g}_{i}^{o},\mathbf{h}_{ik+j})+\mathbf{b}_{u}^{1})+\mathbf{b}_{u}^{2},\] \[\mathbf{o}_{ik+j}=\mathbf{g}_{i}^{o}+\mathbf{o}_{ik+j}^{\Delta}.\] Here, \(\mathbf{W}_{u}^{1}\in\mathbb{R}^{r\times 2d}\), \(\mathbf{b}_{u}^{1}\in\mathbb{R}^{r}\), \(\mathbf{W}_{u}^{2}\in\mathbb{R}^{d\times r}\), \(\mathbf{b}_{u}^{2}\in\mathbb{R}^{d}\) are trainable parameters, and \(r\ll d\) refers to the bottleneck dimension of adapter layers. Both layers involve only minimal additional parameters and computational overhead, and together they can significantly reduce the sequence length. Besides, our proposed compression plugins can be flexibly applied to any neural layer, such as self-attention layers and feed-forward layers, allowing for different acceleration ratios. Notably, the compression and decompression layers can be implemented with other efficient operations including convolutional neural networks. Due to the high computational requirements of feed-forward layers in Transformer (Zhang et al., 2022), we apply compression plugins to feed-forward layers in most of our experiments. ### Plugin Training To mitigate information loss during the sequence compression of Variator, we design a two-step training strategy with plugin pre-training and plugin adaptation. **Plugin Pre-training.** Plugin pre-training aims to learn general information compression ability and obtain a good initialization of compression plugins for downstream models. In this step, compression plugins are trained to mitigate redundancy in the original input text. Specifically, we insert the compression plugins into the original PLM \(\mathcal{M}\), and train compression plugins on a pre-training corpus. Notably, the pre-training process is task-agnostic. It is conducted only once and caters to the requirements of all downstream tasks, which makes compression plugins practical even when PLMs scale to billions of parameters. **Plugin Adaptation.** Plugin adaptation is designed to drive compression plugins to preserve task-specific information during compression. Different tasks tend to pay attention to different information in the sequence. For example, sentiment analysis tasks usually need to maintain the information contained in emotional words, while reading comprehension tasks usually need to maintain information about the question. Therefore, it is important for compression plugins to learn different task information preferences in plugin adaptation. During plugin adaptation, compression plugins are inserted into the downstream model \(\mathcal{M}_{T}\), and trained with task data. Figure 2: Illustration of Variator, which improves the computational efficiency via compressing the hidden vectors. Both steps adopt knowledge distillation as the training objective, guiding the compression plugins not to modify the output distribution. Given output vectors of the model without compression plugins \(\mathbf{O^{\prime}}\), and output vectors of the model with compression plugins \(\mathbf{O}\), the final training loss is computed as the mean squared error (MSE) between \(\mathbf{O^{\prime}}\) and \(\mathbf{O}\): \[\mathcal{L}=||\mathbf{O^{\prime}}-\mathbf{O}||_{2}.\tag{1}\]
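To make the construction above concrete, the following is a minimal PyTorch sketch of a compression plugin wrapped around a frozen layer, together with the distillation objective of Eq. (1). This is our own illustrative reimplementation of the equations as written (class names, tensor shapes, and the absence of a non-linearity in the adapter are our choices), not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CompressionPlugin(nn.Module):
    """Illustrative Variator-style plugin around a frozen layer (e.g. an FFN).

    The hidden compression layer merges every group of k hidden vectors into
    one, the frozen `inner` layer runs on the shorter sequence, and the hidden
    decompression layer restores the original length with an adapter-style
    projection plus a residual connection.
    """

    def __init__(self, inner: nn.Module, d: int, k: int = 4, r: int = 64):
        super().__init__()
        self.inner, self.k = inner, k
        for p in self.inner.parameters():        # the original PLM layer stays frozen
            p.requires_grad_(False)
        self.compress = nn.Linear(k * d, k)      # W_c, b_c
        self.up1 = nn.Linear(2 * d, r)           # W_u^1, b_u^1
        self.up2 = nn.Linear(r, d)               # W_u^2, b_u^2

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        b, n, d = h.shape                        # assume n is a multiple of k
        groups = h.view(b, n // self.k, self.k, d)
        a = F.softmax(self.compress(groups.flatten(2)), dim=-1)     # (b, n/k, k)
        g = torch.einsum("bmk,bmkd->bmd", a, groups)                # compressed vectors
        g_out = self.inner(g)                                       # frozen layer on n/k tokens
        g_rep = g_out.unsqueeze(2).expand(-1, -1, self.k, -1).reshape(b, n, d)
        delta = self.up2(self.up1(torch.cat([g_rep, h], dim=-1)))   # adapter on [g_i^o ; h]
        return g_rep + delta                                        # residual, back to length n


def distillation_loss(outputs_with_plugins: torch.Tensor,
                      outputs_without_plugins: torch.Tensor) -> torch.Tensor:
    """Objective of Eq. (1): keep plugged-in outputs close to the frozen model's outputs."""
    return F.mse_loss(outputs_with_plugins, outputs_without_plugins)
```

With the paper's settings (\(k=4\), \(r=64\), \(d=768\)), such a plugin adds roughly \(k^{2}d+k+3rd+r+d\approx 1.6\times 10^{5}\) parameters, about \(3.4\%\) of an \(8d^{2}\)-parameter FFN, while letting the FFN run on a sequence that is four times shorter, consistent with the figures in the complexity analysis that follows.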
### Complexity Analysis In this section, we provide a complexity analysis of the computational and storage overhead. Here we present the analysis with compression plugins applied in feed-forward networks (FFNs), with the input length as \(n\), hidden vector dimension as \(d\), and the middle dimension of FFNs as \(4d\). As mentioned in previous sections, \(k\) and \(r\) refer to the compression ratio and the bottleneck dimension of decompression layers. **Computational Complexity.** Compression and decompression layers involve several linear projections with tiny matrices. Therefore, our compression plugins only require minimal computation costs. For each token, compression plugins contain three linear projection operations and two addition operations. The floating point operations (FLOPs) required by the compression and decompression layer are \((kd+2d+3)n\) and \((3rd+2d+r)n\), respectively. In contrast, the FLOPs of the feed-forward network are \(8nd^{2}\). The computation costs of compression plugins are only about \(\frac{1}{8d}(4+k+3r)\) of the FFN, where \(k,r\ll d\). And compression plugins can reduce the computation costs of the FFN to \(\frac{1}{k}\). Therefore, compression plugins can achieve significant inference speed-up for PLMs. **Storage Complexity.** Different from training entire models to accelerate model inference, Variator relies on two projection layers to compress hidden vectors. Compression and decompression layers consist of three linear projection layers, with only \(k^{2}d+k\) and \(3rd+r+d\) parameters, respectively. In contrast, an FFN layer consists of \(8d^{2}\) parameters. To demonstrate the effectiveness of our parameter-efficient compression plugins more intuitively, we assume that \(k=4\), \(r=64\), and \(d=768\). In this way, compression plugins can save \(71.7\%\) computational costs with only \(3.4\%\) additional parameters for FFNs. ## 4 Experiments ### Datasets To evaluate the effectiveness of Variator, we use seven typical NLP datasets as evaluation benchmarks, including text classification and sequence-to-sequence generation tasks. Specifically, we adopt three natural language inference datasets, MNLI-m Williams et al. (2018), QNLI Rajpurkar et al. (2016), RTE Wang et al. (2019), two sentence similarity datasets, QQP Wang et al. (2019), MRPC Dolan and Brockett (2005), a sentiment analysis dataset, SST-2 Socher et al. (2013), and a reading comprehension dataset, SQuAD Rajpurkar et al. (2016). We apply F1 scores for MRPC, F1 scores and exact match scores (EM) for SQuAD, and accuracy for other datasets as evaluation metrics. We also present the average scores on these seven datasets, where we use the EM scores of SQuAD for the average. Please refer to the Appendix for statistics. ### Implementation Details We adopt the widely-used pre-trained models, T5-base and T5-large Raffel et al. (2020), as our model backbones. Please refer to the Appendix for results of compression plugins on the BERT backbone Devlin et al. (2019). For the main experiments, we only insert the compression plugins around the feed-forward network layers in the encoder, which account for the majority of computational requirements. As for the training objective, we compute the MSE loss with the output vectors from the last layers. The compression ratio \(k\) is set as \(4\) for the main experiments and the bottleneck dimension \(r\) of the Adapter layers is set as \(64\). For plugin pre-training, we apply the widely-used Wikipedia corpus.
The learning rate is set as \(10^{-3}\) and the batch size is set as \(256\). We pre-train compression plugins for \(60\)k steps. For plugin adaptation, we apply grid search for hyper-parameter selection. We select the batch size in \(\{16,32\}\) and the learning rate in \(\{10^{-4},5\times 10^{-5}\}\). The total training steps for each task are set as \(26\)k, and we evaluate the models every \(1\)k steps. We train all models with half-precision floating-point on NVIDIA A100 GPUs. For both plugin pre-training and adaptation, we use Adam for parameter optimization. Please refer to the Appendix for more details. ### Baselines In this paper, we compare Variator with several competitive baseline models, including: (1) The original fine-tuned downstream PLMs without acceleration, which are also used as teacher models to guide the training of other compressed models. (2) The widely used model compression method, model distillation (Sanh et al., 2019). (3) Our method aims to reduce the sequence length for PLMs, which is inspired by previous token pruning models. Therefore, we also compare Variator with a typical token pruning model, LTP (Kim et al., 2022), which adopts the attention scores as importance scores and keeps only the tokens with the highest scores in each layer. Notably, original token pruning models directly discard tokens for entire Transformer layers, while our models in the main experiments focus on the acceleration of FFNs. Therefore, to make a fair comparison, we implement token pruning models that only skip the computation of FFNs and keep only \(25\%\) of tokens in each layer. We apply knowledge distillation objectives to train all downstream tasks for a fair comparison. ### Main Results The comparison results are shown in Table 1. To further demonstrate the effectiveness of Variator, we show the additional parameters and FLOPs for each input required by the compressed models. Here we assume the input length is \(512\) and the batch size is \(1\) for calculating FLOPs. From the results, we can observe that: (1) Variator can achieve comparable results with the original PLMs using minimal additional parameters, with absolute performance drops of \(<2\%\). Specifically, Variator can save \(53.2\%\) and \(45.9\%\) computation costs for T5-base and T5-large, using only \(0.9\%\) and \(0.7\%\) additional parameters. In contrast, traditional acceleration methods need to construct compressed models from scratch, which requires large amounts of additional parameters. Because of these large parameter counts, switching traditional methods between different compression ratios requires large memory space or repeatedly loading compressed models from disk. (2) Compared to the widely-used model distillation, our parameter-efficient model acceleration method achieves competitive performance with far fewer parameters, which indicates the potential of parameter-efficient model compression. (3) Compared to the token pruning baselines, our models can achieve better performance with only a small portion of the parameters, which shows that merging tokens can better preserve sequence information compared to directly dropping them. ### Ablation Study To verify the effectiveness of each component of Variator, we conduct an ablation study in this section. Specifically, we show the results of compression plugins without plugin pre-training (w/o PT) or plugin adaptation (w/o PA). Besides, we also examine the effectiveness of compression and decompression layers in the ablation study.
We show the model performance with compression layers replaced with a mean-pooling operation (w/o Com) or decompression layers replaced with a copy operation (w/o DeCom). We run w/o Com and w/o \begin{table} \begin{tabular}{l|c c c c c c c c c} \hline \hline Dataset & \begin{tabular}{c} MNLI-m \\ Acc. \\ \end{tabular} & \begin{tabular}{c} QNLI \\ Acc. \\ \end{tabular} & \begin{tabular}{c} QQP \\ Acc. \\ \end{tabular} & \begin{tabular}{c} RTE \\ Acc. \\ \end{tabular} & \begin{tabular}{c} SST-2 \\ Acc. \\ \end{tabular} & \begin{tabular}{c} MRPC \\ F1 \\ \end{tabular} & \begin{tabular}{c} SQuAD \\ EM/F1 \\ \end{tabular} & Avg. & Para. & FLOPs \\ \hline \multicolumn{10}{c}{T5-Base} \\ \hline Original & 86.7 & 93.0 & 91.2 & 82.9 & 94.3 & 92.6 & 82.8/90.0 & 89.1 & – & – \\ Distillation & 84.6 & 91.8 & 89.3 & 81.4 & 93.1 & 93.1 & 81.1/89.3 & 87.8 & 61.9\% & 44.3\% \\ LTP & 84.0 & 91.7 & 86.5 & 76.8 & 92.6 & 92.5 & 81.1/89.2 & 86.4 & 100.0\% & 44.3\% \\ Variator & 84.6 & 91.5 & 88.4 & 81.1 & 93.6 & 93.8 & 80.4/88.1 & 87.6 & 0.9\% & 46.8\% \\ \hline \hline \multicolumn{10}{c}{T5-Large} \\ \hline Original & 88.9 & 94.0 & 91.5 & 88.6 & 95.4 & 93.0 & 85.3/92.5 & 91.0 & – & – \\ Distillation & 88.4 & 94.2 & 90.4 & 84.3 & 94.5 & 91.9 & 81.3/90.9 & 89.3 & 59.1\% & 52.5\% \\ LTP & 87.0 & 93.1 & 88.0 & 82.5 & 94.4 & 93.3 & 84.3/91.7 & 88.9 & 100\% & 52.5\% \\ Variator & 87.1 & 93.5 & 89.4 & 85.4 & 93.7 & 92.8 & 83.1/90.7 & 89.3 & 0.7\% & 54.1\% \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison results between Variator and baseline models. Here Avg. refers to the average scores on seven datasets. Para. and FLOPs refer to the ratio of the number of additional parameters and floating point operations required by the compressed methods to the original PLMs. \begin{table} \begin{tabular}{l|c c c} \hline \hline Dataset & \begin{tabular}{c} MNLI-m \\ Acc. \\ \end{tabular} & \begin{tabular}{c} SST-2 \\ Acc. \\ \end{tabular} & \begin{tabular}{c} SQuAD \\ EM/F1 \\ \end{tabular} \\ \hline Variator & **84.6** & **93.6** & **80.4/88.1** \\ w/o PT & 84.1 & 92.7 & 79.3/87.3 \\ w/o PA & 59.6 & 87.8 & 11.6/19.4 \\ w/o Com & 83.5 & 92.3 & 79.4/87.1 \\ w/o DeCom & 73.8 & 86.1 & 38.2/50.6 \\ \hline \hline \end{tabular} \end{table} Table 2: The results for ablation study. DeCom without plugin pre-training to speed up experiments. We select three tasks for the ablation study, including sentence classification, SST-2, sentence-pair classification, MNLI-m, and reading comprehension, SQuAD. The results are shown in Table 2. From the results, we can observe that: (1) Both two training steps contribute to the main model, as when anyone step is missing, the model performance drops significantly. (2) Plugin adaptation is important for all tasks. Plugin pre-training guides compression plugins to discard general redundant information contained in the input text. Therefore, for SST-2, which usually only focuses on parts of important words, compression plugins without task-specific adaptation can also achieve satisfactory results. In contrast, for SQuAD and MNLI-m, which require models to collect information from entire contexts, plugins without adaptation lead to a large performance drop. (3) Compression and decompression layers play an important role in selecting information for hidden merging and restoring token-level information, as without anyone of them, model performance drops significantly. 
Especially, decompression layers are quite important for preserving token-level information, and training compression plugins without decompression layers leads to large drops for the span extraction task, SQuAD. ### Effects of Compression Ratios Variator applies compression plugins to compress multiple hidden vectors into one, thus achieving inference speedup. In this section, we explore the effects of compression ratios for our compression plugins. We construct compression plugins with compression ratios of \(\{2,4,8,16,32\}\). The results are shown in Figure 3. From the results, we can find that: (1) With the compression ratio increasing, the model performance decreases as expected. But the rate of decline becomes slow, which indicates the potential for Variator to achieve higher compression ratios. (2) Variator can achieve competitive performance even when the compression ratio reaches \(32\), where Variator maintains \(95.4\%\) and \(96.7\%\) of the accuracy scores of original PLMs for MNLI-m and SST-2, respectively, while reducing \(69\%\) computational costs. The satisfactory performance ensures the response speed of real-world applications when the system load is high. ### Compression for Attention Layers In our main experiments, we insert the compression plugins around the FFN layers. In this section, we examine the performance of Variator when we insert compression plugins around the self-attention layers. Here we do not perform plugin pre-training. The results are shown in Table 3. For comparison, we also present the results of original models and Variator with plugins in FFN layers. From the results, we can observe that Variator with plugins in self-attention layers performs worse than with plugins in FFN layers. That is because self-attention layers are designed to fuse token-level information, and inserting hidden compression layers before self-attention layers would lead to the loss of token information. Thus in the self-attention layers, only the \(k\)-gram information integration is performed, resulting in a significant performance drop. To address this issue, we improve compression plugins for self-attention layers by only compressing key and value vectors, denoted as Variator (Att-KV). With compression only for key and value vectors, Variator (Att-KV) can achieve comparable results with Variator (FFN). And compressing key and value vectors can be further adopted in decoder-only models to reduce the sequence length of past key-value vectors, which we leave for future work. \begin{table} \begin{tabular}{l|c c} \hline \hline Dataset & MNLI-m & SST-2 \\ & Acc. & Acc. \\ \hline Original & 86.7 & 94.3 \\ Variator (FFN) & 84.1 & 92.7 \\ Variator (Att) & 81.0 & 91.9 \\ Variator (Att-KV) & 83.1 & 92.1 \\ \hline \hline \end{tabular} \end{table} Table 3: The results for compression plugins inserted around the self-attention layers. Figure 3: Model performance with different compression ratios. The horizontal lines indicate the performance of original PLMs without compression plugins. ### Scaling to PLMs with Billions of Parameters In this section, we attempt to apply our compression plugins to PLMs with billions of parameters. We adopt four variants of T5 as our backbones, including T5-base (200 million parameters), T5-large (700 million parameters), T5-XLarge (3 billion parameters), and T5-XXLarge (11 billion parameters). Following the main experiments, for each model, we conduct the two-step training process with 6k-step plugin pre-training and 26k-step plugin adaptation.
We apply a parameter-efficient learning method, LoRA Hu et al. (2021), to train the task-specific downstream models to speed up the experiments. We show the results on SST-2. As shown in Figure 4, the performance continues to improve as the backbone model size increases. Similar to previous parameter-efficient learning methods Lester et al. (2021); Ding et al. (2022), the performance gap between Variator and original PLMs becomes small when the model scales to billions of parameters. It shows the potential of Variator to be applied in today's general PLMs with more than 100 billion parameters, such as ChatGPT and GPT-4 OpenAI (2023). Besides, to present the effectiveness of Variator on decoder-only LLMs, we evaluate Variator with the recent popular backbone LLaMA Touvron et al. (2023) with 7 billion parameters. Variator can be used for input encoding acceleration and reduce the service latency in real-world applications. We conduct experiments with a compression ratio of 2 on the FFN layers and without plugin pre-training to accelerate experiments. The results suggest that our approach can reduce the computational overhead while maintaining comparable performance with the original model for decoder-only LLMs. \begin{table} \begin{tabular}{c|c c} \hline \hline & LLaMA-7B & Variator (w/o PT) \\ \hline SST-2 & 97.3 & 96.3 \\ \hline \hline \end{tabular} \end{table} Table 4: The results for compression plugins in LLaMA. Figure 4: Performance with different backbone sizes. Figure 5: The ratio of activated neurons with different compression ratios on two datasets. ### Neuron-Level Analysis Our compression plugins enable the feed-forward layers to process information from multiple tokens simultaneously to save computational costs. In this section, we attempt to explore the computational mechanism of our compression plugins from the perspective of activated neurons. Previous works find that FFNs can be regarded as memory networks Geva et al. (2021); Dai et al. (2022), and the activated neurons can be used as indicators to reflect what information is preserved in the input hidden vectors. T5 adopts ReLU Nair and Hinton (2010) as the activation function, and following Zhang et al. (2022), we define activated neurons as ones with positive (non-zero) activation values. We present the average ratio of activated neurons with different compression ratios, \(k=\{1,2,4,8,16,32\}\), in Figure 5. From the results, we can observe that the ratios of activated neurons drop with the increase in compression ratios. When the compression ratio reaches \(32\), less than 2% of neurons are activated to process sequence information. This indicates that compressed hidden vectors only contain the necessary information for the sequences and discard unimportant information. Besides, the low activation ratios also indicate the potential of combining Variator with neuron pruning methods Zhang et al. (2022) to further improve the computational efficiency. Then we further explore the relationship between the activated neurons of FFNs with compression plugins and the activated neurons of original FFNs. Specifically, we denote the intersection set and union set of activated neurons of the \(k\) hidden vectors as \(\mathcal{I}\) and \(\mathcal{U}\), and the set of activated neurons of the compressed vector as \(\mathcal{C}\). The intersection set \(\mathcal{I}\) can be regarded as important global information for the \(k\) hidden vectors, and \(\mathcal{U}\) can be regarded as all information contained in the \(k\) hidden vectors.
Compression layers are used to select important information and feed them into neural layers. Therefore, we hope that \(\mathcal{I}\) is approximately a subset of \(\mathcal{C}\) and \(\mathcal{C}\) is approximately a subset of \(\mathcal{U}\). In Table 5, we present what fraction of neurons in \(\mathcal{I}\) are in \(\mathcal{C}\) and what fraction of neurons in \(\mathcal{C}\) are in \(\mathcal{U}\). From the results, we can observe that when the compression ratios are no more than \(8\), the relationship between the three sets approximately satisfies the abovementioned inclusion assumption. It proves the effectiveness of our compression plugins in preserving global information. When the compression ratios become larger (such as \(16\), \(32\)), only no more than \(70\%\) neurons in \(\mathcal{I}\) are contained in \(\mathcal{C}\). That is because with the increase of compression ratios, selecting global important information from multiple vectors becomes challenging for compression layers with limited parameters. It also shows the potential to add regularization from the neuron level for compression plugins to preserve important information. ## 5 Conclusion In this paper, we explore the parameter-efficient acceleration setting and propose Variator, which reduces the computational costs with compression plugins. The extensive experiments on seven datasets show that we can reduce \(53\%\) computational costs with only \(0.9\%\) additional parameters. In the future, we will explore more effective token-merging frameworks to improve compression plugins. Besides, we will further decouple compression plugins from specific tasks, thus we can construct compression plugins once and for all with transferability across multiple tasks. ## Acknowledgement This work is supported by the National Key R&D Program of China (No.2022ZD0116312), National Natural Science Foundation of China (No. 62236004), Tsinghua-Toyota Joint Research Fund, and Institute Guo Qiang at Tsinghua University. Author ContributionsIn the preparation and discussion of the project, Chaojun Xiao, Yuqi Luo, and Xu Han designed the model architectures. Chaojun Xiao and Yuqi Luo wrote the code and conducted the experiments. Besides, Wenbin Zhang and Pengle Zhang wrote the code for baseline models and ablation study. Chaojun Xiao wrote the initial draft. Xu Han, Yankai Lin, Zhengyan Zhang, Ruobing Xie, and Zhiyuan Liu significantly edited and improved the paper. Maosong Sun and Jie Zhou provided valuable advice to the research. ## Limitations We discuss the limitations of Variator in this section: (1) In the experiments, we implement Variator with T5 as our backbone. It is worth exploring applying Variator in other large-scale decoder-only pre-trained models. (2) In this paper, we mainly focus on accelerating the encoding process of PLMs. Language decoding also plays an essential role in real-world applications. In the experiments, we show the potential of Variator to compress key and value vectors for acceleration. We believe Variator can also serve as a flexible framework to speed up decoding. (3) Our plug-and-play compression framework parallels other model compression methods. It is worth exploring the combination of multiple acceleration methods to achieve more efficient and effective model inference frameworks.
2305.00821
Empowering Learner-Centered Instruction: Integrating ChatGPT Python API and Tinker Learning for Enhanced Creativity and Problem-Solving Skills
The ChatGPT Python API plays a crucial role in promoting Learner-Centered Instruction (LCI) and aligns with the principles of Tinker Learning, allowing students to discover their learning strategies. LCI emphasizes the importance of active, hands-on learning experiences and encourages students to take responsibility for their learning journey. By integrating the ChatGPT Python API into the educational process, students can explore various resources, generate new ideas, and create content in a more personalized manner. This innovative approach enables students to engage with the learning material deeper, fostering a sense of ownership and motivation. As they work through the Creative Learning Spiral, students develop essential skills such as critical thinking, problem-solving, and creativity. The ChatGPT Python API is a valuable tool for students to explore different solutions, evaluate alternatives, and make informed decisions, all while encouraging self-directed learning. In Tinker Learning environments, the integration of ChatGPT Python API empowers students to experiment and iterate, allowing them to find the most effective learning strategies that cater to their individual needs and preferences. This personalized approach helps students to become more confident in their abilities, leading to tremendous academic success and long-term skill development. By leveraging the capabilities of the ChatGPT Python API, educational institutions can create a more engaging, supportive, and dynamic learning environment. This approach aligns with the principles of Learner-Centered Instruction and Tinker Learning, promoting a culture of curiosity, exploration, and creativity among students while preparing them for the challenges of the fast-paced, ever-changing world.
Yun-Cheng Tsai
2023-05-01T13:40:25Z
http://arxiv.org/abs/2305.00821v1
Empowering Learner-Centered Instruction: Integrating ChatGPT Python API and Tinker Learning for Enhanced Creativity and Problem-Solving Skills ###### Abstract The ChatGPT Python API plays a crucial role in promoting Learner-Centered Instruction (LCI) and aligns with the principles of Tinker Learning, allowing students to discover their learning strategies. LCI emphasizes the importance of active, hands-on learning experiences and encourages students to take responsibility for their learning journey. By integrating the ChatGPT Python API into the educational process, students can explore various resources, generate new ideas, and create content in a more personalized manner. This innovative approach enables students to engage with the learning material deeper, fostering a sense of ownership and motivation. As they work through the Creative Learning Spiral, students develop essential skills such as critical thinking, problem-solving, and creativity. The ChatGPT Python API is a valuable tool for students to explore different solutions, evaluate alternatives, and make informed decisions, all while encouraging self-directed learning. In Tinker Learning environments, the integration of ChatGPT Python API empowers students to experiment and iterate, allowing them to find the most effective learning strategies that cater to their individual needs and preferences. This personalized approach helps students to become more confident in their abilities, leading to tremendous academic success and long-term skill development. By leveraging the capabilities of the ChatGPT Python API, educational institutions can create a more engaging supportive, and dynamic learning environment. This approach aligns with the principles of Learner-Centered Instruction and Tinker Learning, promoting a culture of curiosity, exploration, and creativity among students while preparing them for the challenges of the fast-paced, ever-changing world. Keywords:ChatGPT Python API Tinker Learning Learner-Centered Instruction Creative Learning Spiral. ## 1 Introduction Learner-Centered Instruction (LCI) fosters active learning experiences and empowers students to take charge of their educational journey [21]. This instructional approach emphasizes hands-on exploration, problem-solving, and collaboration as essential for knowledge construction [15]. LCI cultivates a culture of curiosity, exploration, and creativity, equipping students to face the rapidly evolving, dynamic world [6]. The Cone of Learning model posits that the most effective learning occurs through firsthand experiences, supplemented by practicing with the material, hearing about it, and lastly, reading about it [5]. The Tinker Learning approach advocates for learners to build their knowledge through hands-on exploration and discovery [18]. Grounded in the notion that students learn optimally by experimenting with and manipulating learning materials instead of receiving direct instructions [12], this approach closely aligns with the Cone of Learning, underscoring firsthand experience and practice as pivotal to effective learning [8]. The Creative Learning Spiral, a five-step process, guides learners through creative problem-solving [17], fostering creativity and promoting out-of-the-box thinking during challenges [19]. ChatGPT is a natural language processing technology developed by OpenAI that aims to generate fluent and coherent text. 
The API for this technology became available to the public in June 2020, allowing developers and researchers to harness the power of ChatGPT easily. ChatGPT is a commercial product released by OpenAI. Using the ChatGPT API is subject to their terms of service and policies [16]. We have extensively utilized the ChatGPT API in my classroom for cross-disciplinary text mining and programming teaching. This approach has proved effective in increasing students' sense of achievement and breaking through the sample size limitations in qualitative analysis. By harnessing the power of ChatGPT, students can engage in deep thinking through the practical application of programming skills, leading to a more profound understanding of the subject matter. Moreover, the ChatGPT API has enabled us to explore new avenues in text analysis and language processing. It has provided students with the opportunity to learn cutting-edge technologies and techniques. Through this teaching methodology, students have developed a strong foundation in programming, critical thinking, and problem-solving skills, equipping them with valuable skills essential for success in today's rapidly evolving technological landscape. As such, using the ChatGPT API in the classroom is a promising pedagogical approach that can lead to significant educational benefits for educators and students. Incorporating resources such as the ChatGPT Python API, this paper delineates a process involving imagination, creation, play, sharing, and feedback reception. The strategy aligns with the Tinker Learning approach and the Cone of Learning, emphasizing the significance of hands-on exploration, experimentation, and trial-and-error in learning [4]. LCI is compatible with educational models like the Cone of Learning and Tinker Learning, which accentuate firsthand experiences and practical application in learning processes [8]. These three models underscore the importance of active engagement with learning through experimentation and hands-on activities. They concur that students learn most effectively when actively involved in and encouraged to explore and construct new knowledge or skills [14]. Each model offers distinct perspectives on learning, focusing on retention [7], hands-on activities [18], and creativity cultivation [17], respectively. By understanding LCI's advantages and alignment with models such as the Cone of Learning and Tinker Learning, educators can make well-informed decisions regarding the most efficacious classroom management strategies for their students' success [11]. The structure of this paper is as follows: in Section 2, We review previous research on computer science education and learner sourcing. In Section 3, We describe our research methods and research questions. The results are presented in Section 4 and discussed in Section 5. Finally, we conclude the article in Section 6. ## 2 Literature Review ### Learner-Centered Instruction (LCI) In the recent ten years, Learner-Centered Instruction (LCI) has gained increasing prominence in education [22]. The World Economic Forum's "Education 4.0" initiative, released in May 2022, emphasizes the importance of a learner-centered teaching model that utilizes technology and innovation to equip learners with diverse skills to navigate the challenges of the fourth industrial revolution [20]. The Cone of Learning model illustrates the learning levels based on retention and difficulty. 
It suggests that learners retain more information when actively learning instead of passively receiving information [9]. Therefore, learner-centered instruction, which puts the learner at the center of the learning process and allows them to be actively engaged, can effectively combine this model with a new learning strategy. This approach encourages learners to take ownership of their learning and develop higher-order thinking skills, such as analysis, synthesis, evaluation, and application. By incorporating problem-based, collaborative, and inquiry-based learning into the curriculum, learners can engage in meaningful and authentic learning experiences that can result in a deeper understanding and retention of the material [1][20] ### Tinker Learning Research has shown that learner-centered approaches and related teaching strategies, such as the Tinker Learning method proposed by Seymour Papert, a professor at MIT, are effective in improving learning outcomes [13][3]. Tinker Learning is a learner-centered instruction strategy incorporating the Cone of Learning concept to create a new approach to learning [10]. The Cone of Learning theory suggests that people retain more information when actively involved in the learning process rather than passively receiving information. By combining these two concepts, Tinker Learning allows learners to actively explore and experiment with new ideas and concepts, leading to a deeper understanding and retention of the material. Through this process, learners can construct their knowledge and develop their own "learning how to learn" and 4C (creativity, critical thinking, communication, and collaboration) skills [2]. ### Creative Learning Spiral The Tinker Learning approach, also known as the Creative Learning Spiral, involves a process of playful exploration, creation, and iteration in which students develop their knowledge through the repeated cycles of imagining, creating, executing, and completing projects, as well as sharing and receiving feedback [17]. This process is similar to children's play, in which they experiment with different combinations and configurations, seeking new ideas and expressing their creativity through exploration and experimentation. A creative Learning Spiral is a teaching strategy that guides students through the process of creative thinking, from idea generation to implementation. Fig. 1 involves five steps as follows: 1. IMAGINE: trying out new ideas. 2. CREATE: exploring different solution paths. 3. PLAY: making adjustments. 4. SHARE: imagining new possibilities. 5. REFLECT: creatively expressing ideas. Students can build up their knowledge base in a spiral fashion by repeatedly attempting to develop new ideas, create, explore through play, and share and receive feedback. This approach helps students understand how creative ideas are generated and develops their skills as creative thinkers. ## 3 Methods The paper's approach to teaching students the skills of using ChatGPT Python API is based on the principles of Tinker Learning, an approach developed by MIT Professor Seymour Papert. The critical principle of Tinker Learning is that it emphasizes providing learners with "sufficient learning environments" in which they can construct their knowledge. The role of the teacher shifts from simply lecturing and demonstrating fixed examples to serving as a coach or mentor, and the classroom becomes more like a "swimming pool or sports field." 
The following process is an example of our method for analyzing highly cited Blockchain in Education-related papers from 2019-2023 using ChatGPT and LDA; a minimal code sketch of this pipeline is given below. Figure 1: Creative Learning Spiral by Mitchel Resnick 1. Data Collection: We collected highly cited Blockchain in Education-related papers published from 2019-2023 through academic search engines such as Google Scholar, Microsoft Academic, and Semantic Scholar. 2. ChatGPT Python API: We utilized the ChatGPT Python API, a language generation model, to generate summaries for each paper. These summaries provided a brief understanding of the content of each paper. 3. LDA: We applied LDA, a topic modeling algorithm, to identify the main topics discussed in the papers. We set the number of topics and ran the model to obtain each topic's main word list and corresponding weights. 4. Analysis: The Latent Dirichlet Allocation (LDA) results were analyzed to identify the topics discussed in the highly cited Blockchain in Education-related papers. We also cross-checked the paper summaries to confirm the topics. 5. Conclusion: We used the ChatGPT Python API and LDA to analyze highly cited Blockchain in Education-related papers from 2019-2023. Our analysis identified the main topics discussed in these papers, which will help to inform future research in this area.

Because no one can honestly know how the future will change, any assumption may be far from the actual end. Therefore, what schools should give students now should be "learning how to learn" and the 4Cs: critical thinking, communication, collaboration, and creativity. More broadly, the curriculum should emphasize general information skills that can be integrated into daily life and, most importantly, the ability to adapt, learn new things, maintain mental balance in unfamiliar situations, and find appropriate solutions to problems. In such a world, "Less is more." Teachers do not need to teach students more information. Students must understand data, judge what is essential, and combine bits of information to form a holistic worldview. All technical hands-on courses using programming languages as the development tool are the best way to approach such a world and achieve the 4Cs. Like the brain bookshelf in the picture below, after the teacher gives students the minimum essential tools, the rest of the knowledge and ability development is for students to build their learning through hands-on work.
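Returning to the five-step ChatGPT + LDA procedure listed above, the following is a minimal sketch of how such a pipeline could be wired together in Python. It assumes the 2023-era `openai` package (pre-1.0 interface) and scikit-learn; the model name, prompt, and topic count are placeholders rather than the authors' exact configuration.

```python
import openai
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

openai.api_key = "YOUR_API_KEY"  # placeholder


def summarize(paper_text: str) -> str:
    """Step 2: ask ChatGPT for a short summary of one collected paper."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": "Summarize this paper in 3 sentences:\n" + paper_text[:6000]}],
    )
    return response["choices"][0]["message"]["content"]


def topic_model(summaries: list[str], n_topics: int = 5, n_words: int = 10):
    """Steps 3-4: fit LDA on the summaries and return the top words of each topic."""
    vectorizer = CountVectorizer(stop_words="english", max_features=5000)
    doc_term = vectorizer.fit_transform(summaries)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(doc_term)
    vocab = vectorizer.get_feature_names_out()
    return [[vocab[i] for i in topic.argsort()[::-1][:n_words]]
            for topic in lda.components_]

# Step 1 (collecting the highly cited papers) and Step 5 (drawing conclusions)
# are manual; given a list `papers`, the analysis itself would be:
#   summaries = [summarize(p) for p in papers]
#   topics = topic_model(summaries)
```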
Good education is not just about making teachers teach well but providing a "sufficient learning environment" for learners to construct their knowledge. We found that Tinker Learning was the answer we had been looking for. We want to find out how we acquired the skills of "how to learn" through this project and to use the teaching strategies of Tinker Learning to enable students to develop the skills of "how to learn." ### Tinker Learning, Live Coding, and Live debugging The goal of teaching is "learning how to learn" and the 4Cs. Because no one can honestly know how the future will change and any assumptions may be far from reality, schools should focus on teaching students "how to learn" and the 4Cs: critical thinking, communication, collaboration, and creativity. The curriculum should emphasize general information skills that can be integrated into daily life and, most Figure 2: The example of Methods for Analyzing Highly Cited Blockchain in Education Related Papers from 2019-2023 Using ChatGPT and LDA. importantly, the ability to adapt, learn new things, maintain mental balance in unfamiliar situations, and find appropriate solutions to problems. In such a world, "less is more." Instead of overwhelming students with information, it is more critical for them to understand data, judge what information is essential, and combine bits of information to form a holistic view of the world. Technical, hands-on courses using programming languages as development tools can provide an excellent approach to achieving these goals. Tinker Learning, Live Coding, and Live debugging are teaching methods in this course. Tinker Learning involves a process of playful exploration, creation, and iteration. It is similar to children's play, where students experiment with different combinations and configurations, seeking new ideas and expressing their creativity through exploration and experimentation. This process helps students develop their knowledge through repeated cycles of imagining, creating, executing, and completing projects as sharing and receiving feedback. Live Coding and Live Debug involve students writing code to solve real-world problems, practicing identifying problems, designing projects, planning actions, collecting data, solving problems, and making decisions. These methods transform the traditional teacher-centered classroom into a more learner-centered environment, creating a "co-learning and co-creation" atmosphere at the school where the role of the teacher shifts from simply presenting fixed examples to serving as a coach. Using GitHub, teachers can help students track code submissions and changes, fostering the development of cross-disciplinary professionals with practical implementation skills. Tinker Learning promotes active, hands-on learning and encourages students to take ownership of their learning process through live coding, debugging, and tracking code submissions and changes via the GitHub platform. ### Implementation of the Study A program to analyze the records of all students on GitHub following the research framework. The teacher should guide the students to find a problem in their daily lives that interests them and discuss how to use the tools learned in the week to solve the problem. The teacher should demonstrate how they used specific techniques and tools to solve the problem and how these techniques can be extended. 
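Because the course relies on GitHub's version-control history to follow each student's weekly progress, a short script along the following lines can retrieve a repository's recent commits for review. This is only a sketch using the public GitHub REST API via the requests library; the repository owner and name are hypothetical placeholders, not actual student repositories.

```python
# Minimal sketch: list a student repository's recent commits so weekly progress
# can be reviewed. Repository owner/name are hypothetical placeholders; a token
# is only needed for private repositories or higher rate limits.
from typing import Optional
import requests

def list_commits(owner: str, repo: str, per_page: int = 10,
                 token: Optional[str] = None):
    """Return (date, message) pairs for the most recent commits of a repository."""
    headers = {"Accept": "application/vnd.github+json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"
    url = f"https://api.github.com/repos/{owner}/{repo}/commits"
    resp = requests.get(url, headers=headers, params={"per_page": per_page}, timeout=30)
    resp.raise_for_status()
    return [(c["commit"]["author"]["date"], c["commit"]["message"]) for c in resp.json()]

# Hypothetical student repository:
for date, message in list_commits("example-student", "chatgpt-api-homework"):
    print(date, "-", message.splitlines()[0])
```

A dedicated client such as PyGithub or the gh command-line tool would work equally well; the plain REST endpoint is used here only to keep the example dependency-light.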
Through the Tinker Learning teaching strategy, students can try new ideas, explore different paths to solve problems, make adjustments, imagine new possibilities, and creatively express their views. The Creative Learning Spiral process allows students to understand how creativity develops from idea to implementation and to become creative thinking practitioners. By repeating the process of imagining, creating, executing, and completing creations through play, sharing, and receiving feedback, students' knowledge is built up step by step like a spiral.

Figure 3: Our Creative Learning Spiral for teaching students the skills of the ChatGPT Python API.

Fig. 3 shows our Creative Learning Spiral for teaching students the skills of the ChatGPT Python API; it incorporates the following five steps:

1. IMAGINE: The ChatGPT Python API can assist students in exploring and refining their ideas by providing them with relevant information and insights based on their input.
2. CREATE: The ChatGPT Python API can be used as a tool to help students develop their projects by generating text and language models based on their requirements.
3. PLAY: Students can use the ChatGPT Python API to analyze and understand large amounts of text data, identify patterns and relationships, and draw insights from their findings. By working with the API, students can develop their analytical skills and gain a deeper understanding of the subject matter.
4. SHARE: GitHub can serve as a powerful platform for students to work together on coding projects, share code snippets, and provide feedback to one another. By incorporating the ChatGPT Python API in their projects, students can collaborate on analyzing large amounts of text data and gain insights from their findings.
5. REFLECT: By receiving feedback and reviews from their peers and instructors, students can reflect on what they have learned and how they can improve their skills in the future. The ChatGPT Python API provides a valuable tool for identifying areas for improvement and refining their problem-solving approach.

This approach emphasizes active engagement and experimentation as key components of effective learning and uses the Creative Learning Spiral process to guide students through creatively solving problems.

## 4 Results

This section explores how students can demonstrate their proficiency and practical implementation skills by creating a portfolio of finished projects on their GitHub accounts. By analyzing the progress of their projects over time, we can track their growth and learning throughout the course. To achieve this, we obtained all students' GitHub accounts and, using action research and case study methods, observed their weekly learning progress.

### Participants' GitHub

Our approach involves comparing and analyzing changes in the students' code from the initial blank framework to the progress made during their first assignments and the final completed content. We can infer how they construct their knowledge by repeatedly verifying the students' assignments. The GitHub link containing the records of all 44 out of 45 students who made progress in the first three assignments through the tutorials is available at [https://reurl.cc/7jzggN](https://reurl.cc/7jzggN). It is evident that only two students did not receive full marks in the second assignment, and six students did not receive full marks in the first assignment.
Fig. 4 shows all participants' performance in the assignments, which are available on GitHub sub-sheets, including HW1-HW3, HW4-HW5, and the final project. These assignments aim to confirm students' skills in data visualization, integrating programmatic skills, and exploring large amounts of data on the internet. We use their GitHub accounts to document their growth and progress over time, ensuring they have a work portfolio demonstrating their skills upon completing the course. Python is recommended as a beginner-friendly programming language due to its intuitive syntax, availability of resources and libraries, cross-platform compatibility, and numerous use cases. The ChatGPT Python API and GitHub can promote critical thinking, collaboration, creativity, and communication skills in a more learner-centered approach.

### Students' Learning Performance and Evaluation

All student assignments and projects must be submitted to the GitHub version control platform, with five assignments and one mini-hackathon project to build the habit of turning ideas into work. The automatic version-control tracking mechanism on GitHub records every submission and change.

1. Assignment 1, worth 15 points, is designed to have students apply set operations (intersection, union, and difference), allowing them to choose the dataset problem they want to solve for the first assignment. With the ChatGPT Python API, students can use natural language processing to analyze and understand their chosen dataset and generate insights and recommendations based on the data.
2. Assignment 2 is worth 15 points and is designed to confirm that all students are proficient in using JSON and Python dict features to solve problems involving semi-structured data, to analyze a mix of structured and semi-structured data, and to verify that students can apply all syntax flexibly to the problem they are trying to solve. With the ChatGPT Python API, students can generate natural language descriptions of their solutions and analyze large amounts of text data to gain insights into the structure and organization of the data.
3. Assignment 3 is worth 15 points. It is designed to confirm that all students understand the inductive logic of text structure and can quickly batch-process the extraction of large amounts of repetitive key content through data regularization, and to ensure that all students can successfully use web crawling skills to extract large amounts of data of interest for analysis and application in their projects. With the ChatGPT Python API, students can use natural language processing to extract key content and insights from large amounts of text data, allowing them to quickly and efficiently analyze and organize the data.
4. Assignment 4, worth 15 points, is designed to confirm that all students can use data visualization and related analysis tools to visually present large amounts of data of interest to them and perform in-depth interpretation and analysis. With the ChatGPT Python API, students can generate natural language descriptions of their visualizations and gain insights into the patterns and relationships in the data.
5. Assignment 5, worth 15 points, is designed to confirm that all students can integrate the programmatic skills built in the previous four assignments to present a large amount of data on the Internet that they are interested in, along with textual exploration skills, and conduct an in-depth exploration.
The final project was designed to confirm that all students were able to integrate the acquired skills from the previous ten weeks, take a global view, and explore the text of a large amount of data on the Internet that they were interested in, along with co-occurrence network analysis skills. With the ChatGPT Python API, students can generate natural language descriptions of their findings and insights and collaborate with others on their analysis and exploration of the data.

Figure 4: The performance of all participants in all assignments.

6. The final project is worth 100 points and accounts for 25% of the total grade. The goal is to design a user experience solution that incorporates all the acquired development skills into the problem the students want to solve and the users they want to serve, and to visually represent the flow of use. With the ChatGPT Python API, students can generate natural language descriptions of their user experience solution and gain insights into its effectiveness based on user feedback and analysis of user behavior.

We can use the ChatGPT Python API in assignments and projects to provide students with personalized and adaptive learning experiences. The API can generate natural language responses, analyze text data, and provide targeted feedback and support. Additionally, it can be used to develop customized learning materials for students based on their learning styles. Incorporating the ChatGPT API promotes a learner-centered approach to learning and helps students build their problem-solving skills.

Table 1 shows that forty-one students participated in the classroom feedback, which indicated that students felt this teaching method could stimulate interest in learning and allow good interaction between teachers and students in the classroom. Students can track each other's changes and progress after submitting their code and obtain the most realistic picture of learning effectiveness. The live interactive video recordings of the classroom are used to analyze the actual process of students working on the tasks and to let students describe their progress and changes. These observations confirm that, in the Tinker Learning teaching mode, students can construct their body of knowledge step by step with the assistance of Live Coding & Live Debugging.

## 5 Discussion

This section discusses several key ideas related to the teaching approach and learning process, such as Tinker Learning, Learner-Centered Instruction, the Cone of Learning, and the Creative Learning Spiral. These concepts emphasize hands-on, experiential learning and encourage students to explore and discover concepts independently. By implementing various learning methods, such as collaborative learning, hands-on projects, a mini-hackathon, flipped teaching, live video recordings of teaching operations, and program examples, students can demonstrate their implementation strategies for applying technology to educational training.

The ChatGPT Python API has significant potential to enhance the learning experience for students and promote learner-centered instruction. By leveraging natural language processing, machine learning, and text analysis, educators can provide students with a more personalized and adaptive learning experience. The ChatGPT API can generate natural language responses, allowing more engaging and interactive exchanges between students and the program. This can be especially beneficial for students who may require additional support or who learn at a different pace from their peers.
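To make the idea of targeted feedback concrete, the sketch below sends a student's submitted code to the chat model and asks for short, personalized suggestions. The model name, prompt wording, and sample submission are illustrative assumptions, not the course's actual implementation.

```python
# Minimal sketch: ask the chat model for short, personalized feedback on a
# student's submitted code. Model name and prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def review_submission(code: str, assignment: str) -> str:
    """Return a few encouraging, concrete suggestions for the submitted code."""
    prompt = (
        f"You are a teaching assistant for a Python course. For {assignment}, "
        "give three short, encouraging suggestions to improve this code:\n\n" + code
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Hypothetical student submission for the set-operations assignment:
student_code = (
    "nums = [1, 2, 2, 3]\n"
    "uniq = []\n"
    "for n in nums:\n"
    "    if n not in uniq:\n"
    "        uniq.append(n)\n"
    "print(uniq)\n"
)
print(review_submission(student_code, "Assignment 1 (set operations)"))
```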
Additionally, the ChatGPT API can analyze large amounts of text data, providing educators with insights into students' understanding of the subject matter and identifying areas where students may struggle. Using ChatGPT, educators can provide personalized support and guidance, generate personalized learning materials, and offer targeted feedback to help students improve. Overall, the ChatGPT Python API offers a powerful tool for educators to promote learner-centered instruction and provide students with a more personalized and adaptive learning experience, ultimately leading to more effective learning outcomes. \begin{table} \begin{tabular}{|l|r|r|r|} \hline & Very Much in Line & Still meets & Disagree \\ \hline The course syllabus is arranged appropriately & 80\% & 12.50\% & 7.50\% \\ \hline Teaching to stimulate interest in learning & 75\% & 15\% & 10\% \\ \hline Teachers teach from the heart & 82.50\% & 10\% & 7.50\% \\ \hline Good interaction between teachers and students & 87.50\% & 12.50\% & 0\% \\ \hline The evaluation method is reasonable & 77.50\% & 17.50\% & 5\% \\ \hline \end{tabular} \end{table} Table 1: Course Evaluation. There are forty-one students participated in the classroom feedback. ## 6 Conclusion In our programming course, we have leveraged the ChatGPT Python API to enhance students' sense of accomplishment and promote deeper thinking through qualitative analysis. By introducing problem situations and demonstrating how to solve them using Python language and packages, students have gained hands-on experience and practical skills that they can use to solve similar problems. We have also implemented the Tinker Learning teaching strategy, which encourages students to actively participate in writing code and constructing their own knowledge body. Using the ChatGPT API, students have explored new avenues in text analysis and language processing, enabling them to analyze larger samples and gain deeper insights into the subject matter. This has led to a significant increase in students' sense of accomplishment and motivation to continue learning. By encouraging students to think about how to apply the tools demonstrated to solve real-life problems, we have promoted the development of the 4C skills: critical thinking, communication, collaboration, and creativity. Using the ChatGPT API, students have expanded their problem-solving skills and developed a deeper understanding of the subject matter. We can verify their growth and progress by analyzing their records on GitHub. Our teaching methodology has helped students develop the ability to "learn how to learn" and build their own knowledge body, leading to a more profound understanding of programming concepts and principles. Overall, the ChatGPT API has played a crucial role in enhancing our teaching strategy and promoting students' sense of accomplishment and motivation. By incorporating problem-solving strategies and encouraging active participation, we have developed students' 4C skills and equipped them with valuable skills that will serve them well in their future careers.
2302.02049
Non-trivial topology in rare-earth monopnictides from dimensionality reduction
Thin films of rare-earth monopnictide semimetals are expected to turn into semiconductors due to quantum confinement effect, which lifts the overlap between electron pockets at Brillouin zone edges and hole pockets at the zone center. Instead, taking non-magnetic LaSb as an example, we find the emergence of a quantum spin Hall insulator phase in LaSb(001) films as the thickness is reduced to 7, 5, or 3 monolayers. This is attributed to a strong quantum confinement effect on the in-plane electron pockets, and the lack of quantum confinement on the out-of-plane pocket in reciprocal space projected onto zone center, leading to a band inversion. Spin-orbit coupling opens a sizeable non-trivial gap in the band structure of the thin films. Such effect is shown to be general in rare-earth monopnictides and may lead to interesting phenomena when coupled with the 4f magnetic moments present in other members of this family of materials.
Dai Q. Ho, Ruiqi Hu, D. Quang To, Garnett W. Bryant, Anderson Janotti
2023-02-04T01:16:16Z
http://arxiv.org/abs/2302.02049v1
# Non-trivial topology in rare-earth monopnictides from dimensionality reduction ###### Abstract Thin films of rare-earth monopnictide semimetals are expected to turn into semiconductors due to quantum confinement effect, which lifts the overlap between electron pockets at Brillouin zone edges and hole pockets at the zone center. Instead, taking non-magnetic LaSb as an example, we find the emergence of a quantum spin Hall insulator phase in LaSb(001) films as the thickness is reduced to 7, 5, or 3 monolayers. This is attributed to a strong quantum confinement effect on the in-plane electron pockets, and the lack of quantum confinement on the out-of-plane pocket in reciprocal space projected onto zone center, leading to a band inversion. Spin-orbit coupling opens a sizeable non-trivial gap in the band structure of the thin films. Such effect is shown to be general in rare-earth monopnictides and may lead to interesting phenomena when coupled with the \(4f\) magnetic moments present in other members of this family of materials. Rare-earth monopnictides (RE-Vs), with simple rock-salt crystal structure composed of two interpenetrating face-centered cubic (\(fcc\)) sublattices (Figure 1a), are fully compensated semimetals with electron pockets centered at three X points and two hole pockets at \(\Gamma\), as shown in Figure 1b for LaSb. RE-V thin films have been epitaxially grown on III-V semiconductors [1, 2, 3] and sought as structurally perfect (defect-free) epitaxial contacts [1, 4], as nanoparticles embedded in III-Vs for thermoelectrics, infrared and terahertz generation and detection [5, 6, 7, 8, 9, 10]. Some members of this material family display interesting properties such as extremely large magneto-resistance and non-trivial topological phase including a recently observed unusual magnetic splitting phenomenon [11, 12, 13, 14, 15, 16, 17]. The relatively small overlap in energy between the electron and hole pockets have led researchers to expect that this overlap could be lifted and a band gap would ultimately be opened in structures with reduced dimension such as nanoparticles or thin films due to the quantum confinement effect (QCE). So far, this has not been realized, and the reasons for it have remained unknown [6, 7, 18]. Using first-principles calculations for free standing slabs of LaSb (taken as an example, here) we show that the overlap in energy between the electron pockets lying in the film-plane directions (X\({}_{x}\) and X\({}_{y}\)) and the hole pockets at \(\Gamma\) indeed decreases upon dimensional reduction in thin films, but the electron pocket along the film-plane normal direction, at X\({}_{z}\), which is projected onto \(\overline{\Gamma}\) in the 2D Brillouin zone (BZ) of the thin film, remains. This direction dependent behavior is attributed to the directional nature of the electron pocket wavefunctions, originating from the rare-earth \(5d\) orbitals, which maintains the metallic character in thin films. However, for films that are 7 or less monolayers (ML) thick, we show that orbital-selective QCE may ultimately lead to an inverted band gap which produces helical spin-momentum locked edge states in quasi 2D nanoribbons. Calculations of the \(\mathbb{Z}_{2}\) invariant, spin-resolved local density of states (LDOS), and spatial distribution of wavefunctions associated with edge modes confirm the non-trivial topology in the electronic structure of these ultra-thin films. 
This material system provides a novel platform for realizing the intrinsic quantum spin Hall (QSH) insulator phase in experiment different from the two well-known examples of semiconductor quantum wells [19, 20] and monolayer transition metal dichalcogenides [21, 22, 23, 24], as well as some other recent proposals [25, 26, 27]. The calculated lattice parameter of LaSb is 6.547 A and 6.518 A using the DFT-GGA and the HSE06 hybrid functional, respectively, both in good agreement with previous calculations [28] and the experimental value of 6.488 A [29] which is adopted in this study. The computed band structure of primitive bulk LaSb is shown in Figure 1d. There are three electron pockets (denoted by \(\alpha\) centered at X\({}_{x}\), X\({}_{y}\), and X\({}_{z}\)) from bands derived mainly from La 5\(d\) orbitals, and two hole pockets (\(\beta\) and \(\delta\) centered at \(\Gamma\)) from bands derived from Sb 4\(p\) orbitals, crossing the Fermi level, leading to full compensation, i.e., the total numbers of electrons and holes are equal. This overlap is overestimated in DFT-GGA, resulting in an artificially higher carrier concentration compared to experimental values, while HSE06 gives carrier concentration in better agreement with experiments [28]. As can be seen in Figure 1b, the electron pockets are highly anisotropic with each ellipsoid's semi-major axis pointing along the corresponding Cartesian axis, and the semi-minor axis being perpendicular to it. The hole pockets, however, are more isotropic with an inner spherical-like pocket \(\beta\) and an outer warped double pyramid pocket \(\delta\). Therefore, we expect that the electron pockets will be more sensitive to QCE upon dimensional reduction in thin films than the hole counterparts. The effect of quantum confinement on the electron pockets can be understood by inspecting the single-particle wavefunctions at the three X points shown in Figure 1f-h. At X\({}_{x}\) (X\({}_{y}\)), the wavefunction is composed of La \(d_{yz}\) (\(d_{xz}\)) orbital (see details on projected band structure in Figure S1), resulting in a low dispersion along the semi-major axis \(\Gamma\)\(-\)X\({}_{x}\) (\(\Gamma\)\(-\)X\({}_{y}\)) in reciprocal space because the orbital interaction occurs along the direction normal to their charge distribution as evidenced in Figure 1f(g). Meanwhile, the same band has a higher dispersion along the perpendicular direction, i.e., the electron pocket's semi-minor axis X\(-\)W thanks Figure 1: Electronic structure of a typical semimetallic RE-V. (a) Conventional rock-salt crystal structure, the green and cyan arrows representing the unit vectors of bulk primitive unit cell and tetragonal unit cell, respectively, (b) Fermi surface of the primitive bulk semimetallic RE-V featuring two hole pockets at \(\Gamma\) and three equivalent electron pockets at X points, (c) 3D BZ of RE-V \(fcc\) crystal structure and its projection on (001) plane (yellow plane) and also shown is the 3D BZ of the tetragonal unit cell (right square green prism inside the BZ of the primitive \(fcc\)); orbital-resolved band structure of semimetallic RE-V using (d) primitive unit cell, (e) 4-atom tetragonal unit cell at \(k_{z}=0\), more bands observed than that of the primitive unit cell due to band folding phenomenon; visualization of the wavefunctions of the electron pockets centered at (f) X\({}_{x}\), (g) X\({}_{y}\), and (h) X\({}_{z}\) (yellow and cyan lobes represent \(+\) and \(-\) sign of the wavefunctions, respectively). 
to the orbital interaction happening in the plane of their distribution (\(yz\) and \(xz\) planes for pockets at X\({}_{x}\) and X\({}_{y}\), respectively). At X\({}_{z}\), on the other hand, the wavefunction is composed of La \(d_{xy}\) orbital, i.e., lying in the \(xy\) plane with small overlap along the [001] or \(z\) direction (Figure 1h). Therefore, we anticipate that QCE in [001] oriented thin films will affect mostly the electron pockets whose wavefunctions are largely distributed in the planes normal to the film plane, i.e., the pockets centered at X\({}_{x}\) and X\({}_{y}\); the electron pocket at X\({}_{z}\) will be marginally affected by the QCE because the corresponding wavefunction is spreading in the plane of the film.[30, 31] This can be checked by examining the band structure of thin free-standing slabs. To understand the electronic structure of [001] oriented films, we first discuss the band structure of LaSb in the tetragonal unit cell, with two formula units, at \(k_{z}=0\), as shown in Figure 1e which has similar symmetry to that of the (001) films, i.e., we can build a (001) film unit cell from the tetragonal unit cell. Its corresponding BZ is highlighted by the green right square prism inside the BZ of the \(fcc\) primitive cell in Figure 1c. The electron pocket at \(\Gamma\), overlapping in momentum and energy with the hole pockets, corresponds to the pocket at X\({}_{z}\) due to the folding of the BZ along \(k_{z}\) (see details from projected band structure in Figure S2). It can also be viewed as the projection on the \(k_{x}\)-\(k_{y}\) plane of the ellipsoid along \(k_{z}\) in the Fermi surface of the primitive cell (Figure 1b). Note that the overlap and crossing of electron and hole pockets near \(\Gamma\) in Figure 1e is a consequence of having a larger unit cell compared to the primitive cell. Otherwise, they would open an anti-crossing gap in the presence of spin orbit coupling (SOC). In fact, the electron pocket at X\({}_{z}\) and hole pockets at \(\Gamma\) belong to distinct symmetry irreducible representations (irreps), and therefore, the interaction between them is prohibited by symmetry, i.e., the C\({}_{4}\) rotation operation along \(\Gamma\)\(-\)X\({}_{z}\).[32, 33] However, for finite thickness films along [001], this symmetry is lifted, and the overlap in both energy and momentum becomes viable.[33] At the M point, there are two electron bands with different dispersion: the lower dispersion band composed of La \(d_{xz}\), originally from the pocket at X\({}_{y}\), with the dispersion along the semi-major axis \(\Gamma\)\(-\)X\({}_{y}\), and the higher dispersion counterpart originating from the projection of the semi-minor axis X\({}_{x}-\)W\({}_{z}\) of the pocket at X\({}_{x}\) from the adjacent BZ (the path highlighted in blue in Figure 1c). To summarize, the three electron pockets of the bulk in primitive unit cell form are residing in two locations of the tetragonal unit cell BZ: X\({}_{z}\) at \(\Gamma\), X\({}_{x}\) and X\({}_{y}\) at M, which would ultimately determine the effect of size quantization upon dimensional reduction. 
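The orbital-selective confinement argument can be made semi-quantitative with a textbook particle-in-a-box estimate, \(\Delta E=\hbar^{2}\pi^{2}/(2m^{*}L^{2})\): sub-bands derived from a pocket that is light along [001] (a semi-minor axis of the X\({}_{x}\) and X\({}_{y}\) ellipsoids) shift up strongly as the film thins, while a pocket that is heavy along [001] (the semi-major axis of the X\({}_{z}\) ellipsoid) barely moves. The short sketch below uses illustrative effective masses chosen only to show the trend; they are not values computed in this work.

```python
# Rough particle-in-a-box estimate of the confinement shift of the lowest
# sub-band, Delta E = hbar^2 pi^2 / (2 m* L^2), for a (001) film of thickness L.
# The effective masses are illustrative placeholders, not values from this work:
# the X_z pocket is taken as heavy along [001], the X_x/X_y pockets as light.
import numpy as np
from scipy import constants

hbar = constants.hbar
m_e = constants.m_e
a0 = 6.488e-10            # rock-salt lattice constant of LaSb (m)
monolayer = a0 / 2        # one (001) monolayer is half a conventional cell

def confinement_shift_eV(m_star: float, n_layers: int) -> float:
    """Lowest sub-band shift (eV) for a box of n_layers monolayers and mass m_star*m_e."""
    L = n_layers * monolayer
    return (hbar * np.pi) ** 2 / (2 * m_star * m_e * L ** 2) / constants.e

for n in (15, 7, 3):
    light = confinement_shift_eV(0.2, n)   # assumed light mass along z (X_x, X_y pockets)
    heavy = confinement_shift_eV(2.0, n)   # assumed heavy mass along z (X_z pocket)
    print(f"{n:2d} ML: light-mass shift ~ {light:.2f} eV, heavy-mass shift ~ {heavy:.2f} eV")
```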
The electronic band structure of a free standing slab with 15 MLs is shown in Figure 2a, along \(\overline{\rm M}-\overline{\Gamma}-\overline{\rm X}\), where \(\overline{\rm M}\) can be viewed as the projection of the two high symmetry points X\({}_{y}\) and X\({}_{x}\) (from the adjacent BZ), \(\overline{\Gamma}\) is the projection of \(\Gamma\) and X\({}_{z}\), and \(\overline{\rm X}\) is the projection of the L points onto the 2D BZ of the thin films (Figure 1c). We note that the electron pocket Figure 2: Electronic structure of freestanding films LaSb with the thickness of (a) 15-ML, (b) 9-ML, (c) 7-ML, (d) 5-ML, (e) 3-ML, (f) 1-ML. Due to the orbital-selective QCE on the electron pockets at \(\overline{\rm M}\) and \(\overline{\Gamma}\), a phase transition from topologically trivial semimetal in 15-ML film to non-trivial insulator in 7-, 5-, 3-ML films and eventually trivial insulator in 1-ML film can be observed. at \(\overline{\rm M}\) (corresponding to X\({}_{x}\) and X\({}_{y}\)) is shifted upwards in energy while the electron pocket at \(\overline{\Gamma}\) (projected from X\({}_{z}\)) remains nearly unchanged. At the \(\overline{\rm M}\) point, there are two sets of electron pocket sub-bands with quite different dispersion: one set having low dispersion (high effective mass) while the other set exhibiting large dispersion (low effective mass). This can be understood from the orbital character of the bands (Figure S3) and the directional nature of their interaction as analyzed above for the M point of the tetragonal unit cell case. The low dispersion set of sub-bands stems from the La \(d_{xz}\) orbital interaction taking place along the semi-major axis \(\Gamma\)\(-\)X\({}_{y}\). Meanwhile, the high dispersion set of sub-bands originates from the semi-minor axis X\({}_{x}\)\(-\)W\({}_{z}\) due to the in-plane interaction of La \(d_{yz}\), resulting in a stronger interaction and hence a smaller effective mass. The gaps at the X\({}_{x}\) and X\({}_{y}\) points in the bulk transfer to the gap at the \(\overline{\rm M}\) point of the film. At \(\overline{\Gamma}\), however, electron pockets overlaps with hole pockets, maintaining the semimetallic phase at this thickness of the film. In addition, the energy separation of the sub-bands at \(\overline{\rm M}\) is associated with the dispersion, or effective mass, of the electron band at X\({}_{x}\) (X\({}_{y}\)). Consequently, this energy separation is larger than that between the sub-bands of the electron pockets at \(\overline{\Gamma}\) due to QCE; the latter is associated with the coupling of the \(d_{xy}\) orbitals in the film plane, hence less affected by QCE. The electronic structure of the 9-ML thick slab is shown in Figure 2b, indicating a semimetal on the brink of having the electron pocket at \(\overline{\rm M}\) above the Fermi level. For the 7-ML and 5-ML thick slabs, Figures 2c,d show that the electron pockets at \(\overline{\rm M}\) are above the Fermi level due to the QCE, and inverted band gaps at \(\overline{\Gamma}\) are opened in the presence of SOC, indicating the emergence of a 2D (or quasi 2D) QSH phase [19, 34, 35]. The gaps at \(\overline{\rm M}\) and at \(\overline{\Gamma}\) are enlarged in the 3-ML thick film (Figure 2e). At the ultimate limit of 1-ML thick, LaSb film becomes a normal insulator thanks to the QCE lifting up the overlap (at \(\overline{\Gamma}\)) between the projected electron pocket from \(X_{z}\) and hole pockets, leading to a normal band ordering (Figure 2f). 
As mentioned above, the electronic structure of LaSb films evolves from a semimetal with trivial topology to a non-trivial insulator as the film thickness decreases down to a few MLs. The topologically non-trivial electronic structure follows a band inversion, as shown in Figure 3b for the case of 3-ML thick film. To confirm the topological non-triviality, we calculated the \(\mathbb{Z}_{2}\) topological invariant using the Fu-Kane formula [36] as well as the Fukui-Hatsugai method [37], obtaining \(\mathbb{Z}_{2}=1\). The non-trivial topological nature is further corroborated by following the evolution of the Wannier charge center (WCC) as a function of momentum, shown in Figure 3e, which essentially displays the non-trivial topology as the red-dashed reference line cutting through the WCC evolution line an odd number of times. Furthermore, based on the bulk-boundary correspondence, we expect edge states of the QSH insulator to show the spin-momentum locking behavior [34]. We searched for the presence of spin-polarized edge modes associated with the non-trivial topology by calculating the band structure of a nanoribbon made from the 3-ML film. We find that the nanoribbon must be at least 50 MLs wide (\(\sim\)16 nm) to avoid interactions between the two opposite edges of the Figure 3: Signature of the QSH insulating phase in 3-ML thick film. (a) 2D BZ of the thin film and 1D BZ of the ribbon made of the film (blue line) with periodic direction along \(x\), (b) projected band structure around the Fermi level (set at 0 eV) exhibiting an inverted band gap at \(\overline{\Gamma}\), (c) electronic structure of the ribbon made from the 3-ML thick film obtained from \(ab\)\(initio\) calculations using openMX showing the presence of edge states crossing at Fermi level as highlighted by red lines in the inset, (d) LDOS of a semi-infinite wide ribbon made from the 3-ML thick LaSb obtained from Wannier orbital-based TB Hamiltonian and the spin-polarized LDOS of the edge states shown in the inset, (e) Wannier charge center evolution of the 3-ML thick LaSb freestanding film displaying the non-trivial topological property as the red dashed reference line crossing the evolution line (represented by filled navy blue circles) an odd number of time. ribbon. The electronic structure of the 3-ML thick, 50-ML wide nanoribbon is shown in Figure 3c, where the edge states cross at the four-fold degenerate Dirac point \(\overline{\overline{\Gamma}}\), represented by a pair of red lines in the vicinity of \(\overline{\overline{\Gamma}}\) in the inset. The corresponding spin-resolved LDOS in Figure 3d, obtained by the iterative Green function method [38] for a semi-infinite structure, and the spatial distribution of spin-resolved wavefunctions of edge states shown in Figure S4, clearly demonstrate the helical spin-momentum locking of the edge modes, thus confirming the presence of the QSH phase in the 3ML-thick film. For 7-ML or 5-ML thick films, there are two set of band inversions (see Figure S5a,e), thus no time-reversal symmetry-protected edge states are expected, i.e., spin scattering from spin up to spin down channels on the same edge is possible. [39, 40] Therefore, they are classified as trivial insulators with the topological invariant \(\mathbb{Z}_{2}=0\). However, one pair of the inverted bands can be removed when a small biaxial tensile strain is applied to films deposited on an appropriately chosen substrate. 
Such an approach has been investigated theoretically [41] and recently demonstrated experimentally in the case of RE-Vs grown on III-V substrates. [42] This biaxial (or epitaxial) strain allows to selectively lift the overlap between one pair of electron and hole sub-bands, resulting in an odd number of band inversions, so that a QSH phase can also be realized in 5-ML and 7-ML thick films (see Figure S5c,d,h). When taking a closer look at the thickness-dependence of the electronic structure of the thin films, there is a striking difference between films having even and odd number of monolayers (Figure 4 and Figure S6), particularly near the Fermi level. As the thickness of the film decreases, the QCE becomes stronger, resulting in gradually larger sub-band. The QCE also affects the size of the gap at the \(\overline{\mathrm{M}}\) and \(\overline{\Gamma}\) points. Thinner films with an odd number of MLs show larger gaps with decreasing film thickness. The same trend is seen for films with an even number of MLs. Remarkably, the gap at \(\overline{\Gamma}\) oscillates when going from an even to an odd number of ML, as detailed in Figure 4 for the bands around \(\overline{\Gamma}\). The gap is essentially zero for films having an even number of MLs but becomes sizable for films with an odd number of MLs. Figure 4: Electronic structures in the proximity of the \(\overline{\Gamma}\) point of LaSb freestanding thin films with an even/odd number of MLs. Red dashed and blue lines indicate bands without and with SOC, respectively. Irreps of relevant bands are denoted by capitalized Greek letters. This oscillation in the gap size of the thin films with even and odd number of MLs can be explained by the difference in their symmetry. The films having an odd number of MLs belong to a symmorphic symmetry group (space group number 123 with point group \(P4/mmm\)) while the films with an even number of MLs possess a nonsymmorphic symmetry group (space group number 129 with point group \(P4/mmm\)). This leads to a distinction in the symmetry of the bands around the Fermi levels as shown in Figure 4e,f taking 3-ML and 2-ML films as examples. For the 3-ML film, one electron band and two hole bands overlap in the proximity of \(\overline{\Gamma}\), leading to a band inversion at \(\overline{\Gamma}\) in the absence of SOC (represented by red dashed lines). Crucially, one of the hole bands has the same symmetry as that of the electron band (irrep \(\Sigma_{2}\), Figure 4e), leading to a gap opening due to their mutual interaction. However, the other hole band with a different irrep (\(\Sigma_{1}\)) does not interact with the electron counterpart, resulting in a semimetallic phase with a degeneracy of the two hole bands at the \(\overline{\Gamma}\) point. This inverted band gap semimetal with a degenerate point at the zone center is similar to bulk HgTe [19]. The degeneracy in bulk HgTe is protected by its crystal symmetry and is stable even in the presence of SOC; it is only lifted and a tiny gap opens when the bulk crystal symmetry is broken, i.e., in the thin film structure (quantum well). For the 3-ML LaSb film, however, the degeneracy is not stable under the influence of SOC since it is from the original degeneracy of the top two hole bands at the X\({}_{z}\) point of the bulk when not taking SOC into consideration. As a consequence, when turning on SOC, a sizeable inverted gap comparable to the SOC splitting between \(\beta\) and \(\delta\) bands at the bulk X point is observed. 
On the other hand, for the 2-ML system the two hole bands and the electron band have different symmetries, characterized by irreps \(\Sigma_{1}\), \(\Sigma_{2}\), and \(\Sigma_{4}\), respectively (Figure 4f). Therefore, in the absence of SOC, those bands would cross if they were overlapping in energy since their interactions are prohibited by symmetry (as in the case of 4ML-thick film shown in Figure 4d). When SOC is turned on, the highest hole band and the electron band overlap in energy, leading to a SOC-induced gap opening. However, this gap is tiny since the SOC plays a role as a small perturbation, the relevant orbitals interaction is not allowed by symmetry. A similar argument can be applied to understand the sizeable inverted gap in the other films with odd numbers of MLs such as the 5-ML and 7-ML, as well as the observed essentially zero gaps in the films with an even number of MLs such as the 4-ML and 6-ML films. The effects discussed here are expected to occur for all materials in the family of RE-Vs with RE=La,..., Lu and V=As, Sb, Bi since they have the same crystal structure and similar orbital composition of the bands. Note, however, that some of the Bi-based compounds already show non-trivial topology in the 3D bulk due to the crossing of the Bi \(6p\) and the rare-earth \(5d\) bands near the X point [14, 15]. The As- and Sb-based compounds are all topologically trivial semimetals, and hence are all expected to become topologically non-trivial when made as few monolayers thick, [001] oriented ultra-thin films. Apart from the case of La-, Y-, and Lu-based compounds, where the RE \(f\) orbitals are completely empty or filled, other RE-Vs exhibit magnetic ordering at low temperatures, opening an avenue to combining non-trivial topology with spin magnetism, potentially leading to novel phenomena such as the emergence of 2D antiferromagnetic topological insulators [43, 44, 45], the quantum anomalous Hall effect in Chern insulators [46], and Fermi arcs due to an unusual magnetic splitting [17]. Experiments have already demonstrated the epitaxial growth of RE-Vs thin films on conventional III-V substrates, down to few MLs thick. Scanning tunneling spectroscopy measurements on GdSb/GaSb has explored the possibility of turning GdSb semimetal into a semiconductor in ultra-thin films, yet failing [42, 47]. The authors, however, were not searching for edge states at that time. We also argue that the presence of metallic interface states at the interface with III-Vs [48] might impose difficulties in identifying the spin-momentum locked states and that perhaps a different substrate with different bonding at the interface might be required. In the case of LaSb, we propose the use of rock-salt MgTe, CaTe, or SrTe for which interface metallic states would be mitigated, facilitating the observation of the spin-momentum locked edge states. In conclusion, we have investigated the electronic structure of non-magnetic RE-V thin films at various thicknesses using first-principles calculations employing the hybrid functional HSE06, taking LaSb as a case study. The quantum confinement in thin films along [001] has dissimilar effects on the electron pockets centered at equivalent X points at the zone edges owing to their orbital composition. 
The directional nature of the wavefunction associated with the electron pocket at X\({}_{z}\) point, distributed mainly in the film plane with weak out-of-plane interactions, makes this pocket resistant to finite size effects, leading to an inverted band gap opening in the limit of very thin films. Interestingly, at the limit of 3-ML, we have observed the emergence of a QSH insulating phase characterized by the non-trivial \(\mathbb{Z}_{2}\) topological invariant and the presence of helical spin-momentum locked edge states. Similar observation can be seen for 5-ML and 7-ML films with an applied small biaxial tensile strain. Our results indicate that this effect is general for the RE-Vs, with a possibly interesting combination of 4\(f\) magnetism with topological band structures in ultrathin films. ## 2 Computational methods First-principles calculations based on density functional theory [49, 50] as implemented in the VASP code [51] have been performed for structural optimization and electronic structure calculation. The screened hybrid exchange-correlation functional of Heyd-Scuseria-Ernzerhof (HSE06) [52, 53] was employed in all cases except the calculations of the electronic structures of nanoribbons because these calculations required very large unit cells. In that case, the generalized gradient approximation (GGA) functional of Perdew-Burke-Ernzerhof (PBE) [54, 55] was used instead. A plane-wave basis set with up to kinetic cutoff energy of 400 eV, and 12\(\times\)12\(\times\)12 \(k\)-point mesh for bulk LaSb and 10\(\times\)10\(\times\)1 \(k\)-point mesh for thin films were used, respectively. Due to large unit cell sizes of the LaSb nanoribbons, first-principles calculations searching for the presence of edge states were performed using the OpenMX code [56, 57, 58] with a localized basis set. The symmetry irreducible representations (irreps) of electronic bands were determined using the irvsp code [59]. Maximally localized Wannier functions (MLWFs) based effective Hamiltonian were constructed by the Wannier90 code [60] using La \(d\) and Sb \(p\) as projectors. Edge state spectra were then obtained by using the surface Green function for semi-infinite systems [38] as implemented in WannierTools [61]. This research was supported by the NSF through the UD-CHARM University of Delaware Materials Research Science and Engineering Center (No. DMR-2011824). It made use of the computing resources provided by the Extreme Science and Engineering Discovery Environment (XSEDE), supported by the National Science Foundation grant number ACI-1053575. Additional details are available on orbital-resolved (projected) band structures of LaSb primitive unit cell, 4-atoms tetragonal unit cell, free standing slab with 15-ML thick; spatial distribution of spin-resolved wavefunctions of edge states in 3-ML thick, 50-ML wide LaSb ribbon; the evolution of the topology of electronic structures in LaSb thin films having 5-ML and 7-ML thick under biaxial tensile strain; and electronic band structure of even and odd number of ML in the whole Brillouin zone. This material is available free of charge via the ACS Publications website at DOI: [http://pubs.acs.org](http://pubs.acs.org).
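For readers who wish to set up similar film geometries, the following is a minimal sketch that builds a LaSb(001) slab and writes it out in VASP POSCAR format using the ASE library. The use of ASE here is an assumption made purely for illustration (it is not necessarily the workflow used for the calculations above), and the layer count produced by ase.build.surface should be checked against the intended number of monolayers.

```python
# Minimal sketch: build a LaSb(001) slab and export it in VASP POSCAR format.
# Illustrative only; not the workflow used for the calculations in this work.
from ase.build import bulk, surface

# Rock-salt LaSb with the experimental lattice constant (Angstrom).
lasb = bulk("LaSb", crystalstructure="rocksalt", a=6.488)

# A thin (001) slab with vacuum; verify that the number of atomic (001) planes
# in the resulting cell matches the intended monolayer count.
slab = surface(lasb, (0, 0, 1), layers=3, vacuum=10.0)
slab.write("POSCAR", format="vasp")
print(len(slab), "atoms; cell lengths:", slab.cell.lengths())
```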
2305.06288
Framed Combinatorial Topology with Labels in $\infty$-Categories
Framed combinatorial topology is a recent approach to tame geometry which expresses higher-dimensional stratified spaces via tractable combinatorial data. The resulting theory of spaces is well-behaved and computable. In this paper we extend FCT by allowing labelling systems of meshes and trusses in $\infty$-categories, and build an alternative model of FCT by constructing $\infty$-categories that classify meshes and trusses. This will serve as the foundation for future work on models of higher categories based on generalised string diagrams and the study of generalised tangles.
Lukas Heidemann
2023-05-10T16:22:16Z
http://arxiv.org/abs/2305.06288v1
# Framed Combinatorial Topology with Labels in \(\infty\)-Categories ###### Abstract Framed combinatorial topology is a recent approach to tame geometry which expresses higher-dimensional stratified spaces via tractable combinatorial data. The resulting theory of spaces is well-behaved and computable. In this paper we extend FCT by allowing labelling systems of meshes and trusses in \(\infty\)-categories, and build an alternative model of FCT by constructing \(\infty\)-categories that classify meshes and trusses. This will serve as the foundation for future work on models of higher categories based on generalised string diagrams and the study of generalised tangles. ###### Contents * 1 Introduction * 2 Trusses * 3 Stratified Spaces * 4 Meshes * 5 Labelled Meshes ## 1 Introduction Framed combinatorial topology [1] is a recent approach to tame geometry. Well-behaved stratified spaces are equipped with a canonical ordering of coordinate dimensions, called a framing, leading to a very tractable theory. Meshes represent a stratified space as a sequence of 1-dimensional bundles, each adding one additional coordinate in the order of the framing; trusses are purely combinatorial representations of framed isomorphism classes of meshes which faithfully capture all the geometric structure of meshes in a form that is amenable to computer implementation. In this paper we extend FCT with labels in \(\infty\)-categories. **Motivation.** A central application of FCT is in models of higher-dimensional categories based on a generalisation of string diagrams to arbitrary dimensions [1, 2]. Figure 1: A string diagram expressed both geometrically as a mesh and combinatorially as a truss. The two representations are equivalent up to a contractible choice of diagram layout.
2310.02258
A Neural Scaling Law from Lottery Ticket Ensembling
Neural scaling laws (NSL) refer to the phenomenon where model performance improves with scale. Sharma & Kaplan analyzed NSL using approximation theory and predict that MSE losses decay as $N^{-\alpha}$, $\alpha=4/d$, where $N$ is the number of model parameters, and $d$ is the intrinsic input dimension. Although their theory works well for some cases (e.g., ReLU networks), we surprisingly find that a simple 1D problem $y=x^2$ manifests a different scaling law ($\alpha=1$) from their predictions ($\alpha=4$). We opened the neural networks and found that the new scaling law originates from lottery ticket ensembling: a wider network on average has more "lottery tickets", which are ensembled to reduce the variance of outputs. We support the ensembling mechanism by mechanistically interpreting single neural networks, as well as studying them statistically. We attribute the $N^{-1}$ scaling law to the "central limit theorem" of lottery tickets. Finally, we discuss its potential implications for large language models and statistical physics-type theories of learning.
Ziming Liu, Max Tegmark
2023-10-03T17:58:33Z
http://arxiv.org/abs/2310.02258v2
# A Neural Scaling Law from Lottery Ticket Ensembling ###### Abstract Neural scaling laws (NSL) refer to the phenomenon where model performance improves with scale. Sharma & Kaplan analyzed NSL using approximation theory and predict that MSE losses decay as \(N^{-\alpha}\), \(\alpha=4/d\), where \(N\) is the number of model parameters, and \(d\) is the intrinsic input dimension. Although their theory works well for some cases (e.g., ReLU networks), we surprisingly find that a simple 1D problem \(y=x^{2}\) manifests a different scaling law (\(\alpha=1\)) from their predictions (\(\alpha=4\)). We opened the neural networks and found that the new scaling law originates from _lottery ticket ensembling_: a wider network on average has more "lottery tickets", which are ensembled to reduce the variance of outputs. We support the ensembling mechanism by mechanistically interpreting single neural networks, as well as studying them statistically. We attribute the \(N^{-1}\) scaling law to the "central limit theorem" of lottery tickets. Finally, we discuss its potential implications for large language models and statistical physics-like theories of learning. ## 1 Introduction Neural scaling laws (NSL), the phenomenon where model performance improves as the model size scales up, has been widely observed in deep learning [1; 2; 3; 4; 5; 6; 7; 8]. Typically the losses follow \(\ell\propto N^{-\alpha}\) where \(N\) is the number of model parameters, and \(\alpha>0\) is the scaling exponent. Understanding the mechanisms of neural scaling laws is important both theoretically and empirically. Currently, two main theories of neural scaling laws are proposed: one is the _approximation theory_[3; 9; 10], claiming that the scaling law comes from regression on a data manifold. In particular, the scaling exponent is \(\alpha=4/d\) where \(d\) is the intrinsic input dimension [3; 9] or maximum arity [10]. The other is the _quanta theory_[8], which suggests that the scaling law comes from the hierarchy of subtasks, and the scaling exponent \(\alpha\) depends on the heavy-tailedness of the subtask distribution. This paper aims to reveal yet another mechanism of neural scaling law from _lottery ticket ensembling_. Our theory is motivated by empirical observations in an extremely simple setup, i.e., training two-layer (ReLU or SiLU) networks with the Adam optimizer to fit the squared function \(y=x^{2}\). The approximation theory would predict that \(\alpha=4\) for ReLU networks, but cannot predict \(\alpha\) for SiLU networks. The quanta theory is also not applicable to these cases. Our empirical results are quite intriguing (shown in Figure 1c): For ReLU networks, the loss curve decay as \(N^{-4}\) at the beginning, but soon slows down to \(N^{-1}\) as \(N\) increases. For SiLU networks, the loss curve behaves as \(N^{-1}\) consistently. Now comes the question: what gives rise to this \(N^{-1}\) scaling law? By reverse engineering single networks and doing statistical analysis on a population of networks, we attribute the new neural scaling law to _lottery ticket ensembling_: a network with width \(N\) contain \(n\) "lottery tickets" (\(n\propto N\)), ensembling of which can reduce the variance of outputs as \(n^{-1}\propto N^{-1}\). Lottery tickets refer to sub-networks which can, by their own, achieve good performance [11]. 
The idea of ensembling is similar to bagging in machine learning where weak learners are aggregated to obtain strong learners, although in neural networks such ensembling strategy is not designed manually but rather emergent from training. The paper is organized as follows: In Section 2, we review the approximation theory and show our empirical results manifesting a scaling law \(\ell\propto N^{-1}\) deviating from the approximation theory. To understand the new scaling law, in Section 3 we reverse engineer neural networks to find out why. Hinted by the observation of symmetric neurons and peaks in the loss histogram, we find the existence of "lottery tickets". Then in Section 4 we observe a central limit theorem for lottery tickets, which attributes the \(N^{-1}\) scaling law to variance reduction as in central limit theorem. Finally in Section 5 we discuss our theory's implications for large language models and statistical physics-like theories of learning. ## 2 A New Scaling Law Not Explained by Approximation Theory In this section, we first review the prediction of NSL by Sharma & Kaplan [9] using the approximation theory. Then we show one simple example which demonstrates a neural scaling law deviating from the approximation theory. ### The Old Tale: Approximating Functions On Data Manifold Sharma & Kaplan [9] predicts that, when data is abundant, the MSE loss achieved by well-trained ReLU neural networks scales as \(N^{-\alpha}\) with \(\alpha=4/d\), where \(d\) is the intrinsic dimensionality of data manifold. The basic idea is: A ReLU network with \(N\) parameters is able to fit \(O(N)\) data points accurately. If these data points are uniformly placed on the grid of a \(d\)-dimensional hypercube (see Figure 0(a)), then there are \(O(N^{1/d})\) points along each dimension, with lattice constant \(h=O(N^{-1/d})\). ReLU networks are piecewise linear functions, so the leading error term (according to Taylor expansion) is second-order, i.e., \(h^{2}\). Considering the squared function in the definition of MSE, we know the MSE loss scales as \(h^{4}=O(N^{-4/d})\). ### Experiments: Discovery of A New Scaling Law **Setup** Let us consider a simple example. The network has only one hidden layer with \(N\) neurons, with a scalar input \(x\in[-2,2]\) and aiming to predict \(y=x^{2}\) (Figure 0(b), more examples in Appendix A). The network is parametrized as \[f(x)=\sum_{i=1}^{N}v_{i}\sigma(w_{i}x+b_{i})+c, \tag{1}\] where \(\sigma\) is the activation function. When the activation function is ReLU, the approximation theory predicts that the MSE loss scales as \(N^{-4}\) since \(d=1\). When the activation function is SiLU Figure 1: (a) The \(\ell\propto N^{-4/d}\) scaling law from Sharma and Kaplan [9] can be understood from approximating a \(d\)-dimensional function while data points lie uniformly inside a hypercube. (b) Our simple setup is training a two-layer SiLU or (Leaky)ReLU network (one hidden layer with width \(N\)) to fit the squared function \(y=x^{2}\). (c) A surprising \(N^{-1}\) scaling emerges for SiLU networks and at the tail of ReLU networks, while [9]’s prediction \(N^{-4}\) only appears at the early stage of ReLU. (\(\sigma(x)=x/(1+e^{-x})\), a smoothed version of ReLU), no existing theory is able to predict the scaling law. We are thus curious to run empirical experiments to see: (1) Is the \(N^{-4}\) scaling law valid in our case? (2) what is the scaling law for SiLU networks? 
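A minimal training sketch of the setup in Eq. (1) is given below; the optimizer settings follow the training protocol reported in the Empirical Results paragraph, while the data sampling (a fixed grid of 1000 points on \([-2,2]\)) and the quick demonstration loop at the end are assumptions made for illustration.

```python
# Minimal sketch of the setup: a width-N one-hidden-layer network
# f(x) = sum_i v_i sigma(w_i x + b_i) + c trained with Adam to fit y = x^2 on
# x in [-2, 2]. Data sampling and the short demo loop are assumptions.
import torch

def train_width(N: int, act=torch.nn.SiLU, steps: int = 50_000, seed: int = 0) -> float:
    torch.manual_seed(seed)
    x = torch.linspace(-2.0, 2.0, 1000).unsqueeze(1)
    y = x ** 2
    model = torch.nn.Sequential(torch.nn.Linear(1, N), act(), torch.nn.Linear(N, 1))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=10_000, gamma=0.2)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.mean((model(x) - y) ** 2)   # full-batch MSE
        loss.backward()
        opt.step()
        sched.step()
    return loss.item()

# Quick check over a few widths (the paper reports the median over 1000 seeds):
for N in (10, 20, 40):
    print(N, train_width(N, steps=2000))  # fewer steps here only to keep the demo fast
```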
**Empirical Results** We train neural networks with various width \(N=\{10,20,30,40,50\}\) for LeakyReLU and SiLU activations, respectively. We train neural networks with the Adam optimizer for 50000 steps; the initial learning rate is \(10^{-2}\), reducing the learning rate by \(\times 0.2\) every 10000 steps. Since neural networks can be sensitive to initializations, we train 1000 networks with different random seeds and use the median loss value (histograms are shown in Figure 3). The MSE losses versus width \(N\) is shown in Figure 0(c). For ReLU, we find that the loss starts off as \(N^{-4}\), agreeing with the approximation theory, but then quickly transits to a much slower decay \(N^{-1}\). By contrast for SiLU, the MSE loss decays slowly but consistently as \(N^{-1}\). The \(N^{-1}\) scaling law is unexpected, suggesting a new mechanism not discovered before. In the next section, we aim to understand this new scaling law, by reverse engineering SiLU networks. **Remark: Optimization** Notice that the SiLU function \(\sigma(x)=x/(1+e^{-x})\) expands to be a quadratic function around its minimum \(\sigma(x)\approx A(x-x_{*})^{2}+B\)1, so ideally one can carefully construct parameters of a \(N=1\) network to make it approximate the squared function arbitrarily well, which requires the weights \(w\) and \(v\) to be \(w\to 0\), \(v\rightarrow+\infty\) but maintains \(w^{2}v=1/A\) constant. In practice, optimization cannot find such extreme solution, otherwise loss would become perfectly zero even for \(N=1\). Footnote 1: \(A\approx 0.109,x_{*}\approx 1.278,B=-0.278\). ## 3 Mechanistically Understanding Lottery Tickets **Existence of symmetric neurons in wide networks** To understand what is happening inside a wide network, we train an extremely wide network with \(N=10000\) SiLU neurons. We plot \(N\) neurons \((w_{i},b_{i})\) (weights and biases for the first layer) for \(i=1,\cdots,N\) in Figure 1(a). There are two interesting patterns: (1) most neurons are concentrated around a bell-shaped curve, indicating an attractor manifold; (2) the distribution is symmetric with respect to \(w\): if there is a neuron at \((w,b)\), then there is a "symmetric" neuron around \((-w,b)\). The existence of the attractor manifold may not be too surprising, since low-loss solutions should live in a subspace smaller than the entire parameter space. The existence of symmetric neurons, however, is somewhat unexpected and requires more detailed study, as we carry out below. **Two-neuron networks** We conjecture that, if symmetric neurons are universal enough, we should be able to observe them even in a network with 2 hidden neurons. Hence we train SiLU networks with only 2 hidden neurons to fit the squared function (see Figure 1(c)). Since such narrow networks are strongly affected by initializations, we train 1000 networks with different random seeds. We show the histogram of MSE losses of these trained networks in Figure 1(b), finding that it contains many peaks, suggesting the existence of bad local minima in loss landscapes. In particular, we are interested in Figure 2: Evidence of lottery tickets. (a) For an extremely wide network \(N\)=10000, the distribution of weights and biases in the first layer display an intriguing symmetry, i.e., there exist symmetric neurons \((w,b)\) and \((-w,b)\). (b) We train a thousand \(N=2\) networks independently and show the histogram of their losses. The histogram display a few peaks, suggesting existence of a few different local minima or “algorithms”. 
We call the peak with lowest loss “lottery tickets”. (c) We find lottery tickets to have symmetric neurons, which guarantee that the network represents an even function. the peak with the lowest loss, which we call "lottery tickets", elaborated below. We also attempt to understand other peaks in parameter space and in algorithmic space in Appendix B. A randomly chosen lottery ticket has parameters (illustrated in Figure 2c): \[(w_{1},w_{2},b_{1},b_{2},v_{1},v_{2},c) \tag{2}\] \[= (-0.83010483,0.83010304,-1.16842330,-1.16842365,5.68583536,5.6858 86636,3.1539197)\] where we observe \(w_{1}\approx-w_{2}\), \(b_{1}\approx b_{2}\), \(v_{1}\approx v_{2}\), which correspond to symmetric neurons. The benefits of symmetric neurons can be understood from Taylor expansion. Setting \(w_{1}=-w_{2}=w\), \(b_{1}=b_{2}=b\), \(v_{1}=v_{2}=v\), the network represents an even function \[f(x)=v\sigma(wx+b)+v\sigma(-wx+b)+c,\quad f(x)=f(-x), \tag{3}\] and Taylor expands as \[f(x)=(2\sigma(b)u+c)+u\sigma^{\prime\prime}(b)w^{2}x^{2}+\frac{1}{12}u\sigma^ {\prime\prime\prime\prime}(b)w^{4}x^{4}+..., \tag{4}\] so a neural network can adjust its parameters to make sure \(2\sigma(b)u+c=0\), \(u\sigma^{\prime\prime}(b)w^{2}=1\), leaving the quartic term as the leading error term. The take-away is: utilizing symmetric neurons is very effective at approximating the squared function, however, it relies on luck for networks to find such strategy (justified the terminology "lottery tickets"). ## 4 A Central Limit Theorem of Lottery Tickets In the previous section, we show the loss histogram of width \(N=2\) networks contains many peaks, and define the peak with the lowest loss as "lottery tickets". How would the story change for wider networks (i.e., larger \(N\))? **Loss histograms** For each width \(N=1,2,3,4,10,20,30,40,50\), we train 1000 networks independently (with different random seeds) and plot their loss histograms in Figure 3. For narrow networks (\(N=1,2,3,4\)), the loss histogram contains many peaks, suggesting the existence of many bad local minima, but on the other hand, suggesting the existence of "lottery tickets" which win over other local minima. For wider networks (\(N=10,20,30,40,50\)), the loss histograms are single-peaked. This reminds us of the central limit theorem of random variables: the average of arbitrary weird-shaped distribution would converge to a Gaussian distribution as more and more random variables are averaged. This suggests that maybe there are many lottery tickets inside a wide network and then ensembled. Below we propose a theory for it. **Theory: Ensembling Lottery Tickets Leads to \(N^{-1}\) Scaling** For a wide network, there exist \(n>1\) lottery tickets. The \(i^{\rm th}\) lottery ticket represents a function \(f_{i}(x)\approx f(x)\) whose error term \(e_{i}(x)\equiv f_{i}(x)-f(x)\). We define function norm as \(|f|^{2}=\int_{-\infty}^{\infty}f^{2}(x)p(x)dx\)2, where \(p(x)\geq 0\) is the distribution of \(x\). Typically \(|e_{i}(x)|^{2}\ll|f|^{2}\). The network can utilize the last linear layer to Figure 3: “Central limit theorem” of lottery tickets. For each width \(N\), we train 1000 networks independently and plot their loss histograms. For small \(N\), the distribution is multi-modal, i.e., shows more than one peaks; for large \(N\), the distribution becomes more single-peaked. ensemble these lottery tickets such that \[f_{E}(x)=\sum_{i=1}^{n}a_{i}f_{i}(x)=(\sum_{i=1}^{n}a_{i})f(x)+\sum_{i=1}^{n}a_{i} e_{i}(x). 
\tag{5}\] We want to minimize: \[|f_{E}(x)-f(x)|^{2}=\left|(\sum_{i=1}^{n}a_{i}-1)f(x)+\sum_{i=1}^{n}a_{i}e_{i}( x)\right|^{2}. \tag{6}\] If we assume \(f(x)\) to be orthogonal to \(e_{i}(x)\), and \(e_{i}(x)\) is orthogonal to \(e_{j}(x)\) for \(j\neq i\) (in general we don't need orthogonality conditions 3), then the above equation simplifies to Footnote 3: If \(\langle e_{i},e_{j}\rangle\neq 0\), i.e., they are not orthogonal, we can always redefine \(e_{j}^{\prime}\equiv e_{j}-\frac{\langle e_{i},e_{i}\rangle}{\langle e_{i},e_ {i}\rangle}e_{i}\) such that \(\langle e_{i},e_{j}^{\prime}\rangle=0\). In summary, we can always make lottery tickets to be orthogonal in the proper bases. However, \(k\) lottery tickets may actually have only \(n<k\) bases. In fact, lottery tickets are highly correlated (as we show in Appendix C), so here \(n\) should be understood as the number of independent/orthogonal lottery tickets. \[|f_{E}(x)-f(x)|^{2}=(\sum_{i=1}^{n}a_{i}-1)^{2}|f|^{2}+\sum_{i=1}^{n}a_{i}^{2} |e_{i}|^{2} \tag{7}\] Since \(|f|^{2}\gg|e_{i}|^{2}\), we want \(\sum_{i=1}^{N}a_{i}=1\) such that the coefficient before \(|f|^{2}\) becomes zero. The second term can be lower bounded by (using Cauchy-Schwarz inequality): \[(\sum_{i=1}^{n}a_{i}^{2}|e_{i}|^{2})(\sum_{i=1}^{n}\frac{1}{|e_{i}|^{2}}) \geq(\sum_{i=1}^{n}a_{i})^{2}=1 \tag{8}\] The equality holds when \(a_{i}|e_{i}|^{2}=C\) for all \(i\), meaning that for a better/worse lottery ticket (small/large \(|e_{i}|^{2}\)), a larger/smaller weight \(a_{i}\) is applied. Finally we arrive that \[|e_{E}|^{2}\equiv|f_{E}(x)-f(x)|^{2}\geq(\sum_{i=1}^{n}\frac{1}{|e_{i}|^{2}}) ^{-1} \tag{9}\] If we further assume \(n\) lottery tickets to be of equal quality, i.e., \(|e_{i}|^{2}=|e|^{2}\), then \(|e_{E}|^{2}=|e|^{2}/n\), meaning that the MSE loss decays as \(n^{-1}\). A width \(N\) network on average has \(n\propto N\) lottery tickets, so the \(n^{-1}\) scaling can further translate to \(N^{-1}\). **The benefits of large widths from ensembling or synergy** Ensembling is only one benefit of wider models. In some sense, ensembling is trivial because you can train two half-sized models independently and then ensemble them. Are there benefits of large widths beyond ensembling, i.e., two sub-networks might synergize in a smart way that smaller networks cannot simulate? To see the benefits of synergy, we need to first subtract the effect of ensembling. The loss of networks Figure 4: Do the benefits of large widths come from plain ensembling or more complicated synergy of smaller subparts? In each plot ”\(N\)” means a network with width \(N\), ”\(N/2\)”\(+\)”\(N/2\)” means two networks with width \(N/2\) are ensembled. If these two loss histograms are different (e.g., \(N=2,4\)), this means more complicated synergy is in place beyond ensembling. If the two loss histograms are similar (e.g., \(N=20,40\)), this means the role of synergy is vanishing, and the benefit of larger widths solely comes from ensembling. \(N/2\) and \(N\), obeys distributions \(L_{N/2}\sim p_{N/2}(\ell)\) and \(L_{N}\sim p_{N}(\ell)\), respectively. Ensembling two \(N/2\) networks, according to our theory above, would give loss \(\tilde{L}_{N}=(1/L_{N/2}+1/L_{N/2})^{-1}\), obeying the distribution \(\tilde{p}_{N}(\ell)\). We then compare the difference between \(\tilde{p}_{N}(\ell)\) and \(p_{N}(\ell)\); larger (smaller) difference means more (less) synergy between two \(N/2\) networks. 
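A minimal numerical sketch of this comparison is given below; the loss arrays are placeholders standing in for the measured histograms (in the experiments they come from the 1000 independently trained networks per width), and the lognormal shapes are an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder loss samples; in practice these are the losses of the 1000 trained
# networks of width N/2 and width N (see Figures 3 and 4).
losses_half = rng.lognormal(mean=-6.0, sigma=1.0, size=1000)   # stands in for p_{N/2}
losses_full = rng.lognormal(mean=-7.0, sigma=1.0, size=1000)   # stands in for p_N

# Pure-ensembling prediction: draw two losses from p_{N/2} and combine them
# harmonically, tilde(L)_N = (1/L_a + 1/L_b)^(-1).
a = rng.choice(losses_half, size=10_000)
b = rng.choice(losses_half, size=10_000)
pred_ensemble = 1.0 / (1.0 / a + 1.0 / b)

# If the predicted distribution matches p_N, doubling the width is "just" ensembling;
# a gap between the two distributions indicates synergy.
print("median predicted (ensembling only):", np.median(pred_ensemble))
print("median measured  (width N)        :", np.median(losses_full))
```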
In practice, we obtain the histogram of \(\tilde{p}_{N}(l)\) by randomly drawing two losses from the histogram of \(\tilde{p}_{N/2}(l)\) and compute the harmonic mean (divided by 2). Our empirical results are shown in Figure 4: narrower networks have stronger synergy, while wider networks are almost ensembling with nearly no synergy. For example, an \(N=2\) network is much better than ensembling of two \(N/2=1\) networks (\(2\gg 1+1\)); an \(N=4\) network is (slightly) better than the ensembling of two \(N/2=2\) networks (\(4>2+2\)); an \(N=20\) or \(N=40\) network does as well as ensembling of two half-width networks (\(20\approx 10+10,40\approx 20+20\)). Although in Section 3, we only analyzed \(N=2\) lottery tickets, \(N=4\) lottery tickets are even better, i.e., one cannot construct an \(N=4\) lottery ticket from simply ensembling two \(N=2\) lottery tickets. This may imply that wider networks may not only provide **more** lottery tickets, but also provide **better** lottery tickets when synergy is present. **Implications for large language models** Lottery ticket ensembling would expect the loss to be \(\ell\propto w^{-1}\) where \(w\) is the width of the network. For large language models, people use \(N\) to represent the number of parameters, which is roughly \((\operatorname{width})^{2}\times\operatorname{depth}\). Since the depth-width ratio [12] is usually fixed when scaling language models, we have \(w\propto N^{1/3}\) (\(N\) is the number of parameters) hence \(\ell\propto N^{-1/3}\). Interestingly, Hoffmann et al. [7] reports that 4: Footnote 4: \(D\) is the dataset size. In our case \(D\to\infty\) so we can ignore the loss dependence on \(D\). \[\ell(N,D)=E+\frac{A}{N^{0.34}}+\frac{B}{D^{0.28}}, \tag{10}\] whose model size scaling exponent \(0.34\) is quite close to our prediction \(1/3\). We would like to investigate in the future whether this really supports our theory or it is a pure coincidence. ## 5 Related works and Discussions **Neural Scaling laws (NSL)** has been widely observed in deep learning [1, 2, 3, 4, 5, 6, 7, 8]. Theories are proposed to explain NSL, including approximation theory based on intrinsic input dimension [9] or maximum arity [10], quanta theory based on subtask decomposition [8]. Our work prosposes another NSL mechanism namely lottery ticket mechanism. It is still unclear how to disentangle these mechanisms, and whether there is a dominant mechanism in deep networks. **Emergence** refers to the phenomenon where a model has abrupt improvements in performance when its size is scaled up [13], although this can depend on specific metrics [14]. Our discussion on ensembling versus synergy may further provide a useful tool to categorize types of emergence: trivial (pure ensembling) or non-trivial (with synergy). **Ensembling** is not new in machine learning. However, ensembling methods are usually manually designed [15]. This work shows that ensembling can be emergent from network training. A very related concept is redundancy, which has been shown both for vision tasks [16], language tasks [17], and even simple math datasets [18, 19]. **Mean field theory** suggests that infinitely wide neural networks approach kernel machines, and finite networks with finite width \(N\) deviates from the limiting kernel by order \(1/N\)[20, 21, 22], which agrees with our lottery ticket central limit theorem. 
This is a pleasant agreement: our analysis starts from narrow networks (lottery tickets) and extends to wide networks, while mean field theory starts from infinitely wide networks and extends to (finitely) wide networks. Unifying the two theories would provide a clearer mental picture; we leave this for future work. **Limitations** The derivation of the \(N^{-1}\) scaling requires the assumption that lottery tickets are unbiased. If all lottery tickets share a non-zero bias, then the loss would plateau to a non-zero value due to the bias term. We indeed observe this phenomenon for some cases in Appendix A. The analysis in this paper is mainly based on a toy example; how general our claims are for other tasks and architectures needs further study. ## Acknowledgement We would like to thank Eric Michaud, Isaac Liao and Zechen Zhang for helpful discussions. ZL and MT are supported by IAIFI through NSF grant PHY-2019786, the Foundational Questions Institute and the Rothberg Family Fund for Cognitive Science.
2304.09292
A model of phase-coupled delay equations for the dynamics of word usage
Cycles of word usage have been described using an integro-differential Volterra model close to a Hopf bifurcation. Here we transform this system to a phase model, which allows us to phase-couple the words and address the observation of coherent oscillations in word usage.
Alejandro Pardo Pintos, Diego Shalom, Enzo Tagliazucchi, Gabriel Mindlin, Marcos Trevisan
2023-04-18T20:58:15Z
http://arxiv.org/abs/2304.09292v3
# A model of phase-coupled delay equations for the dynamics of word usage ###### Abstract Word use presents regular oscillations mounted over slowly varying trends. These oscillations have been recently interpreted in terms of fashion-like cycles of interest and saturation, and modelled using a logistic equation with distributed delay. Here we show that the communities of semantically related words are partially synchronized. To account for this, we model the words of each community using logistic equations connected with a Kuramoto coupling. In this way, we test the simple hypothesis that the change in the occurrence of a word depends linearly on the occurrence of its semantic neighbours. We show that this simple model reproduces the coherence observed in the experimental communities using a single global coupling across multiple languages, regardless of the network topology. Our results build confidence on a universal model of language usage based on the interaction between cognitive foes and the sociocultural context. + Footnote †: Author contributions: GM and MT designed the research; APP and DES analyzed the data; APP, ET and MT wrote the paper. ## I Introduction Most language changes occur without the express knowledge of speakers and produce large-scale effects [1]. Some of these effects have recently been the subject of quantitative studies [2; 3] using massive databases currently at our disposal [4; 5]. Of particular interest here is the analysis of the largest corpus of books available [6], which unveiled the dominance of 16-years oscillations in the frequency of occurrence of words, mounted over slowly varying trends [7] (Figure 1a). To explore the linguistic information carried by trends and oscillations, in a previous study we splitted both components from each word and grouped them by similarity [8]. The result of this operation is that words with similar trends form groups that characterize socio-cultural periods, as shown in Figure 1b. In this example, the trends show a maximum in the early \(20^{\text{th}}\) century, putting together keywords of the post-industrial society. Other groups gather keywords of the industrial revolution, the world was and the digital era, among many others. Instead, when the trends are disregarded and only the oscillatory components of the words are considered, communities of semantically related words emerge, as the chemistry group in Figure 1c. Other groups with similar oscillations are related to economy, law, the army, medicine, etc.. Here, we refer to communities of words with similar trends as _keywords_, and groups of words with similar oscillations as _semantic fields_. The observation of regular oscillations led us to propose a basic mechanism for word usage [8] that is common to other cultural objects with life cycles, such as fashion [9; 10]. According to this interpretation, the words that belong to a field of interest increase their occurrences until sustained consumption produces a saturation that decreases usage, eventually leaving the topic ready to regain attention. We made this mechanism operative with the Volterra logistic model [11; 12] \[\dot{u}=Ru\left[1-\frac{1}{k}\int_{-\infty}^{t}G(t-\tau)u(\tau)\,d\tau\right], \tag{1}\] where \(R\) is the rate of growth of a word, and \(G(t)\) a weighting factor that indicates how much emphasis should be given to the earlier times to determine its inhibitory effect in the present. 
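Equation 1 can be integrated numerically by discretizing the memory integral. The sketch below uses a simple Euler step and, purely for illustration, the Gamma-type ("strong") kernel adopted in the next paragraph; the parameter values \(R\), \(k\), \(\bar{\tau}\) and the step size are arbitrary choices, not fitted values.

```python
import numpy as np

def simulate_delayed_logistic(R=0.5, k=1.0, tau_bar=8.0, u0=0.1, T=300.0, dt=0.05):
    """Euler integration of u' = R u [1 - (1/k) * int_0^t G(t-s) u(s) ds]
    with the strong kernel G(tau) = (4 tau / tau_bar^2) exp(-2 tau / tau_bar)."""
    n = int(T / dt)
    t = np.arange(n) * dt
    G = 4.0 * t / tau_bar**2 * np.exp(-2.0 * t / tau_bar)   # kernel evaluated on the same grid
    u = np.empty(n)
    u[0] = u0
    for i in range(1, n):
        # Discretized memory integral int_0^{t_{i-1}} G(t_{i-1} - s) u(s) ds.
        mem = np.dot(G[:i][::-1], u[:i]) * dt
        u[i] = u[i - 1] + dt * R * u[i - 1] * (1.0 - mem / k)
    return t, u

t, u = simulate_delayed_logistic()
# Sustained oscillations appear once the delay exceeds the Hopf threshold discussed below.
print(u[-5:])
```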
We used the strong kernel \(G(\tau)=4\tau/\bar{\tau}^{2}\)\(e^{-2\tau/\bar{\tau}}\), a distributed delay with a maximum influence \(\bar{\tau}/2\) years in the past. With this kernel, equation 1 can be rewritten as the 3-dimensional system [13] \[\left\{\begin{array}{l}\dot{u}=Ru\left(1-w/k\right)\\ \dot{v}=2/\bar{\tau}\left(u-v\right)\\ \dot{w}=2/\bar{\tau}\left(v-w\right),\end{array}\right. \tag{2}\] which is indeed capable of producing cycles. In fact, a standard bifurcation analysis [14] revealed that a stable periodic solution is created through a Hopf bifurcation as the delay is increased above the critical value \(\bar{\tau}=4/R\) (Figure 2a). The equilibrium \(u^{*}=k\) represents the static usage of a word, a theoretical limit that is perturbed by the socio-cultural context, identified with the experimental trends. Although simple, the model of equations 2 driven by trends \(k(t)\) fits the time series of word usage [8], with parameters \(R\) and \(\bar{\tau}\) distributed along the Hopf bifurcation, as shown in Figure 2a. According to this view, word usage is represented by externally driven units poised near the birth of self-sustaining oscillations. Although this model captures some central aspects of word usage, it treats each word in isolation, and therefore fails to explain the organization of words in communities [15]. The high degree of oscillatory coherence of the semantic communities, as displayed in the example of Figure 1c, suggests that words are phase-coupled. We tested this hypothesis by connecting Equations 2 with a Kuramoto model. ## III Results To map the system of equations 2 into a phase model [16], we translate the positive equilibrium \((u,v,w)^{*}=k(1,1,1)\) to the origin. In the region \(\bar{\tau}>8/27R\) (above the lower curve in Figure 2), the linear part has two complex roots, \(\Lambda\) and \(\bar{\Lambda}\) and a real negative root, \(\Lambda_{Re}\). Changing coordinates to the basis of eigenvectors and expressing the system in cylindrical coordinates yields \[\begin{pmatrix}\dot{r}\\ \dot{\phi}\\ \dot{z}\end{pmatrix}= R_{z}(\phi)\begin{pmatrix}\mathrm{Re}(\Lambda)&-\mathrm{Im}( \Lambda)&0\\ \mathrm{Im}(\Lambda)&\mathrm{Re}(\Lambda)&0\\ 0&0&\Lambda_{Re}\end{pmatrix}\begin{pmatrix}r\cos\phi\\ \sin\phi\\ z\end{pmatrix}+\] \[+R_{z}(\phi)\begin{pmatrix}\mathrm{Re}(nl_{1})\\ \mathrm{Im}(nl_{1})/r\\ nl_{3}\end{pmatrix} \tag{3}\] where \(R_{z}(\phi)\) is the rotation matrix along the direction \(z=(1,-1,1)\), as sketched in Figure 2b; \(T\) is the matrix of eigenvectors and \(nl_{1,3}\) are the transformed nonlinear terms of equation 2 (see Methods). This allows us to represent a community \(\mathcal{X}\) of \(N\) words by an equal number of systems of Equations 3, adding an all-to-all, purely sinusoidal Kuramoto coupling to the \(\phi_{i}\) variable [17] \[\frac{\lambda}{N}\sum_{j\in\mathcal{X}}\sin(\phi_{j}-\phi_{i}), \tag{4}\] where \(\lambda\) is a global coupling weight. Is this model capable of reproducing the coherence observed for semantic Figure 1: **Word usage consists on cycles mounted over slowly varying trends.** (a) Frequency of usage of the words _time, work_, and _god_ over the last three centuries, showing oscillations mounted on a slowly varying trend (black). Wavelet analysis revealed the dominance of oscillations with periods \(\sim 15\) years across languages (bottom panel). 
(b) We analyzed trends and oscillations separately: for each community of words, we show the normalized trends in the upper panel and the normalized oscillations in the lower panel. Similar trends correspond to keywords of sociocultural periods. The trends are similar, and the oscillations are more variable. (c) Similar oscillations correspond to semantically related words, as shown by the chemistry group. In this case, the oscillations of the words are synchronized, but the trends present high variability. fields and keywords? To answer this question, we first begin by characterizing our experimental data. In Figure 3a, we show the distribution of the communities ranked by size, and in Figure 3b-Exp, we show the coherence \(\bar{\rho}\) averaged across communities for semantic fields and keywords (see Methods). By construction, semantic fields exhibit highly synchronized oscillations with a coherence of \(\bar{\rho}\sim 0.5\) across languages. As expected, the coherence of the keywords was lower, \(\bar{\rho}\sim 0.35\) (Figure 3b-Exp). Part of this coherence stems from finite size effects; to estimate this contribution, we shuffled the words between communities and recomputed the order parameter to obtain a baseline of \(\bar{\rho}\sim 0.2\) (Figure 3b-Shuffled). To test the ability of our model to reproduce these properties, we simulated each community of \(N\) words with an equal number of nodes controlled by Equations 2. The initial conditions \((u_{0},v_{0},w_{0})\) and parameters \((R,\bar{\tau})\) were selected at random, the latter within the region with grey points around the Hopf bifurcation shown in Figure 2a. We then integrated the system expressed in cylindrical coordinates (Equations 3) driven by experimental trends and phase-coupled with a global weigth \(\lambda\) (Equation 4). We then invert the map to recover the usage \(u(t)\) of all words in the community, over which we computed the order parameter \(\rho\). Repeating this procedure across communities gives us the mean order parameter \(\bar{\rho}\) (see Methods section). Our simulations show that communities of nodes connected with a global coupling weight \(\lambda_{t}=0.11\) years\({}^{-1}\) produce a mean coherence compatible with that observed for semantic fields across languages, \(\bar{\rho}\sim 0.5\) (Figure 3c-TrendCooupling). The keywords are perhaps more interesting because they allow the exploration of the relative roles of external driving and coupling. To see this, we begin with uncoupled non-driven nodes, for which we obtain coherence levels similar to those of shuffled words (Figure 3c-NoTrendNoCooupling). When the external drive is turned on, coherence increases (Figure 3c-TrendNoCooupling). This reflects the fact that collectively driven oscillators can be partially synchronized without being currently coupled [18; 19]. Although the effect is slight, this shows that external driving indeed contributes to the coherence of the keyword communities [20]. In fact, when the coupling is turned on, the coherence of keywords across languages (\(\bar{\rho}\sim 0.35\)) is reached for \(\lambda_{k}=0.06\) years\({}^{-1}\) (Figure 3c-TrendCooupling). Thus far, we have explored semantic fields and keywords separatedly. In each case, we showed that word communities can be described as nodes of slowly driven logistic equations linked by a Kuramoto coupling. Semantic fields and keywords differ only in their coupling weight, with \(\lambda_{t}\sim 2\lambda_{k}\). We then address the problem of a complete network. 
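Before turning to the complete network, the effect of the coupling weight on a single community can be illustrated with a strongly reduced sketch: it keeps only the phase variable of Equation 3 together with the Kuramoto term of Equation 4, ignoring the radial and \(z\) directions and the external driving, and draws natural frequencies near the \(\sim\!16\)-year cycle. The frequency spread, the instantaneous (non-Hilbert) order parameter, and the specific values are assumptions for illustration, not the full model used in the simulations.

```python
import numpy as np

def community_coherence(n_words=50, lam=0.11, years=300, dt=0.1, seed=0):
    """All-to-all phase coupling as in Eq. 4; returns the time-averaged order parameter rho."""
    rng = np.random.default_rng(seed)
    omega = 2 * np.pi / rng.normal(16.0, 2.0, n_words)   # natural frequencies near a 16-year period
    phi = rng.uniform(0, 2 * np.pi, n_words)
    steps = int(years / dt)
    rho_t = np.empty(steps)
    for s in range(steps):
        mean_field = np.mean(np.exp(1j * phi))
        # (lam / N) * sum_j sin(phi_j - phi_i), written via the mean field r e^{i psi}.
        coupling = lam * np.abs(mean_field) * np.sin(np.angle(mean_field) - phi)
        phi = phi + dt * (omega + coupling)
        rho_t[s] = np.abs(np.mean(np.exp(1j * phi)))
    return rho_t.mean()

# Example: coherence for no coupling and for the keyword/topic values quoted in the text.
for lam in (0.0, 0.06, 0.11):
    print(lam, round(community_coherence(lam=lam), 2))
```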
By construction, the communities of semantic fields are mutually exclusive (a noun does not belong to more than one topic), and the same applies to communities of keywords. However, semantic fields and keywords are not independent of each other, as illustrated by the word Figure 2: **Time series of word usage are fitted by a logistic model with strong kernel.** (a) Bifurcation diagram for Equation 2. The origin is a saddle node. The other equilibrium, \((u,v,w)^{*}=k(1,1,1)\), undergoes a Hopf bifurcation at \(\bar{\tau}=4/R\) (upper black line). The oscillations become increasingly damped until they disappear at \(\bar{\tau}=8/27R\) (lower black line). The dimension not shown is attractive across the parameter space. English nouns fitted by the model are shown as grey points (mean growth rate \(R=0.5\pm 0.2\) years\({}^{-1}\) and mean delay \(\bar{\tau}=8\pm 3\) years). Simulations were performed by selecting paramters randomly from an area similar to the fitted series. This region was fitted to match the main coherence of the shuffled series (Figure 1) (b). In the Hopf bifurcation, a limit cycle is created in the plane of normal \((1,-1,1)\). The system can be rewritten in cylindrical coordinates \((r,\phi,z)\) that allows connecting the units of a word community through the phase variable \(\phi\). _copper_ in Figure 1, which belongs to both the chemistry topic and the keywords of the early \(20^{\text{th}}\) century. Hence, building a global picture of a complete network requires mixing both community types. This is performed using the following coupling for the complete network: \[\frac{\lambda_{k}}{\kappa_{k_{i}}}\sum_{j\in\mathcal{N}_{k}}\sin(\phi_{j}-\phi_ {i})+\frac{\lambda_{k}}{\kappa_{t_{i}}}\sum_{j\in\mathcal{N}_{t}}\sin(\phi_{j} -\phi_{i}), \tag{5}\] where \(\mathcal{N}_{t}\) (\(\mathcal{N}_{k}\)) is the topic (keyword) community to which the word \(i\) belongs and \(\kappa_{i}\) is the degree (number of links) of the node \(i\) in each community. The dark blue dot in Figure 3d shows the coupling weights \(\lambda_{t}\) and \(\lambda_{k}\) that reproduce the experimental coherence. As expected, the mixture of communities required slightly higher values than those found when keywords and semantic fields were considered separatedly. Finally, we explored the robustness of our results with respect to the network topology. For this sake, we relaxed the fully connected condition, simulating communities of \(N\) words using \(N/2\), \(N/4\) and \(N/8\) random connections while ensuring a finite distance between any two nodes. In Figure 3d we see that when the number of connections within a community decreases, both coupling weigths increase to reach the experimental coherence level. To summarize, we show that the dynamics of word usage can be modelled as communities of phase-coupled logistic equations with distributed delays. ## IV Conclusions Logistic equations with distributed delays have a long-standing tradition in mathematical biology [14]. Over the years, this bare-bones model has also proven useful for a variety of social phenomena, such as the upheavals of popular content in social media [21] and the cycles in word usage, interpreted as the interplay between interest and saturation [8]. Here we deal with the synchrony observed in different groups of words that tend to oscillate together in-phase. To account for this, we derived a phase map of the system, and linked the units with a Kuramoto coupling. 
Our main finding is that the oscillatory activity of the model reaches the coherence observed in the experimental communities, regardless of their topology and using a single global coupling for all the analyzed languages. Figure 3: **Phase-coupled logistic equations account for the coherence observed in word communities.** (a) Topic and keyword communities ranked by size. (b) Mean coherence \(\rho\) across communities for experimental data. (c) Mean coherence \(\rho\) across communities for simulated data. Simulations of isolated units governed by Equation 2 with constant \(k\) (no trend no coupling) are less coherent than the same units driven by the experimental trends \(k=k(t)\) (trend no coupling). Weakly coupled driven units (trend coupling) exhibit coherence levels that are compatible with the experimental data. (d) Values of \(\lambda_{s}\) and \(\lambda_{w}\) for which simulations with coupling according to equation 5 reach levels of coherence similar to the experimental ones when considering the network of all the words. The error bars correspond to the step of the explored grid. The Kuramoto model relates in a very simple way to the original variables of the logistic equation. In fact, for a linear coupling of the state vectors, the corresponding phase function does not contain second or higher harmonics, as in the Kuramoto model [22]. According to this, the change in the occurrence of a word depends linearly on the occurrence of its semantic neighbours. This simple relationship between the words of a given semantic field allows us to increase confidence in this model for word usage dynamics. A final remark concerns the scale of description of the problem. In this study we focus on the nouns present throughout the last three centuries, excluding from the analysis the words that enter or leave the corpus in that period and the changes in the word clusters that may occur when the network is analyzed at a higher temporal resolution. Here we describe the network associated with the long-term dynamics of word usage, a necessary first step before describing the dynamics of clusters at a higher resolution, for which the present description in the phase domain offers a vast battery of analysis tools [23; 24]. ## Materials and Methods **Data processing.** Google Books is a massive corpus of lexical data extracted from approximately eight million books (6% of all books ever published) that has been widely used for research. Despite its size, the database is not free from biases [25], which we addressed in [8]. Briefly, we collected tokens of the most common nouns converted to singular forms in English (10,403), Spanish (8,064), French (6,291), German (3,341), and Italian (2,995) from Google Books 2019 [26], retaining only nouns with at least \(10^{6}\) appearances per year over the last 300 years. We then computed the word frequency \(x(t)=n(t)/N(t)\), where \(n(t)\) is the number of appearances of noun \(n\), and \(N(t)\) is the size of the corpus in year \(t\). Singular spectrum analysis (SSA) [27] was used to extract the trends \(k(t)\), computed as the non-cyclic components of the time series \(x(t)\) [28]. The oscillatory components \(x(t)-k(t)\) were low-pass filtered (\(f<1/6\) years\({}^{-1}\)) to avoid possible random sampling effects in database loading [29]. **Clustering.** The semantic fields and keywords were computed from the correlation matrices of trends \(k(t)\) and oscillations \(x(t)-k(t)\) for the extracted nouns in English, Spanish, German, French, and Italian.
The communities were determined using a cutoff for the correlations of 0.04 for trends and 0.5 for oscillations; these values ensure maximum correlation between series compatible with a cluster size distribution that follows Zipf's law. Communities with fewer than 10 words were discarded. **Phase coherence.** We transformed the oscillations \(x(t)-k(t)\) into phase variables \(\theta(t)\) using the Hilbert transform [30]. The collective rhythm of a community of \(N\) words was then computed using the order parameter averaged over the last three centuries, \(\rho=\left\langle\left|\sum_{j=1}^{N}e^{i\theta_{j}(t)}/N\right|\right\rangle\). The mean coherence across communities, \(\bar{\rho}\) was then computed. Figure 3 shows the distribution of mean coherence values across communities for different languages. Coherence values were normally distributed (Kolmogorov-Smirnov test: \(p>0.05\) for all languages and conditions). A two-sample t-test showed that only the distribution of experimental data (Fig. 3a keywords) and the simulations of coupled externally driven units (Figure 3b trend coupling) are equivalent across languages. **Derivation of the phase model.** Translating the equations 2 to the slowly evolving point \((k,k,k)\) we have \[\begin{pmatrix}\dot{u}\\ \dot{v}\\ \dot{w}\end{pmatrix}=\begin{pmatrix}0&0&-R\\ 2/\bar{\tau}&-2/\bar{\tau}&0\\ 0&2/\bar{\tau}&-2/\bar{\tau}\end{pmatrix}\begin{pmatrix}u\\ v\\ w\end{pmatrix}-\begin{pmatrix}\frac{R}{k}uw+\dot{k}\\ \dot{k}\\ \dot{k}\end{pmatrix}.\] Considering the linear part, we obtain the characteristic equation \(\Lambda^{3}+4/\bar{\tau}\,\Lambda^{2}+4/\bar{\tau}^{2}\,\Lambda+4R/\bar{\tau} ^{2}=0\). For \(\bar{\tau}>8/27R\) (above the lower curve in Figure 2), we have two complex conjugate roots \(\Lambda_{1}\) and \(\Lambda_{2}\) and a real negative root \(\Lambda_{3}\): \[\Lambda_{1}=\frac{1}{3\bar{\tau}}\left(L_{1}+L_{2}-4\right)\] \[\Lambda_{2,3}=\frac{1}{3\bar{\tau}}\left[-\frac{1}{4}\left(L_{1} +L_{2}\right)-4\pm j\frac{\sqrt{3}}{2}\left(L_{2}-L_{1}\right)\right],\] where \(L_{1}=2^{1/3}C^{1/3}\), \(L_{2}=2^{5/3}C^{-1/3}\), and \(C=3\sqrt{3R\bar{\tau}}\sqrt{27R\bar{\tau}-8}-27R\bar{\tau}+4\) is complex. 
Changing coordinates to the basis of eigenvectors, the system reads \[\begin{pmatrix}\dot{x_{1}}\\ \dot{x_{2}}\\ \dot{x_{3}}\end{pmatrix}=\begin{pmatrix}\Lambda_{1}&0&0\\ 0&\Lambda_{2}&0\\ 0&0&\Lambda_{3}\end{pmatrix}\begin{pmatrix}x_{1}\\ x_{2}\\ x_{3}\end{pmatrix}-T^{-1}\begin{pmatrix}\frac{R}{k}\sum\limits_{i,j}x_{i}x_{j}A_{ i}^{2}+\dot{k}\\ \dot{k}\\ \dot{k}\end{pmatrix} \tag{6}\] with \(A_{i}=1+\Lambda_{i}\bar{\tau}/2\), \(x_{1}\) and \(x_{2}\) complex conjugate variables and \(T\) the matrix of eigenvectors \[T=\begin{pmatrix}A_{1}^{2}&A_{2}^{2}&A_{3}^{2}\\ A_{1}&A_{2}&A_{3}\\ 1&1&1\end{pmatrix}.\] This system can be expressed as: \[\begin{pmatrix}\dot{\bar{r}}\\ \dot{\bar{\phi}}\\ \dot{z}\end{pmatrix}= R_{z}(\phi)\begin{pmatrix}\operatorname{Re}(\Lambda_{1})&- \operatorname{Im}(\Lambda_{1})&0\\ \operatorname{Im}(\Lambda_{1})&\operatorname{Re}(\Lambda_{1})&0\\ 0&0&\Lambda_{3}\end{pmatrix}\begin{pmatrix}r\cos\phi\\ \sin\phi\\ z\end{pmatrix}+\] \[+R_{z}(\phi)\begin{pmatrix}\operatorname{Re}(nl_{1})\\ \operatorname{Im}(nl_{1}/r)\\ nl_{3}\end{pmatrix} \tag{7}\] where \(nl_{1}\) and \(nl_{3}\) stand for the nonlinear terms of the \(\dot{x}_{1}\) and \(\dot{x}_{3}\) in equation 6, \[nl_{1}=\frac{L_{1}^{2}(L_{1}^{2}-4)}{18R\bar{\tau}D}\cdot\left( \frac{R}{k}\sum_{i,j}x_{i}x_{j}A_{i}^{2}+\dot{k}\right)-\] \[-\frac{\dot{k}}{3L_{1}^{2}}\left[2(C+2+B-L_{1}^{2})+\sqrt{3}j(C+2 -B)\sum_{i=1}^{2}(-1)^{i+1}\right],\] \[nl_{3}=\frac{L_{1}^{2}\left[-(L_{1}^{2}-4)+\sqrt{3}j(L_{1}^{2}+4) \right]}{36R\bar{\tau}D}\left(\frac{R}{k}\sum_{i,j}x_{i}x_{j}A_{i}^{2}+\dot{k}\right)\] \[+\frac{L_{1}\left[(C+2-B)-\sqrt{3}j(C+2+B)\right]}{27R\bar{\tau}D} \dot{k}+\] \[+\left[1+\frac{(L_{1}^{5}/2-4L_{1}^{2}+E)+\sqrt{3}j(L_{1}^{5}/2-4 L_{1}^{2}-E)}{108R\bar{\tau}D}\right]\dot{k}\] where \(B=L_{1}^{4}/8+2L_{1}\), \(D=4\sqrt{27R\bar{\tau}-8}/(3\sqrt{3R\bar{\tau}})-4-C\) and \(E=L_{1}^{4}-8L_{1}\). The matrix \(R_{z}(\phi)\) is the usual rotation matrix along the \(z\) axis \[R_{z}(\phi)=\begin{pmatrix}\cos\phi&-\sin\phi&0\\ \sin\phi&\cos\phi&0\\ 0&0&1\end{pmatrix}.\] Equation 7 represents a phase map of the slowly driven logistic equation with a strong kernel across the entire oscillatory region above the lower curve \(\bar{\tau}=8/(27R)\) in Figure 2a. This expression was combined with the Kuramoto phase-coupling equation 4 to model the communities of words. **Data and code availability.** All the datasets are publicly available at [31]. The matlab codes and processed data used to generate the Figures of this work and the word communities are available at [32]. ###### Acknowledgements. This study was partially funded by the University of Buenos Aires (UBA) and the Consejo Nacional de Investigaciones Cientificas y Tecnicas (CONICET) through the grant PIP-11220200102083CO.
2310.05537
ParFam -- (Neural Guided) Symbolic Regression Based on Continuous Global Optimization
The problem of symbolic regression (SR) arises in many different applications, such as identifying physical laws or deriving mathematical equations describing the behavior of financial markets from given data. Various methods exist to address the problem of SR, often based on genetic programming. However, these methods are usually complicated and involve various hyperparameters. In this paper, we present our new approach ParFam that utilizes parametric families of suitable symbolic functions to translate the discrete symbolic regression problem into a continuous one, resulting in a more straightforward setup compared to current state-of-the-art methods. In combination with a global optimizer, this approach results in a highly effective method to tackle the problem of SR. We theoretically analyze the expressivity of ParFam and demonstrate its performance with extensive numerical experiments based on the common SR benchmark suite SRBench, showing that we achieve state-of-the-art results. Moreover, we present an extension incorporating a pre-trained transformer network DL-ParFam to guide ParFam, accelerating the optimization process by up to two orders of magnitude. Our code and results can be found at https://github.com/Philipp238/parfam.
Philipp Scholl, Katharina Bieker, Hillary Hauger, Gitta Kutyniok
2023-10-09T09:01:25Z
http://arxiv.org/abs/2310.05537v3
# ParFam - Symbolic Regression Based on Continuous Global Optimization ###### Abstract The problem of symbolic regression (SR) arises in many different applications, such as identifying physical laws or deriving mathematical equations describing the behavior of financial markets from given data. Various methods exist to address the problem of SR, often based on genetic programming. However, these methods are usually quite complicated and require a lot of hyperparameter tuning and computational resources. In this paper, we present our new method _ParFam_ that utilizes parametric families of suitable symbolic functions to translate the discrete symbolic regression problem into a continuous one, resulting in a more straightforward setup compared to current state-of-the-art methods. In combination with a powerful global optimizer, this approach results in an effective method to tackle the problem of SR. Furthermore, it can be easily extended to more advanced algorithms, e.g., by adding a deep neural network to find good-fitting parametric families. We prove the performance of ParFam with extensive numerical experiments based on the common SR benchmark suit SRBench, showing that we achieve state-of-the-art results. Our code and results can be found at [https://github.com/Philipp238/parfam](https://github.com/Philipp238/parfam). Symbolic Regression Machine Learning Global Optimization. ## 1 Introduction _Symbolic regression_ (SR) describes the task of finding a symbolic function that accurately represents the connection between given input and output data. At the same time, the function should be as simple as possible to ensure robustness against noise and interpretability. This is of particular interest for applications where the aim is to (mathematically) analyze the resulting function afterward or get further insights into the process to ensure trustworthiness, for instance, in physical or chemical sciences (Quade et al., 2016; Angelis et al., 2023; Wang et al., 2019). The range of possible applications of SR is therefore vast, from predicting the dynamics of ecosystems (Chen et al., 2019), forecasting the solar power for energy production (Quade et al., 2016), estimating the development of financial markets (Liu and Guo, 2023), analyzing the stability of certain materials (He and Zhang, 2021) to planning optimal trajectories for robots (Oplatkova and Zelinka, 2007), to name but a few. Moreover, as Angelis et al. (2023) points out, the number of papers on SR has increased significantly in recent years, highlighting the relevance and research interest in this area. SR is a specific regression task in machine learning that aims to find an accurate model without any assumption by the user related to the specific data set. Formally, a symbolic function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) that accurately fits a given data set \((x_{i},y_{i})_{i=1,\dots,N}\subseteq\mathbb{R}^{n}\times\mathbb{R}\) is sought, i.e., it should satisfy \(y_{i}=f(x_{i})\) for all data points, or in the case of noise \(y_{i}\approx f(x_{i})\) for all \(i\in\{1,\dots,N\}\). Since, in this general setting, there are no assumptions on the structure of possible models, the search space is infinite-dimensional. In practice, however, it is necessary to specify the model space in some sense, and all methods rely in one way or another on certain implicit assumptions in the modeling process. 
For example, _genetic programming_ (GP) methods, one of the most common classes of solution algorithms, require the choice of base functions that can be combined to build the model (Schmidt and Lipson, 2009; 2010). Nevertheless, unlike other regression tasks, SR aims at finding a simple symbolic and thus interpretable formula while assuming as little as possible about the unknown function. In contrast to SR, solutions derived via _neural networks_ (NNs), for instance, lack interpretability and traditional regression tasks typically assume a strong structure of the unknown function like linearity or polynomial behavior. To tackle SR problems, the most established methods are based on _genetic programming_(Augusto and Barbosa, 2000; Schmidt and Lipson, 2009; 2010; Cranmer, 2023), but nowadays there also exist many solution algorithms that make use of other machine learning methods, in particular neural networks (Udrescu and Tegmark, 2020; Martins and Lampert, 2017; Desai and Strachan, 2021; Makke et al., 2022). However, even though there have been many attempts with complicated procedures to search through the infinite-dimensional space of functions, many of them show unsatisfactory results when evaluated on complex benchmarks: La Cava et al. (2021) evaluate 13 state-of-the-art SR algorithms on the _SRBench_ ground-truth problems: the Feynman (Udrescu and Tegmark, 2020) and Strogatz (La Cava et al., 2016) problem sets. Both data sets consist of physical formulas with varying complexities, where the first one encompasses 115 formulas and the latter 14 ordinary differential equations. Out of the 13 algorithms evaluated by La Cava et al. (2021), all algorithms find at most \(30\%\) of the formulas of each problem set in the given time of 8 CPU hours, except for AI Feynman (Udrescu and Tegmark, 2020). AI Feynman, which is based on recursive function simplification inspired by the structure of the Feynman data set, is able to recover more than 50% of the Feynman equations but fails for more than 70% for the Strogatz problems. The rates are even worse for data sets incorporating noise (La Cava et al., 2021; Cranmer, 2023). In addition to AI Feynman, we are only aware of one other algorithm, proposed by Holt et al. (2023) after the benchmark by La Cava et al. (2021), which has demonstrated superior performance on the SRBench ground-truth data sets while following the SRBench time limit. In this paper, we introduce the novel algorithm _ParFam_ that addresses SR by leveraging the inherent structure of physical formulas and, thereby, translating the discrete optimization problem into a continuous one. This grants users precise control over the search space and facilitates the incorporation of gradient-based optimization techniques. More precisely, we apply _basin-hopping_, which combines a global random search with a local search algorithm (Wales and Doye, 1997). Originally, this algorithm was designed to solve molecular problems and, thus, is suitable for very high-dimensional landscapes. The details of ParFam are introduced in Section 2.1. Notably, despite its straightforward nature, ParFam achieves state-of-the-art performance on the Feynman and Strogatz data set as demonstrated in Section 3.1. Moreover, this structure enables the simple application of pre-trained NNs to reduce the dimensionality of the search space. This concept is exemplified by our prototype, _DL-ParFam_, introduced in Section 2.2. 
Through experimentation on a synthetic data set, we demonstrate that DL-ParFam significantly surpasses ParFam, cf. Section 3.2. Our ContributionsOur key contributions are as follows: 1. We introduce ParFam, a new method for SR leveraging the inherent structure of physical formulas and, thereby, translating the discrete optimization problem into a continuous one. This results in the following advantages: (1) Enabling gradient-based optimization techniques; (2) Efficient but simple and user-friendly setup; (3) State-of-the-art performance on the ground-truth problems of La Cava et al. (2021), the Feynman and Strogatz data sets. 2. Furthermore, we introduce the extension DL-ParFam, which shows how the structure of ParFam allows for using a pre-trained NN, overcoming the limitations of previous approaches. Related workTraditionally, genetic programming have been used for SR to heuristically search the space of equations given some base functions and operations (Augusto and Barbosa, 2000; Schmidt and Lipson, 2009; 2010; Cranmer, 2023). However, due to the accomplishments of neural networks across diverse domains, numerous researchers aimed to leverage their capabilities within the realm of SR. Udrescu and Tegmark (2020), for instance, have employed an auxiliary NN to evaluate data characteristics. In a similar vein, Martius and Lampert (2017), Sahoo et al. (2018), and Desai and Strachan (2021) used compact NN architectures with physically meaningful activation functions such as \(\sin\) and \(\cos\), enabling stochastic gradient descent for the search of symbolic functions. The approach by Petersen et al. (2021), on the contrary, relies on _reinforcement learning_ (RL) to explore the function space, where a policy, modeled by a recurrent neural network, generates candidate solutions. Mundhenk et al. (2021) combined this concept with genetic programming such that the RL algorithm iteratively learns to identify a good initial population for the GP algorithm, resulting in superior performance compared to individual RL and GP approaches. Given the simplicity of sampling functions and evaluating them, several endeavors have emerged to train NNs using synthetic data to predict underlying functions. Initial simpler approaches were limited by the variability of the given data set (Biggio et al., 2020) or of the functions (Li et al., 2022). However, these limitations can be effectively circumvented by more advanced approaches using the transformer architecture (Biggio et al., 2021; Kamienny et al., 2022; Holt et al., 2023). Apart from the algorithms evaluated by La Cava et al. (2021), Deep Generative Symbolic Regression (DGSR) by Holt et al. (2023) and unified Deep Symbolic Regression (uDSR) by Landajuela et al. (2022) are the only algorithms--to the best of our knowledge--which have been evaluated on the whole Feynman data set and outperformed the state-of-the-art AI Feynman in the symbolic recovery rate. Notably, DGSR's success has only been possible by the computationally expensive pre-training of an encoder-decoder architecture using RL instead of gradient-based methods to learn invariances of the functions and an additional finetuning step during inference by performing neural guided priority queue training (NGPQT) as introduced by Mundhenk et al. (2021). uDSR builds upon the already well-performing AI Feynman and adds pre-training, genetic programming, reinforcement learning, and linear regression to it. Unlike for the SRBench benchmark (La Cava et al., 2021), Landajuela et al. 
(2022) evaluate their method with a time limit of 24 instead of 8 hours, which is why we omit uDSR from our comparisons in Section 3. ParFam shares conceptual proximity with EQL (Martius and Lampert, 2017; Sahoo et al., 2018), as both methods assume a structure of general formulas, effectively translating SR into a continuous optimization problem. However, while ParFam aims to guarantee a unique parameterization for each function, EQL exhibits many redundancies that inflate the parameter space. EQL relies on the local minimizer ADAM (Kingma and Ba, 2014) for coefficient optimization. On the contrary, ParFam leverages the reduced dimensionality of the parameter space by applying global optimization techniques for the parameter search, which mitigates the issues of local minima. Furthermore, ParFam maintains versatility, allowing for straightforward inclusion of operations like logarithms, roots, and division within unary operators--in contrast to EQL as priorly noted by Petersen et al. (2021). Similar to DL-Parfam, Liu et al. (2023) enhanced EQL with a pre-training step. However, this approach still suffers from the listed structural limitations of EQL. ## 2 Methods In the following section, we first introduce our new method ParFam, that exploits a well-suited representation of possible symbolic functions to which an efficient global optimizer can be applied. Afterward, we discuss the extension DL-ParFam, which aims to enhance ParFam by utilizing deep learning to obtain better function representations. ### ParFam The aim of SR is to find a simple and thus interpretable function that describes the mapping underlying the data \((x_{i},y_{i})_{i=1,\dots,N}\) without many additional assumptions. Typically, a set of base functions, such as \(\{+,-,^{-1},\exp,\sin,\sqrt{}\}\), is predetermined. The primary goal of an SR algorithm is to find the simplest function that uses only these base functions to represent the data, where simplicity is usually defined as the number of operations. Since most algorithms make no other assumptions on the function they are looking for, this approach results in a search space that grows exponentially in the number of base functions, dimensions of \(x\), and depth of the expression trees. To reduce the complexity of the search space on the one hand and to obtain more meaningful results on the other hand, some methods apply filters to prevent the output of unwanted or "unnatural" functions. For instance, Petersen et al. (2021) prevent their algorithm from creating compositions of trigonometric functions as \(\sin\circ\cos\) since these are rarely encountered in any scientific domain. Given that the main idea of SR is to gain knowledge of scientific processes, such structural assumptions appear to be reasonable. This is also the motivation for restricting the search space in our approach. Furthermore, we choose the function space such that it can be represented by a parametric family, and the proper expression can be found by applying a continuous global optimizer. #### 2.1.1 The Structure of the Parametric Family The main motivation for ParFam is that most functions appearing in real-world applications can be represented by functions of certain parametric families. 
More precisely, we assume that they can be written in the form \[f_{\theta}(x)\coloneqq Q_{k+1}(x,g_{1}(Q_{1}(x)),g_{2}(Q_{2}(x)),\dots,g_{k}(Q_{ k}(x))), \tag{1}\] where \(Q_{1},...,Q_{k+1}\) are rational functions, \(g_{1},...,g_{k}\) are the unary base functions, which cannot be expressed as rational functions, like \(\sin\), \(\exp\), \(\surd\) etc., and \(x\in\mathbb{R}^{n}\) is the input vector. Moreover, \(\theta\in\mathbb{R}^{m}\) denotes the coefficients of the individual polynomials, i.e., of the numerators and denominators of \(Q_{1},...,Q_{k+1}\), which are the learnable parameters of this family of functions. The degrees \(d_{i}^{1}\) and \(d_{i}^{2}\), \(i\in\{1,\ldots,k+1\}\), of the numerator and denominator polynomials of \(Q_{1},...,Q_{k+1}\), respectively, and the base functions \(g_{1},...,g_{k}\) are chosen by the user. Depending on the application, even specialized custom functions can be added to the set of base functions. This versatility and its simplicity make ParFam a highly user-friendly tool, adaptable to a wide range of problem domains. In Appendix A, we explain how to incorporate specific base functions to avoid numerical issues and further implementation details. The parametric family we consider excludes composite functions such as \(\sin\circ\cos\) similarly to Petersen et al. (2021). This is rooted in the structure of physical formulas we observe, as can be seen, e.g., in the set of ground-truth problems from SRBench (La Cava et al., 2021), which consists of 129 physically meaningful formulas and only includes one function that does not follow equation 1: \[\sqrt{(x_{1}^{2}+x_{2}^{2}-2x_{1}x_{2}\cos(\theta_{1}-\theta_{2}))}\text{ (Udrescu and Tegmark, 2020, I.29.16)}.\] Furthermore, the "Cambridge Handbook of Physics Formulas" (Woan, 2000) contains more than 2,000 equations from the major physics topics, among which only a handful do not follow the structure of equation 1. #### 2.1.2 Optimization Restricting the search space to functions of the parametric family given by equation 1 yields the advantage that we can translate the discrete SR problem into a continuous one, as now the task is to find the parameters of the rational functions \(Q_{1},...,Q_{k+1}\) such that \(f_{\theta}\) approximates the given data \((x_{i},y_{i})_{i=1,\ldots,N}\), i.e., we aim to minimize the average \(l_{2}\)-distance between \(y_{i}\) and \(f_{\theta}(x_{i})\). As we aim for preferably simple functions to derive interpretable and easy-to-analyze results, a regularization term \(R(\theta)\) is added to encourage sparse parameters. In total, we aim to minimize the loss function \[L(\theta)\coloneqq\frac{1}{N}\sum_{i=1}^{N}\left(y_{i}-f_{\theta}(x_{i}) \right)^{2}+\lambda R(\theta), \tag{2}\] where \(\lambda>0\) is a hyperparameter to control the weight of the regularization. Here, we choose \(R(\theta)=\|\theta\|_{1}\) as a surrogate for the number of non-zero parameters, which is known to enforce sparsity in other areas, e.g., NN training (Bishop, 2006; Goodfellow et al., 2016). In Appendix A, we discuss how to deal with the regularization of the coefficients of rational functions in detail. Although the SR problem is now transformed into a continuous optimization problem, due to the presence of many local minima, it is not sufficient to apply purely local optimization algorithms like gradient descent or BFGS. This is also discussed by Nocedal and Wright (2006) and shown in our comparison study in Appendix B. 
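As a concrete illustration, the sketch below builds a small one-dimensional instance of the parametric family of Equation 1 (a single unary base function, low-degree polynomials) and the regularized loss of Equation 2, and hands it to SciPy's basin-hopping routine, which is the global optimizer discussed next. The chosen degrees, the base function, the positivity trick in the denominator, and the toy data are illustrative assumptions, not the settings used in the experiments.

```python
import numpy as np
from scipy.optimize import basinhopping

# Toy instance of Eq. (1): f_theta(x) = Q2(x, g(Q1(x))) with g = sin,
# Q1 a rational function of x, and Q2 a polynomial in (x, g(Q1(x))).
def f_theta(theta, x):
    a0, a1, a2, b1, c0, c1, c2, c3 = theta
    # Denominator squared to stay positive (a simplification of the full rational form).
    q1 = (a0 + a1 * x + a2 * x**2) / (1.0 + (b1 * x) ** 2)
    g = np.sin(q1)                              # unary base function
    return c0 + c1 * x + c2 * g + c3 * x * g    # outer polynomial Q2

def loss(theta, x, y, lam=1e-3):
    mse = np.mean((y - f_theta(theta, x)) ** 2)     # data-fit term of Eq. (2)
    return mse + lam * np.sum(np.abs(theta))        # L1 regularization R(theta)

# Toy data: y = x * sin(x^2) plus a little noise (an assumption for this sketch).
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 500)
y = x * np.sin(x**2) + 0.01 * rng.normal(size=500)

result = basinhopping(lambda th: loss(th, x, y), x0=np.zeros(8), niter=10)
print(result.fun, np.round(result.x, 3))
```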
To overcome these local minima, we instead rely on established (stochastic) global optimization methods. Here, we choose the so-called _basin-hopping_ algorithm, originally introduced by Wales and Doye (1997), which combines a local minimizer, e.g., BFGS (Nocedal and Wright, 2006), with a global search technique inspired by Monte-Carlo minimization as proposed by Li and Scheraga (1987) to cover a larger part of the parameter space. More precisely, we use the implementation provided by the SciPy library (Virtanen et al., 2020). Each iteration consists of three steps: 1. Random perturbation of the parameters. 2. Local minimization, e.g., with the BFGS method. 3. Acceptance test based on the function value of the local optimum. The basic idea of the algorithm is to divide the complex landscape of the loss function into multiple areas, leading to different optima. These are the so-called basins. The random perturbation of the parameters allows for hopping between these basins and the local search (based on the real loss function) inbetween improves the results and ensures that a global minimum is reached if the correct basin is chosen. For the acceptance test, the criterion introduced by Metropolis et al. (1953) is taken. Following the optimization with basin-hopping, a finetuning routine is initiated. In this process, coefficients that fall below a certain threshold are set to 0, and the remaining coefficients are optimized using the L-BFGS method, starting from the previously found parameters. The threshold is gradually increased from \(10^{-5}\) to \(10^{-2}\) to encourage further sparsity in the discovered solutions. This step has been found to be crucial in enhancing the parameters initially found by basin-hopping. ### DL-ParFam As discussed in the related work section, there have been multiple attempts in recent years to leverage pre-training for SR, as synthetic data can be easily generated. Even though modern approaches are able to handle flexible data sets in high dimensions (Biggio et al., 2021; Kamienny et al., 2022), they fail to incorporate invariances in the function space, e.g., \(x+y\) and \(y+x\) are seen as different functions, as pointed out by Holt et al. (2023). Holt et al. resolve this by evaluating the generated function to compute the loss and update the network using RL. This effectively solves the invariance problem, as can be seen by their state-of-the-art symbolic recovery rate on the Feynman data set. However, evaluating each function during the training instead of comparing its symbolic expression with the ground-truth is computationally expensive. Moreover, due to the non-differentiability, the network has to be optimized using suitable algorithms like Policy gradient methods. Here, we propose our approach DL-ParFam, which aims to combine the best of both worlds. The idea is to use an NN that, given a data set \((x_{i},y_{i}=f(x_{i}))_{i=1,\dots,N}\), predicts a sparsity pattern on the coefficients \(\theta\) of the parametric family in equation 1. This sparsity pattern or mask specifies which parameters should be variable and learned in the ParFam algorithm and which can be ignored and set to a fixed value of \(0\). The idea of DL-ParFam is visualized in Figure 1. 
This approach yields significant improvements compared to ParFam and other pre-training based SR methods: * Compared to ParFam: DL-ParFam strongly reduces the dimensions of the optimization problem considered in ParFam, effectively reducing the computation time and success rate for any global optimizer. * Compared to other pre-training based SR methods: DL-ParFam predicts the structure of the function, which can be directly compared with the ground-truth and, thereby, avoids the evaluation on the data grid in every training step and yields an end-to-end differentiable pipeline. In addition, DL-ParFam adeptly handles function invariances, as we guarantee that each set of parameters uniquely defines a function via the structure of ParFam. The primary purpose of this subsection is to demonstrate the potential of utilizing ParFam as a means of structuring scientific formulas beyond the direct optimization presented so far. Our intent is not to present an implementation of DL-ParFam in this paper that can rival existing deep learning-based methods on complex benchmarks like the Feynman data set. This decision is driven by the myriad challenges inherent in benchmarking, such as high dimensionality, diverse data point distributions, varying numbers of data points and dimensions, and a plethora of base functions. While these challenges can be addressed using deep learning techniques, as demonstrated by Biggio et al. (2021), Kamienny et al. (2022), and Holt et al. (2023), they require specialized architectures as transformers. For now, we opt for a straightforward implementation of DL-ParFam to demonstrate its effectiveness on synthetic data. The vanilla implementation, which we consider here, uses a simple fully-connected feedforward neural network \(NN:\mathbb{R}^{N}\rightarrow\mathbb{R}^{m}\) which takes as input the data \((y_{i})_{i=1,\dots,N}\) and outputs a mask \(c\in[0,1]^{m}\), where \(c_{i}\) represents the likelihood that \(\theta_{i}\) is needed to represent the sought symbolic function, i.e., \(c_{i}\approx 0\) indicates that the parameter \(\theta_{i}\) is supposed to be 0 and \(c_{i}\approx 1\) indicates \(\theta_{i}\neq 0\). To reduce the dimensionality of the NN, we only take the output data \(y\) of the functions as input to the NN. Thus, we implicitly assume that the input data is sampled on the same grid \((x_{i})_{i=1,\dots,N}\subseteq\mathbb{R}^{n}\) for all data sets. To ensure this, we train the NN on synthetically generated data \((y^{j}=(y^{j}_{i})_{i=1,\dots,N},c^{j})_{j=1,\dots,K}\), where \(c^{j}\in[0,1]^{m}\) is some mask and \(y^{j}_{i}=f_{\theta_{j}}(x_{i})\) is the output of the corresponding function evaluated on the fixed grid point \(x_{i}\in\mathbb{R}^{n}\). As a loss function with respect to the parameters \(w\) of the NN, we define \[L_{\textsf{NN}}(w)\coloneqq\sum_{j=1}^{K}\text{BCE}(NN(y^{j}),c^{j}), \tag{3}\] Figure 1: Schematic illustration of the DL-ParFam method. where \(\mathrm{BCE}:[0,1]^{m}\times[0,1]^{m}\to\mathbb{R}\) denotes the binary-entropy loss, i.e., \[\mathrm{BCE}(\bar{c},c)\coloneqq\frac{1}{K}\sum_{l=1}^{m}-\left(c_{l}\log(\bar{ c}_{l})+(1-c_{l})\log(1-\bar{c}_{l})\right). \tag{4}\] Another important difference from previous approaches, not outlined before, is that DL-ParFam combines the experience gained through pre-training with the power of the method ParFam, which is highly competitive on its own. In Section 3.2, we show the value of this combination. 
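A minimal sketch of the vanilla DL-ParFam network described above is given below: a fully-connected network maps the outputs \(y\) sampled on a fixed grid to a mask over the \(m\) coefficients of the parametric family and is trained with the binary cross-entropy of Equation 4. The layer sizes, batch size, optimizer settings, and the placeholder training batch are illustrative assumptions.

```python
import torch

N, m = 200, 40                      # number of grid points and of ParFam coefficients (illustrative)
net = torch.nn.Sequential(          # NN: R^N -> [0, 1]^m
    torch.nn.Linear(N, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, m),
    torch.nn.Sigmoid(),             # outputs interpreted as probabilities c_l
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
bce = torch.nn.BCELoss()            # Eq. (4), averaged over coefficients and batch

# One training step on a synthetic batch (y^j, c^j); in practice these come from
# evaluating randomly drawn sparse parameter vectors on the fixed grid.
y_batch = torch.randn(64, N)                    # placeholder function evaluations
c_batch = (torch.rand(64, m) < 0.2).float()     # placeholder ground-truth masks
opt.zero_grad()
loss = bce(net(y_batch), c_batch)               # Eq. (3) over the batch
loss.backward()
opt.step()

# At inference time, the predicted mask selects which ParFam coefficients stay trainable.
mask = (net(y_batch[:1]) > 0.5).squeeze(0)
print(int(mask.sum()), "of", m, "coefficients kept")
```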
## 3 Benchmark

In Section 3.1, we evaluate ParFam on the Feynman (Udrescu and Tegmark, 2020) and Strogatz (La Cava et al., 2016) data sets and report its performance in terms of the symbolic recovery rate, the coefficient of determination \(R^{2}\), and the complexity of the derived formula, showing that our simple setup significantly outperforms most existing methods and reaches state-of-the-art in SR. In Section 3.2, we study DL-ParFam, revealing the vast potential of adding pre-training to ParFam.

### ParFam

After the introduction of the SR benchmark (SRBench) by La Cava et al. (2021), several researchers, including Mundhenk et al. (2021), Holt et al. (2023), Kamienny et al. (2022), and Biggio et al. (2021), have reported their findings on SRBench's ground-truth data sets. These data sets are the Feynman and the Strogatz data set.

**Feynman data set.** The Feynman data set consists of 119 physical formulas taken from the Feynman lectures and other seminal physics books (Udrescu and Tegmark, 2020). Some examples can be found in Appendix C. The formulas depend on a maximum of 9 independent variables and are composed of the elementary functions \(+,-,*,/,\sqrt{\;},\exp,\log,\sin,\cos,\tanh,\arcsin\), and \(\arccos\). Following La Cava et al. (2021), we omit three formulas containing \(\arcsin\) and \(\arccos\) and one data set where the ground-truth formula is missing. Additionally, since the data sets contain more data points than required for ParFam and this abundance of data slows down the optimizer, we only consider a subset of the training data for each problem: 500 data points for the experiments without noise and 1,000 for the experiments with noise.

**Strogatz data set.** The Strogatz data set introduced by La Cava et al. (2016) is the second ground-truth problem set included in SRBench (La Cava et al., 2021). It consists of 14 non-linear differential equations describing seven chaotic dynamic systems in two dimensions, listed in Appendix D. Each data set contains 400 samples.

**Metrics.** To ensure comparability with the results evaluated on SRBench, we use the same evaluation metrics as La Cava et al. (2021). First, we report the symbolic recovery rate, which is the percentage of equations ParFam recovered. Second, we consider the coefficient of determination

\[R^{2}\coloneqq 1-\frac{\sum_{i=1}^{N}(y_{i}-\hat{y}_{i})^{2}}{\sum_{i=1}^{N}(y_{i}-\bar{y})^{2}}, \tag{5}\]

where \(\hat{y}_{i}=f_{\theta}(x_{i})\) represents the model's prediction and \(\bar{y}\) the mean of the output data \(y\). The closer \(R^{2}\) is to 1, the better the model describes the variation in the data. It is a widely used measure for goodness-of-fit since it is independent of the scale and variation of the data. Lastly, we report the complexity of our formula based on the number of mathematical operations following the definition in SRBench. The original data sets do not include any noise. However, similar to La Cava et al. (2021), we additionally perform experiments with noise by adding \(\epsilon_{i}\sim N\left(0,\sigma^{2}\frac{1}{N}\sum_{i=1}^{N}y_{i}^{2}\right)\) to the targets \(y_{i}\), where \(\sigma\) denotes the noise level.

**Hyperparameters.** The hyperparameters of ParFam can be divided into two subsets. The first subset defines the parametric family \((f_{\theta})_{\theta\in\mathbb{R}^{m}}\), e.g., the degree of the polynomials and the set of base functions. A good choice for this set is highly problem-dependent.
However, in the absence of prior knowledge, it is advantageous to select a parametric family that is sufficiently expansive to encompass a wide range of potential functions. In this context, we opt for \(\sin\), \(\exp\), and \(\sqrt{}\) as our base functions. For the "input rational functions" \(Q_{1},\ldots,Q_{k}\), we set the degrees of the numerator and denominator polynomials to 2. For \(Q_{k+1}\), we set the degree of the numerator polynomial to 4 and the degree of the denominator polynomial to 3. This choice results in a parametric family with hundreds of parameters, making it challenging for global optimization. To address this issue, we iterate through various smaller parametric families, each contained in this larger family; see Appendix E for details. The second set of hyperparameters defines the optimization scheme. Here, we set the regularization parameter to \(\lambda=0.001\), the number of iterations for basin-hopping to \(10\), and the maximal number of BFGS steps for the local search to \(100\) times the dimension of the problem. Our choice of parameters is summarized in Table 4 in Appendix F.

**Results.** Following La Cava et al. (2021), we allow a maximal training time of \(8\) CPU hours and a maximal number of function evaluations of \(1,000,000\). In Figure 2(a), we present the symbolic recovery rate on both data sets together. ParFam, AI Feynman, and DGSR exhibit exceptional performance, outperforming all other competitors by a substantial margin (over \(25\%\)). It is important to note that AI Feynman performs particularly well on the Feynman data sets but fails on the Strogatz data set, as shown in Appendix G, indicating that the algorithm is tailored to the Feynman data set. Since DGSR was not tested on noisy data and AI Feynman is strongly influenced by noise, ParFam outperforms all competitors at a noise level of \(\sigma=0.01\). Furthermore, Figure 2(b) shows the accuracy solution rate, which is the percentage of formulas for which \(R^{2}>0.999\) holds on the test sets. Here, ParFam outperforms all competitors with and without noise. It is important to note that Holt et al. (2023) reported a similar metric but with the threshold \(R^{2}>0.99\) instead of \(R^{2}>0.999\). However, DGSR achieved a value of \(90.95\%\) for this less strict metric, compared to ParFam's \(97.67\%\). Figure 3 reveals that ParFam achieves a mean \(R^{2}\) significantly better than its competitors, albeit with slightly more complex formulas. Note that the mean, rather than the median, is shown in this figure since both ParFam and AI Feynman solve over 50% of the formulas without noise, causing their median performance to simply reflect this high success rate. The corresponding plot showing the median and additional results can be found in Appendix G.

### DL-ParFam

In this subsection, we aim to demonstrate the potential of DL-ParFam by conducting experiments on synthetically generated data sets.

**Synthetic data sets.** To evaluate DL-ParFam, we employ synthetic data sets due to its current prototype status, which limits its ability to handle complex data sets like the Feynman set. To generate these, we fix the grid \(x_{i}=-10+0.1i\), \(i\in\{1,...,N=200\}\), and choose one set of model hyperparameters to define a specific parametric family \((f_{\theta})_{\theta\in\mathbb{R}^{m}}\). Here, we choose the base functions \(\sin\) and \(\sqrt{}\), set the degree of all numerator polynomials to \(2\) and of all denominator polynomials to \(0\).
Then, we sample \((\theta^{j})_{j\in\{1,...,K\}}\subset\mathbb{R}^{m}\) following the scheme described in Appendix H. For each \(\theta^{j}\), \(j\in\{1,\dots,K\}\), and each \(x_{i}\) we evaluate \(f_{\theta^{j}}(x_{i})=y_{i}^{j}\) to obtain \(K\) data sets \(((x_{i})_{i=1,...,N},(y_{i}^{j})_{i=1,...,N},\theta^{j})_{j=1,...,K}\). For our numerical tests, we create two different data sets: The first includes 2,000,000 equations for training the neural network of DL-ParFam, 10,000 for its validation, and another 10,000 for its testing. The second set consists of \(100\) formulas to compare DL-ParFam with ParFam. Our hyperparameter choices are summarized in Table 6 in Appendix I. Figure 2: Symbolic recovery and accuracy solution rate (percentage of data sets with \(R^{2}>0.999\) for the test set) on the SRBench ground-truth problems (Feynman and Strogatz data sets). Pre-training of DL-ParFamWe construct the neural network of DL-ParFam as a feedforward NN with one hidden layer containing 200 neurons. It is trained as described in Section 2.2 using the Adam optimizer with a learning rate of 0.0001 and 20,000 epochs with 500 batches, which takes less than 4h on a TITANRTX GPU. In addition, to predict a mask \(c\in\{0,1\}^{m}\) as described in Section 2.2 we set \(c_{k}=0\) if \(NN((y_{i})_{i=1,\dots,N})<0.2\) and \(c_{k}=1\) otherwise. MetricsTo evaluate the performance of the NN, we report two metrics: the covering score and the average successful cover size. The covering score describes the percentage of formulas for which the mask includes the non-zero parameters, i.e., \(\theta_{k}^{j}\neq 0\) implies \(c_{k}^{j}=1\). The average successful cover size is the mean over the means of \(c^{j}\) across all formulas for which the NN succeeded at identifying the non-zero parameters. Ideally, this value should be as small as possible, indicating that the mask size is minimized while still effectively capturing the non-zero parameters. To understand the influence of the NN in DL-ParFam, we evaluate DL-ParFam and ParFam against each other on the second synthetic data set and report the symbolic recovery rate. Here, we assume for both methods that the perfect choice of model parameters is known, i.e., the same that were used to create the data sets. This assumption allows us to assess the relative performance of DL-ParFam and ParFam rather than evaluating their general performance. ResultsThe NN reaches a covering score of \(91.32\%\) on the test data set with an average successful cover size of \(26.62\%\). This indicates that the pre-training helps to reduce the number of parameters by almost 3/4 in \(91.32\%\) of the formulas. The relative performance of ParFam and DL-ParFam is shown in Table 1, which reveals that DL-ParFam solves consistently \(16\)-\(32\%\) more equations than ParFam while requiring only approximately half the time. ## 4 Discussion and Conclusion This work introduces ParFam along with its potential extension, DL-ParFam. Despite its inherent simplicity, ParFam demonstrates remarkable performance, as shown in Section 3.1. Furthermore, its adaptable structure makes it highly versatile for specific application scenarios. 
While DL-ParFam currently only exists in a prototype form, it already shows the feasibility and potential of integrating pre-training--a crucial direction in SR as pointed out by Kamienny et al. (2022); Biggio et al. (2021); Holt et al. (2023)--into the ParFam framework.

\begin{table} \begin{tabular}{c c c c c} \multicolumn{1}{c}{} & \multicolumn{2}{c}{**ParFam**} & \multicolumn{2}{c}{**DL-ParFam**} \\ \hline \# Iterations & **Symbolic recovery** & **Training time** & **Symbolic recovery** & **Training time** \\ 50 & 32\% & 13s & 48\% & 7s \\ 100 & 30\% & 24s & 46\% & 12s \\ 500 & 43\% & 116s & 66\% & 55s \\ 1000 & 37\% & 221s & 69\% & 107s \\ \end{tabular} \end{table} Table 1: Relative performance and runtime of ParFam and DL-ParFam on the synthetic data set.

Figure 3: Results on the SRBench ground-truth problems. Points indicate the mean over all problems.

**Limitations.** While the structure of the parametric family of ParFam is undoubtedly its greatest asset in tackling SR, it can also be considered its most significant constraint, given that it restricts the function space more tightly than other methods do. However, Figure 2 illustrates, on the one hand, that several algorithms theoretically capable of identifying specific formulas do not always achieve this in practice. On the other hand, it demonstrates that even if ParFam restricts the function space too much, it still manages to find a formula that approximates the original function with remarkable accuracy. Another limitation of ParFam is that optimizing high-dimensional problems (\(>10\) independent variables) is computationally expensive, given the exponential growth in the number of parameters with respect to the number of variables.

**Future work.** Subsequent efforts will concentrate on advancing both ParFam and DL-ParFam. With ParFam, several avenues remain unexplored, encompassing diverse forms of regularization, alternative parametrizations, and the potential incorporation of custom-tailored optimization techniques. Nonetheless, our primary focus will be on DL-ParFam, driven by its promising potential, as evidenced by our experiments. Numerous design choices await exploration, including data sampling strategies, choice of loss function, architecture selection, and more. Existing research in this direction will undoubtedly serve as valuable guidance (Kamienny et al., 2022; Biggio et al., 2021; Holt et al., 2023). We anticipate that these advancements will facilitate the deployment of even more expansive parametric families, thereby mitigating the limitations outlined earlier.

#### Reproducibility statements

In our repository [https://github.com/Philipp238/parfam](https://github.com/Philipp238/parfam), we include the results of our experiments and the code and instructions necessary to use our algorithms and reproduce all experiments shown in Section 3. Furthermore, we report all settings used in the experiments in the Appendices F and I.

## Acknowledgments

This publication was supported by LMUexcellent, funded by the Federal Ministry of Education and Research (BMBF) and the Free State of Bavaria under the Excellence Strategy of the Federal Government and the Länder as well as by the Hightech Agenda Bavaria. Furthermore, G. Kutyniok was supported in part by the DAAD programme Konrad Zuse Schools of Excellence in Artificial Intelligence, sponsored by the Federal Ministry of Education and Research. G.
Kutyniok also acknowledges support from the Munich Center for Machine Learning (MCML) as well as the German Research Foundation under Grants DFG-SPP-2298, KU 1446/31-1 and KU 1446/32-1 and under Grant DFG-SFB/TR 109 and Project C09.
2308.10416
A short elementary proof of Beben and Theriault's theorem on homotopy fibers
Beben and Theriault proved a theorem on the homotopy fiber of an extension of a map with respect to a cone attachment, which has produced several applications. We give a short and elementary proof of this theorem.
Daisuke Kishimoto, Yuki Minowa
2023-08-21T01:54:28Z
http://arxiv.org/abs/2308.10416v2
# A short elementary proof of Beben and Theriault's theorem on homotopy fibers

###### Abstract.

Beben and Theriault proved a theorem on the homotopy fiber of an extension of a map with respect to a cone attachment, which has produced several applications. We give a short and elementary proof of this theorem.

Key words and phrases: homotopy fiber, cone attachment, Whitehead product

2010 Mathematics Subject Classification: 55P35, 55Q15

## 1. Introduction

It is a fundamental problem in algebraic topology to describe how the homotopy type of a space changes after a cone attachment, and the problem has been intensely studied in connection with LS category. Describing the effect of a cone attachment on the homotopy type of a related space is of particular importance too. For example, relations between \(\Omega X\) and \(\Omega(X\cup CA)\) were studied in [6, 7, 8]. Beben and Theriault [2] considered a homotopy commutative diagram (1.1) where \(B\) is path-connected, the middle row is a homotopy cofibration, and the two columns are homotopy fibrations. They gave a nice description of \(F^{\prime}\) in terms of \(F\). To state this result, we set notation. The half smash product of spaces \(X\) and \(Y\) is defined by \[X\rtimes Y=(X\times Y)/(*\times Y).\] Let \(\epsilon\colon\Sigma\Omega X\to X\) denote the evaluation map.

**Theorem 1.1**.: _Consider the homotopy commutative diagram (1.1). If the map \(\Omega p\colon\Omega E\to\Omega B\) has a right homotopy inverse \(s\colon\Omega B\to\Omega E\), then there is a homotopy cofibration_ \[A\rtimes\Omega B\xrightarrow{\theta}F\xrightarrow{h}F^{\prime}.\] _Moreover, if \(A\) is a suspension, then there is a homotopy commutative diagram_ (1.2)

The homotopy commutative diagram (1.1) appears in several contexts, and so Theorem 1.1 has been applied to produce several interesting results such as a loop space decomposition of certain manifolds [3, 4, 9, 10, 11]. However, Beben and Theriault's proof of Theorem 1.1 is long and complicated, requiring a delicate analysis of the action of the loop space on the fiber of a fibration and the use of relative Whitehead products. In this paper, we provide a short and elementary proof of Theorem 1.1. Our proof is basically an elementary analysis of (homotopy) pushouts, and does not need a delicate analysis of the action in a fibration or relative Whitehead products. We will always assume that every space has a non-degenerate basepoint and every map is basepoint preserving.

### Acknowledgement

The first author was partially supported by JSPS KAKENHI JP22K03284 and JP19K03473, and the second author was partially supported by JST, the establishment of university fellowships towards the creation of science technology innovation, Grant Number JPMJFS2123.

## 2. Proof

We consider the commutative diagram (1.1), and assume that the map \(\Omega p\colon\Omega E\to\Omega B\) has a right homotopy inverse. Let \[\mathsf{E}=\{(x,l)\in E\times B^{[0,1]}\mid p(x)=l(0)\}\] and define a map \(\mathsf{p}\colon\mathsf{E}\to B\) by \(\mathsf{p}(x,l)=l(1)\). Let \[\mathsf{F}=\{(x,l)\in E\times B^{[0,1]}\mid p(x)=l(0),\,l(1)=*\}.\] Then we get a fibration sequence \[\mathsf{F}\xrightarrow{\mathsf{j}}\mathsf{E}\xrightarrow{\mathsf{p}}B \tag{2.1}\] and a homotopy action \[\Gamma\colon\mathsf{F}\times\Omega B\to\mathsf{F},\quad((x,l),l^{\prime})\mapsto(x,l*l^{\prime}).\] Clearly, we may replace the homotopy fibration \(F\xrightarrow{j}E\xrightarrow{p}B\) in (1.1) with the fibration sequence (2.1).
In particular, the map \(f\colon A\to E\) will be replaced with the composite \(\mathsf{f}\colon A\xrightarrow{f}E\xrightarrow{\mathrm{incl}}\mathsf{E}\). Since \(\mathsf{p}\circ\mathsf{f}\simeq*\) and \(\mathsf{p}\colon\mathsf{E}\to B\) is a fibration, there is a map \(\mathsf{f}^{\prime}\colon A\to\mathsf{E}\) such that \(\mathsf{f}\simeq\mathsf{f}^{\prime}\) and \(\mathsf{p}\circ\mathsf{f}^{\prime}=*\). Then we may assume \(\mathsf{p}\circ\mathsf{f}=*\). Let \(p_{i}\) denote the \(i\)-th projection. Let \(\operatorname{Cyl}(g)\) denote the mapping cylinder of a map \(g\colon X\to Y\). Let \(i\colon X\to\operatorname{Cyl}(g)\) and \(q\colon\operatorname{Cyl}(g)\to Y\) denote the inclusion and the projection, respectively.

**Lemma 2.1**.: _There is a commutative diagram_

Proof.: By the definition of the map \(\Gamma\), there is a homotopy commutative diagram Then the statement is proved by the usual homotopy extension argument.

Since we are assuming that the map \(\Omega p\colon\Omega E\to\Omega B\) has a right homotopy inverse, the map \(\Omega\mathsf{p}\colon\Omega\mathsf{E}\to\Omega B\) has a right homotopy inverse too, say \(\mathsf{s}\colon\Omega B\to\Omega\mathsf{E}\). In particular, the map \(\Omega B\to\mathsf{F}\) is null-homotopic, and we fix a null homotopy \(H\colon C\Omega B\to\mathsf{F}\). Let \(\bar{\epsilon}\colon C\Omega X\to X\) denote the composite \(C\Omega X\xrightarrow{\operatorname{proj}}\Sigma\Omega X\xrightarrow{\epsilon}X\).

**Lemma 2.2**.: _There is a commutative diagram_ _where the map \(q^{\prime}\) is homotopic to the projection \(q\colon\operatorname{Cyl}(\mathsf{j}+\bar{\epsilon})\to\mathsf{E}\)._

Proof.: There is a homotopy commutative diagram Then the usual homotopy extension argument proves the statement.

Let \(X\,\widetilde{\rtimes}\,Y=(X\times Y)\cup(*\times CY)\). We define a map \(\rho\colon X\,\widetilde{\rtimes}\,\Omega Y\to X\lor Y\) by the induced map between the pushouts of the two rows in the commutative diagram Then \(\rho\) is natural with respect to \(X\) and \(Y\).

**Lemma 2.3**.: _There is a homotopy commutative diagram_ _where \(\widetilde{\Gamma}\) is an extension of \(\Gamma\)._

Proof.: Consider a diagram The right top face and the right front face are commutative by the above construction, and the right face is commutative by Lemma 2.1. Clearly, the other faces are commutative, and so the above diagram is commutative. Then by taking the pushouts of all rows, we get a commutative diagram By construction, this commutative diagram extends to a homotopy commutative diagram Thus the proof is finished.

To identify the fiber \(F^{\prime}\) in (1.1), we will use:

**Lemma 2.4** ([5, Appendix HL, Proposition]).: _Let \(\{F_{i}\to E_{i}\to B\}_{i\in I}\) be an \(I\)-diagram of homotopy fibrations over a common path-connected base \(B\). Then the sequence_ \[\operatorname*{hocolim}_{I}F_{i}\to\operatorname*{hocolim}_{I}E_{i}\to B\] _is a homotopy fibration._

We recall a well known property of homotopy pushouts. See [1, Proposition 6.2.6] for a proof.

**Lemma 2.5**.: _Let \(W\) denote the homotopy pushout of a cotriad_ \[X\xleftarrow{g}Y\to Z.\] _If \(g\) is a cofibration, then the projection \(W\to X\cup_{Y}Z\) is a homotopy equivalence._

Let \(\tilde{\mathsf{f}}\colon A\to\mathsf{F}\) be the lift of the map \(\mathsf{f}\).

**Proposition 2.6**.: _There is a homotopy cofibration_ \[A\,\widetilde{\rtimes}\,\Omega B\xrightarrow{\tilde{\theta}}\mathsf{F}\to F^{\prime}.\]

Proof.: By Lemma 2.1, there is a commutative diagram where all columns are homotopy fibrations.
Let \(\mathsf{F}^{\prime}\) be the pushout of the top row. Then by Lemmas 2.4 and 2.5, we get a homotopy fibration \(\mathsf{F}^{\prime}\to\mathsf{E}\cup CA\to B\), hence a homotopy equivalence \[F^{\prime}\simeq\mathsf{F}^{\prime}.\] Since the map \(\Omega\mathsf{p}\) has a right homotopy inverse, the restriction of \(i\colon\mathsf{F}\times\Omega B\to\operatorname{Cyl}(\Gamma)\) to \(*\times\Omega B\) is null-homotopic. Then we get a commutative diagram where all columns are homotopy cofibrations and \(\widetilde{\Gamma}\colon\mathsf{F}\,\widetilde{\rtimes}\,\Omega B\to\mathsf{F}\) is an extension of \(\Gamma\). Let \(\mathsf{F}^{\prime\prime}\) be the pushout of the bottom row. Then since homotopy pushouts commute with taking cofibers, we get a homotopy cofibration \(*\to\mathsf{F}^{\prime}\to\mathsf{F}^{\prime\prime}\) by Lemma 2.5, hence a homotopy equivalence \[\mathsf{F}^{\prime}\simeq\mathsf{F}^{\prime\prime}.\] On the other hand, since \(CA\,\widetilde{\rtimes}\,\Omega B\) is contractible, we have a homotopy cofibration \[A\,\widetilde{\rtimes}\,\Omega B\xrightarrow{\widetilde{\Gamma}\circ(\tilde{\mathsf{f}}\,\widetilde{\rtimes}\,1)}\mathsf{F}\to\mathsf{F}^{\prime\prime}.\] Thus the proof is finished.

**Lemma 2.7**.: _There is a natural map \(w\colon\Omega X*\Omega Y\to X\,\widetilde{\rtimes}\,\Omega Y\) satisfying the commutative diagram_

Proof.: Consider a commutative diagram Then by taking the pushouts of the three rows, we obtain the diagram in the statement.

Let \(i\colon X\to X\,\widetilde{\rtimes}\,Y\) denote the inclusion, and let \(\overline{w}\) denote the composite \[X*\Omega Y\xrightarrow{E*1}\Omega\Sigma X*\Omega Y\xrightarrow{w}\Sigma X\,\widetilde{\rtimes}\,\Omega Y,\] where \(E\colon Y\to\Omega\Sigma Y\) is the suspension map. By definition, \(\overline{w}\) is natural with respect to \(X\) and \(Y\).

**Lemma 2.8**.: _The map_ \[\overline{w}+i\colon(X*\Omega Y)\vee\Sigma X\to\Sigma X\,\widetilde{\rtimes}\,\Omega Y\] _is a homotopy equivalence._

Proof.: There is a commutative diagram where all columns are homotopy cofibrations. Then since the inclusion \(i\colon\Sigma X\to\Sigma X\,\widetilde{\rtimes}\,C\Omega Y\) is a homotopy equivalence, we get a homotopy cofibration \[X\ast\Omega Y\xrightarrow{\overline{w}}\Sigma X\,\widetilde{\rtimes}\,\Omega Y\xrightarrow{q}\Sigma X\] by taking the pushouts of the three rows. By definition, we have \(q\circ i\simeq 1\), and so there is a homotopy commutative diagram where the two rows are homotopy cofibrations. Then the middle vertical map is a homotopy equivalence, completing the proof.

Now we are ready to prove Theorem 1.1.

Proof of Theorem 1.1.: Since the projection \(A\,\widetilde{\rtimes}\,\Omega B\to A\rtimes\Omega B\) is a homotopy equivalence, the first statement follows from Proposition 2.6. Suppose \(A=\Sigma\overline{A}\).
By Lemmas 2.3 and 2.7, there is a homotopy commutative diagram (2.2) By definition, the map \(\hat{\theta}\colon A\,\widetilde{\rtimes}\,\Omega B\to\widehat{F}\) in Proposition 2.4 is homotopic to the composite \[A\,\widetilde{\rtimes}\,\Omega B\xrightarrow{1\,\widetilde{\rtimes}\,A}A \,\widetilde{\rtimes}\,\Omega\mathsf{E}\xrightarrow{\widetilde{\rtimes}\,1} \mathsf{F}\,\widetilde{\rtimes}\,\Omega\mathsf{E}\xrightarrow{\widetilde{ \Gamma}\circ(1\,\widetilde{\rtimes}\,\mathsf{p})}\mathsf{F}.\] Then the composite \(\overline{A}\ast\Omega B\xrightarrow{\overline{\pi}}A\,\widetilde{\rtimes}\, \Omega B\xrightarrow{\widetilde{\theta}}\mathsf{F}\xrightarrow{\mathsf{F}}\) is homotopic to the left-bottom perimeter of (2.2) which is the Whitehead product \([\mathsf{f},\epsilon\circ\Sigma\mathsf{s}]\). By the definition of \(\hat{\theta}\), the composite \[A\xrightarrow{i}A\,\widetilde{\rtimes}\,\Omega B\xrightarrow{\widetilde{ \theta}}\mathsf{F}\xrightarrow{j}\mathsf{E}\] equals \(\mathsf{f}\). Thus by Lemma 2.8, the second statement is proved, and therefore the proof is finished by applying the natural homotopy equivalences \(\overline{A}\ast\Omega B\simeq A\wedge\Omega B\), \(A\,\widetilde{\rtimes}\,\Omega B\simeq A\rtimes\Omega B\) and \(\mathsf{F}\simeq F\).
2310.04629
The composition and thermal properties of a cool core lacking a brightest cluster galaxy
We present a multiwavelength observation of a cool core that does not appear to be associated with any galaxy, in a nearby cluster, Abell~1142. Its X-ray surface brightness peak of $\lesssim2$ keV is cooler than the ambient intracluster gas of $\gtrsim3$ keV, and is offset from its brightest cluster galaxy (BCG) by 80 kpc in projection, representing the largest known cool core -- BCG separation. This BCG-less cool core allows us to measure the metallicity of a cluster center with a much-reduced contribution from the interstellar medium (ISM) of the BCG. XMM-Newton observation reveals a prominent Fe abundance peak of $1.07^{+0.16}_{-0.15}$ Z$_{\odot}$ and an $\alpha/$Fe abundance ratio close to the solar ratio, fully consistent with those found at the centers of typical cool core clusters. This finding hints that BCGs play a limited role in enriching the cluster centers. However, the discussion remains open, given that the $\alpha/$Fe abundance ratios of the orphan cool core and the BCG ISM are not significantly different. Abell~1142 may have experienced a major merger more than 100 Myr ago, which has dissociated its cool core from the BCG. This implies that the Fe abundance peak in cool core clusters can be resilient to cluster mergers. Our recent IRAM 30-m observation did not detect any CO emission at its X-ray peak and we find no evidence for massive runaway cooling in the absence of recent AGN feedback. The lack of a galaxy may contribute to an inefficient conversion of the ionized warm gas to the cold molecular gas.
Yuanyuan Su, Francoise Combes, Valeria Olivares, Gianluca Castignani, Pablo Torne, Reinout van Weeren
2023-10-06T23:51:33Z
http://arxiv.org/abs/2310.04629v1
# The composition and thermal properties of a cool core lacking a brightest cluster galaxy ###### Abstract We present a multiwavelength observation of a cool core that does not appear to be associated with any galaxy, in a nearby cluster, Abell 1142. Its X-ray surface brightness peak of \(\lesssim 2\) keV is cooler than the ambient intracluster gas of \(\gtrsim 3\) keV, and is offset from its brightest cluster galaxy (BCG) by 80 kpc in projection, representing the largest known cool core - BCG separation. This BCG-less cool core allows us to measure the metallicity of a cluster center with a much-reduced contribution from the interstellar medium (ISM) of the BCG. XMM-Newton observation reveals a prominent Fe abundance peak of \(1.07^{+0.16}_{-0.15}\) Z\({}_{\odot}\) and an \(\alpha\)/Fe abundance ratio close to the solar ratio, fully consistent with those found at the centers of typical cool core clusters. This finding hints that BCGs play a limited role in enriching the cluster centers. However, the discussion remains open, given that the \(\alpha\)/Fe abundance ratios of the orphan cool core and the BCG ISM are not significantly different. Abell 1142 may have experienced a major merger more than 100 Myr ago, which has dissociated its cool core from the BCG. This implies that the Fe abundance peak in cool core clusters can be resilient to cluster mergers. Our recent IRAM 30-m observation did not detect any CO emission at its X-ray peak and we find no evidence for massive runaway cooling in the absence of recent AGN feedback. The lack of a galaxy may contribute to an inefficient conversion of the ionized warm gas to the cold molecular gas. keywords: galaxies: clusters: intracluster medium -- radio lines: general -- X-rays: galaxies: clusters -- galaxies: clusters: individual (Abell 1142) ## 1 Introduction The intracluster medium (ICM), constituting 90% of the cluster baryons, is the reservoir of nearly all metals that have ever been produced by member galaxies. The ICM is therefore an ideal and unique laboratory to study the enrichment process of the Universe as well as to constrain models of supernova nucleosynthesis. The bulk of the ICM of various different clusters tends to have Fe metallicities consistent with a constant value of 0.3 solar (Urban et al., 2017), which supports early enrichment: chemical elements were deposited and mixed into the intergalactic medium (IGM) before clusters formed (Urban et al., 2017; Werner et al., 2013; Simionescu et al., 2015). However, the uncertainty of these measurements is greater than 20% and the measurements were performed with a spatial resolution of a few hundred kpc, therefore a non-uniform composition cannot be ruled out. Furthermore, the observed Fe mass in the ICM, when integrated to large radii, far exceeds what can be produced by the visible stellar population (Sarkar et al., 2022; Ghizzardi et al., 2021; E Blackwell et al., 2021), which is difficult to reconcile with the prevailing enrichment model. Another outstanding puzzle has arisen in the study of cluster centers. Fe metallicity peaks sharply towards cluster centers in cool core (CC) clusters, which is generally considered to be due to the metals accumulated in the interstellar medium (ISM) of the brightest cluster galaxy (BCG). The composition of the gas at the cluster centers, as inferred from the abundance ratios, is consistent with that of our solar system (e.g., Mernier et al., 2018). 
The enrichment of the IGM, before the formation of galaxy clusters, is likely to have a supersolar \(\alpha\)/Fe ratio, due to the relatively short star formation timescales for core collapse supernova (SNc). The stellar mass loss from the BCG may have enriched cluster centers with additional type Ia supernova (SNIa) yields, after the gravitational collapse of clusters, as it would take a much longer time to form white dwarfs. The observed abundance pattern at the cluster centers would require these two sources of metals to compensate for each other to produce the solar ratios for nearly all clusters studied so far. This striking coincidence has been coined as the "ICM solar composition paradox" (Mernier et al., 2018). Fe metallicity also increases towards the center of non cool core (NCC) clusters but with a much smaller gradient, which has been attributed to mergers disrupting the cluster central metallicity peak (Leccardi et al., 2010). Cluster cool cores also feature a sharp X-ray surface brightness peak, where the gas is expected to cool to below 100 K and rapidly form stars at rates of 100-1000 M\({}_{\odot}\) yr\({}^{-1}\), exceeding the observed star formation rates by orders of magnitudes (Peterson et al., 2003). Mechanical feedback from the active galactic nucleus (AGN) of the BCG is likely preventing this hot ICM from cooling, as revealed by the ubiquitous X-ray cavities at cluster cool cores (see Fabian (2012) for a review). Recent observations of a high redshift (\(z=1.7\)) cluster, SpARCS1049+56, have added new insight into this picture (Webb et al., 2017; Castignani et al., 2020; Hlavacek-Larrondo et al., 2020). Stars are forming at its center, at an enormous rate of \(860\pm 140\) M\({}_{\odot}\) yr\({}^{-1}\), in association with a large reservoir of molecular gas of \(1.1\times 10^{11}\) M\({}_{\odot}\). The cold gas and star formation can in principle be fueled by massive, runaway cooling of the intracluster gas, perhaps precisely due to the absence of AGN feedback, as its X-ray peak is offset from the BCG. A relevant object was discovered in the nearby Universe -"an orphan cloud" serendipitously detected on the outskirt of Abell 1367 (Ge et al., 2021): an isolated cloud (no optical counterpart) with an effective radius of 30 kpc, detected in X-rays, H\(\alpha\), and CO, which may have been stripped off from an unidentified member galaxy (Jachym et al., 2022). Abell 1142 is a nearby galaxy cluster that has connections to both SpARCS1049+56 and the orphan cloud. It is undergoing a major merger and the spectroscopically confirmed member galaxies in Abell 1142 display a nearly bimodal redshift distribution peaking at \(z=0.0325\) and 0.0375, respectively (Figure 1; also see Su et al. (2016)). _Chandra_ Advanced CCD Imaging Spectrometer (ACIS)-S observation reveals that its X-ray peak is 80 kpc east of the BCG, IC 664, a massive elliptical galaxy at \(z=0.0338\)(Su et al., 2016). This X-ray emission is unlikely to be a background cluster as this field is covered by SDSS and there is no known optical counterpart at higher redshift (Su et al., 2016). Abell 1142 presents the largest known offset between a cluster cool core and the BCG1. It provides a valuable opportunity, in the nearby Universe, to unambiguously measure the metallicity of a cluster cool core itself, with a minimized impact of the ISM of the BCG, and to test the role of the AGN feedback at cluster centers. 
Previous Chandra observations reveal a temperature of 2 keV and a metal abundance of 1 Z\({}_{\odot}\) at the cool core, cooler and more metal rich than the ambient ICM of 3 keV and 0.2 Z\({}_{\odot}\). Notably, its metallicity profile is consistent with that derived for typical cool-core clusters and deviates from that of non cool core clusters (see Su et al. (2016)). Footnote 1: Galaxy clusters known to have \(>80\) kpc offset between cool core and BCG are found to be binary or triple clusters, for which the cool core is occupied by the brightest galaxy in the subcluster (De Propris et al., 2021). This paper presents follow-up multiwavelength observations of Abell 1142, using XMM-Newton and IRAM 30m, to study the two-dimensional distribution of its gas properties and search for multi-phase gas associated with the orphan cool core. The paper is structured as follows. The data preparation and methods are presented in Section 2. The results of the metallicity and multiple phase gas measurements are shown in Section 3. Our findings are discussed in Section 4 and summarized in Section 5. We use NASA/IPAC Extragalactic Database3 to estimate the luminosity distance of 153.9 Mpc (1'' = 0.697 kpc) at \(z=0.035\) for Abell 1142, by adopting a cosmology of H\({}_{0}\) = 70 km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{\Lambda}\) = 0.7, and \(\Omega_{\rm m}\) = 0.3. We assume a solar abundance table of Asplund et al. (2009) throughout this paper. All uncertainties are reported at 1\(\sigma\) confidence level. Footnote 3: [http://ned.ipac.caltech.edu](http://ned.ipac.caltech.edu) ## 2 Observations and Data Reduction ### XMM-Newton Abell 1142 was observed with _XMM-Newton_ in 2016 for an effective time of 100 ksec (Figure 2 and Table 1). Data reduction was performed with the Science Analysis System (SAS) version xmmsas_20210317_1624-19.1.0. ODF files were processed using emchain and epchain. Soft flares from MOS and pn data were filtered using mos-filter and pn-filter, respectively. Only FLAG = 0 and PATTERN <= 12 event files are included for MOS data and only FLAG = 0 and PATTERN <= 4 event files are included for pn data. Out-of-time pn events were also removed. Point sources were detected by edetect_chain, confirmed by eye, and removed from spectral analysis. We extracted spectra from an annulus of 7.4-11.9 arcmin in radius to determine the astrophysical X-ray background (AXB) as well as the non-X-ray background (NXB) simultaneously. To better constrain the level of the local bubble and Milky Way foregrounds, we also included a RASS spectrum extracted from an annulus centered on but beyond the field of view of the XMM-Newton pointing. The spectral fit was restricted to the 0.3-10.0 keV energy band for XMM-Newton and 0.1-2.4 keV for RASS. We use \(\rm{abs^{*}}(\rm{pow}_{\rm CXB}+\rm{apec}_{\rm MW})+\rm{apec}_{\rm LB}\) to model the AXB. A power law \(\rm{pow}_{\rm CXB}\) with index \(\Gamma=1.41\) represents the cosmic X-ray background (CXB), a thermal emission \(\rm{apec}_{\rm LB}\) with a temperature of 0.08 keV for local bubble emission, and another \(\rm{apec}_{\rm MW}\) with a temperature of 0.2 keV was for the Milky Way foreground. Metal abundance and redshift were fixed at 1 and 0, respectively, for \(\rm{apec}_{\rm LB}\) and \(\rm{apec}_{\rm MW}\). The aforementioned annulus is chosen as far from the cluster center as possible, while still ensuring enough photons to constrain the background components. Still, it may contain ICM emission. 
Therefore, we include \(\rm{apec}_{\rm ICM}\) to model the ICM component and allow its temperature, abundance, and normalization free to vary. The NXB model includes a set of Gaussian lines and a broken powerlaw with \(\rm{E}_{\rm break}=3\) keV to model a set of fluorescent instrumental lines and a continuum spectrum for each MOS and pn detector (see the Appendix in Su et al. (2017) for details). The NXB model was not folded through the Auxiliary Response Files (ARF). We inspect the ratio of area-corrected count rates in the 6-12 keV energy band within the field of view (excluding the central 10 arcmin) and in the unexposed corners. Our observations are not contaminated by the soft proton flare. ### IRAM 30-m We performed an observation of Abell 1142 with the IRAM 30-m telescope operated by the Institut de Radio Astronomie Millimetrique (IRAM) at Pico Veleta, Spain. The observation was carried out with the wobbler switching mode (WSW), targeting a few positions2 mainly near the X-ray centroid as shown in Figure 2. The observation (E01-22) was carried out from 21 February 2023 to 22 February 2023 (Table 1). The EMIR receiver was used to simultaneously observe at the frequencies of the CO(1-0) and CO(2-1) transitions. The FTS spectrometer and the WILMA autocorrelator were connected to both receivers. 1055+018 and IRC+10216 were used as flux calibrators. Footnote 2: These positions are chosen based on a tentative detection of extended CO(1-0) emission from a previous IRAM 30m observation using on-the-fly mapping mode (030-22). But we now consider this “detection” flawed by baseline ripples due to poor weather. The half power beam width of the IRAM 30m is \(-22\arcsec\) for CO(1-0) and \(\sim 11\arcsec\) for CO(2-1). Data reduction was performed using CLASS from the GILDAS3 software package. The corrected antenna temperatures, Ta*, were converted to the brightness temperature of the main beam by Tmb = Ta* F\({}_{\rm eff}\) / \(\eta_{\rm mb}\). The main beam efficiency is \(\eta_{\rm mb}\) = 0.78 and 0.61 for 110 GHz and 220 GHz, respectively, and the forward efficiencies F\({}_{\rm eff}\) = 0.94 and 0.93 respectively. The flux density is converted from the main beam antenna temperature by the ratio \(\sim 53\) Jy K\({}^{-1}\) both for CO(1-0) and CO(2-1) transitions. However, we did not detect any emission line associated with the CO(1-0) or CO(2-1) transition. Footnote 3: [http://www.iram.fr/IRAMFR/GILDAS](http://www.iram.fr/IRAMFR/GILDAS) ## 3 Results ### Metallicity We produce adaptively binned temperature and metallicity maps of Abell 1142 using XMM-Newton observations. A square region with a side length of 7.56 arcmin was chosen covering the Abell 1142 centroid, major member galaxies, and their surroundings. The region was divided into a 14x14 grid of pixels. We extract a spectrum from a circular region centered on each pixel. Its radius is chosen adaptively to collect at least 500 net counts. Response files were produced for each spectrum. We apply the same model used for the annulus region to model the AXB, NXB, and ICM components (see Section 2.1) of each region. The parameters of the AXB model and most parameters of the NXB model are fixed to those determined earlier. We allow the normalization of the fluorescent instrumental lines at 1.49 keV and 1.75 keV free to vary for MOS and that of the 1.48 keV line free to vary for pn. We first fix the ICM metallicity at 0.3 \(Z_{\odot}\) and obtain a temperature map as shown in Figure 3-left. 
We then allow metallicity free to vary to obtain a metallicity map as shown in Figure 3-right. The 2D spatial distribution of its gas properties demonstrates that the orphan cool core is cooler than the surrounding ICM. The metallicity peaks around the orphan cool core, approaching 1 \(Z_{\odot}\), in contrast to the ambient ICM outside the cool core of 0.2 \(Z_{\odot}\). The spectrum extraction regions, with radii ranging from 0.5\(\arcmin\) at the center to 1.3\(\arcmin\) at the outskirt, are chosen through adaptive-binning and overlap with adjacent regions; the actual temperature and metal abundance contrast are likely to be more pronounced. To constrain the metallicity of the cool core, we extract a spectrum from a circular region centered on the X-ray peak with a radius of 25 kpc (cyan circle in Figure 2). The spectrum was fit to a two-temperature thermal phabs*(vapeec+vapeec) model (along with multiple background components as described in the previous paragraph), in which all the \(\alpha\) elements (O, Ne, S, Si, Al, Ca), mainly the products of SNcc, are linked together, while Fe and Ni, mainly synthesized in SNIa, are linked together. One of the two temperatures is free to vary, for which we obtain a best-fit temperature of 1.11\({}^{+0.10}_{-0.08}\) keV, while the other temperature is fixed at 3 keV (the temperature at the outer atmosphere of the ICM as shown in Figure 3-left). We obtain a best-fit Fe abundance of 1.07\({}^{+0.16}_{-0.15}\) Z\({}_{\odot}\) and an \(\alpha\)/Fe ratio of 0.95\({}^{+0.28}_{-0.27}\) the Solar ratio, consistent with the Fe abundance peak and abundance ratios measured for typical cool core clusters. We performed a similar spectral analysis for a circular region of \(r<14\) kpc centered on the BCG using exactly the same model4. We obtain a best-fit Fe abundance of 0.78\({}^{+0.21}_{-0.16}\) Z\({}_{\odot}\) and an \(\alpha\)/Fe ratio of 0.65 \(\pm\) 0.28 the Solar ratio. This latter measurement, presumably more representative of the BCG ISM metallicity, is more dominated by the SNIa yields, although consistent within the uncertainties with the composition observed for the orphan cool core. Footnote 4: Using the point source detection tool wavdetect, we identified a point-like source at the very center of the BCG from the existing Chandra ACIS-S observation. We extracted the spectrum from this source via a circular region of \(r<1.8\arcsec\) and used an annulus region of \(1.8\arcsec<r<2.7\arcsec\) as the local background. The spectrum was fit to an absorbed power law model, for which we obtain a best-fit photon index of \(4.1\pm 0.5\), which is too soft to be a nuclear source and is likely to be thermal emission. Therefore, we did not include an additional power-law component in the spectral analysis of the BCG. ### Cold molecular gas To capture a potential multiphase gas near the X-ray centroid, we performed an IRAM 30m observation of Abell 1142, using the wobbler-switching (WSW) mode to obtain a ripple-free baseline and targeting on both CO(1-0) and CO(2-1) transitions. We observed a few positions mainly within the orphan core, and we did not detect any CO emission line with a clean baseline. A new issue was raised a few days before the observation. The maximum throw had been limited to 30 arcsec, with no fix on the horizon. If the emission is extended, the signal would be reduced due to the limited maximum throw. 
Based on the WSW observation at its X-ray peak (black annulus in Figure 2), we set \(3\sigma\) upper limits of 1.1 Jy km/s and 1.7 Jy km/s on the CO(1-0) and CO(2-1) emission lines, respectively, at a velocity resolution of 300 km/s.

\begin{table} \begin{tabular}{c c c c c} \hline Instrument & Observation & Date & Effective Exposure & PI \\ \hline \hline XMM-Newton EPIC & 0782330101 & Jun 2016 & 37 (MOS), 20 (pn) ksec & Y. Su \\ XMM-Newton EPIC & 0782330301 & Jun 2016 & 100 (MOS), 82 (pn) ksec & Y. Su \\ \hline IRAM 30-m & E01-22 & Feb 2023 & 11 hours & Y. Su \\ \hline \end{tabular} \end{table} Table 1: List of _XMM-Newton_ and IRAM-30m observations presented in this paper

We calculate the upper limit on the CO luminosity from the standard conversion of \[L_{\rm CO}^{\prime}=3.25\times 10^{7}S_{\rm CO}\Delta v\,\nu_{\rm obs}^{-2}D_{L}^{2}(1+z)^{-3}\,\rm K\,km\,s^{-1}\,pc^{2} \tag{1}\] where \(S_{\rm CO}\Delta v\) is the CO velocity integrated flux for an emission line in Jy km s\({}^{-1}\), \(\nu_{\rm obs}\) is the observed frequency in GHz, and \(D_{L}\) is the luminosity distance in Mpc. The molecular gas mass in H\({}_{2}\) is estimated from \(M_{\rm H_{2}}[\rm M_{\odot}]=\alpha_{\rm CO}\,L_{\rm CO}^{\prime}\) [\(\rm K\,km\,s^{-1}\,pc^{2}\)], where we choose \(\alpha_{\rm CO}=4.6\,\rm M_{\odot}/(\rm K\,km\,s^{-1}\,pc^{2})\) as in our Galaxy (Solomon & Barrett, 1991). This CO-H2 conversion only applies to CO(1-0). A factor of \(R_{1J}\) must be added for the CO(J, J-1) transitions and in this case we take \(R_{12}=1.2\) for CO(2-1) (Tacconi et al., 2018). We obtain \(3\sigma\) upper limits on \(M_{\rm H_{2}}\) of \(2.8\times 10^{8}\,\rm M_{\odot}\) and \(1.4\times 10^{8}\,\rm M_{\odot}\) based on CO(1-0) and CO(2-1) measurements, respectively.

## 4 Discussions

### Enrichment process of the ICM

Extensive XMM-Newton observations of the centers of cool core clusters, i.e., the CHEmical Enrichment RGS cluster Sample (CHEERS) catalog of X-ray bright galaxy groups, clusters and elliptical galaxies, have revealed that the ratios of multiple elements relative to Fe (X/Fe) are consistent with the chemical composition of our solar system, and in excellent agreement with the measurement of the center of the Perseus cluster using the microcalorimeter on board Hitomi (Mernier et al., 2018). The abundance ratios outside cluster cores have been observed with Suzaku for only a few clusters (Sarkar et al., 2022; Simionescu et al., 2015). However, unlike cluster centers, agreement on abundance ratios of cluster outskirts has not been reached. Simionescu et al. (2015) have presented the results of a Suzaku key project on the nearest galaxy cluster, Virgo. Abundance ratios of Si/Fe, Mg/Fe, and S/Fe out to its virial radius have been reported, which are generally consistent with the Solar value. Recently, Sarkar et al. (2022) presented the O/Fe, Si/Fe, Mg/Fe, and S/Fe ratios for 4 low mass clusters out to their virial radii. The \(\alpha\)/Fe ratios are found to be consistent with Solar abundances at their centers, but increase outside the cluster centers and reach \(2\times\) the Solar value. Interestingly, this latter measurement, with \(\alpha\)/Fe profiles increasing with radius, is fully consistent with the prediction of the IllustrisTNG cosmological simulation (Sarkar et al., 2022). If \(\alpha\)/Fe is found to be uniform in the ICM, from the centers to the outskirts, it would imply that the metals throughout the ICM have the same origin, likely from an epoch before the gravitational collapse of the clusters.
If the compositions of the cluster outskirts deviate from those of the centers, it favors different enrichment channels. For example, the cluster outskirts may have been enriched at an earlier epoch (SNcc dominated), while the centers may have been enriched later on by the stellar mass loss of the BCG (SNIa dominated), as predicted by IllustrisTNG. In this study, we performed abundance measurement for a cluster center, for the first time, without the immediate influence of the innermost atmosphere of the BCG - minimizing the contribution of BCG to the metallicity of the cluster centers. We cannot individually constrain X/Fe for each element. Instead we linked all the elements that are mainly synthesized in core collapse supernovae. The best-fit \(\alpha\)/Fe ratio of about the Solar value as well as an abundance peak of Fe of \(\sim 1\) solar for the BCG-less cool core in Abell 1142 are fully consistent with those found for typical cool cores with a BCG. This finding hints at a limited role of the BCG in enriching the cluster centers and producing a solar-like composition. A robust comparison of the composition of cluster centers and outskirts would be the key to understanding the enrichment process of the ICM. Both of these aforementioned Suzaku studies have sizable uncertainties for measuring cluster outskirts. Observations with future X-ray instruments such as the Line Emission Mapper X-ray Probe (LEM) (Kraft et al., 2022) may allow us to convincingly measure the abundance ratios of the cluster outskirts. ### The impact of mergers on metal-rich cool cores The dichotomy of cool core and non cool core clusters has been a lasting puzzle in the study of cluster formation and evolution. The Fe abundance peak in NCC clusters is not nearly as pronounced as in CC clusters, as an anticorrelation between entropy and metallicity has been observed at cluster centers (Leccardi et al., 2010). A cool core is considered a natural outcome of radiative cooling, and major mergers may have disrupted cluster cool cores and created NCC clusters (Su et al., 2020). This prevailing theory was challenged by a recent joint Chandra and SPT analysis of 67 galaxy clusters at \(0.3<z<1.3\), revealing that the fraction of CC clusters has not evolved over the past 9 Gyr (Ruppin et al., 2021). This would require CC clusters to be converted to NCC clusters by mergers, at exactly the same rate as NCC clusters transform to CC clusters by cooling, at every redshift interval, seriously questioning the role of mergers in destroying cluster cool cores (Olivares et al., 2022). In this study, we measure an Fe abundance peak of \(\sim 1.0\,\rm Z_{\odot}\) for the central \(r<25\,\rm kpc\) of the orphan cool core in Abell 1142. This cool core has clearly been through a recent major merger, as indicated by the bimodal velocity distribution of the member galaxies and the large offset between the X-ray peak and BCG. The prominent Fe abundance peak of the orphan cool core implies that the Fe peak of CC clusters may be able to survive some mergers. ### The lack of cold gas detection in the orphan cool core We did not detect cold molecular gas at the X-ray peak of Abell 1142 (black annulus in Figure 2). We derive the cooling rate of \(1.2\,M_{\odot}\,\rm yr^{-1}\) for the orphan core, using \[M_{\rm cf}=\frac{2m_{\mu}m_{\rm p}L_{\rm X}}{5\,\rm kT}, \tag{2}\] where \(L_{\rm X}=3\times 10^{41}\,\rm erg/s\) is the bolometric luminosity measured with XMM-Newton and we assume a temperature of 1 keV. 
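The quoted numbers can be reproduced with a short back-of-envelope script. The sketch below evaluates Equation (2) for the stated \(L_{\rm X}\) and temperature, the projected travel time discussed in the next paragraph, and Equation (1) for the CO(1-0) flux limit of Section 3.2. The mean molecular weight \(\mu=0.6\), the CO(1-0) rest frequency of 115.271 GHz, and the cgs constants are standard assumptions not spelled out in the text.

```python
# Back-of-envelope check of the numbers quoted above (a sketch, not the authors' pipeline).
m_p, keV, Msun, yr, kpc = 1.673e-24, 1.602e-9, 1.989e33, 3.156e7, 3.086e21   # cgs
mu = 0.6                                                   # assumed mean molecular weight

# Equation (2): classical cooling rate for L_X = 3e41 erg/s and kT = 1 keV
L_X, kT = 3e41, 1.0 * keV
M_cf = 2 * mu * m_p * L_X / (5 * kT)                       # g/s
print(f"cooling rate ~ {M_cf * yr / Msun:.1f} Msun/yr")    # ~1.2 Msun/yr

# Time for the BCG, moving at 800 km/s, to drift 80 kpc in projection
t_Myr = 80 * kpc / 8.0e7 / yr / 1e6
print(f"detachment time ~ {t_Myr:.0f} Myr")                # ~100 Myr
print(f"accumulated mass ~ {M_cf * yr / Msun * t_Myr * 1e6:.1e} Msun")   # ~1.2e8 Msun

# Equation (1): CO(1-0) luminosity and H2 mass for the 3-sigma limit of 1.1 Jy km/s
z, D_L = 0.035, 153.9                                      # redshift, luminosity distance [Mpc]
nu_obs = 115.271 / (1 + z)                                 # assumed CO(1-0) rest frequency, GHz
L_CO = 3.25e7 * 1.1 * nu_obs**-2 * D_L**2 * (1 + z)**-3    # K km/s pc^2
print(f"M_H2 < {4.6 * L_CO:.1e} Msun")                     # ~2.8e8 Msun
```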
The BCG is at a redshift of 0.0338, which is 800 km/s offset from the median redshift of the member galaxies at 0.0365. We assume that the BCG travels at 800 km / s relative to the ICM of Abell 1142. Given that the BCG and the orphan cool core are offset by 80 kpc in projection, we infer that the orphan cool core may have been detached from the BCG for more than 100 Myr. It would accumulate at least \(1.2\times 10^{8}\,\rm M_{\odot}\) cold gas. The upper limit of \(1.4\times 10^{8}\,M_{\odot}\), obtained from the IRAM 30-m observation, indicates that runaway cooling does not occur in the orphan cool core, even in the absence of AGN feedback. Furthermore, even if cooling takes place as expected, the ionized warm gas may not be converted into cold molecular gas efficiently. This conversion ineffectiveness may be precisely due to the lack of a galaxy, as warm and cooling gas usually flows onto the galaxy to form molecular hydrogen, catalyzed by dust. A follow-up observation of the ionized hydrogen of Abell 1142 would be critical to resolve the scenarios. With low spatial resolution, Webb et al. (2017) detected \(1.1\times 10^{11}\) M\({}_{\odot}\) of molecular gas, in the cool core cluster SpARCS1049+56 at \(z=1.7\), which they interpreted to be associated with the X-ray peak. The molecular gas, in this case, could have arisen from massive runaway cooling of the nascent ICM (Hlavacek-Larrondo et al., 2020). However, the separation between the cool ICM and BCG is only 25 kpc in SpARCS1049+56, for which we expect even less time for cooling than Abell 1142. In fact, NOEMA observations of SpARCS1049+56 revealed, at high resolution, that at least 50% of the molecular gas is associated with the BCG companions, and the rest of the cold gas is in a tidal or ram-pressure tail (Castignani et al., 2020). Therefore, most of the cold gas is not associated with the X-ray peak, and cooling cannot be the dominant mechanism for the formation of its cold gas reservoir. It is interesting to compare the orphan cool core in Abell 1142 and the orphan cloud in Abell 1367 (Ge et al., 2021). Both reside in the nearby Universe and have similar X-ray luminosity (\(L_{\rm X,bol}=3\times 10^{41}\) erg/s). Multiphase gas from X-ray to molecular cold gas has been detected in the orphan cloud, while the orphan cool core is likely to be single phase. One possibility is that the cold gas of the orphan cloud, instead of being formed from cooling, is stripped from its host galaxy. The mixing between this stripped cold gas and the ambient hot ICM has led to the formation of warm ionized gas detected through H\(\alpha\) emission. Another key difference lies in their hot gas metallicity. The orphan cloud has a surprisingly pristine metallicity of 0.14 Z\({}_{\odot}\)(Ge et al., 2021), while that of the orphan cool core has a Solar abundance. The measured metallicity of the orphan cloud may have been biased low by its complicated temperature structure. ## 5 Conclusions In this work, we have presented a multi-wavelength study of the nearby cluster Abell 1142, featuring a cluster cool core that is not associated with any galaxy and offset from its BCG by 80 kpc. The composition and thermal properties of this orphan cool core can provide unique constraints on the enrichment and cooling processes in the ICM. Our findings obtained with XMM-Newton and IRAM 30-m observations are summarized below. * The central \(r<25\) kpc of the orphan cool core has a prominent Fe abundance peak of \(\sim 1\) solar abundance. 
Its \(\alpha\)/Fe ratio is fully consistent with the solar ratio. Both the metallicity and chemical composition of this orphan cool core (with a reduced impact from the innermost atmosphere of the BCG) are in excellent agreement with those observed for typical cluster cool cores. This result hints that the stellar mass loss of the BCG may have a limited contribution to the enrichment of cluster centers. However, the consistent abundance ratios within \(1\sigma\) error of the orphan cool core (\(0.95\pm 0.28\) Solar ratio) and the BCG ISM (\(0.65\pm 0.28\) Solar ratio) mean that the issue remains unresolved. * Abell 1142 has clearly been through a recent major merger that has disassociated the cool core from the BCG and created a bimodal velocity distribution of the member galaxies. The prominent Fe abundance peak, found to be associated with the orphan cool core, indicates that major mergers can not easily erase the Fe peak in cool core clusters. * The lack of CO(1-0) and CO(2-1) emission indicates that the orphan cool core is not multiphase. Assuming that it has been detached from the BCG for 100 Myr, we expect it to accumulate \(1\times 10^{8}\) M\({}_{\odot}\) molecular gas in H\({}_{2}\), which is comparable to the upper limit we obtained from the IRAM 30-m observations. An occurrence of runaway cooling in the absence of AGN feedback can be ruled out. The ionized warm gas, if there is any, may not be converted into cold molecular gas efficiently because of the lack of a galaxy. ## Acknowledgements The authors thank the anonymous reviewer for their helpful comments on the manuscript. Y.S. and V.O. acknowledge support by NSF grant 2107711, Chandra X-ray Observatory grants GO1-22126X, GO2-23120X, G01-22104X, NASA grants 80NSSC21K0714 and 80NSSC22K0856. GC acknowledges support from the grant ASI n.2018-23-HH.0. RJvW acknowledges support from the ERC Starting Grant ClusterWeb 804208. ## Data Availability The data underlying this article will be shared on reasonable request to the corresponding author.
2307.07937
Pseudo-Riemannian geodesic orbit nilmanifolds of signature $\boldsymbol{(n-2,2)}$
The geodesic orbit property is useful and interesting in itself, and it plays a key role in Riemannian geometry. It implies homogeneity and has important classes of Riemannian manifolds as special cases. Those classes include weakly symmetric Riemannian manifolds and naturally reductive Riemannian manifolds. The corresponding results for indefinite metric manifolds are much more delicate than in Riemannian signature, but in the last few years important corresponding structural results were proved for geodesic orbit Lorentz manifolds. Here we extend Riemannian and Lorentz results to trans-Lorentz nilmanifolds. Those are the geodesic orbit pseudo Riemannian manifolds $M = G/H$ of signature $(n-2,2)$ such that a nilpotent analytic subgroup of $G$ is transitive on $M$. For that we suppose that there is a reductive decomposition $\mathfrak{g} = \mathfrak{h} \oplus \mathfrak{n}$ (vector space direct sum) with $[\mathfrak{h},\mathfrak{n}] \subset \mathfrak{n}$ and $\mathfrak{n}$ nilpotent. When the metric is nondegenerate on $[\mathfrak{n},\mathfrak{n}]$ we show that $\mathfrak{n}$ is abelian or 2-step nilpotent. That is the same result as for geodesic orbit Riemannian and Lorentz nilmanifolds. When the metric is degenerate on $[\mathfrak{n},\mathfrak{n}]$ we show that $\mathfrak{n}$ is a double extension of a geodesic orbit nilmanifold of either Riemannian or Lorentz signature.
Zhiqi Chen, Yuri Nikolayevsky, Joseph A. Wolf, Shaoxiang Zhang
2023-07-16T04:05:14Z
http://arxiv.org/abs/2307.07937v1
# Pseudo-Riemannian geodesic orbit nilmanifolds of signature \((n-2,2)\) ###### Abstract. The geodesic orbit property is useful and interesting in itself, and it plays a key role in Riemannian geometry. It implies homogeneity and has important classes of Riemannian manifolds as special cases. Those classes include weakly symmetric Riemannian manifolds and naturally reductive Riemannian manifolds. The corresponding results for indefinite metric manifolds are much more delicate than in Riemannian signature, but in the last few years important corresponding structural results were proved for geodesic orbit Lorentz manifolds. Here we extend Riemannian and Lorentz results to trans-Lorentz nilmanifolds. Those are the geodesic orbit pseudo Riemannian manifolds \(M=G/H\) of signature \((n-2,2)\) such that a nilpotent analytic subgroup of \(G\) is transitive on \(M\). For that we suppose that there is a reductive decomposition \(\mathfrak{g}=\mathfrak{h}\oplus\mathfrak{n}\) (vector space direct sum) with \([\mathfrak{h},\mathfrak{n}]\subset\mathfrak{n}\) and \(\mathfrak{n}\) nilpotent. When the metric is nondegenerate on \([\mathfrak{n},\mathfrak{n}]\) we show that \(\mathfrak{n}\) is abelian or \(2\)-step nilpotent. That is the same result as for geodesic orbit Riemannian and Lorentz nilmanifolds. When the metric is degenerate on \([\mathfrak{n},\mathfrak{n}]\) we show that \(\mathfrak{n}\) is a double extension of a geodesic orbit nilmanifold of either Riemannian or Lorentz signature. Key words and phrases:pseudo Riemannian nilmanifold, geodesic orbit manifold 2020 Mathematics Subject Classification: 53C30, 53B30, 17B30 ZC was partially supported by NNSF of China (11931009 and 12131012) and Guangdong Basic and Applied Basic Research Foundation (2023A1515010001) YN was partially supported by ARC Discovery Grant DP210100951 JW was partially supported by a Simons Foundation grant. SZ was partially supported by the National Natural Science Foundation of China (No. 12201358), Natural Science Foundation of Shandong Province (No. ZR2021QA051). ## 1. Introduction In this paper, we study the \(GO\) condition for pseudo-Riemannian nilmanifolds \((N,ds^{2})\), relative to subgroups \(G\subset I(N)\) of the form \(G=N\rtimes H\), where \(H\) is an isotropy subgroup. Most of our results apply to the case where \((N,ds^{2})\) is a _trans-Lorentz manifold_, that is, the signature of \(ds^{2}\) is \((n-2,2)\), where \(n=\dim N\). Our results for \(G\)-\(GO\) manifolds \((M,ds^{2})=(G/H,ds^{2})\) require the coset space \(G/H\) to be reductive. In other words, they make use of an \(\operatorname{Ad}_{G}(H)\)-invariant decomposition \(\mathfrak{g}=\mathfrak{m}\oplus\mathfrak{h}\). Very few structural results are known for indefinite metric \(GO\) manifolds that are not reductive, and we always assume that \(G/H\) is reductive (see the discussion below). The \(GO\) condition for reductive spaces is well known: **Geodesic Lemma** [DK].: _Let \((M,ds^{2})=G/H\) be a reductive pseudo-Riemannian homogeneous space, with the corresponding reductive decomposition \(\mathfrak{g}=\mathfrak{h}\oplus\mathfrak{m}\). 
Then \(M\) is a \(G\)-geodesic orbit space if and only if, for any \(T\in\mathfrak{m}\), there exist \(A=A(T)\in\mathfrak{h}\) and \(k=k(T)\in\mathbb{R}\) such that if \(T^{\prime}\in\mathfrak{m}\) then_ \[\langle[T+A,T^{\prime}]_{\mathfrak{m}},T\rangle=k\langle T,T^{\prime}\rangle, \tag{1}\] _where \(\langle\cdot,\cdot\rangle\) denotes the inner product on \(\mathfrak{m}\) defined by \(ds^{2}\), and the subscript \({}_{\mathfrak{m}}\) in (1) means taking the \(\mathfrak{m}\)-component in \(\mathfrak{g}=\mathfrak{h}\oplus\mathfrak{m}\)._ Note that \(k(T)=0\) unless \(T\) is a null vector (substitute \(T^{\prime}=T\) in (1)). Recall that a pseudo-Riemannian _nilmanifold_ is a pseudo-Riemannian manifold admitting a transitive nilpotent Lie group of isometries. In the Riemannian case, the full isometry group of a nilmanifold \((N,ds^{2})\), where \(N\) is a transitive nilpotent group of isometries, is the semidirect product \(I(N)=N\rtimes H\), where \(H\) is the group of all isometric automorphisms of \((N,ds^{2})\) [W1, Theorem 4.2]. In other words, \(N\) is the nilradical of \(I(N)\). In the pseudo-Riemannian cases, \(I(N)\) might still contain \(N\rtimes H\) and yet be strictly larger. For indefinite metric signatures, a nilmanifold is not necessarily reductive as a coset space of \(I(N)\), and even when it is, \(N\) does not have to be a normal subgroup of \(I(N)\). Here the \(GO\) condition does not rescue us, for there exist \(4\)-dimensional Lorentz \(GO\) nilmanifolds that are reductive relative to \(I(N)\), but for which \(N\) is not an ideal in \(I(N)\) [dBO, Section 3]. Moreover, already in dimension \(4\) (the lowest dimension for homogeneous pseudo-Riemannian spaces \(G/H\) with \(H\) connected that are not reductive), every non-reductive space is a \(GO\) manifold when we make a correct choice of parameters [CFZ, Theorem 4.1]. These results explain (and motivate) our study of \(G\)-\(GO\) nilmanifolds \(G/H=(N\rtimes H)/H\), where \(N\) is nilpotent, and \(H\) is _the maximal_ connected group of isometric automorphisms of \((N,ds^{2})\) (although our results remain valid for a smaller subgroup \(H\)). Given a reductive \(G\)-\(GO\) trans-Lorentz nilmanifold \((G/H,ds^{2})\), where \(G=N\rtimes H\), with \(N\) nilpotent, and the corresponding reductive decomposition \(\mathfrak{g}=\mathfrak{h}\oplus\mathfrak{n}\) at the level of Lie algebras, we denote by \(\langle\cdot,\cdot\rangle\) the inner product on \(\mathfrak{n}\) induced by \(ds^{2}\), and by \(\langle\cdot,\cdot\rangle^{\prime}\) the restriction of \(\langle\cdot,\cdot\rangle\) to the derived algebra \(\mathfrak{n}^{\prime}=[\mathfrak{n},\mathfrak{n}]\). The structure of the paper is as follows. Section 2 contains the proof of the first main theorem: **Theorem 1**.: _Let \((M=G/H,ds^{2})\) be a connected trans-Lorentz \(G\)-geodesic orbit nilmanifold where \(G=N\rtimes H\), with \(N\) nilpotent. Let \(\langle\cdot,\cdot\rangle\) denote the inner product on \(\mathfrak{n}\) induced by \(ds^{2}\). If \(\langle\cdot,\cdot\rangle|_{\mathfrak{n}^{\prime}}\) is nondegenerate, then \(N\) is either abelian or \(2\)-step nilpotent._ _Remark 1_.: There are very many connected trans-Lorentz \(G\)-geodesic orbit nilmanifolds as in Theorem 1. They are real forms of the complexifications of Riemannian \(GO\) spaces. See [CW, Proposition 4.3 and Corollary 5.4] for the collection and [W2] for the fact that those real forms are \(GO\).
Theorem 1 extends the results of [Gor, Theorem 2.2] (for the Riemannian signature) and of [NW, Theorem 2] and [CWZ, Theorem 7] (for the Lorentz signature) to the trans-Lorentz case. Our proof of Theorem 1 is split into two parts, given in Subsections 2.1 and 2.2, depending on the signature of \(\langle\cdot,\cdot\rangle^{\prime}\). In Subsection 2.3 we give an example which shows that the results of Theorem 1 and [NW, Theorem 2] are "almost" tight in the sense of the signature: there is a \(G\)-\(GO\) nilmanifold of signature \((8,4)\), with Lorentz derived algebra, which is \(4\)-step nilpotent. In Section 3 we extend the result of [NW, Theorem 3] (for the Lorentz signature) to the trans-Lorentz (signature \((n-2,2)\)) setting. In Theorem 2, stated just below, we prove that if the restriction \(\langle\cdot,\cdot\rangle^{\prime}\) is _degenerate_ then \(\mathfrak{n}\) can be obtained by the _double extension_ procedure from a metric Lie algebra of either a Riemannian or Lorentz nilmanifold. The double extension construction (which is explained in Section 3) is a useful tool in pseudo-Riemannian homogeneous geometry, in particular in the theory of bi-invariant metrics (see the recent survey [Ova]) and in the context of \(GO\) nilmanifolds [NW, Section 4]. The precise result is **Theorem 2**.: _Let \((M=G/H,ds^{2})\) be a connected trans-Lorentz \(G\)-geodesic orbit nilmanifold where \(G=N\rtimes H\), with \(N\) nilpotent. Let \(\langle\cdot,\cdot\rangle\) denote the inner product on \(\mathfrak{n}\) induced by \(ds^{2}\). If \(\langle\cdot,\cdot\rangle|_{\mathfrak{n}^{\prime}}\) is degenerate, then \((\mathfrak{n},\langle\cdot,\cdot\rangle)\) is either a \(2\)-dimensional double extension of a metric Lie algebra corresponding to a Lorentz \(GO\) nilmanifold, or a \(4\)-dimensional double extension of a metric Lie algebra corresponding to a Riemannian \(GO\) nilmanifold._ _Remark 2_.: The Lorentz \(GO\) nilmanifold in Theorem 2 is geodesic orbit relative to the group \(G_{0}\), which is the semidirect product of the group of parallel translations and pseudo-orthogonal automorphisms, as constructed in Lemma 6, and hence by the results of [NW] is either at most \(2\)-step nilpotent, or is itself obtained from a \(2\)-dimensional double extension of a metric Lie algebra of a Riemannian nilmanifold (which must be at most 2-step nilpotent by [Gor, Theorem 2.2]). As the composition of two repeated 2-dimensional double extensions is equivalent to a single 4-dimensional double extension, we deduce that in the assumptions of Theorem 2, the Lie algebra of the nilmanifold \(M\) is obtained either by a 2-dimensional double extension of a Lorentz Lie algebra \(\mathfrak{m}_{0}\) or by a 4-dimensional double extension of a Riemannian Lie algebra \(\mathfrak{m}_{0}\), where in both cases, \(\mathfrak{m}_{0}\) is at most 2-step nilpotent. Note that even a 2-dimensional \(GO\) double extension of an abelian definite Lie algebra can be of an arbitrarily high step, as shown in [NW, Section 5]. \(\diamondsuit\) The authors have no competing interests to declare that are relevant to the content of this article. ## 2. Proof of Theorem 1: If \(ds^{2}|_{[\mathfrak{n},\mathfrak{n}]}\) is nondegenerate then \(\mathfrak{n}\) is either abelian or 2-step nilpotent Given a reductive homogeneous pseudo-Riemannian manifold \((G/H,ds^{2})\), where \(G=N\rtimes H\), with \(N\) nilpotent, we identify \(\mathfrak{n}=\operatorname{Lie}(N)\) with the tangent space to \(G/H\) at \(1N\).
Let \(\langle\cdot,\cdot\rangle\) be the inner product on \(\mathfrak{n}\) induced by \(ds^{2}\), and denote \(\mathfrak{n}^{\prime}=[\mathfrak{n},\mathfrak{n}]\). Assume that the restriction \(\langle\cdot,\cdot\rangle^{\prime}\) of the inner product \(\langle\cdot,\cdot\rangle\) to \(\mathfrak{n}^{\prime}\) is nondegenerate. Denote \(\mathfrak{v}=(\mathfrak{n}^{\prime})^{\perp}\); note that \(\mathfrak{n}\) is the direct orthogonal sum of \(\mathfrak{n}^{\prime}\) and \(\mathfrak{v}\), and both subspaces \(\mathfrak{n}^{\prime}\) and \(\mathfrak{v}\) of \(\mathfrak{n}\) are \(\operatorname{ad}_{\mathfrak{g}}(\mathfrak{h})\)-invariant. _Remark 3_.: Note that if \(V_{1}\) and \(V_{2}\) are \(\operatorname{ad}_{\mathfrak{g}}(\mathfrak{h})\)-invariant subspaces of \(\mathfrak{n}\), then each of the following subspaces is also \(\operatorname{ad}_{\mathfrak{g}}(\mathfrak{h})\)-invariant: \[V_{1}^{\perp},\quad V_{1}+V_{2},\quad V_{1}\cap V_{2},\quad[V_{1},V_{2}],\quad\{X\in\mathfrak{n}\,:\,[X,V_{1}]\subset V_{2}\}.\] In particular, the centraliser and the normaliser of an \(\operatorname{ad}_{\mathfrak{g}}(\mathfrak{h})\)-invariant subspace of \(\mathfrak{n}\) are themselves \(\operatorname{ad}_{\mathfrak{g}}(\mathfrak{h})\)-invariant. \(\diamondsuit\) Let \((G/H,ds^{2})\) be \(G\)-geodesic orbit. Following the first steps in the proof of [CWZ, Theorem 7] and of [NW, Theorem 1], we take \(T=X+Y\) and \(T^{\prime}=X^{\prime}+Y^{\prime}\), where \(X,X^{\prime}\in\mathfrak{n}^{\prime}\), \(Y,Y^{\prime}\in\mathfrak{v}\), and \(T\) is non-null in (1). Then \(k(T)=0\) and there exists \(A=A(X,Y)\in\mathfrak{h}\) such that \[\langle[A,X^{\prime}],X\rangle+\langle[A,Y^{\prime}],Y\rangle+\langle[X,X^{\prime}]+[Y,X^{\prime}]+[X,Y^{\prime}]+[Y,Y^{\prime}],X\rangle=0. \tag{2}\] Taking \(Y^{\prime}=Y,X^{\prime}=0\) we obtain, by continuity, \[\langle[Y,X],X\rangle=0,\quad\text{for all $Y\in\mathfrak{v}$, $X\in\mathfrak{n}^{\prime}$.} \tag{3}\] As \(\mathfrak{v}\) generates \(\mathfrak{n}\) it follows that \[\langle[T,X],X\rangle=0,\quad\text{for all $T\in\mathfrak{n}$, $X\in\mathfrak{n}^{\prime}$.} \tag{4}\] _Remark 4_.: Note that if \(\langle\cdot,\cdot\rangle^{\prime}\) is definite, equation (4) implies \([\mathfrak{n},\mathfrak{n}^{\prime}]=0\), and so \(\mathfrak{n}\) is at most \(2\)-step nilpotent, regardless of the signature of \(\langle\cdot,\cdot\rangle\). In the context of Theorem 1 we can therefore assume that \(\langle\cdot,\cdot\rangle^{\prime}\) is indefinite. Separating the \(X^{\prime}\)- and the \(Y^{\prime}\)-components in (2) and using (3) and (4), we find that for all \(X\in\mathfrak{n}^{\prime}\) and \(Y\in\mathfrak{v}\) with \(X+Y\) non-null, there exists \(A=A(X,Y)\in\mathfrak{h}\) such that for all \(X^{\prime}\in\mathfrak{n}^{\prime}\), \(Y^{\prime}\in\mathfrak{v}\), \[\langle[A,Y],Y^{\prime}\rangle=\langle[Y,Y^{\prime}],X\rangle, \tag{5}\] \[[A+Y,X]=0. \tag{6}\] Denote \(\mathfrak{s}:=\mathfrak{so}(\mathfrak{n}^{\prime},\langle\cdot,\cdot\rangle^{\prime})\subset\mathfrak{gl}(\mathfrak{n}^{\prime})\), the algebra of skew-symmetric endomorphisms relative to the restriction of \(\langle\cdot,\cdot\rangle\) to \(\mathfrak{n}^{\prime}\). By (4), \(\mathfrak{k}:=\operatorname{ad}_{\mathfrak{g}}(\mathfrak{n})|_{\mathfrak{n}^{\prime}}\) is a subalgebra of \(\mathfrak{s}\) consisting of nilpotent endomorphisms. In fact, the map \(\phi:\mathfrak{n}\to\mathfrak{k}\) defined by \(\phi(T)=\operatorname{ad}(T)|_{\mathfrak{n}^{\prime}}\) for \(T\in\mathfrak{n}\) is a Lie algebra homomorphism.
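In more detail, the inclusion \(\mathfrak{k}\subset\mathfrak{s}\) can be seen by polarizing (4): applying (4) to \(X+X^{\prime}\) and subtracting the identities for \(X\) and for \(X^{\prime}\) gives \[\langle[T,X],X^{\prime}\rangle+\langle[T,X^{\prime}],X\rangle=0\quad\text{for all }T\in\mathfrak{n},\ X,X^{\prime}\in\mathfrak{n}^{\prime},\] so every operator \(\operatorname{ad}(T)|_{\mathfrak{n}^{\prime}}\) is skew-symmetric with respect to \(\langle\cdot,\cdot\rangle^{\prime}\); it is nilpotent because \(\mathfrak{n}\) is a nilpotent Lie algebra and \(\mathfrak{n}^{\prime}\) is an ideal.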
Using Engel's Theorem, \(\mathfrak{k}\) is triangular. Thus it is conjugate by an inner automorphism [Mos, Theorem 2.1] to a subalgebra of the nilpotent part \(\mathfrak{u}\) of an Iwasawa decomposition \(\mathfrak{s}=\mathfrak{t}\oplus\mathfrak{a}\oplus\mathfrak{u}\). In the following we may (and do) assume \(\mathfrak{k}\subset\mathfrak{u}\). In view of Remark 4, to prove Theorem 1 we need to consider two cases: when the restrictions of \(\langle\cdot,\cdot\rangle\) to both \(\mathfrak{n}^{\prime}\) and \(\mathfrak{v}\) are Lorentz, and when the restriction of \(\langle\cdot,\cdot\rangle\) to \(\mathfrak{n}^{\prime}\) is trans-Lorentz and the restriction to \(\mathfrak{v}\) is definite. We consider these two cases separately in the following two subsections. The proof of Theorem 1 will follow from Propositions 1 and 2 below. ### Both \(\mathfrak{n}^{\prime}\) and \(\mathfrak{v}\) are Lorentz In this subsection we additionally assume, in the assumptions of Theorem 1, that the restrictions of \(\langle\cdot,\cdot\rangle\) to both \(\mathfrak{n}^{\prime}\) and \(\mathfrak{v}\) are Lorentz. We prove the following. **Proposition 1**.: _Let \((M=G/H,ds^{2})\) be a connected pseudo-Riemannian \(G\)-geodesic orbit nilmanifold where \(G=N\rtimes H\) with \(N\) nilpotent. If the restrictions of \(\langle\cdot,\cdot\rangle\) to both \(\mathfrak{n}^{\prime}\) and \(\mathfrak{v}\) are of Lorentz signature, then \(N\) is either abelian or \(2\)-step nilpotent._ Proof.: Denote \(m=\dim\mathfrak{n}^{\prime}\) (note that \(m\geq 2\)). We adopt the notation and will use the facts stated at the start of this section. The subalgebra \(\mathfrak{s}=\mathfrak{so}(\mathfrak{n}^{\prime},\langle\cdot,\cdot\rangle^{\prime})\subset\mathfrak{gl}(\mathfrak{n}^{\prime})\) of skew-symmetric endomorphisms of \(\langle\cdot,\cdot\rangle^{\prime}\) is isomorphic to \(\mathfrak{so}(m-1,1)\). We can choose a basis \(\{e_{1},\ldots,e_{m}\}\) for \(\mathfrak{n}^{\prime}\) relative to which the restriction of \(\langle\cdot,\cdot\rangle\) to \(\mathfrak{n}^{\prime}\) and the nilpotent part \(\mathfrak{u}\) of the Iwasawa decomposition of \(\mathfrak{s}\) are given by the following matrices: \[\langle\cdot,\cdot\rangle|_{\mathfrak{n}^{\prime}}=\left(\begin{smallmatrix}0&0&1\\ 0&I_{m-2}&0\\ 1&0&0\end{smallmatrix}\right)\text{ and }\mathfrak{u}=\left\{\left(\begin{smallmatrix}0&0&0\\ u&0_{m-2}&0\\ 0&-u^{t}&0\end{smallmatrix}\right)\,:\,u\in\mathbb{R}^{m-2}\right\}. \tag{7}\] As \(\mathfrak{k}\subset\mathfrak{u}\), we obtain a linear map \(\Phi:\mathfrak{n}\to\operatorname{Span}(e_{2},\dots,e_{m-1})\) such that, for all \(T\in\mathfrak{n}\), \[[T,e_{1}]=\Phi T,\ [T,e_{m}]=0,\text{ and }[T,e_{i}]=-\langle\Phi T,e_{i}\rangle e_{m}\text{ for }2\leq i\leq m-1. \tag{8}\] As \(\mathfrak{u}\) (and hence \(\mathfrak{k}\)) is abelian, we obtain \([[\mathfrak{v},\mathfrak{v}],\mathfrak{n}^{\prime}]=0\). Since \(\mathfrak{v}\) generates \(\mathfrak{n}\), we obtain \([[\mathfrak{n},\mathfrak{n}],\mathfrak{n}^{\prime}]=0\), which implies that \(\mathfrak{n}^{\prime}\) is abelian. Introduce the \(2\)-forms \(\omega_{i}\in\Lambda^{2}(\mathfrak{n})\) by \[[T_{1},T_{2}]=\sum\nolimits_{i=1}^{m}\omega_{i}(T_{1},T_{2})e_{i}\text{ for }T_{1},T_{2}\in\mathfrak{n}. \tag{9}\] From (7) and (8) we have \[\omega_{1}(T_{1},T_{2})=\langle[T_{1},T_{2}],e_{m}\rangle,\quad\text{and}\quad\omega_{1}(\mathfrak{n},\mathfrak{n}^{\prime})=0.
\tag{10}\] As \(\omega_{1}\) cannot be zero (since \(e_{1}\in\mathfrak{n}^{\prime}\)) we obtain \(\omega_{1}(\mathfrak{v},\mathfrak{v})\neq 0\). Using (8) and (9), the Jacobi identity gives \[\sigma\Big{(}\omega_{1}(T_{1},T_{2})\Phi T_{3}+\sum\nolimits_{i=2}^{m-1}\omega_{i}(T_{1},T_{2})\langle\Phi T_{3},e_{i}\rangle e_{m}\Big{)}=0, \tag{11}\] where \(\sigma\) denotes the cyclic permutation of \(T_{1},T_{2},T_{3}\in\mathfrak{n}\). Consider the following two subspaces of \(\mathfrak{n}\): \[\mathfrak{c}=\operatorname{Ker}\Phi(=\{T\in\mathfrak{n}\,:\,[T,\mathfrak{n}^{\prime}]=0\}),\qquad\mathfrak{q}=\{T\in\mathfrak{n}\,:\omega_{1}(T,\mathfrak{n})(=\langle[T,\mathfrak{n}],e_{m}\rangle)=0\} \tag{12}\] (the fact that \(\operatorname{Ker}\Phi\) is the centraliser of \(\mathfrak{n}^{\prime}\) in \(\mathfrak{n}\) follows from (8)). By [NW, Theorem 1(b)] we can (and will) assume that \(\mathfrak{c}\) is degenerate. Furthermore, we can assume that \(\mathfrak{c}\neq\mathfrak{n}\) (equivalently, \(\Phi\neq 0\)), as otherwise the algebra \(\mathfrak{n}\) is \(2\)-step nilpotent by (8). In these notations and assumptions, we have the following. **Lemma 1**.: * (a) \(\mathfrak{n}^{\prime}\subset\mathfrak{q}\subset\mathfrak{c}\)_, and both_ \(\mathfrak{q}\) _and_ \(\mathfrak{c}\) _are_ \(\operatorname{ad}_{\mathfrak{g}}(\mathfrak{h})\)_-invariant ideals of_ \(\mathfrak{n}\)_. Moreover,_ \([\mathfrak{c},\mathfrak{c}]\subset\mathfrak{z}(\mathfrak{n})\cap\mathfrak{n}^{\prime}\)_._ * (b) \(\operatorname{codim}\mathfrak{q}=2\) _and_ \(\operatorname{codim}\mathfrak{c}\in\{1,2\}\) _(equivalently,_ \(\operatorname{rk}\Phi\in\{1,2\}\))_._ * (c) _Let_ \(e\in\mathfrak{c}\cap\mathfrak{c}^{\perp}\) _be a nonzero_ (_necessarily null_) _vector. Then_ \(e\in\mathfrak{v}\)_, the line_ \(\mathbb{R}e\) _is_ \(\operatorname{ad}_{\mathfrak{g}}(\mathfrak{h})\)_-invariant and_ \([e,e^{\perp}]=0\)_._ * (d) _Let_ \(f\in\mathfrak{v}\) _be a null vector such that_ \(f\notin e^{\perp}\) _and_ \(\langle f,e\rangle=1\)_. Then_ \([\mathfrak{h},[f,e]]=0\)_._ * (e) \([f,e]\in\mathfrak{z}(\mathfrak{n})\cap\mathfrak{n}^{\prime}\) _and_ \(e\in\mathfrak{q}\)_._ Proof.: (a) The fact that \(\mathfrak{n}^{\prime}\subset\mathfrak{q}\) follows from (10). Furthermore, taking \(T_{1}\in\mathfrak{q}\) in (11) we obtain \(\omega_{1}(T_{2},T_{3})\Phi(T_{1})=0\) which implies \(\mathfrak{q}\subset\mathfrak{c}\). As both \(\mathfrak{q}\) and \(\mathfrak{c}\) contain \(\mathfrak{n}^{\prime}\), they are ideals of \(\mathfrak{n}\). The fact that \(\mathfrak{c}\) is \(\operatorname{ad}_{\mathfrak{g}}(\mathfrak{h})\)-invariant follows from Remark 3. Moreover, by Remark 3, \(\mathfrak{q}\) is \(\operatorname{ad}_{\mathfrak{g}}(\mathfrak{h})\)-invariant provided \(\mathbb{R}e_{m}\) is. To see the latter, we note that \([\mathfrak{v},\mathfrak{n}^{\prime}]\) is \(\operatorname{ad}_{\mathfrak{g}}(\mathfrak{h})\)-invariant. From (8) (and the fact that \(\Phi\neq 0\)), we have \(e_{m}\in[\mathfrak{v},\mathfrak{n}^{\prime}]\subset\operatorname{Span}(e_{2},\dots,e_{m})\). Then \([\mathfrak{v},\mathfrak{n}^{\prime}]\cap([\mathfrak{v},\mathfrak{n}^{\prime}])^{\perp}=\mathbb{R}e_{m}\) is \(\operatorname{ad}_{\mathfrak{g}}(\mathfrak{h})\)-invariant, by Remark 3. Finally, the fact that \([\mathfrak{c},\mathfrak{c}]\subset\mathfrak{z}(\mathfrak{n})\cap\mathfrak{n}^{\prime}\) follows from the Jacobi identity, as \([\mathfrak{c},\mathfrak{n}^{\prime}]=0\).
(b) If \(\operatorname{rk}\Phi\geq 3\), then for almost all triples \(T_{1},T_{2},T_{3}\in\mathfrak{n}\), the vectors \(\Phi T_{1},\Phi T_{2},\Phi T_{3}\in\operatorname{Span}(e_{2},\ldots,e_{m-1})\) are linearly independent, and so \(\omega_{1}=0\) by (11). This is a contradiction, so \(\operatorname{rk}\Phi\leq 2\). As \(\Phi\neq 0\) we obtain \(\operatorname{codim}\mathfrak{c}\,(=\operatorname{rk}\Phi)\in\{1,2\}\). Now as \(\mathfrak{q}\subset\mathfrak{c}\) by (a) and as \(\mathfrak{q}\) is the null space of the skew-symmetric form \(\omega_{1}\), the codimension of \(\mathfrak{q}\) must be a positive even number. Furthermore, from (a) we have \(\omega_{1}(\mathfrak{c},\mathfrak{c})=0\). If \(\operatorname{codim}\mathfrak{c}=1\), this implies \(\operatorname{codim}\mathfrak{q}=2\). If \(\operatorname{rk}\Phi=2\), we take \(T_{1},T_{2}\in\mathfrak{n}\) in (11) such that the vectors \(\Phi T_{1},\Phi T_{2}\in\operatorname{Span}(e_{2},\ldots,e_{m-1})\) are linearly independent and take \(T_{3}\in\mathfrak{c}\). We obtain \(\omega_{1}(T_{1},\mathfrak{c})=\omega_{1}(T_{2},\mathfrak{c})=0\). As \(\operatorname{Span}(T_{1},T_{2})\oplus\mathfrak{c}=\mathfrak{n}\) we get \(\omega_{1}(\mathfrak{n},\mathfrak{c})=0\), and so \(\mathfrak{c}\subset\mathfrak{q}\) by (12) which implies \(\mathfrak{c}=\mathfrak{q}\) by (a). (c) As \(\mathfrak{c}\) is degenerate, the space \(\mathfrak{c}\cap\mathfrak{c}^{\perp}\) has dimension \(1\) and is spanned by a (nonzero) null vector \(e\). As \(\mathfrak{n}^{\prime}\subset\mathfrak{c}\) by (a) we obtain \(e\in\mathfrak{v}\). The subspace \(\mathbb{R}e\) is \(\operatorname{ad}_{\mathfrak{g}}(\mathfrak{h})\)-invariant as \(\mathfrak{c}\) is (and by Remark 3). Then (5) with \(Y=e\), \(Y^{\prime}\in e^{\perp}\cap\mathfrak{v}\) implies that \(\langle[e,e^{\perp}\cap\mathfrak{v}],X\rangle=0\) for all \(X\in\mathfrak{n}^{\prime}\) such that \(e+X\) is non-null. This gives \([e,e^{\perp}\cap\mathfrak{v}]=0\). As \(e\in\mathfrak{c}\) we have \([e,\mathfrak{n}^{\prime}]=0\), and so \([e,e^{\perp}]=0\). (d) Choose \(f\in\mathfrak{v}\) to be a null vector such that \(f\notin\mathfrak{c}\) and \(\langle f,e\rangle=1\) (this choice is not unique). Let \(A\in\mathfrak{h}\). By (c), \(\mathbb{R}e\) is \(\operatorname{ad}_{\mathfrak{g}}(\mathfrak{h})\)-invariant, and so \([A,e]=ae\), for some \(a\in\mathbb{R}\), and so \(\langle[A,f],e\rangle=-a\). As \(e^{\perp}\oplus\mathbb{R}f=\mathfrak{n}\), we have \([A,f]+af\in e^{\perp}\). Then \([[A,f],e]=-a[f,e]\) since \([e,e^{\perp}]=0\) by (c). As \([f,[A,e]]=a[f,e]\), the claim follows. (e) Choose \(f\) as in (d). Then \(e^{\perp}\oplus\mathbb{R}f=\mathfrak{n}\), and so from (c) we have \([e,\mathfrak{n}]\subset\mathbb{R}[f,e]\). Then from (d) and (6), with \(X=[f,e]\), we obtain \([Y,[f,e]]=0\), for all \(Y\in\mathfrak{v}\) such that \(Y+[f,e]\) is non-null, and hence for all \(Y\in\mathfrak{v}\). As \(\mathfrak{n}^{\prime}\) is abelian, we obtain \([f,e]\in\mathfrak{z}(\mathfrak{n})\cap\mathfrak{n}^{\prime}\). But from (8), \(\mathfrak{z}(\mathfrak{n})\cap\mathfrak{n}^{\prime}\subset\operatorname{Span}(e _{2},\ldots,e_{m})\) (as \(\Phi\neq 0\)), and so \(\langle[f,e],e_{m}\rangle=0\) by (7). So \(\langle[\mathfrak{n},e],e_{m}\rangle=0\) and the claim follows by (12). Now choose \(e\in\mathfrak{c}\cap\mathfrak{c}^{\perp}\) as in Lemma 1(c) and choose \(f\in\mathfrak{v}\setminus e^{\perp}\) as in Lemma 1(d). 
By Lemma 1(a) we have \(\mathfrak{n}^{\prime}\subset\mathfrak{q}\subset\mathfrak{c}\subset e^{\perp}\), and by Lemma 1(b), \(\operatorname{codim}\mathfrak{q}=2\) (and then either \(\mathfrak{c}=\mathfrak{q}\) or \(\mathfrak{c}=e^{\perp}\)). As \(f\) is not contained in \(e^{\perp}\), and hence not in \(\mathfrak{c}\), we have \(\Phi f\neq 0\) by (8). Without loss of generality (scaling \(f\) and \(e\) and specifying the orthonormal basis \(\{e_{2},\ldots,e_{m-1}\}\)) we can assume that \(\Phi f=e_{2}\), and so by (8), \[[f,e_{1}]=e_{2},\quad[f,e_{2}]=-e_{m},\quad[f,e_{i}]=0\text{ for }i>2. \tag{13}\] Note that with this choice of the basis, \(\mathfrak{z}(\mathfrak{n})\cap\mathfrak{n}^{\prime}\subset\operatorname{Span}(e_{3},\ldots,e_{m})\). Moreover, from Lemma 1(e) (and Lemma 1(a)), the \(2\)-dimensional subspace \(\mathfrak{q}^{\perp}\) contains \(e\) and lies in \(\mathfrak{v}\). Thus we have \(\mathfrak{q}^{\perp}=\operatorname{Span}(e,Y_{0})\) for some \(Y_{0}\notin\mathfrak{q},\,Y_{0}\perp e\), with \(\langle Y_{0},Y_{0}\rangle\neq 0\). Note that \(e^{\perp}=\mathbb{R}Y_{0}\oplus\mathfrak{q}\), and so \(\mathbb{R}f\oplus\mathbb{R}Y_{0}\oplus\mathfrak{q}=\mathfrak{n}\). As \(\omega_{1}(\mathfrak{q},\mathfrak{n})=0\) and \(\omega_{1}\neq 0\), we must have \(\omega_{1}(f,Y_{0})\neq 0\). Denote \(\kappa_{i}=\omega_{i}(f,Y_{0})\), so that \([f,Y_{0}]=\sum_{i=1}^{m}\kappa_{i}e_{i}\) (by (9)), with \(\kappa_{1}\neq 0\). As \(Y_{0}\perp\mathfrak{q}\), from (5) with \(Y=Y_{0}\) and \(Y^{\prime}\in\mathfrak{q}\cap\mathfrak{v}\) we obtain \(\langle[Y_{0},Y^{\prime}],X\rangle=0\), for all \(Y^{\prime}\in\mathfrak{q}\cap\mathfrak{v}\) and all \(X\in\mathfrak{n}^{\prime}\) such that \(Y_{0}+X\) is non-null, which implies \([Y_{0},\mathfrak{q}\cap\mathfrak{v}]=0\). Moreover, as \(\mathfrak{q}^{\perp}\) is \(\operatorname{ad}_{\mathfrak{g}}(\mathfrak{h})\)-invariant, for any \(A\in\mathfrak{h}\) we have \([A,Y_{0}]\in\operatorname{Span}(e,Y_{0})\), and so \([A,Y_{0}]=\mu e\), for some \(\mu\in\mathbb{R}\), since \(Y_{0}\) is non-null and \(Y_{0}\perp e\). Then from (6) with \(X=[f,Y_{0}]\) we obtain \[[[A,f],Y_{0}]+[f,[A,Y_{0}]]+[Y,[f,Y_{0}]]=0, \tag{14}\] for all \(Y\in\mathfrak{v}\) such that \(Y+[f,Y_{0}]\) is non-null (note that here we have a particular \(A=A(X,Y)\in\mathfrak{h}\)). Let \([A,f]=af+bY_{0}+Y^{\prime}\), where \(Y^{\prime}\in\mathfrak{q}\cap\mathfrak{v}\). Then \([[A,f],Y_{0}]=a[f,Y_{0}]\) as \([Y_{0},\mathfrak{q}\cap\mathfrak{v}]=0\). Furthermore, \([f,[A,Y_{0}]]=\mu[f,e]\). By Lemma 1(e) we have \([f,e]\in\mathfrak{z}(\mathfrak{n})\cap\mathfrak{n}^{\prime}\), and so \([f,e]\in\operatorname{Span}(e_{3},\ldots,e_{m})\). Take \(Y=f+\lambda e\) in (14), where \(\lambda\in\mathbb{R}\) is chosen in such a way that \(Y+X=f+\lambda e+[f,Y_{0}]\) is non-null. As \(e\in\mathfrak{c}\) we have \([e,[f,Y_{0}]]=0\), and so we obtain \(a[f,Y_{0}]+[f,[f,Y_{0}]]\in\operatorname{Span}(e_{3},\ldots,e_{m})\). Substituting \([f,Y_{0}]=\sum_{i=1}^{m}\kappa_{i}e_{i}\) and using (13) we get \(a(\kappa_{1}e_{1}+\kappa_{2}e_{2})+\kappa_{1}e_{2}=0\) which implies \(\kappa_{1}=0\), a contradiction. ### Trans-Lorentz \(\mathfrak{n}^{\prime}\), definite \(\mathfrak{v}\) In this subsection we consider the last remaining case in the proof of Theorem 1. We prove the following. **Proposition 2**.: _Let \((M=G/H,ds^{2})\) be a connected pseudo-Riemannian \(G\)-geodesic orbit nilmanifold where \(G=N\rtimes H\) with \(N\) nilpotent.
Denote \(\mathfrak{n}^{\prime}=[\mathfrak{n},\mathfrak{n}]\) and \(\mathfrak{v}=(\mathfrak{n}^{\prime})^{\perp}\). If the restriction of \(\langle\cdot,\cdot\rangle\) to \(\mathfrak{n}^{\prime}\) is trans-Lorentz, and the restriction of \(\langle\cdot,\cdot\rangle\) to \(\mathfrak{v}\) is definite, then \(N\) is either abelian or \(2\)-step nilpotent._ Proof.: Denote \(m=\dim\mathfrak{n}^{\prime}\) (we can assume that \(m\geq 4\), as otherwise the claim follows from [NW, Theorem 1(b)]). We adopt the notation and will use the facts stated at the start of the section. The subalgebra \(\mathfrak{s}=\mathfrak{so}(\mathfrak{n}^{\prime},\langle\cdot,\cdot\rangle^{ \prime})\subset\mathfrak{gl}(\mathfrak{n}^{\prime})\) of skew-symmetric endomorphisms of \(\langle\cdot,\cdot\rangle^{\prime}\) is isomorphic to \(\mathfrak{so}(m-2,2)\). We can choose a basis \(\{e_{1},\ldots,e_{m}\}\) for \(\mathfrak{n}^{\prime}\) relative to which the restriction of \(\langle\cdot,\cdot\rangle\) to \(\mathfrak{n}^{\prime}\) and the nilpotent part \(\mathfrak{u}\) of the Iwasawa decomposition of \(\mathfrak{s}\) are given by the following matrices: \[\langle\cdot,\cdot\rangle|_{\mathfrak{n}^{\prime}}=\left(\begin{smallmatrix}0 _{2}&0&I_{2}\\ 0&I_{m-4}&0\\ I_{2}&0&0_{2}\end{smallmatrix}\right)\text{ and }\mathfrak{u}=\left\{\left(\begin{smallmatrix}0 &0&0&0&0\\ \alpha&0&0&0&0\\ u&v&0_{m-4}&0&0\\ 0&\beta&-u^{t}&0&-\alpha\\ -\beta&0&-v^{t}&0&0\end{smallmatrix}\right)\;:\;u,v\in\mathbb{R}^{m-4},\, \alpha,\beta\in\mathbb{R}\right\}. \tag{15}\] The homomorphism \(\phi:\mathfrak{n}\to\mathfrak{u}\) (given by \(\phi(T)=\operatorname{ad}(T)|_{\mathfrak{n}^{\prime}}\) for \(T\in\mathfrak{n}\)) defines linear maps \(U,V:\mathfrak{n}\to\mathbb{R}^{m-4}=\operatorname{Span}(e_{3},\ldots,e_{m-2})\) and vectors \(a,b\in\mathfrak{n}\) such that for \(T\in\mathfrak{n}\), the corresponding entries of the matrix \(\phi(T)\) in the notation of (15) are given by \(u=UT\), \(v=VT\), \(\alpha=\langle a,T\rangle\) and \(\beta=\langle b,T\rangle\). **Lemma 2**.: _We have \(\phi(\mathfrak{n}^{\prime})=0\), and so the subalgebra \(\mathfrak{n}^{\prime}\) is abelian._ Proof.: Clearly \(\phi(\mathfrak{n}^{\prime})\subset[\mathfrak{u},\mathfrak{u}]\). From (15), the subalgebra \([\mathfrak{u},\mathfrak{u}]\) is the subspace of elements of \(\mathfrak{u}\) (as given in (15)) with \(v=0\) and \(\alpha=0\), so that for \(X\in\mathfrak{n}^{\prime}\), we have \(VX=0\) and \(\langle a,X\rangle=0\). For \(X^{\prime}\in\mathfrak{n}\) we have \(0=[X,X^{\prime}]+[X^{\prime},X]=\phi(X)X^{\prime}+\phi(X^{\prime})X\) which gives \(x^{\prime}_{1}UX+x_{1}UX^{\prime}=0\) and \(x^{\prime}_{1}\langle b,X\rangle+x_{1}\langle b,X^{\prime}\rangle=0\), where \(x_{1}\) and \(x^{\prime}_{1}\) are the \(e_{1}\)-components of the vectors \(X\) and \(X^{\prime}\), respectively. This implies \(U\mathfrak{n}^{\prime}=0\) and \(\langle b,\mathfrak{n}^{\prime}\rangle=0\), so \(\phi(\mathfrak{n}^{\prime})=0\), as required. But \(\phi(T)X=[T,X]\) for \(T\in\mathfrak{n}\) and \(X\in\mathfrak{n}^{\prime}\), and the second claim follows. From Lemma 2 it follows that the subalgebra \(\mathfrak{k}=\phi(\mathfrak{n})\subset\mathfrak{u}\) is abelian. It is not hard to see using the root decomposition of \(\mathfrak{u}\) relative to the abelian subalgebra \(\mathfrak{a}\subset\mathfrak{s}\) in the Iwasawa decomposition (or to calculate directly), that the algebra \(\mathfrak{u}\) contains three different _maximal_ abelian subalgebras given below (in the notation of (15)): 1. 
\(\mathfrak{u}_{1}=\{Q\in\mathfrak{u}\,:\,v=0\}\). 2. \(\mathfrak{u}_{2}=\mathbb{R}\left(\begin{smallmatrix}0&0&0&0&0&0\\ 1&0&0&0&0&0\\ u_{1}&v_{1}&0&0&0&0\\ 0&0&0&0_{m-5}&0&0\\ 0&0&-u_{1}&0&-1\\ 0&0&-v_{1}&0&0&0\end{smallmatrix}\right)\oplus\left\{\left(\begin{smallmatrix} 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ w&0&0&0_{m-5}&0&0\\ 0&\beta&0&-w^{t}&0&0\\ -\beta&0&0&0&0&0\end{smallmatrix}\right)\,:\,w\in\mathbb{R}^{m-5},\,\beta\in \mathbb{R}\right\}\), where \(v_{1}\neq 0\) (up to specifying a basis in \(\mathbb{R}^{m-5}=\mathrm{Span}(e_{4},\ldots,e_{m-2})\)). 3. \(\mathfrak{u}_{3}\) is a maximal abelian subalgebra of the Heisenberg algebra \(\left\{\left(\begin{smallmatrix}0&0&0&0&0\\ 0&0&0&0&0\\ u&v&0_{m-4}&0&0\\ 0&\beta&-u^{t}&0&0\\ -\beta&0&-v^{t}&0&0\end{smallmatrix}\right)\,:\,u,v\in\mathbb{R}^{m-4},\, \beta\in\mathbb{R}\right\}\subset\mathfrak{u}\). We will consider these three cases separately, but following the same pattern. Let \(\mathfrak{c}=\mathrm{Ker}\,\phi\cap\mathfrak{v}\) be the centraliser of \(\mathfrak{n}^{\prime}\) in \(\mathfrak{v}\), and \(\mathfrak{c}^{\perp}\) be its orthogonal complement in \(\mathfrak{v}\). By Remark 3, both subspaces \(\mathfrak{c}\) and \(\mathfrak{c}^{\perp}\) are \(\mathrm{ad}_{\mathfrak{g}}(\mathfrak{h})\)-invariant. We will always assume that the subspace \(\mathfrak{c}^{\perp}\) is non-trivial (equivalently, \(\phi\neq 0\)), for otherwise \(\mathfrak{n}\) is at most \(2\)-step nilpotent. Denote \(\pi:\mathfrak{h}\to\mathfrak{so}(\mathfrak{c}^{\perp})\) the restriction of the representation of \(\mathfrak{h}\) to \(\mathfrak{c}^{\perp}\), so that \(\pi(A)Y=[A,Y]\) for \(A\in\mathfrak{h}\) and \(Y\in\mathfrak{c}^{\perp}\). **Lemma 3**.: 1. _If_ \(L\subset\mathfrak{c}^{\perp}\) _is an_ \(\mathrm{ad}_{\mathfrak{g}}(\mathfrak{h})\)_-invariant subspace, then_ \([L,L^{\perp}]=0\)_._ 2. _The subspace_ \(\mathfrak{c}^{\perp}\) _has no_ \(1\)_-dimensional_ \(\mathrm{ad}_{\mathfrak{g}}(\mathfrak{h})\)_-invariant subspaces._ 3. _If_ \(L\subset\mathfrak{c}^{\perp}\) _is a_ \(2\)_-dimensional_ \(\mathrm{ad}_{\mathfrak{g}}(\mathfrak{h})\)_-invariant subspace, then_ \([L,L]\subset\mathfrak{z}(\mathfrak{n})\)_, where_ \(\mathfrak{z}(\mathfrak{n})\) _is the centre of_ \(\mathfrak{n}\)_._ 4. _If the subalgebra_ \(\pi(\mathfrak{h})\subset\mathfrak{so}(\mathfrak{c}^{\perp})\) _is abelian, then_ \([\mathfrak{c}^{\perp},\mathfrak{c}^{\perp}]\subset\mathfrak{z}(\mathfrak{n})\)_._ Proof.: Assertion (a) follows from (5) if we take \(Y\in L\) and \(Y^{\prime}\in L^{\perp}\). For assertion (b), suppose that for a nonzero \(Y\in\mathfrak{c}^{\perp}\), the space \(\mathbb{R}\)\(Y\) is \(\mathrm{ad}_{\mathfrak{g}}(\mathfrak{h})\)-invariant. Then \([\mathfrak{h},Y]=0\), and so \([Y,\mathfrak{v}]=0\) by (6). As \(\mathfrak{v}\) generates \(\mathfrak{n}\), we obtain \([Y,\mathfrak{n}]=0\), and in particular, \(\phi(Y)=0\) contradicting the fact that \(Y\in\mathfrak{c}^{\perp}\setminus\{0\}\). For assertion (c), suppose that \(L=\mathrm{Span}(Y_{1},Y_{2})\subset\mathfrak{c}^{\perp}\) is \(\mathrm{ad}_{\mathfrak{g}}(\mathfrak{h})\)-invariant, with the vectors \(Y_{1}\) and \(Y_{2}\) being orthonormal. Then for any \(A\in\mathfrak{h}\), we obtain that \(AY_{1}\) is a multiple of \(Y_{2}\), and \(AY_{2}\) is a multiple of \(Y_{1}\). Hence \(AX=0\), where \(X=[Y_{1},Y_{2}]\), and so \([X,\mathfrak{v}]=0\), by (6). As \(\mathfrak{v}\) generates \(\mathfrak{n}\), the subspace \([L,L]=\mathbb{R}X\) lies in the centre of \(\mathfrak{n}\). 
For assertion (d), suppose that the subalgebra \(\pi(\mathfrak{h})\subset\mathfrak{so}(\mathfrak{c}^{\perp})\) is abelian. Then \(\mathfrak{c}^{\perp}\) is the direct orthogonal sum of \(\operatorname{ad}_{\mathfrak{g}}(\mathfrak{h})\)-invariant subspaces of dimension \(1\) or \(2\) each. But by assertion (b), there can be no \(1\)-dimensional subspaces, and then the claim follows from assertions (c) and (a). Introduce the \(2\)-forms \(\omega_{i}\in\Lambda^{2}(\mathfrak{v})\) by \[[Y_{1},Y_{2}]=\sum\nolimits_{i=1}^{m}\omega_{i}(Y_{1},Y_{2})e_{i}\text{ for }Y_{1},Y_{2}\in\mathfrak{v}. \tag{16}\] Then \[[Y_{3},[Y_{1},Y_{2}]]=\sum\nolimits_{i=1}^{m}\omega_{i}(Y_{1},Y_{2})\phi(Y_{3})e_{i}. \tag{17}\] From Lemma 2 (and the fact that \([\mathfrak{c},\mathfrak{n}^{\prime}]=0\)) we have \(\mathfrak{n}^{\prime}=[\mathfrak{v},\mathfrak{v}]+[\mathfrak{v},\mathfrak{n}^{\prime}]=[\mathfrak{v},\mathfrak{v}]+\phi(\mathfrak{c}^{\perp})\mathfrak{n}^{\prime}\). We now separately consider three cases for \(\phi(\mathfrak{n})\) as given above. Case (i): \(\phi(\mathfrak{n})\subset\mathfrak{u}_{1}\). Then \(\phi(\mathfrak{c}^{\perp})\mathfrak{n}^{\prime}\subset\operatorname{Span}(e_{2},\ldots,e_{m})\). As \(e_{1}\in\mathfrak{n}^{\prime}=[\mathfrak{v},\mathfrak{v}]+\phi(\mathfrak{c}^{\perp})\mathfrak{n}^{\prime}\), we obtain \(\omega_{1}(\mathfrak{v},\mathfrak{v})\neq 0\). The Jacobi identity gives \(\sigma(\omega_{1}(Y_{1},Y_{2})(\langle a,Y_{3}\rangle e_{2}+UY_{3}-\langle b,Y_{3}\rangle e_{m}))=0\), where \(\sigma\) denotes the cyclic permutation of \(Y_{1},Y_{2},Y_{3}\in\mathfrak{v}\), which can be written as \[\sigma(\omega_{1}(Y_{1},Y_{2})\phi(Y_{3}))=0. \tag{18}\] Taking \(Y_{1},Y_{2}\in\mathfrak{c}\) and \(Y_{3}\in\mathfrak{c}^{\perp}\) we obtain \(\omega_{1}(\mathfrak{c},\mathfrak{c})=0\). By Lemma 3(a) we have \([\mathfrak{c},\mathfrak{c}^{\perp}]=0\), and so by (16) we obtain that also \(\omega_{1}(\mathfrak{c},\mathfrak{c}^{\perp})=0\). As \(\omega_{1}(\mathfrak{v},\mathfrak{v})\neq 0\), we deduce that \(\omega_{1}(\mathfrak{c}^{\perp},\mathfrak{c}^{\perp})\neq 0\). Now if \(\operatorname{rk}\phi(=\dim\mathfrak{c}^{\perp})>2\), then the elements \(\phi(Y_{1})\), \(\phi(Y_{2})\) and \(\phi(Y_{3})\) are linearly independent for almost all triples of vectors \(Y_{1},Y_{2},Y_{3}\in\mathfrak{v}\), and so (18) implies \(\omega_{1}(\mathfrak{v},\mathfrak{v})=0\), a contradiction. By Lemma 3(b), we have \(\dim\mathfrak{c}^{\perp}>1\), and so the only remaining possibility is \(\dim\mathfrak{c}^{\perp}=2\). But then from Lemma 3(c) we obtain \([\mathfrak{c}^{\perp},\mathfrak{c}^{\perp}]\subset\mathfrak{z}(\mathfrak{n})\). Taking \(Y_{1},Y_{2},Y_{3}\in\mathfrak{c}^{\perp}\) in equation (17), we get \(\omega_{1}(\mathfrak{c}^{\perp},\mathfrak{c}^{\perp})=0\), a contradiction. Case (ii): \(\phi(\mathfrak{n})\subset\mathfrak{u}_{2}\). For \(Y\in\mathfrak{v}\), we have \[\phi(Y)=\left(\begin{smallmatrix}0&0&0&0&0&0\\ \langle a,Y\rangle&0&0&0&0&0\\ \lambda\langle a,Y\rangle&\mu\langle a,Y\rangle&0&0&0&0\\ WY&0&0&0_{m-5}&0&0\\ 0&\langle b,Y\rangle&-\lambda\langle a,Y\rangle&-(WY)^{t}&0&-\langle a,Y\rangle\\ -\langle b,Y\rangle&0&-\mu\langle a,Y\rangle&0&0&0\end{smallmatrix}\right), \tag{19}\] where \(\lambda,\mu\in\mathbb{R}\) and \(W:\mathfrak{v}\to\mathbb{R}^{m-5}=\operatorname{Span}(e_{4},\ldots,e_{m-2})\). We can assume that \(a\neq 0\) and \(\mu\neq 0\), for otherwise \(\phi(\mathfrak{n})\subset\mathfrak{u}_{1}\).
Arguing similarly to the previous case, we see that \(\phi(\mathfrak{c}^{\perp})\mathfrak{n}^{\prime}\subset\operatorname{Span}(e_ {2},\ldots,e_{m})\), and so we must have \(\omega_{1}(\mathfrak{v},\mathfrak{v})\neq 0\). From the Jacobi identity we obtain \[\sigma(\omega_{1}(Y_{1},Y_{2})(\langle a,Y_{3}\rangle e_{2}+WY_{3}))=0, \tag{20}\] where \(\sigma\) denotes the cyclic permutation of \(Y_{1},Y_{2},Y_{3}\in\mathfrak{v}\). From (20) with \(Y_{1},Y_{2}\in\mathfrak{c}\) and \(Y_{3}\in\mathfrak{c}^{\perp}\) we obtain \(\omega_{1}(\mathfrak{c},\mathfrak{c})=0\). Furthermore, we have \([\mathfrak{c},\mathfrak{c}^{\perp}]=0\) by Lemma 3(a), and so \(\omega_{1}(\mathfrak{c},\mathfrak{c}^{\perp})=0\) by (16). It follows that \(\omega_{1}(\mathfrak{c},\mathfrak{v})=0\), and so we must have \(\omega_{1}(\mathfrak{c}^{\perp},\mathfrak{c}^{\perp})\neq 0\), as \(\omega_{1}(\mathfrak{v},\mathfrak{v})\neq 0\). From the \(e_{2}\)-component of (20) we obtain \(\omega_{1}\wedge\alpha=0\), where \(\alpha\) is the \(1\)-form on \(\mathfrak{v}\) defined by \(\alpha(Y)=\langle a,Y\rangle\). By generalised Cartan's Lemma [1] we get \(\omega_{1}=\gamma\wedge\alpha\) for some \(1\)-form \(\gamma\in\mathfrak{v}^{*}\). As \(\omega_{1}\neq 0\), the \(1\)-form \(\gamma\) is not a multiple of \(\alpha\). Moreover, as \(\omega_{1}(\mathfrak{c},\mathfrak{v})=0\), both the vector \(a\) and the vector \(c\in\mathfrak{v}\) dual to \(\gamma\) lie in \(\mathfrak{c}^{\perp}\). Taking the inner product of (20) with \(e_{s}\), \(s=4,\ldots,m-2\), we find that \(W^{t}e_{s}\in\mathrm{Span}(a,c)\). Hence in the matrix \(\phi(Y)\) given in (19), for all \(Y\in(\mathrm{Span}(a,c))^{\perp}\cap\mathfrak{v}\), we have \(WY=\langle a,Y\rangle=0\). We first suppose that \(b\in\mathrm{Span}(a,c)\). Then \(\mathfrak{c}^{\perp}\subset\mathrm{Span}(a,c)\), and as \(\dim\mathfrak{c}^{\perp}>1\) by Lemma 3(b), we deduce that \(\dim\mathfrak{c}^{\perp}=2\) and hence, that \([\mathfrak{c}^{\perp},\mathfrak{c}^{\perp}]\) lies in the centre of \(\mathfrak{n}\), by Lemma 3(c). But now if we take \(Y_{1},Y_{2},Y_{3}\in\mathfrak{c}^{\perp}\) with \(\langle a,Y_{3}\rangle\neq 0\) in (17), then from the \(e_{2}\)-component we get \(\omega_{1}(\mathfrak{c}^{\perp},\mathfrak{c}^{\perp})=0\) which is a contradiction. We therefore suppose that \(b\notin\mathrm{Span}(a,c)\), and so \(\mathrm{Span}(a,b)\subset\mathfrak{c}^{\perp}\subset\mathrm{Span}(a,b,c)\) from (19). Note that the subspace \([\mathfrak{v},[\mathfrak{v},\mathfrak{n}^{\prime}]]\) is \(\mathrm{ad}_{\mathfrak{g}}(\mathfrak{h})\)-invariant by Remark 3. As \([Y,\mathfrak{n}^{\prime}]=\phi(Y)\mathfrak{n}^{\prime}\) for \(Y\in\mathfrak{v}\), equation (19) gives that the subspace \([\mathfrak{v},[\mathfrak{v},\mathfrak{n}^{\prime}]]\) lies in \(\mathrm{Span}(e_{3},\ldots,e_{m})\). Moreover, as \(\phi(Y)^{2}e_{3}=\mu\langle a,Y\rangle e_{m-1}\) and \(\phi(Y)^{2}e_{2}=-\mu\langle a,Y\rangle(\lambda\langle a,Y\rangle e_{m-1}+ \mu\langle a,Y\rangle e_{m})\), and as \(a\neq 0\) and \(\mu\neq 0\), we obtain that the subspace \([\mathfrak{v},[\mathfrak{v},\mathfrak{n}^{\prime}]]\) contains \(\mathrm{Span}(e_{m-1},e_{m})\). But then \([\mathfrak{v},[\mathfrak{v},\mathfrak{n}^{\prime}]]\cap([\mathfrak{v},[ \mathfrak{v},\mathfrak{n}^{\prime}]])^{\perp}=\mathrm{Span}(e_{m-1},e_{m})\), and so the subspace \(\mathrm{Span}(e_{m-1},e_{m})\) is \(\mathrm{ad}_{\mathfrak{g}}(\mathfrak{h})\)-invariant by Remark 3. 
Then the subspace \(\{Y\in\mathfrak{c}^{\perp}\,:\,[Y,\mathfrak{n}^{\prime}]\subset\mathrm{Span}(e _{m-1},e_{m})\}\) is also \(\mathrm{ad}_{\mathfrak{g}}(\mathfrak{h})\)-invariant by Remark 3. But the latter subspace is given by \(\{Y\in\mathfrak{c}^{\perp}\,:\,WY=0,\,\langle a,Y\rangle=0\}=\mathfrak{c}^{ \perp}\cap(\mathrm{Span}(a,c))^{\perp}=\mathbb{R}Y_{0}\), where \(Y_{0}\neq 0\) is the component of \(b\) orthogonal to \(\mathrm{Span}(a,c)\). This gives a \(1\)-dimensional \(\mathrm{ad}_{\mathfrak{g}}(\mathfrak{h})\)-invariant subspace of \(\mathfrak{c}^{\perp}\), in contradiction with Lemma 3(b). Case (iii): \(\phi(\mathfrak{n})\subset\mathfrak{u}_{3}\). This is the most involved case. For \(Y\in\mathfrak{v}\), we have \[\phi(Y)=\left(\begin{smallmatrix}0&0&0&0&0\\ 0&0&0&0&0\\ UY&VY&0&0_{m-4}&0&0\\ 0&\langle b,Y\rangle&-(UY)^{t}&0&0\\ -\langle b,Y\rangle&0&-(VY)^{t}&0&0\end{smallmatrix}\right), \tag{21}\] where \(b\in\mathfrak{v}\), and \(U,V:\mathfrak{v}\to\mathbb{R}^{m-4}=\mathrm{Span}(e_{3},\ldots,e_{m-2})\) are such that for all \(Y_{1},Y_{2}\in\mathfrak{v}\) we have \[\langle UY_{1},VY_{2}\rangle=\langle UY_{2},VY_{1}\rangle\quad\text{(equivalently, the matrix $U^{t}V$ is symmetric)}. \tag{22}\] The following lemma sorts out the "non-generic" cases. **Lemma 4**.: _Suppose \(\phi(\mathfrak{n})\not\subset\mathfrak{u}_{1}\) and \(\phi(\mathfrak{n})\not\subset\mathfrak{u}_{2}\)\((\)up to specifying the basic vectors \(e_{1},e_{2},e_{m-1},e_{m}\), but keeping the form of \(\langle\cdot,\cdot\rangle|_{\mathfrak{n}^{\prime}}\) given in (15)\()\). We have the following._ * _For almost all_ \(Y\in\mathfrak{v}\)_, the vectors_ \(UY\) _and_ \(VY\) _are linearly independent._ * _The subspace_ \(\mathrm{Span}(e_{m-1},e_{m})\) _is_ \(\mathrm{ad}_{\mathfrak{g}}(\mathfrak{h})\)_-invariant._ 3. \(\mathfrak{c}=\operatorname{Ker}U\cap\operatorname{Ker}V,\ \mathfrak{c}^{\perp}=( \operatorname{Ker}U\cap\operatorname{Ker}V)^{\perp}\)_, and so for almost all_ \(Y\in\mathfrak{c}^{\perp}\)_, the vectors_ \(UY\) _and_ \(VY\) _are linearly independent._ 4. _For almost all_ \(e\in\operatorname{Span}(e_{3},\ldots,e_{m-2})\)_, the_ \(1\)_-forms_ \(\xi_{e}\) _and_ \(\eta_{e}\) _on_ \(\mathfrak{v}\) _defined by_ \(\xi_{e}(Y)=\langle UY,e\rangle\) _and_ \(\eta_{e}(Y)=\langle VY,e\rangle\) _are linearly independent._ Proof.: We cannot have \(U=V=0\), as otherwise the subspace \(\mathfrak{c}^{\perp}\) is at most \(1\)-dimensional, in contradiction with Lemma 3(b). Furthermore, if \(U\) and \(V\) are proportional, we can specify the vectors \(e_{1},e_{2},e_{m-1},e_{m}\) (without changing the form of \(\langle\cdot,\cdot\rangle|_{\mathfrak{n}^{\prime}}\) given in (15)) in such a way that \(V=0\) hence obtaining \(\phi(\mathfrak{n})\subset\mathfrak{u}_{1}\). We can therefore assume that \(U\) and \(V\) are not proportional. We use the following well known fact. If \(F_{1},F_{2}:\mathbb{R}^{p}\to\mathbb{R}^{q}\) are linear maps such that \(\operatorname{rk}(F_{1}x,F_{2}x)\leq 1\), for all \(x\in\mathbb{R}^{p}\), then either \(F_{1}\) and \(F_{2}\) are proportional, or there exist \(w\in\mathbb{R}^{q}\) and \(\lambda_{1},\lambda_{2}\in(\mathbb{R}^{p})^{*}\) such that \(F_{1}x=\lambda_{1}(x)w\) and \(F_{2}x=\lambda_{2}(x)w\), for all \(x\in\mathbb{R}^{p}\). For assertion (a), we apply the above fact to \(U\) and \(V\). 
As we assume that \(U\) and \(V\) are not proportional, we obtain that \(UY=\lambda_{1}(Y)w,\ VY=\lambda_{2}(Y)w\) for non-proportional \(1\)-forms \(\lambda_{1},\lambda_{2}\in\mathfrak{v}^{*}\) and for some \(w\neq 0\). But this leads to a contradiction with (22). To prove assertion (b) we note that from assertion (a) and from (21) it follows that \(\operatorname{Span}(e_{m-1},e_{m})\subset[\mathfrak{v},\mathfrak{n}^{\prime} ]\subset\operatorname{Span}(e_{3},\ldots,e_{m})\), and so \([\mathfrak{v},\mathfrak{n}^{\prime}]\cap([\mathfrak{v},\mathfrak{n}^{\prime} ])^{\perp}=\operatorname{Span}(e_{m-1},e_{m})\). Hence the subspace \(\operatorname{Span}(e_{m-1},e_{m})\) is \(\operatorname{ad}_{\mathfrak{g}}(\mathfrak{h})\)-invariant by Remark 3. For assertion (c) we note that from (21) we have \(\mathfrak{c}^{\perp}=(\operatorname{Ker}U\cap\operatorname{Ker}V)^{\perp}+ \mathbb{R}b\). But if \(b\notin(\operatorname{Ker}U\cap\operatorname{Ker}V)^{\perp}\), then the subspace \(\{Y\in\mathfrak{c}^{\perp}\,:\,[Y,\mathfrak{n}^{\prime}]\subset\operatorname{ Span}(e_{m-1},e_{m})\}=\{Y\in\mathfrak{c}^{\perp}\,:\,UY=VY=0\}\) is \(1\)-dimensional and is \(\operatorname{ad}_{\mathfrak{g}}(\mathfrak{h})\)-invariant by assertion (b) and Remark 3, in contradiction with Lemma 3(b). It follows that \(b\in(\operatorname{Ker}U\cap\operatorname{Ker}V)^{\perp}\), and so \(\mathfrak{c}^{\perp}=(\operatorname{Ker}U\cap\operatorname{Ker}V)^{\perp}\) and \(\mathfrak{c}=\operatorname{Ker}U\cap\operatorname{Ker}V\), as required. Then assertion (a) implies that for almost all \(Y\in\mathfrak{c}^{\perp}\), we have \(\operatorname{rk}(UY|VY)=2\). For assertion (d), we apply the above linear-algebraic fact to the conjugates of \(U\) and \(V\). As we assume \(U\) and \(V\) to be not proportional, the condition that the \(1\)-forms \(\xi_{e}\) and \(\eta_{e}\) on \(\mathfrak{v}\) are linearly dependent for all \(e\in\operatorname{Span}(e_{3},\ldots,e_{m-2})\) would imply the existence of \(w\in\mathfrak{v}\setminus\{0\}\) and \(e,e^{\prime}\in\operatorname{Span}(e_{3},\ldots,e_{m-2})\) such that \(UY=\langle w,Y\rangle e\) and \(VY=\langle w,Y\rangle e^{\prime}\), for all \(Y\in\mathfrak{v}\). But then by assertion (c), \(\mathfrak{c}^{\perp}=\mathbb{R}w\) which contradicts Lemma 3(b). As Cases (i) and (ii) have been already understood, for the rest of the proof we will assume that the conditions of Lemma 4 are satisfied. As \(\phi(\mathfrak{v})\mathfrak{n}^{\prime}\subset\operatorname{Span}(e_{3},\ldots, e_{m})\), in order to have both \(e_{1}\) and \(e_{2}\) in \(\mathfrak{n}^{\prime}\), we need the \(2\)-forms \(\omega_{1},\omega_{2}\in\Lambda^{2}(\mathfrak{v})\) defined by (16) to be linearly independent. From the Jacobi identity we obtain \[\sigma(\omega_{1}(Y_{1},Y_{2})UY_{3}+\omega_{2}(Y_{1},Y_{2})VY_{3})=0, \tag{23}\] where \(\sigma\) denotes the cyclic permutation of \(Y_{1},Y_{2},Y_{3}\in\mathfrak{v}\). Taking \(Y_{1},Y_{2}\in\mathfrak{c}\) and \(Y_{3}\in\mathfrak{c}^{\perp}\) in such a way that \(\operatorname{rk}(UY_{3}|VY_{3})=2\) we obtain \(\omega_{1}(\mathfrak{c},\mathfrak{c})=\omega_{2}(\mathfrak{c},\mathfrak{c})=0\). As we also have \(\omega_{1}(\mathfrak{c},\mathfrak{c}^{\perp})=\omega_{2}(\mathfrak{c}, \mathfrak{c}^{\perp})=0\) by Lemma 3(a) and (16), we obtain \(\omega_{1}(\mathfrak{c},\mathfrak{v})=\omega_{2}(\mathfrak{c},\mathfrak{v})=0\), and hence the restrictions of \(\omega_{1}\) and \(\omega_{2}\) to \(\mathfrak{c}^{\perp}\) must be linearly independent. 
**Lemma 5**.: _In the assumptions of Lemma 4 we have the following._ * \(\dim\mathfrak{c}^{\perp}\leq 4\)_._ * _Introduce_ \(K_{1},K_{2}\in\mathfrak{so}(\mathfrak{c}^{\perp})\) _by_ \(\langle K_{i}Y,Y^{\prime}\rangle=\omega_{i}(Y,Y^{\prime})\) _for_ \(Y,Y^{\prime}\in\mathfrak{c}^{\perp}\) _and_ \(i=1,2\)_. Denote_ \(S=\operatorname{Span}(K_{1},K_{2})\subset\mathfrak{so}(\mathfrak{c}^{\perp})\)__(note that_ \(\dim S=2\))_. Then the subalgebra_ \(\pi(\mathfrak{h})\subset\mathfrak{so}(\mathfrak{c}^{\perp})\) _normalises the subspace_ \(S\)_._ Proof.: For assertion (a), consider the pencil \(\mu_{1}\omega_{1}+\mu_{2}\omega_{2}\subset\Lambda^{2}(\mathfrak{v})\), where \(\mu_{1},\mu_{2}\in\mathbb{R}\). Suppose at least one element of this pencil has rank greater than or equal to \(4\). Specifying the vectors \(e_{1},e_{2},e_{m-1},e_{m}\) we can assume, without loss of generality, that \(\operatorname{rk}\omega_{1}\geq 4\). Let \(\mathcal{U}\subset\operatorname{Span}(e_{3},\ldots,e_{m-2})\) be the subset of those vectors \(e\) for which the \(1\)-forms \(\xi_{e},\eta_{e}\in\mathfrak{v}^{*}\) are linearly independent. By Lemma 4(d), the subset \(\mathcal{U}\) is open and dense in \(\operatorname{Span}(e_{3},\ldots,e_{m-2})\). Taking the inner product of (23) with \(e\in\mathcal{U}\) we obtain \(\omega_{1}\wedge\xi_{e}+\omega_{2}\wedge\eta_{e}=0\). By generalised Cartan's Lemma [1], there exist \(1\)-forms \(\gamma_{11},\gamma_{12}=\gamma_{21},\gamma_{22}\in\mathfrak{v}^{*}\) such that \(\omega_{1}=\gamma_{11}\wedge\xi_{e}+\gamma_{12}\wedge\eta_{e}\) and \(\omega_{2}=\gamma_{21}\wedge\xi_{e}+\gamma_{22}\wedge\eta_{e}\). In particular, \(\operatorname{rk}\omega_{1}\leq 4\), and so \(\operatorname{rk}\omega_{1}=4\) by our assumption. Let \(L_{1}=\{Y\in\mathfrak{v}\,:\,i_{Y}(\omega_{1})=0\}\). Then \(L_{1}\) has codimension \(4\), and \(\xi_{e}(L_{1})=\eta_{e}(L_{1})=0\), for all \(e\in\mathcal{U}\), and hence for all \(e\in\mathfrak{v}\). It follows that \(UL_{1}=VL_{1}=0\), and so by Lemma 4(c), \(\mathfrak{c}^{\perp}=(\operatorname{Ker}U\cap\operatorname{Ker}V)^{\perp} \subset L_{1}^{\perp}\) which implies \(\dim\mathfrak{c}^{\perp}\leq 4\). Now suppose that \(\operatorname{rk}(\mu_{1}\omega_{1}+\mu_{2}\omega_{2})<4\), for all \(\mu_{1},\mu_{2}\in\mathbb{R}\). As the rank is always even and as \(\omega_{1}\) and \(\omega_{2}\) are linearly independent, we obtain \(\operatorname{rk}(\mu_{1}\omega_{1}+\mu_{2}\omega_{2})=2\), for all \((\mu_{1},\mu_{2})\in\mathbb{R}^{2}\setminus\{(0,0)\}\). Then it is easy to see that there exist three linearly independent \(1\)-forms \(\zeta_{1},\zeta_{2},\zeta_{3}\in\mathfrak{v}^{*}\) such that \(\omega_{1}=\zeta_{1}\wedge\zeta_{3}\) and \(\omega_{2}=\zeta_{2}\wedge\zeta_{3}\). From (23) we get \(\omega_{1}\wedge\xi_{e}+\omega_{2}\wedge\eta_{e}=0\), for all \(e\in\operatorname{Span}(e_{3},\ldots,e_{m-2})\) which gives \(\zeta_{1}\wedge\zeta_{3}\wedge\xi_{e}+\zeta_{2}\wedge\zeta_{3}\wedge\eta_{e}=0\). It follows that \(\zeta_{1}\wedge\zeta_{2}\wedge\zeta_{3}\wedge\xi_{e}=0\), and so \(\xi_{e}\in\operatorname{Span}(\zeta_{1},\zeta_{2},\zeta_{3})\), and similarly \(\eta_{e}\in\operatorname{Span}(\zeta_{1},\zeta_{2},\zeta_{3})\), for all \(e\in\operatorname{Span}(e_{3},\ldots,e_{m-2})\). But then the common kernel \(L_{2}\) of the \(1\)-forms \(\zeta_{1},\zeta_{2},\zeta_{3}\) has codimension \(3\) and lies in the kernel of both \(U\) and \(V\). 
It follows that \(\mathfrak{c}^{\perp}=(\operatorname{Ker}U\cap\operatorname{Ker}V)^{\perp} \subset L_{2}^{\perp}\) and so \(\dim\mathfrak{c}^{\perp}\leq 3\). For assertion (b), we first note that the subspace \(\operatorname{Span}(e_{3},\ldots,e_{m})=(\operatorname{Span}(e_{m-1},e_{m}))^{\perp}\) is \(\operatorname{ad}_{\mathfrak{g}}(\mathfrak{h})\)-invariant by Lemma 4(b) and Remark 3. Now let \(A\in\mathfrak{h}\) and \(Y,Y^{\prime}\in\mathfrak{c}^{\perp}\). Then by (9), the component of the vector \([AY,Y^{\prime}]+[Y,AY^{\prime}]\) lying in \(\operatorname{Span}(e_{1},e_{2})\) equals \((\omega_{1}(AY,Y^{\prime})+\omega_{1}(Y,AY^{\prime}))e_{1}+(\omega_{2}(AY,Y^{ \prime})+\omega_{2}(Y,AY^{\prime}))e_{2}=\langle[K_{1},\pi(A)]Y,Y^{\prime} \rangle e_{1}+\langle[K_{2},\pi(A)]Y,Y^{\prime}\rangle e_{2}\). As \(\operatorname{Span}(e_{3},\ldots,e_{m})\) is \(\operatorname{ad}_{\mathfrak{g}}(\mathfrak{h})\)-invariant, the component of the vector \(A[Y,Y^{\prime}]\) lying in \(\operatorname{Span}(e_{1},e_{2})\) equals the component of the vector \(A(\omega_{1}(Y,Y^{\prime})e_{1}+\omega_{2}(Y,Y^{\prime})e_{2})\) lying in \(\operatorname{Span}(e_{1},e_{2})\), which is \(\omega_{1}(Y,Y^{\prime})(A_{11}e_{1}+A_{12}e_{2})+\omega_{2}(Y,Y^{\prime})(A_{21} e_{1}+A_{22}e_{2})=\langle(A_{11}K_{1}+A_{21}K_{2})Y,Y^{\prime}\rangle e_{1}+ \langle(A_{12}K_{1}+A_{22}K_{2})Y,Y^{\prime}\rangle e_{2}\), where \(A_{ij}\) denotes the corresponding entry of the matrix of \((\operatorname{ad}(A))|_{\mathfrak{n}^{\prime}}\) relative to the basis \(\{e_{1},e_{2},\ldots,e_{m}\}\). We deduce that \([K_{1},\pi(A)],[K_{2},\pi(A)]\in\operatorname{Span}(K_{1},K_{2})\), as required. By Lemma 3(b) and Lemma 5(a), we have \(2\leq\dim\mathfrak{c}^{\perp}\leq 3\). Moreover, by Lemma 5(b), the subalgebra \(\pi(\mathfrak{h})\subset\mathfrak{so}(\mathfrak{c}^{\perp})\) normalises the \(2\)-dimensional subspace \(S=\operatorname{Span}(K_{1},K_{2})\!\subset\!\mathfrak{so}(\mathfrak{c}^{\perp})\). If \(\pi(\mathfrak{h})\) is abelian, then by Lemma 3(d), we obtain \([\mathfrak{c}^{\perp},\mathfrak{c}^{\perp}]\subset\mathfrak{z}(\mathfrak{n})\). Then taking \(Y_{1},Y_{2},Y_{3}\in\mathfrak{c}^{\perp}\) in (17) we get \(\sum_{i=1}^{m}\omega_{i}(Y_{1},Y_{2})\phi(Y_{3})e_{i}=0\) which by (21) gives \(\omega_{1}(Y_{1},Y_{2})UY_{3}+\omega_{2}(Y_{1},Y_{2})VY_{3}=0\). As by Lemma 4(c), the vectors \(UY_{3}\) and \(VY_{3}\) are linearly independent for almost all \(Y_{3}\in\mathfrak{c}^{\perp}\), we deduce that \(\omega_{1}(\mathfrak{c}^{\perp},\mathfrak{c}^{\perp})=\omega_{2}(\mathfrak{c}^{ \perp},\mathfrak{c}^{\perp})=0\), a contradiction. So the subalgebra \(\pi(\mathfrak{h})\subset\mathfrak{so}(\mathfrak{c}^{\perp})\) is non-abelian. It is easy to see that the only possible case when the normaliser of a two-dimensional subspace \(S\) of a subalgebra \(\mathfrak{so}(\mathfrak{c}^{\perp}),\ \dim\mathfrak{c}^{\perp}\in\{2,3,4\}\), is non-abelian is the following: \(\dim\mathfrak{c}^{\perp}=4\), so that \(\mathfrak{so}(\mathfrak{c}^{\perp})=\mathfrak{so}(4)=\mathfrak{so}(3)\oplus \mathfrak{so}(3)\) (direct sum of ideals), and then \(S\) lies in one of the two \(\mathfrak{so}(3)\)-components. 
In this last remaining case, denote \(\tilde{\omega}_{j}\in\Lambda^{2}(\mathfrak{c}^{\perp}),\ j=1,2\), the restriction of the \(2\)-form \(\omega_{j}\in\Lambda^{2}(\mathfrak{v})\) to \(\mathfrak{c}^{\perp}\), and for \(e\in\operatorname{Span}(e_{3},\ldots,e_{m-2})\), denote \(\tilde{\xi}_{e},\tilde{\eta}_{e}\in(\mathfrak{c}^{\perp})^{*}\) the restrictions of the \(1\)-forms \(\xi_{e},\eta_{e}\in\mathfrak{v}^{*}\), respectively (so that for \(Y\in\mathfrak{c}^{\perp}\) we have \(\tilde{\xi}_{e}(Y)=\langle UY,e\rangle\) and \(\tilde{\eta}_{e}(Y)=\langle VY,e\rangle\)). Restricting equation (23) to \(\mathfrak{c}^{\perp}\) (note that \(U\mathfrak{c}=V\mathfrak{c}=0\) and \(\omega_{1}(\mathfrak{c},\mathfrak{v})=\omega_{2}(\mathfrak{c},\mathfrak{v})=0\) anyway), we obtain \(\tilde{\omega}_{1}\wedge\tilde{\xi}_{e}+\tilde{\omega}_{2}\wedge\tilde{\eta} _{e}=0\), for all \(e\in\operatorname{Span}(e_{3},\ldots,e_{m-2})\). Applying the Hodge star operator (and noting that the dual vectors to \(\tilde{\xi}_{e}\) and \(\tilde{\eta}_{e}\) are \(U^{t}e,V^{t}e\in\mathfrak{c}^{\perp}\), respectively) gives \(i_{U^{t}e}(\star\tilde{\omega}_{1})+i_{V^{t}e}(\star\tilde{\omega}_{2})=0\). Let \(\star K_{j}\in\mathfrak{so}(\mathfrak{c}^{\perp}),\ j=1,2\), be defined by \(\langle(\star K_{j})Y,Y^{\prime}\rangle=\star\tilde{\omega}_{j}(Y,Y^{\prime})\) for \(Y,Y^{\prime}\in\mathfrak{c}^{\perp}\). Then from the latter equation we obtain \(\langle(\star K_{1})U^{t}e,Y\rangle+\langle(\star K_{2})V^{t}e,Y\rangle=0\), for all \(Y\in\mathfrak{c}^{\perp}\) and all \(e\in\operatorname{Span}(e_{3},\ldots,e_{m-2})\). This is equivalent to \[\tilde{U}(\star K_{1})+\tilde{V}(\star K_{2})=0, \tag{24}\] where \(\tilde{U}\) and \(\tilde{V}\) are the restrictions of \(U\) and \(V\) to \(\mathfrak{c}^{\perp}\), respectively. Note that \(K_{1}\) and \(K_{2}\) are linearly independent and belong to the same \(\mathfrak{so}(3)\)-component of the algebra \(\mathfrak{so}(\mathfrak{c}^{\perp})=\mathfrak{so}(3)\oplus\mathfrak{so}(3)\) (direct sum of ideals). As these components are \(\star\)-invariant, we obtain that \(\star K_{1}\) and \(\star K_{2}\) are also linearly independent and belong to the same \(\mathfrak{so}(3)\)-component. This implies that \((\star K_{2})^{2}=\mu\operatorname{Id}\) for some \(\mu<0\) and that \((\star K_{1})(\star K_{2})=\nu\operatorname{Id}+K_{3}\), where \(\nu\in\mathbb{R}\), and \(K_{3}\neq 0\) belongs to the same \(\mathfrak{so}(3)\)-component of \(\mathfrak{so}(\mathfrak{c}^{\perp})\) as \(\star K_{1}\) and \(\star K_{2}\). In particular, \(\det K_{3}\neq 0\). We now multiply (24) by \(\tilde{U}^{t}\) on the left and by \(\star K_{2}\) on the right. We get \(\tilde{U}^{t}\tilde{U}(\nu\operatorname{Id}+K_{3})+\mu\tilde{U}^{t}\tilde{V}=0\). But \(U^{t}V\) is symmetric by (22), and so \(\tilde{U}^{t}\tilde{V}\) is symmetric (as \(U\mathfrak{c}=V\mathfrak{c}=0\)) which implies that the \(4\times 4\) matrix \(\tilde{U}^{t}\tilde{U}K_{3}\) is also symmetric. Choosing a basis for \(\mathfrak{c}^{\perp}\) which diagonalises the semi-definite matrix \(\tilde{U}^{t}\tilde{U}\) we find that \(\tilde{U}^{t}\tilde{U}K_{3}\) can be symmetric only when \(\tilde{U}^{t}\tilde{U}K_{3}=0\). But as \(\det K_{3}\neq 0\), this implies \(\tilde{U}=0\), that is, \(U\mathfrak{c}^{\perp}=0\), which in combination with \(U\mathfrak{c}=0\) gives \(U=0\). This is a contradiction with Lemma 4(a) which completes the proof of the proposition and of Theorem 1. 
### Example

The following example shows that Theorem 1, concerning the \(2\)-step property of \(\mathfrak{n}\) for the case when the derived algebra \(\mathfrak{n}^{\prime}\) is nondegenerate, is "almost" tight in terms of the signature. We construct a nilpotent, metric Lie algebra \((\mathfrak{n},\langle\cdot,\cdot\rangle)\) with the following properties:

* \(\dim\mathfrak{n}=12,\ \dim\mathfrak{n}^{\prime}=4,\ \dim\mathfrak{v}=8\).
* \(\mathfrak{n}^{\prime}\) is Lorentz and \(\mathfrak{v}\) is of signature \((5,3)\), so that \(\mathfrak{n}\) is of signature \((8,4)\).
* \(\mathfrak{n}\) is \(4\)-step nilpotent (and \(\mathfrak{n}^{\prime}\) is abelian).
* \((\mathfrak{n},\langle\cdot,\cdot\rangle)\) is \(G\)-geodesic orbit (for \(G\) as in Theorem 1).

We define \(\mathfrak{n}=\mathfrak{v}\oplus\mathfrak{n}^{\prime}\), where \(\dim\mathfrak{n}^{\prime}=4,\ \dim\mathfrak{v}=8\). We have a basis \(\{e_{1},e_{2},e_{3},e_{4}\}\) for \(\mathfrak{n}^{\prime}\), and a basis \(\{f_{1},\ldots,f_{8}\}\) for \(\mathfrak{v}\). The inner product \(\langle\cdot,\cdot\rangle\) is defined in such a way that \(\mathfrak{v}\perp\mathfrak{n}^{\prime}\), and \[\langle\cdot,\cdot\rangle|_{\mathfrak{v}}=\begin{pmatrix}0&0&0&0&I_{2}\\ 0&0&0&1&0\\ 0&0&I_{2}&0&0\\ 0&1&0&0&0\\ I_{2}&0&0&0&0\end{pmatrix}\ \ \text{and}\ \ \langle\cdot,\cdot\rangle|_{\mathfrak{n}^{\prime}}=\begin{pmatrix}0&0&1\\ 0&I_{2}&0\\ 1&0&0\end{pmatrix}.\] The Lie bracket is defined as follows: \[[f_{1},e_{1}]=e_{2},\quad[f_{2},e_{1}]=e_{3},\quad[f_{1},e_{2}]=[f_{2},e_{3}]=-e_{4},\] \[[f_{1},f_{2}]=e_{1},\quad[f_{1},f_{6}]=e_{2},\quad[f_{2},f_{6}]=e_{3},\quad[f_{1},f_{4}]=[f_{2},f_{5}]=e_{4}.\] It is not hard to see that the algebra \(\mathfrak{n}\) so defined is \(4\)-step nilpotent (in fact, if we disregard the inner product, our algebra \(\mathfrak{n}\) is the direct sum of the \(6\)-dimensional ideal \(\operatorname{Span}(f_{1},f_{2},e_{1},e_{2},e_{3},e_{4})\) (the algebra \(L_{6,21}(1)\) in \([\mathsf{dG}]\)) and the \(6\)-dimensional abelian ideal \(\operatorname{Span}(f_{3},f_{4}+e_{2},f_{5}+e_{3},f_{6}-e_{1},f_{7},f_{8})\)). We now define, for every \(T=X+Y\), where \(X=\sum_{i=1}^{4}x_{i}e_{i}\in\mathfrak{n}^{\prime},\ Y=\sum_{j=1}^{8}y_{j}f_{j}\in\mathfrak{v}\), the linear operator \(\mathcal{A}\) on \(\mathfrak{n}\) such that \(\mathcal{A}\mathfrak{n}^{\prime}\subset\mathfrak{n}^{\prime}\), \(\mathcal{A}\mathfrak{v}\subset\mathfrak{v}\), and relative to the chosen bases for \(\mathfrak{v}\) and \(\mathfrak{n}^{\prime}\), \[\mathcal{A}|_{\mathfrak{v}}=\begin{pmatrix}0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ x_{2}+y_{4}&x_{3}+y_{5}&0&-y_{1}&-y_{2}&0&0&0\\ x_{1}+y_{6}&0&0&0&0&y_{1}&0&0\\ 0&x_{1}-y_{6}&0&0&0&y_{2}&0&0\\ y_{2}&-y_{1}&0&0&0&0&0&0\\ 0&y_{3}-x_{4}&-y_{2}&y_{6}-x_{1}&0&-x_{2}-y_{4}&0&0\\ x_{4}-y_{3}&0&y_{1}&0&y_{6}-x_{1}&-x_{3}-y_{5}&0&0\end{pmatrix}\ \ \text{and}\ \ \mathcal{A}|_{\mathfrak{n}^{\prime}}=\begin{pmatrix}0&0&0&0\\ -y_{1}&0&0&0\\ -y_{2}&0&0&0\\ 0&y_{1}&y_{2}&0\end{pmatrix}.\] A direct calculation shows that \(\mathcal{A}\) so defined is a skew-symmetric derivation, and that the \(GO\) equation \(\langle\mathcal{A}T^{\prime}+[T,T^{\prime}],T\rangle=0\) (see (1)) is satisfied, for all \(T^{\prime}\in\mathfrak{n}\). As \(\mathcal{A}\) depends linearly on \(T\), one may expect the algebra \((\mathfrak{n},\langle\cdot,\cdot\rangle)\) to even be naturally reductive.
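As a quick sanity check of the nilpotency claim (ours, not part of the original argument), the lower central series of \(\mathfrak{n}\) can be computed mechanically from the bracket relations listed above. The short Python sketch below encodes the nonzero brackets on the basis \(\{f_{1},\ldots,f_{8},e_{1},\ldots,e_{4}\}\) and checks that the fourth term of the lower central series is one-dimensional while the fifth vanishes; all function names are illustrative.

```python
import numpy as np

DIM = 12                           # basis ordering: f1..f8 -> 0..7, e1..e4 -> 8..11
F = {i: i for i in range(8)}       # index of f_{i+1}
E = {i: 8 + i for i in range(4)}   # index of e_{i+1}

def basis(i):
    v = np.zeros(DIM)
    v[i] = 1.0
    return v

# nonzero brackets [x, y] taken from the example (antisymmetry handled below)
RELATIONS = {
    (F[0], E[0]): basis(E[1]),     # [f1, e1] =  e2
    (F[1], E[0]): basis(E[2]),     # [f2, e1] =  e3
    (F[0], E[1]): -basis(E[3]),    # [f1, e2] = -e4
    (F[1], E[2]): -basis(E[3]),    # [f2, e3] = -e4
    (F[0], F[1]): basis(E[0]),     # [f1, f2] =  e1
    (F[0], F[5]): basis(E[1]),     # [f1, f6] =  e2
    (F[1], F[5]): basis(E[2]),     # [f2, f6] =  e3
    (F[0], F[3]): basis(E[3]),     # [f1, f4] =  e4
    (F[1], F[4]): basis(E[3]),     # [f2, f5] =  e4
}

def bracket(i, j):
    if (i, j) in RELATIONS:
        return RELATIONS[(i, j)]
    if (j, i) in RELATIONS:
        return -RELATIONS[(j, i)]
    return np.zeros(DIM)

def next_term(term):
    """Span of [n, previous term of the lower central series]."""
    rows = [sum(v[j] * bracket(i, j) for j in range(DIM))
            for i in range(DIM) for v in term]
    M = np.array(rows)
    r = np.linalg.matrix_rank(M)
    if r == 0:
        return []
    _, _, vt = np.linalg.svd(M)
    return list(vt[:r])            # orthonormal basis of the span

term = [basis(i) for i in range(DIM)]     # C^1 = n
dims = []
for _ in range(5):
    term = next_term(term)
    dims.append(len(term))

print(dims)   # expected [4, 3, 1, 0, 0], so n is 4-step nilpotent
```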
\(\diamondsuit\) This example also shows that a pseudo-Riemannian \(G\)-\(GO\) nilmanifold with nondegenerate derived algebra loses the property of being \(2\)-step nilpotent already when \(\langle\cdot,\cdot\rangle^{\prime}\) is Lorentz (for the case when \(\langle\cdot,\cdot\rangle^{\prime}\) is definite, see Remark 4). ## 3. Proof of Theorem 2: If \(ds^{2}|_{[\mathfrak{n},\mathfrak{n}]}\) is degenerate then \(\mathfrak{n}\) is a double extension In this section we consider the case when the restriction of the inner product \(\langle\cdot,\cdot\rangle\) to the derived algebra \(\mathfrak{n}^{\prime}\) is degenerate. We start with the following Lemma. **Lemma 6**.: _Let \((M=G/H,ds^{2})\) be a connected pseudo-Riemannian \(G\)-geodesic orbit nilmanifold where \(G=N\rtimes H\), with \(N\) nilpotent. Let \(\langle\cdot,\cdot\rangle\) denote the inner product on \(\mathfrak{n}\) induced by \(ds^{2}\). Suppose \(\langle\cdot,\cdot\rangle|_{\mathfrak{n}^{\prime}}\) is degenerate. Let \(\mathfrak{m}_{1}\) and \(\mathfrak{e}\) be subspaces of \(\mathfrak{n}\) with the following properties:_ 1. \(\mathfrak{e}\subset\mathfrak{n}^{\prime}\subset\mathfrak{m}_{1}\)__(_so that, in particular, \(\mathfrak{m}_{1}\) is an ideal of \(\mathfrak{n}\)_);_ 2. _both_ \(\mathfrak{m}_{1}\) _and_ \(\mathfrak{e}\) _are_ \(\operatorname{ad}_{\mathfrak{g}}(\mathfrak{h})\)_-invariant;_ 3. \(\langle\mathfrak{e},\mathfrak{m}_{1}\rangle=0\) _and_ \([\mathfrak{e},\mathfrak{m}_{1}]=0\)_;_ 4. \(\dim\mathfrak{m}_{1}+\dim\mathfrak{e}=\dim\mathfrak{n}\)_._ _Define the metric nilpotent Lie algebra \(\mathfrak{m}_{0}=\mathfrak{m}_{1}/\mathfrak{e}\) with the inner product \(\langle\cdot,\cdot\rangle_{0}\) induced from \(\mathfrak{m}_{1}\)_(_this is well-defined by_ (iii))_, and the pseudo-Riemannian nilmanifold_\((M_{0}=G_{0}/H_{0},ds_{0}^{2})\)_, where_ \(G_{0}=N_{0}\rtimes H_{0}\)_, with_ \(N_{0}\) _the_ (_simply connected_) _Lie group whose Lie algebra is_ \(\mathfrak{m}_{0}\)_,_ \(ds_{0}^{2}\) _is the left-invariant metric on_ \(M_{0}\) _defined by_ \(\langle\cdot,\cdot\rangle_{0}\)_, and_ \(H_{0}\) _is the maximal connected group of pseudo-orthogonal automorphisms of_ \(\langle\cdot,\cdot\rangle_{0}\)_._ _Then \((M_{0},ds_{0}^{2})\) is a \(G_{0}\)-\(GO\) pseudo-Riemannian nilmanifold._ Note that the fact that the inner product \(\langle\cdot,\cdot\rangle_{0}\) as constructed in Lemma 6 is nondegenerate follows from assumption (iv). If we denote \(m_{0}=\dim\mathfrak{e}\), then the signature of the metric \(ds_{0}^{2}\) is \((p-m_{0},q-m_{0})\), where \((p,q)\) is the signature of \(ds^{2}\). In the settings of Lemma 6, we say that the metric Lie algebra \((\mathfrak{n},\langle\cdot,\cdot\rangle)\) is a \(2m_{0}\)_-dimensional double extension of the metric Lie algebra \((\mathfrak{m}_{0},\langle\cdot,\cdot\rangle_{0})\)_. Informally, to get \((\mathfrak{n},\langle\cdot,\cdot\rangle)\), we first take the central extension of \((\mathfrak{m}_{0},\langle\cdot,\cdot\rangle_{0})\) by \(\mathfrak{e}\) and then the extension of the resulting Lie algebra \(\mathfrak{m}_{1}\) by \(m_{0}\)-dimensional space of derivations. Proof of Lemma 6.: To check the \(G_{0}\)-\(GO\) property for \((\mathfrak{m}_{0},\langle\cdot,\cdot\rangle_{0})\) we need to choose (an arbitrary, but fixed) linear complement to \(\mathfrak{e}\) in \(\mathfrak{m}_{1}\), which, with some abuse of notation, we will still denote \(\mathfrak{m}_{0}\). Then \(\mathfrak{m}_{1}=\mathfrak{m}_{0}\oplus\mathfrak{e}\). 
For \(X,Y\in\mathfrak{m}_{0}\) we have \(\langle X,Y\rangle_{0}=\langle X,Y\rangle\), as \(\mathfrak{e}\perp\mathfrak{m}_{1}\) by assumption (iii). We define the Lie bracket \([\cdot,\cdot]_{0}\) on \(\mathfrak{m}_{0}\) by \([X,Y]_{0}=[X,Y]_{\mathfrak{m}_{0}}\), for \(X,Y\in\mathfrak{m}_{0}\). It is easy to see that \((\mathfrak{m}_{0},\langle\cdot,\cdot\rangle_{0})\) is isomorphic to the quotient algebra \(\mathfrak{m}_{1}/\mathfrak{e}\). Let \(X\in\mathfrak{m}_{0}\). By the Geodesic Lemma, there exist \(A(X)\in\mathfrak{h}\) and \(k(X)\in\mathbb{R}\) such that for all \(Y\in\mathfrak{m}_{0}\) we have \(\langle[X+A(X),Y],X\rangle=k(X)\langle X,Y\rangle\). By assumption (ii) we have \([A(X),Y]\in\mathfrak{m}_{1}\,(=\mathfrak{m}_{0}\oplus\mathfrak{e})\), and so we can define an endomorphism \(D(X)\) of \(\mathfrak{m}_{0}\) by the formula \(D(X)Y=[A(X),Y]_{\mathfrak{m}_{0}}\). As \(\operatorname{ad}_{\mathfrak{g}}(A(X))\) is skew-symmetric and \(\mathfrak{e}\perp\mathfrak{m}_{1}\), the endomorphism \(D(X)\) is skew-symmetric relative to \(\langle\cdot,\cdot\rangle_{0}\). To see that \(D(X)\) is a derivation of the Lie algebra \((\mathfrak{m}_{0},[\cdot,\cdot]_{0})\) we write, for \(Y_{1},Y_{2}\in\mathfrak{m}_{0}\), \[0 =([A(X),[Y_{1},Y_{2}]]-[[A(X),Y_{1}],Y_{2}]-[Y_{1},[A(X),Y_{2}]])_{\mathfrak{m}_{0}}\] \[=([A(X),[Y_{1},Y_{2}]])_{\mathfrak{m}_{0}}-([[A(X),Y_{1}],Y_{2}])_{\mathfrak{m}_{0}}-([Y_{1},[A(X),Y_{2}]])_{\mathfrak{m}_{0}}\] \[=([A(X),[Y_{1},Y_{2}]_{0}])_{\mathfrak{m}_{0}}-([D(X)Y_{1},Y_{2}])_{\mathfrak{m}_{0}}-([Y_{1},D(X)Y_{2}])_{\mathfrak{m}_{0}}\] \[=D(X)([Y_{1},Y_{2}]_{0})-[D(X)Y_{1},Y_{2}]_{0}-[Y_{1},D(X)Y_{2}]_{0},\] where in the third line, we used the fact that \(\mathfrak{e}\) is \(\operatorname{ad}_{\mathfrak{g}}(\mathfrak{h})\)-invariant, by assumption (ii). It follows that there exists \(A_{0}(X)\in\mathfrak{h}_{0}\), where \(\mathfrak{h}_{0}\) is the Lie algebra of \(H_{0}\), such that \(D(X)Y=[A_{0}(X),Y]_{0}\), for all \(X,Y\in\mathfrak{m}_{0}\). Then from assumption (iii) and the fact that \(\langle[X+A(X),Y],X\rangle=k(X)\langle X,Y\rangle\) it follows that \(\langle[X+A_{0}(X),Y]_{0},X\rangle_{0}=k(X)\langle X,Y\rangle_{0}\), for all \(X,Y\in\mathfrak{m}_{0}\), as required by the Geodesic Lemma.

We now use Lemma 6 to prove Theorem 2, treating the possible cases for the restriction \(\langle\cdot,\cdot\rangle|_{\mathfrak{n}^{\prime}}\) in turn. First suppose that \(\langle\cdot,\cdot\rangle|_{\mathfrak{n}^{\prime}}\) has degeneracy \(1\) and is semidefinite. Denote \(\mathfrak{v}=(\mathfrak{n}^{\prime})^{\perp}\) and choose a vector \(e\) such that \(\mathfrak{n}^{\prime}\cap\mathfrak{v}=\mathbb{R}e\). Denote \(\mathfrak{m}_{1}=\mathfrak{n}^{\prime}+\mathfrak{v}\). Note that \(e^{\perp}=\mathfrak{m}_{1}\) and that all four subspaces \(\mathbb{R}e,\mathfrak{n}^{\prime},\mathfrak{v}\) and \(\mathfrak{m}_{1}\) are \(\operatorname{ad}_{\mathfrak{g}}(\mathfrak{h})\)-invariant, by Remark 3. To be able to apply Lemma 6 (with \(\mathfrak{e}=\mathbb{R}e\)) we only need to check that \([e,\mathfrak{m}_{1}]=0\). Taking \(T^{\prime}=e\) and \(T=X+Y\in\mathfrak{m}_{1}\) in (1), where \(X\in\mathfrak{n}^{\prime}\) and \(Y\in\mathfrak{v}\), we obtain \(\langle[e,X+Y],X\rangle=0\), and so \(\langle[e,X],X\rangle=\langle[e,Y],X\rangle=0\), for all \(X\in\mathfrak{n}^{\prime}\), \(Y\in\mathfrak{v}\). From the second equation it follows that \([e,Y]\) is a multiple of \(e\), and hence \([e,Y]=0\), for all \(Y\in\mathfrak{v}\), as \(\operatorname{ad}_{\mathfrak{n}}(Y)\) is nilpotent.
From the first equation we also obtain that \([e,X]\) is a multiple of \(e\), as \(\operatorname{ad}_{\mathfrak{n}^{\prime}}e\) is nilpotent and skew-symmetric and \(\langle\cdot,\cdot\rangle|_{\mathfrak{n}^{\prime}}\) has degeneracy \(1\) and is semidefinite. Then \([e,X]=0\), for all \(X\in\mathfrak{n}^{\prime}\), as \(\operatorname{ad}_{\mathfrak{n}}(X)\) is nilpotent. Thus \([e,\mathfrak{m}_{1}]=0\), and the claim follows from Lemma 6, with \(\mathfrak{e}=\mathbb{R}e\).

Next suppose that \(\langle\cdot,\cdot\rangle|_{\mathfrak{n}^{\prime}}\) has degeneracy \(2\) and is semidefinite. Denote \(\mathfrak{v}=(\mathfrak{n}^{\prime})^{\perp},\ \mathfrak{o}=\mathfrak{n}^{\prime}\cap\mathfrak{v}\) and \(\mathfrak{s}=\mathfrak{n}^{\prime}+\mathfrak{v}\). The restriction of \(\langle\cdot,\cdot\rangle\) to \(\mathfrak{s}\) has degeneracy \(2\) and is semidefinite. We have \(\dim\mathfrak{o}=\operatorname{codim}\mathfrak{s}=2\) and \(\mathfrak{o}^{\perp}=\mathfrak{s}\). Moreover, all four subspaces \(\mathfrak{o},\mathfrak{n}^{\prime},\mathfrak{v}\) and \(\mathfrak{s}\) are \(\operatorname{ad}_{\mathfrak{g}}(\mathfrak{h})\)-invariant, by Remark 3. If \([\mathfrak{o},\mathfrak{s}]=0\), we can directly apply Lemma 6 with \(\mathfrak{m}_{1}=\mathfrak{s}\) and \(\mathfrak{e}=\mathfrak{o}\), and the claim follows. We therefore assume that \([\mathfrak{o},\mathfrak{s}]\neq 0\). Taking \(T^{\prime}=e\in\mathfrak{o}\) and \(T\in\mathfrak{s}\) in (1) we obtain \(\langle[e,T],T\rangle=0\). As the restriction of the inner product to \(\mathfrak{s}\) is semidefinite, of degeneracy \(2\) (and \(\mathfrak{s}^{\perp}=\mathfrak{o}\)), and \(\operatorname{ad}_{\mathfrak{s}}e\) is both skew-symmetric and nilpotent, we obtain \([e,T]\in\mathfrak{o}\), for all \(e\in\mathfrak{o}\) and \(T\in\mathfrak{s}\), and hence \([\mathfrak{s},\mathfrak{o}]\subset\mathfrak{o}\). We obtain a nilpotent representation of the (nilpotent) algebra \(\mathfrak{s}\) on the \(2\)-dimensional space \(\mathfrak{o}\). By Engel's Theorem, we can find a basis \(\{e_{1},e_{2}\}\) for \(\mathfrak{o}\) such that \([\mathfrak{s},e_{2}]=0\) and \([T,e_{1}]=\lambda(T)e_{2}\), for all \(T\in\mathfrak{s}\), where \(\lambda\in\mathfrak{s}^{*}\). As we have assumed that \([\mathfrak{o},\mathfrak{s}]\neq 0\), the \(1\)-form \(\lambda\) is nonzero (but note that \(\lambda(\mathfrak{o})=0\)). Then \([\mathfrak{o},\mathfrak{s}]=\mathbb{R}e_{2}\), and so by Remark 3, the subspace \(\mathbb{R}e_{2}\) is \(\operatorname{ad}_{\mathfrak{g}}(\mathfrak{h})\)-invariant. Choose two vectors \(f_{1},f_{2}\in\mathfrak{n}\) such that \(\operatorname{Span}(f_{1},f_{2})\oplus\mathfrak{s}=\mathfrak{n}\), and \(\langle f_{i},f_{j}\rangle=0,\ \langle f_{i},e_{j}\rangle=\delta_{ij}\), for \(i,j=1,2\). We claim that the assumptions of Lemma 6 are satisfied with \(\mathfrak{e}=\mathbb{R}e_{2}\) and \(\mathfrak{m}_{1}=\mathbb{R}f_{1}\oplus\mathfrak{s}\). Indeed, assumptions (i) and (iv) are obviously true, and for assumption (ii) we note that \(\mathfrak{m}_{1}=(\mathbb{R}e_{2})^{\perp}\) by construction, and hence \(\mathfrak{m}_{1}\) is \(\operatorname{ad}_{\mathfrak{g}}(\mathfrak{h})\)-invariant by Remark 3, as \(\mathbb{R}e_{2}\) is. It remains to show that \([\mathfrak{m}_{1},e_{2}]=0\). As we already know that \([\mathfrak{s},e_{2}]=0\), it suffices to show that \([f_{1},e_{2}]=0\).
Taking \(T^{\prime}=e_{2}\) and \(T=\xi f_{1}+X\in\mathfrak{m}_{1}\), where \(X\in\mathfrak{s},\xi\in\mathbb{R}\), in (1) (and using the fact that \(\mathbb{R}e_{2}\) and \(\mathfrak{m}_{1}\) are orthogonal, \(\operatorname{ad}_{\mathfrak{g}}(\mathfrak{h})\)-invariant subspaces) we obtain \(\langle[f_{1},e_{2}],\xi f_{1}+X\rangle=0\), for all \(X\in\mathfrak{s},\xi\in\mathbb{R}\). It follows that \([f_{1},e_{2}]\) is a multiple of \(e_{2}\), which must be zero, by nilpotency. The claim now follows from Lemma 6. The last case to consider is the one when \(\langle\cdot,\cdot\rangle|_{\mathfrak{n}^{\prime}}\) has degeneracy \(1\) and index \(1\). This is the most involved case. As above, we denote \(\mathfrak{v}=(\mathfrak{n}^{\prime})^{\perp}\) and choose a vector \(e\) such that \(\mathfrak{n}^{\prime}\cap\mathfrak{v}=\mathbb{R}e\). Denote \(\mathfrak{m}_{1}=\mathfrak{n}^{\prime}+\mathfrak{v}\), so that \(e^{\perp}=\mathfrak{m}_{1}\). The subspaces \(\mathbb{R}e,\mathfrak{n}^{\prime},\mathfrak{v}\) and \(\mathfrak{m}_{1}\) are \(\operatorname{ad}_{\mathfrak{g}}(\mathfrak{h})\)-invariant, by Remark 3. We claim that the assumptions of Lemma 6 are satisfied with \(\mathfrak{e}=\mathbb{R}e\). It is easy to see that the only fact we need to establish is that \([e,\mathfrak{m}_{1}]=0\). The proof is completed by the following proposition. **Proposition 3**.: _In the above notation, the vector \(e\) lies in the centre of \(\mathfrak{m}_{1}\)._ Proof.: Denote \(m=\dim\mathfrak{n}^{\prime}\). Seeking a contradiction we assume that \([\mathfrak{m}_{1},e]\neq 0\). Let \(\mathfrak{f}=\{T\in\mathfrak{n}\::\langle[T,X],X\rangle=0,\text{ for all }X\in\mathfrak{n}^{\prime}\}\). It is easy to see that \(\mathfrak{f}\) is a subalgebra of \(\mathfrak{n}\). **Lemma 7**.: _In the above notation, the following holds._ 1. \(\mathfrak{v}\subset\mathfrak{f}\) _and_ \([\mathfrak{f},e]=0\)__(_so, in particular,_ \([e,\mathfrak{v}]=0\)_)._ 2. _There exists a hyperplane_ \(\mathfrak{n}_{0}\subset\mathfrak{n}^{\prime}\)_, with_ \(\mathfrak{n}_{0}\oplus\mathbb{R}e=\mathfrak{n}^{\prime}\)_, and a basis_ \(\{e_{1},\ldots,e_{m-1}\}\) _for_ \(\mathfrak{n}_{0}\) _such that relative to the basis_ \(\{e_{1},\ldots,e_{m-1},e\}\) _for_ \(\mathfrak{n}^{\prime}\)_, we have_ (25) \[\langle\cdot,\cdot\rangle|_{\mathfrak{n}^{\prime}}=\left(\begin{smallmatrix}0 &0&1&0\\ 0&I_{m-3}&0&0\\ 1&0&0&0\\ 0&0&0&0\end{smallmatrix}\right),\;\operatorname{ad}_{\mathfrak{n}^{\prime}}T= \left(\begin{smallmatrix}0&0&0&0\\ VT&0_{m-3}&0&0\\ 0&-(VT)^{t}&0&0\\ a(T)&(WT)^{t}&0&0\end{smallmatrix}\right)\;\text{and}\;\operatorname{ad}_{ \mathfrak{n}^{\prime}}e=\left(\begin{smallmatrix}0&0&0&0\\ u&0_{m-3}&0&0\\ 0&-u^{t}&0&0\\ 0&0&0&0\end{smallmatrix}\right),\] _for all_ \(T\in\mathfrak{f}\)_, where_ \(u\in L:=\operatorname{Span}(e_{2},\ldots,e_{m-2}),\,u\neq 0,\,a\in\mathfrak{f}^{*}\) _and_ \(V,W:\mathfrak{f}\to L\) _are linear maps. In particular,_ \([\mathfrak{f},e_{m-1}]=0\)_._ 3. _The subspaces_ \(\operatorname{Span}(u,e_{m-1})\) _and_ \(\mathbb{R}e_{m-1}\) _are_ \(\operatorname{ad}_{\mathfrak{g}}(\mathfrak{h})\)_-invariant. Moreover, for any_ \(A\in\mathfrak{h}\)_, we have_ \([A,e]=\alpha(A)e,\,[A,u]=\beta(A)e_{m-1},\,[A,e_{m-1}]=\gamma(A)e_{m-1}\)_, for some_ \(\alpha,\beta,\gamma\in\mathfrak{h}^{*}\)_._ Proof.: For assertion (a), the fact that \([\mathfrak{f},e]=0\) easily follows: for all \(T\in\mathfrak{f}\) and \(X\in\mathfrak{n}^{\prime}\), we have \(0=\langle[T,X],e\rangle=-\langle[T,e],X\rangle\). 
Therefore \([T,e]\) is a multiple of \(e\), which must be zero as \(\operatorname{ad}_{\mathfrak{n}^{\prime}}T\) is nilpotent. To see that \(\mathfrak{v}\subset\mathfrak{f}\), take \(T^{\prime}=Y\in\mathfrak{v}\) and \(T=X\in\mathfrak{n}^{\prime}\) in (1). As \(\mathfrak{v}\) and \(\mathfrak{n}^{\prime}\) are orthogonal, \(\operatorname{ad}_{\mathfrak{g}}(\mathfrak{h})\)-invariant subspaces, we obtain \(\langle[X,Y],X\rangle=0\), as required. For assertion (b), we note that the subspace \([e,\mathfrak{n}^{\prime}]\subset\mathfrak{n}^{\prime}\) does not contain \(\mathbb{R}e\) (indeed, for no \(X\in\mathfrak{n}^{\prime}\) can we have \([X,e]=e\), as \(\operatorname{ad}_{\mathfrak{n}^{\prime}}X\) is nilpotent). Choose a linear complement \(\mathfrak{n}_{0}\) to \(\mathbb{R}e\) in \(\mathfrak{n}^{\prime}\) in such a way that \(\mathfrak{n}_{0}\supset[e,\mathfrak{n}^{\prime}]\). The restriction of \(\langle\cdot,\cdot\rangle\) to \(\mathfrak{n}_{0}\) is nondegenerate and is of Lorentz signature. For \(T\in\mathfrak{f}\) we define the endomorphism \(\phi_{T}\) of \(\mathfrak{n}_{0}\) by \([T,X]=\phi_{T}X+\mu(X)e\), for \(X\in\mathfrak{n}_{0}\). For every \(T\in\mathfrak{f}\), the endomorphism \(\phi_{T}\) is skew-symmetric (as \(\langle e,\mathfrak{n}^{\prime}\rangle=0\)) and nilpotent (as \([\mathfrak{f},e]=0\) by assertion (a)). Moreover, the map \(\phi:\mathfrak{f}\to\mathfrak{so}(\mathfrak{n}_{0},\langle\cdot,\cdot\rangle|_{\mathfrak{n}_{0}})\) sending \(T\) to \(\phi_{T}\) is a Lie algebra homomorphism (as \([\mathfrak{f},e]=0\)). Considering the Iwasawa decomposition of the Lie algebra \(\mathfrak{so}(m-2,1)=\mathfrak{so}(\mathfrak{n}_{0},\langle\cdot,\cdot\rangle|_{\mathfrak{n}_{0}})\), by an argument similar to that in the proof of Proposition 1 (see equations (7) and (8)) we can construct a basis \(\{e_{1},\ldots,e_{m-1}\}\) for \(\mathfrak{n}_{0}\) such that the restriction of \(\langle\cdot,\cdot\rangle\) to \(\mathfrak{n}^{\prime}\) relative to the basis \(\{e_{1},\ldots,e_{m-1},e\}\) for \(\mathfrak{n}^{\prime}\) has the form as given in (25), and moreover, there is a linear map \(V:\mathfrak{f}\to L\,(=\operatorname{Span}(e_{2},\ldots,e_{m-2}))\) such that for all \(T\in\mathfrak{f}\) we have \(\phi_{T}e_{1}=VT,\ \phi_{T}e_{m-1}=0\) and \(\phi_{T}e_{i}=-\langle VT,e_{i}\rangle e_{m-1}\), for \(i=2,\ldots,m-2\). It follows that for some linear map \(W:\mathfrak{f}\to L\) and linear forms \(a,b\in\mathfrak{f}^{*}\), we have \[\begin{split}[T,e_{1}]=VT+a(T)e,\qquad[T,e_{m-1}]=b(T)e,\\ [T,e_{i}]=-\langle VT,e_{i}\rangle e_{m-1}+\langle WT,e_{i}\rangle e,\quad i=2,\ldots,m-2,\end{split} \tag{26}\] for all \(T\in\mathfrak{f}\). In particular, taking \(T=e\) and using the fact that \([e,\mathfrak{n}_{0}]\subset\mathfrak{n}_{0}\) (by construction of \(\mathfrak{n}_{0}\)) we obtain \(We=0\) and \(a(e)=b(e)=0\). Thus \(\operatorname{ad}_{\mathfrak{n}^{\prime}}e\) has the form as given in (25), where we denote \(u=Ve\in L\). Then from (26) we obtain \([e,u]=-\|u\|^{2}e_{m-1}\), and so \([T,[e,u]]=-\|u\|^{2}b(T)e\), for all \(T\in\mathfrak{f}\). But \([T,e]=0\) by assertion (a), which gives \([[T,u],e]=\|u\|^{2}b(T)e\). As \(\operatorname{ad}_{\mathfrak{n}^{\prime}}[T,u]\) is nilpotent we get \(\|u\|^{2}b(T)=0\), for all \(T\in\mathfrak{f}\). If \(b\neq 0\) we get \(u=0\) (as the restriction of \(\langle\cdot,\cdot\rangle\) to \(L\) is definite), and so \([e,\mathfrak{n}^{\prime}]=0\). As \([e,\mathfrak{v}]=0\) by assertion (a) we obtain \([e,\mathfrak{m}_{1}]=0\) contradicting our assumption.
Therefore \(b=0\), and then equations (26) imply that \(\operatorname{ad}_{\mathfrak{n}^{\prime}}T\) has the form given in (25). The last statement in assertion (b) follows from (25). For assertion (c), we note that from (25) we obtain \([e,\mathfrak{n}^{\prime}]=\operatorname{Span}(u,e_{m-1})\) and \([e,[e,\mathfrak{n}^{\prime}]]=\mathbb{R}e_{m-1}\), and so the first claim follows from Remark 3. Then the second claim also follows (note that the \(u\)-component of \([A,u]\) vanishes as \(\langle[A,u],u\rangle=0\) and \(u\) lies in the subspace \(L\) with a definite inner product). Let now \(f\notin\mathfrak{m}_{1}\) be a null vector such that \(f\perp\mathfrak{n}_{0}\) and \(\langle f,e\rangle=1\) (the choice of such an \(f\) is not unique); note that \(\mathfrak{m}_{1}\oplus\mathbb{R}f=\mathfrak{n}\). **Lemma 8**.: _The following holds:_ \[[\mathfrak{h},[f,e]]=0, \tag{27}\] \[[e,[f,\mathfrak{v}]]=[[f,e],\mathfrak{v}]=0, \tag{28}\] \[[\mathfrak{v},[f,[f,e]]]=0, \tag{29}\] \[[f,[f,[f,e]]]=0. \tag{30}\] Proof.: From Lemma 7(c), for any \(A\in\mathfrak{h}\), we have \(\langle[A,f],e\rangle=-\langle[A,e],f\rangle=-\alpha(A)\), \(\langle[A,f],e_{m-1}\rangle=-\langle[A,e_{m-1}],f\rangle=0\) and \(\langle[A,f],u\rangle=-\langle[A,u],f\rangle=0\), as \(e_{m-1}\in\mathfrak{n}_{0}\subset f^{\perp}\). It follows that \([A,f]=-\alpha(A)f+Y+X\), where \(Y\in\mathfrak{v}\) and \(X\in(\operatorname{Span}(e_{m-1},u))^{\perp}\cap\mathfrak{n}^{\prime}\). Then \([A,[f,e]]=[[A,f],e]+[f,[A,e]]=[-\alpha(A)f+Y+X,e]+[f,\alpha(A)e]=0\), as \([Y,e]=0\) by Lemma 7(a) and \([X,e]=0\) by (25). This proves (27). Take in (1) \(T^{\prime}=[f,e]\) and a non-null vector \(T=\mu f+Y+X\), where \(Y\in\mathfrak{v}\), \(X\in\mathfrak{n}^{\prime}\) and \(\mu\in\mathbb{R}\). As \([A,[f,e]]=0\) by (27) and \(\mathfrak{v}\perp\mathfrak{n}^{\prime}\) we obtain \(\langle[\mu f+Y+X,[f,e]],\mu f+X\rangle=0\), for all \(Y\in\mathfrak{v}\), \(X\in\mathfrak{n}^{\prime}\) and \(\mu\in\mathbb{R}\), by continuity, from which we get \[\begin{split}\langle[Y,[f,e]],X\rangle=0,\quad\langle[X,[f,e]],X\rangle=0,\quad\langle[f,[f,e]],f\rangle=0,\\ \langle[f,[f,e]],X\rangle+\langle[X,[f,e]],f\rangle=0,\end{split} \tag{31}\] for all \(Y\in\mathfrak{v}\), \(X\in\mathfrak{n}^{\prime}\). The first equation of (31) implies that \([Y,[f,e]]\) is a multiple of \(e\). But \([Y,[f,e]]=[[Y,f],e]\), as \([Y,e]=0\) by Lemma 7(a), and so \([Y,[f,e]]=[[Y,f],e]=0\), as \(\operatorname{ad}_{\mathfrak{n}^{\prime}}[Y,f]\) is nilpotent. This proves (28). We now consider the last equation of (31). We have \(\langle[X,[f,e]],f\rangle=\langle-[e,[X,f]]-[f,[e,X]],f\rangle\). As \([e,\mathfrak{n}^{\prime}]\subset\mathfrak{n}_{0}\) (by construction of \(\mathfrak{n}_{0}\)) and \(f\perp\mathfrak{n}_{0}\), we have \(\langle[e,[X,f]],f\rangle=0\). Taking \(X=x_{1}e_{1}+\tilde{x}+x_{m-1}e_{m-1}+xe\in\mathfrak{n}^{\prime}\), where \(x_{1},x_{m-1},x\in\mathbb{R}\) and \(\tilde{x}\in L\,(=\operatorname{Span}(e_{2},\ldots,e_{m-2}))\) we obtain from (25) that \([e,X]=x_{1}u+\langle\tilde{x},u\rangle e_{m-1}\), which gives \(\langle[X,[f,e]],f\rangle=-\langle[f,[e,X]],f\rangle=-\langle[f,x_{1}u+\langle\tilde{x},u\rangle e_{m-1}],f\rangle=-x_{1}\langle[f,u],f\rangle-\langle\tilde{x},u\rangle\langle[f,e_{m-1}],f\rangle\). From the last equation of (31) we obtain \([f,[f,e]]=\langle[f,u],f\rangle e_{m-1}+\langle[f,e_{m-1}],f\rangle u+\eta e\), for some \(\eta\in\mathbb{R}\). But then \(\eta=0\) from the third equation of (31).
Moreover, as \(e\in\mathfrak{v}\), we get \([e,[f,e]]=0\) by (28), which implies \([e,[f,[f,e]]]=0\). Substituting the above expression for \([f,[f,e]]\) and using (25) we find \[\langle[f,e_{m-1}],f\rangle=0,\text{ and so }[f,[f,e]]=\langle[f,u],f\rangle e_{m-1}. \tag{32}\] But from Lemma 7(a), (b) we have \(\mathfrak{v}\subset\mathfrak{f}\) and \([\mathfrak{f},e_{m-1}]=0\) which implies \([[f,[f,e]],\mathfrak{v}]=0\), by the second equation of (32). This establishes (29). From the second equation of (31) we obtain that \([f,e]\in\mathfrak{f}\), and so \(\operatorname{ad}_{\mathfrak{n}^{\prime}}[f,e]\) has the form given in (25), in particular, \([[f,e],u]\in\operatorname{Span}(e_{m-1},e)\). As \([e,[f,u]]\in[e,\mathfrak{n}^{\prime}]=\operatorname{Span}(u,e_{m-1})\) (by (25)), we obtain \([f,[e,u]]=[[f,e],u]+[e,[f,u]]\in\operatorname{Span}(u,e_{m-1},e)\). But \([e,u]=-\|u\|^{2}e_{m-1}\) by (25), and so we obtain \([f,e_{m-1}]=\rho_{1}u+\rho_{2}e_{m-1}+\rho_{3}e\), for some \(\rho_{1},\rho_{2},\rho_{3}\in\mathbb{R}\). Then from the first equation of (32) we get \(\rho_{3}=0\). Moreover, as \([f,e]\in\mathfrak{f}\), from (25) we find \([[f,e],e_{m-1}]=0\) which implies \([e,[f,e_{m-1}]]=0\) (since \([e,e_{m-1}]=0\) by (25)). From the expression for \([f,e_{m-1}]\) above we obtain \([e,\rho_{1}u+\rho_{2}e_{m-1}]=0\) which implies \(\rho_{1}=0\), again by (25). Therefore \([f,e_{m-1}]=\rho_{2}e_{m-1}\) which gives \([f,e_{m-1}]=0\), by nilpotency. Now equation (30) follows from the second equation of (32). We can now complete the proof of the proposition. We have \(\mathfrak{n}=\mathbb{R}f\oplus\mathfrak{m}_{1}=\mathbb{R}f\oplus(\mathfrak{v}+ \mathfrak{n}^{\prime})=(\mathbb{R}f\oplus\mathfrak{v})+\mathfrak{n}^{\prime}\). It follows that the subspace \(\mathfrak{V}=\mathbb{R}f\oplus\mathfrak{v}\) contains some linear complement to \(\mathfrak{n}^{\prime}\) in \(\mathfrak{n}\), and hence generates \(\mathfrak{n}\). Then \(\mathfrak{n}=\mathfrak{V}+[\mathfrak{V},\mathfrak{V}]+[\mathfrak{V},[\mathfrak{V },\mathfrak{V}]]+\dots\), and so \(\mathfrak{n}^{\prime}=[\mathfrak{V},\mathfrak{V}]+[\mathfrak{V},[\mathfrak{V },\mathfrak{V}]]+\dots\). As we already know that \([e,\mathfrak{v}]=0\) (by Lemma 7(a)), to show that \([e,\mathfrak{m}_{1}]=0\) it suffices to prove that \([e,\mathfrak{n}^{\prime}]=0\), that is, to prove that \([e,[T_{1},[T_{2},[\dots,[T_{r-1},T_{r}]\dots]]]=0\), where \(r\geq 2\), and where, for every \(i=1,\dots,r\), we have either \(T_{i}=f\) or \(T_{i}\in\mathfrak{v}\). The proof goes by induction by \(r\geq 2\). If \(r=2\) the claim follows from the facts that \([e,\mathfrak{v}]=0\) and that \([e,[f,\mathfrak{v}]]=0\) (by (28)). Suppose \(r>2\). If \(T_{1}\in\mathfrak{v}\), then the claim follows by the induction assumption from the fact that \([e,T_{1}]=0\). Suppose \(T_{1}=f\). Then by the induction assumption it suffices to prove that \([[e,f],[T_{2},[T_{3},[\dots,[T_{r-1},T_{r}]\dots]]]]=0\). If \(T_{2}\in\mathfrak{v}\), the claim follows from the fact that \([e,[f,\mathfrak{v}]]=0\) (by (28)) and the induction assumption (or from (29) if \(r=3\)). Suppose \(T_{2}=f\). Then it suffices to prove that \([[[e,f],f],[T_{3},[\dots,[T_{r-1},T_{r}]\dots]]]=0\). But \([[[e,f],f],f]=0\) by (30) and \([[[e,f],f],\mathfrak{v}]=0\) by (29). It follows that \([[[e,f],f],T_{i}]=0\), for all \(i=3,\dots,r\), which completes the proof of Proposition 3. With Proposition 3, application of Lemma 6 completes the proof of Theorem 2.
2302.13144
Revisiting LQR Control from the Perspective of Receding-Horizon Policy Gradient
We revisit in this paper the discrete-time linear quadratic regulator (LQR) problem from the perspective of receding-horizon policy gradient (RHPG), a newly developed model-free learning framework for control applications. We provide a fine-grained sample complexity analysis for RHPG to learn a control policy that is both stabilizing and $\epsilon$-close to the optimal LQR solution, and our algorithm does not require knowing a stabilizing control policy for initialization. Combined with the recent application of RHPG in learning the Kalman filter, we demonstrate the general applicability of RHPG in linear control and estimation with streamlined analyses.
Xiangyuan Zhang, Tamer Başar
2023-02-25T19:16:40Z
http://arxiv.org/abs/2302.13144v3
# Revisiting LQR Control from the Perspective of ###### Abstract We revisit in this paper the discrete-time linear quadratic regulator (LQR) problem from the perspective of receding-horizon policy gradient (RHPG), a newly developed model-free learning framework for control applications. We provide a fine-grained sample complexity analysis for RHPG to learn a control policy that is both stabilizing and \(\epsilon\)-close to the optimal LQR solution, and our algorithm does not require knowing a stabilizing control policy for initialization. Combined with the recent application of RHPG in learning the Kalman filter, we demonstrate the general applicability of RHPG in linear control and estimation with streamlined analyses. ## 1 Introduction Model-free policy gradient (PG) methods promise a universal end-to-end framework for controller designs. By utilizing input-output data of a black-box simulator, PG methods directly search the prescribed policy space until convergence, agnostic to system models, objective function, and design criteria/constraints. The general applicability of PG methods leads to countless empirical successes in continuous control, but the theoretical understanding of these PG methods is still in its early stage. Stemmed from the convergence theory of PG methods for general reinforcement learning tasks [1, 2], a recent thrust of research has specialized the analysis for the convergence and sample complexity of PG methods into several linear state-feedback control benchmarks [3, 4, 5, 6, 7, 8, 9]. However, incorporating imperfect-state measurements leads to a deficit of most, if not all, favorable landscape properties crucial for PG methods to converge (globally) in the state-feedback settings [9, 10]. Even worse, the control designer now faces several challenges unique to control applications: a) convergence might be toward a suboptimal stationary point without system-theoretic interpretations; b) provable stability and robustness guarantee could be lacking; c) convergence depends heavily on the initialization (e.g., the initial policy should be stabilizing), which would be challenging to hand-craft; and d) algorithm could be computationally inefficient. These bottlenecks blur the applicability of model-free PG methods in real-world control scenarios since the price for each of the above disadvantages could be unaffordable. On the other hand, classic theories provide both elegant analytic solutions and efficient computational means (e.g., Riccati recursions) to a wide range of control problems [11, 12, 13]. They further reveal the intricate structure in various control settings and offer system-theoretical interpretations and guarantees to their characterized solutions. They suggest that, compared to viewing the dynamical system as a black box and studying the properties of PG methods from a (nonconvex) optimization perspective, it is better to incorporate those properties unique to decision and control into the design of learning algorithms. In this work, we revisit the classical linear quadratic regulator (LQR) problem from the perspective of the newly-developed receding-horizon PG (RHPG) framework [14], which fuses Bellman's principle of optimality into the development of a model-free PG framework. First, RHPG approximates infinite-horizon LQR using a finite-horizon problem formulation and further decomposes the finite-horizon problem into a sequence of one-step sub-problems. Second, RHPG solves each sub-problem recursively using model-free PG methods. 
To accommodate the inevitable computational errors in solving these sub-problems, we establish the generalized principle of optimality that bounds the accumulated bias by controlling the inaccuracies in solving each sub-problem. We characterize the convergence and sample complexity of RHPG in SS3-D and emphasize that the RHPG algorithm does not require knowing a stabilizing initial control policy _a priori_. Compared with [3, 4], we provide a new parametrization and perspective in learning for control with performance guarantees and streamlined analyses. By unifying existing control theory into the design of learning algorithms, our work and [14] aim to explore a promising path toward the theoretical foundation of PG methods in addressing partially observable and nonlinear control through the lens of RHPG. Compared with [15, 8, 16] that have also removed the assumption on an initial stabilizing point in LQR, we provide more explicit choices of various parameters in the RHPG algorithm. Moreover, the RHPG framework [17, 14] can be directly extended to solve output-feedback control problems [14]. At the same time, the results for discounted LQR can hardly be generalized further since its optimization landscapes are identical to those in un-discounted LQR, with essentially scaled versions of system parameters. We defer an extensive literature review to SSF. ## 2 Preliminaries ### _Infinite-Horizon LQR_ Consider the discrete-time linear dynamical system1 Footnote 1: For extensions to stochastic LQR with i.i.d. additive noises, as well as the setting with an arbitrary (deterministic) initial state, see §H. \[x_{t+1}=Ax_{t}+Bu_{t}, \tag{2.1}\] where \(x_{t}\in\mathbb{R}^{n}\) is the state; \(u_{t}\in\mathbb{R}^{m}\) is the control input; \(A\in\mathbb{R}^{n\times n}\) and \(B\in\mathbb{R}^{n\times m}\) are system matrices unknown to the control designer; and the initial state \(x_{0}\in\mathbb{R}^{n}\) is sampled from a zero-mean distribution \(\mathcal{D}\) that satisfies \(\mathrm{Cov}(x_{0})=\Sigma_{0}>0\). The goal in the LQR problem is to obtain the optimal controller \(u_{t}=\phi_{t}(x_{t})\) that minimizes the cost \[\mathcal{J}_{\infty}:=\mathbb{E}_{x_{0}\sim\mathcal{D}}\left[\sum_{t=0}^{ \infty}\left(x_{t}^{\top}Qx_{t}+u_{t}^{\top}Ru_{t}\right)\right], \tag{2.2}\] where \(Q>0\) and \(R>0\) are symmetric positive-definite (pd) weightings chosen by the control designer. For the LQR problem as posed to admit a solution, we require \((A,B)\) to be stabilizable. Note that here \(Q>0\) implies the observability of \((A,Q^{1/2})\). Then, the unique optimal LQR controller is linear state-feedback, i.e., \(u_{t}^{*}=-K^{*}x_{t}\), and \(K^{*}\in\mathbb{R}^{m\times n}\), which with a slight abuse of terminology we will call optimal control policy, can be computed by \[K^{*}=(R+B^{\top}P^{*}B)^{-1}B^{\top}P^{*}A, \tag{2.3}\] where \(P^{*}\) is the unique positive definite (pd) solution to the algebraic Riccati equation (ARE) \[P=Q+A^{\top}PA-A^{\top}PB(R+B^{\top}PB)^{-1}B^{\top}PA. \tag{2.4}\] Moreover, the optimal control policy \(K^{*}\) is guaranteed to be stabilizing, i.e., \(\rho(A-BK^{*})<1\). Therefore, we can parametrize LQR as an optimization problem over the policy space \(\mathbb{R}^{m\times n}\), subject to the stability condition [3]: \[\min_{K} \mathcal{J}_{\infty}(K)=\mathbb{E}_{x_{0}\sim\mathcal{D}}\left[ \sum_{t=0}^{\infty}\left(x_{t}^{\top}(Q+K^{\top}RK)x_{t}\right)\right]\] (2.5) s.t. \[K\in\mathcal{K}:=\{K\mid\rho(A-BK)<1\}. 
\tag{2.6}\] Theoretical properties of model-free (zeroth-order) PG methods in solving (2.5) have been well understood [3, 4, 18]. In particular, the objective function (2.5), even though nonconvex, is coercive and (globally) gradient dominated [18]. Hence, if an initial control policy \(K_{0}\in\mathcal{K}\) is known _a priori_, then any descent direction of the objective value (e.g., vanilla PG) suffices to ensure that all the iterates will remain in the interior of \(\mathcal{K}\) while quickly converging toward the unique stationary point. Removing the assumption on \(K_{0}\) (that an initial stabilizing policy can readily be found) has remained an active research topic [8, 15, 16]. ### _Finite-Horizon LQR_ The finite-\(N\)-horizon version of the LQR problem is also described by the system dynamics (2.1), but with the objective function summing up only up to time \(t=N\). Similar to (2.5), we can parametrize the finite-horizon LQR problem as \(\min_{\{K_{t}\}}\mathcal{J}\big{(}\{K_{t}\}\big{)}\), where \[\mathcal{J}\big{(}\!\{K_{t}\}\!\!:=\!\mathbb{E}_{x_{0}\sim\mathcal{D}}\!\left[ \sum_{t=0}^{N-1}x_{t}^{\top}\!(Q\!+\!K_{t}^{\top}RK_{t})x_{t}\!+\!x_{N}^{\top }Q_{N}x_{N}\!\right]\!\!, \tag{2.7}\] and \(Q_{N}\) is a symmetric pd terminal-state weighting to be chosen. The unique optimal control policy in the finite-horizon LQR is time-varying and can be computed by \[K_{t}^{*}=(R+B^{\top}P_{t+1}^{*}B)^{-1}B^{\top}P_{t+1}^{*}A, \tag{2.8}\] where \(P_{t}^{*}\), for all \(t\in\{0,\cdots,N-1\}\), are generated by the Riccati difference equation (RDE) starting with \(P_{N}^{*}=Q_{N}\): \[P_{t}^{*} =Q+A^{\top}P_{t+1}^{*}A\] \[\quad-A^{\top}P_{t+1}^{*}B(R+B^{\top}P_{t+1}^{*}B)^{-1}B^{\top}P_ {t+1}^{*}A. \tag{2.9}\] Theoretical properties of zeroth-order PG methods in addressing (2.7) have been studied in [6, 7]. Compared to the infinite-horizon setting (2.5), the finite-horizon LQR problem (2.7) is also a nonconvex and gradient-dominated problem, but it does not naturally require the stability condition (2.6). ## 3 Receding-Horizon Policy Gradient ### _LQR with Dynamic Programming_ It is well known that the solution of the RDE (2.9) converges monotonically to the stabilizing solution of the ARE (2.4) exponentially [19]. It then readily follows that the sequence of time-varying LQR policies (2.8), denoted as \(\{K_{t}\}_{t\in\{N-1,\cdots,0\}}\), converges monotonically to the time-invariant LQR policy \(K^{*}\) as \(N\to\infty\). Furthermore, if \(Q_{N}\) satisfies \(Q_{N}>P^{*}\), then the time-varying LQR policies are stabilizing when treated as frozen. Now, we formally present this convergence result in the following theorem. **Theorem 3.1**: _Let \(A_{K}^{*}:=A-BK^{*}\), use \(\|\cdot\|_{*}\) to denote the \(P^{*}\)-induced norm, and define_ \[N_{0}=\frac{1}{2}\cdot\frac{\log\big{(}\frac{\|Q_{N}-P^{*}\|_{*} \cdot\|A_{K}^{*}\|\cdot\|B\|}{\epsilon\cdot\min(R)}\big{)}}{\log\big{(}\frac{1 }{\|A_{K}^{*}\|_{*}}\big{)}}+1. \tag{3.1}\] _Then, it holds that \(\|A_{K}^{*}\|_{*}<1\) and for all \(N\geq N_{0}\), the control policy \(K_{0}^{*}\) computed by (2.8) is stabilizing and satisfies \(\|K_{0}^{*}-K^{*}\|\leq\epsilon\) for any \(\epsilon>0\)._ We provide the proof of Theorem 3.1 in SSB. Theorem 3.1 demonstrates that if selecting \(N\sim\mathcal{O}(\log(\epsilon^{-1}))\), then solving the finite-horizon LQR will result in a policy \(K_{0}^{*}\) that is stabilizing and also \(\epsilon\)-close to \(K^{*}\), for any \(\epsilon>0\). 
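To make the Riccati computations above concrete, here is a minimal NumPy sketch (ours, for illustration only; the example matrices are arbitrary placeholders rather than anything from the paper). It computes \(K^{*}\) by iterating the Riccati map of (2.4) to a fixed point, then runs the backward RDE (2.9) from a terminal weight \(Q_{N}\) and reports how quickly \(K_{0}^{*}\) approaches \(K^{*}\) as the horizon \(N\) grows, which is the behavior quantified in Theorem 3.1. Here the model is known; in the model-free setting of the next subsection the same quantities are instead approached through zeroth-order policy search.

```python
import numpy as np

def riccati_step(P, A, B, Q, R):
    """One backward step of the Riccati difference equation (2.9)."""
    G = R + B.T @ P @ B
    return Q + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(G, B.T @ P @ A)

def gain(P, A, B, R):
    """Policy K = (R + B'PB)^{-1} B'PA, cf. (2.3) and (2.8)."""
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

def solve_are(A, B, Q, R, tol=1e-12, max_iter=100_000):
    """Fixed-point iteration of the Riccati map; converges to the ARE solution P*."""
    P = Q.copy()
    for _ in range(max_iter):
        P_next = riccati_step(P, A, B, Q, R)
        if np.max(np.abs(P_next - P)) < tol:
            return P_next
        P = P_next
    return P

def finite_horizon_gains(A, B, Q, R, Q_N, N):
    """Backward RDE over horizon N; returns the time-varying gains [K_0, ..., K_{N-1}]."""
    P, gains = Q_N.copy(), []
    for _ in range(N):
        gains.append(gain(P, A, B, R))   # K_t uses P_{t+1}, starting from P_N = Q_N
        P = riccati_step(P, A, B, Q, R)
    return gains[::-1]

# placeholder system, not taken from the paper
A = np.array([[1.0, 0.5], [0.0, 1.1]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)

P_star = solve_are(A, B, Q, R)
K_star = gain(P_star, A, B, R)
print("rho(A - B K*) =", max(abs(np.linalg.eigvals(A - B @ K_star))))

Q_N = P_star + np.eye(2)                 # any Q_N > P* satisfies the premise of Theorem 3.1
for N in (5, 10, 20, 40):
    K_0 = finite_horizon_gains(A, B, Q, R, Q_N, N)[0]
    print(N, np.linalg.norm(K_0 - K_star))   # decays geometrically in N
```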
### _Algorithm Design_ We propose the RHPG algorithm (cf., Algorithm 1), which first selects \(N\) by Theorem 3.1, and then sequentially decomposes the finite-\(N\)-horizon LQR problem backward in time. In particular, for every iteration indexed by \(h\in\{N-1,\cdots,0\}\), the RHPG algorithm solves an LQR problem from \(t=h\) to \(t=N\), where we only optimize for the current policy \(K_{h}\) and fix all the policies \(\{K_{t}\}\) for \(t\in\{h+1,\cdots,N-1\}\) to be the convergent solutions generated from earlier iterations. Concretely, for every \(h\), the RHPG algorithm solves the following _quadratic_ program in \(K_{h}\): \[\min_{K_{h}}\ \mathcal{J}_{h}(K_{h}) :=\mathbb{E}_{x_{h}\sim\mathcal{D}}\bigg{[}\sum_{t=h+1}^{N-1}x_{t}^{ \top}\big{(}Q+(K_{t}^{*})^{\top}RK_{t}\big{)}x_{t}\] \[+x_{h}^{\top}\big{(}Q+K_{h}^{\top}RK_{h}\big{)}x_{h}+x_{N}^{\top}Q _{N}x_{N}\bigg{]}. \tag{3.2}\] Due to the quadratic optimization landscape of (3.2) in \(K_{h}\) for every \(h\), applying any PG method with an arbitrary finite initial point (e.g., zero) would lead to convergence to the globally optimal solution of (3.2). ### _Bias of Model-Free Receding-Horizon Control_ The RHPG algorithm builds on Bellman's principle of optimality, which requires solving each iteration to the exact optimal solution. However, PG methods can only return an \(\epsilon\)-accurate solution after a finite number of steps. To generalize Bellman's principle of optimality, we analyze how computational errors accumulate and propagate in the (backward) dynamic programming process. In the theorem below, we show that if one solves every iteration of the RHPG algorithm to the \(\mathcal{O}(\epsilon)\)-neighborhood of the unique optimum, then the RHPG algorithm will output a policy that is \(\epsilon\)-close to the infinite-horizon LQR policy \(K^{*}\). **Theorem III.2**: _Choose \(N\) according to Theorem III.1 and assume that one can compute, for all \(h\in\{N-1,\cdots,0\}\) and some \(\epsilon>0\), a policy \(\widetilde{K}_{h}\) that satisfies_ \[\big{\|}\widetilde{K}_{h}-\widetilde{K}_{h}^{*}\big{\|} \sim\mathcal{O}(\epsilon)\mathcal{O}(1)\] \[+\mathcal{O}(\epsilon^{\frac{3}{4}})\mathcal{O}(\texttt{poly}( \text{system parameters})),\] _where \(\widetilde{K}_{h}^{*}\) is the optimum of the LQR from \(h\) to \(N\), after absorbing errors in all previous iterations of Algorithm 1. Then, the RHPG algorithm outputs a control policy \(\widetilde{K}_{0}\) that satisfies \(\big{\|}\widetilde{K}_{0}-K^{*}\big{\|}\leq\epsilon\). Furthermore, if \(\epsilon\) is sufficiently small such that \(\epsilon<1-\|A-BK^{*}\|_{*}\), then \(\widetilde{K}_{0}\) is stabilizing._ We illustrate Theorem III.2 in Figure 1 and defer its proof to SSC. Theorem III.2 provides specific error tolerance levels for every iteration of the RHPG algorithm to ensure that the output policy is at most \(\epsilon\)-away from \(K^{*}\). Then, it remains to establish the sample complexity for the convergence of (zeroth-order) PG methods in every iteration of the algorithm, which is done next. ### _PG Update and Sample Complexity_ We analyze here the sample complexity of the zeroth-order PG update in solving each iteration of the RHPG algorithm. Specifically, the zeroth-order PG update is defined as \[K_{h,i+1}=K_{h,i}-\eta_{h}\cdot\widetilde{\nabla}\mathcal{J}_{h}(K_{h,i}) \tag{3.3}\] where \(\eta_{h}>0\) is the stepsize to be determined and \(\widetilde{\nabla}\mathcal{J}_{h}(K_{h,i})\) is the estimated PG sampled from a (two-point) zeroth-order oracle. 
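The update (3.3) is simple to spell out. Below is a hedged sketch (ours; `simulate_step` and `sample_x` are stand-ins for the black-box simulator and the initial-state distribution, and the smoothing radius, stepsize, and rollout budget are symbolic choices) of a two-point zeroth-order gradient estimate for the subproblem objective (3.2), together with the resulting PG loop. In Algorithm 1 such a routine would be invoked once per index \(h=N-1,\ldots,0\), with the returned policy frozen for later iterations.

```python
import numpy as np

def rollout_cost(K_h, frozen_gains, simulate_step, sample_x, Q, R, Q_N, n_rollouts=50):
    """Monte-Carlo estimate of the subproblem objective (3.2): play K_h once,
    then the frozen gains K_{h+1},...,K_{N-1}, and charge the terminal cost Q_N."""
    total = 0.0
    for _ in range(n_rollouts):
        x = sample_x()                              # x_h ~ D (black-box sampler)
        for K in [K_h] + list(frozen_gains):
            u = -K @ x
            total += x @ Q @ x + u @ R @ u
            x = simulate_step(x, u)                 # black-box oracle for x_{t+1}
        total += x @ Q_N @ x
    return total / n_rollouts

def two_point_pg_step(K, cost_fn, r, eta, rng):
    """One zeroth-order update (3.3) built from a two-point gradient estimate."""
    U = rng.standard_normal(K.shape)
    U /= np.linalg.norm(U)                          # uniform direction on the unit sphere
    g_hat = K.size / (2.0 * r) * (cost_fn(K + r * U) - cost_fn(K - r * U)) * U
    return K - eta * g_hat

def solve_subproblem(K_init, cost_fn, r, eta, n_iters, seed=0):
    """Runs the PG update (3.3) from an arbitrary finite initial policy (e.g., zero)."""
    rng = np.random.default_rng(seed)
    K = np.array(K_init, dtype=float)
    for _ in range(n_iters):
        K = two_point_pg_step(K, cost_fn, r, eta, rng)
    return K
```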
We formally present the sample complexity result in the following proposition. **Proposition III.3**: _For all \(h\in\{0,\cdots,N-1\}\), choose a constant smoothing radius \(r_{h}\sim\mathcal{O}(\epsilon)\) and a constant stepsize \(\eta_{h}\sim\mathcal{O}(\epsilon^{2})\). Then, the zeroth-order PG update (3.3) converges after \(T_{h}\sim\mathcal{O}(\frac{1}{\epsilon^{2}}\log(\frac{1}{\delta\epsilon^{2}}))\) iterations in the sense that \(\big{\|}K_{h,T_{h}}-\widetilde{K}_{h}^{*}\big{\|}\leq\epsilon\) with a probability of at least \(1-\delta\)._ For completeness, we provide a supplementary proof of Proposition III.3 in SSD, which mostly follows existing results in the literature [4]. Combining Theorem III.2 with Proposition III.3, we conclude that if we spend \(\widetilde{\mathcal{O}}(\epsilon^{-2}\log(\delta^{-1}))\) iterations in solving every subproblem to \(\mathcal{O}(\epsilon)\)-accuracy with a probability of \(1-\delta\), for all \(h\in\{0,\cdots,N-1\}\), then the RHPG algorithm will output a \(\widetilde{K}_{0}\) that satisfies \(\|\widetilde{K}_{0}-K^{*}\|\leq\epsilon\) with a probability of at least \(1-N\delta\). By (3.1), this implies that the total iteration complexity of RHPG is also \(\widetilde{\mathcal{O}}(\epsilon^{-2}\log(\delta^{-1}))\) with the dependence on various system parameters being polynomial. We further provide a discussion on the tradeoffs in selecting \(N\) to balance minimizing finite-to-infinite error and minimizing errors in inexact dynamic programming in SSD. To compare our sample complexity bound with the sharpest result in the literature [4], our dependence on \(\epsilon\) matches that of [4]. Both our sample complexity and that of [4] have polynomial dependence on system parameters. However, it is not clear how to compare the polynomial dependencies between our bounds and that of [4], and these polynomial factors might affect the overall computational efficiency of both algorithms in a major way. We leave this comparison as an important topic for future research.

Fig. 1: We show that the output policy \(\widetilde{K}_{0}\) can be made \(\epsilon\)-close to \(K^{*}\) in two steps. First, Theorem III.1 proves that \(K_{0}^{*}\) is \(\epsilon\)-close to \(K^{*}\) by selecting \(N\) accordingly. Then, Theorem III.2 analyzes the backward propagation of the computational errors from solving each subproblem, denoted as \(\delta_{t}:=\widetilde{K}_{t}-\widetilde{K}_{t}^{*}\) for all \(t\), where \(\widetilde{K}_{t}^{*}\) represents the current optimal LQR policy after absorbing errors from all previous iterations.

## 4 Numerical Experiments

We verify our theories on a scalar linear system studied in [4], where \(A=5\), \(B=0.33\), \(Q=R=1\), and the unique optimal LQR policy is \(K^{*}=14.5482\). For the PG method in [3, 4] to converge in this simple setting, one has to initialize with a policy \(K_{0}\) that satisfies \(K_{0}\in\underline{\mathcal{K}}:=\{K\mid 12.12<K\pm r<18.18\}\), which is necessary to prevent the zeroth-order oracle with a smoothing radius of \(r\) from perturbing \(K_{0}\) outside of the stabilizing region \(\mathcal{K}\) during the first iteration of the PG update. In contrast, we initialize the PG updates in Algorithm 1 with a zero policy \(K_{h}=0\), set \(Q_{N}=3\), and choose \(N=\texttt{ceil}(\log(\epsilon^{-1}))\) according to (3.1).
Furthermore, we choose \(r_{h}=\sqrt{\epsilon}\), select a constant stepsize in each iteration of the RHPG algorithm, and run the algorithm to solve the LQR problem under six different \(\epsilon\), namely \(\epsilon\in\{10^{-3},3.16\times 10^{-3},10^{-2},3.16\times 10^{-2},10^{-1},3.16 \times 10^{-1}\}\). We apply the zeroth-order PG update in solving every subproblem to \(\left\|\widetilde{K}_{h}-\widetilde{K}_{h}^{*}\right\|\leq\epsilon\). As shown in Figure 2, the empirical observation of the iteration complexity of RHPG (right) for the convergence in policy (left) is around \(\mathcal{O}(\epsilon^{-2})\) under varying \(\epsilon\), which corroborates our theoretical findings. ## 5 Conclusion We have revisited discrete-time LQR from the perspective of RHPG and provided a fine-grained sample complexity for RHPG to learn a control policy that is both stabilizing and \(\epsilon\)-close to the optimal LQR policy. Our result demonstrates the potential of RHPG in addressing various tasks in linear control and estimation with streamlined analyses.
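For readers who want to reproduce the flavor of the experiment in Section 4, the following self-contained sketch (ours, not the authors' code; the stepsize, iteration budget, and helper names are illustrative rather than the tuned values behind Figure 2) runs the receding-horizon procedure of Algorithm 1 on the scalar system above, evaluating each subproblem cost by an exact finite-horizon rollout and using the two-point zeroth-order update initialized at zero.

```python
import numpy as np

A, B, Q, R, Q_N = 5.0, 0.33, 1.0, 1.0, 3.0
rng = np.random.default_rng(0)

def k_star():
    """Reference value of K*: scalar ARE solved by fixed-point iteration."""
    P = Q
    for _ in range(10_000):
        P = Q + A * P * A - (A * P * B) ** 2 / (R + B * P * B)
    return B * P * A / (R + B * P * B)

def subproblem_cost(K_h, frozen, x0=1.0):
    """Exact finite-horizon cost of (3.2) for the scalar system:
    play K_h once, then the already-learned (frozen) gains."""
    x, cost = x0, 0.0
    for K in [K_h] + frozen:
        u = -K * x
        cost += Q * x * x + R * u * u
        x = A * x + B * u
    return cost + Q_N * x * x

def rhpg(eps=1e-2, iters_per_step=2_000, eta=1e-3):
    N = int(np.ceil(np.log(1.0 / eps)))      # horizon chosen as in the experiments
    r = np.sqrt(eps)                          # smoothing radius
    frozen = []                               # gains K_{h+1}, ..., K_{N-1}
    for _ in range(N):                        # h = N-1, ..., 0
        K = 0.0                               # zero initialization: no stabilizing policy needed
        for _ in range(iters_per_step):
            u = 1.0 if rng.random() < 0.5 else -1.0       # random direction in one dimension
            g_hat = (subproblem_cost(K + r * u, frozen)
                     - subproblem_cost(K - r * u, frozen)) / (2.0 * r) * u
            K -= eta * g_hat
        frozen.insert(0, K)
    return frozen[0]

print("K*  =", k_star())    # approximately 14.5482
print("K_0 =", rhpg())      # should land close to K*
```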
2305.17385
Finding Diameter-Reducing Shortcuts in Trees
In the \emph{$k$-Diameter-Optimally Augmenting Tree Problem} we are given a tree $T$ of $n$ vertices as input. The tree is embedded in an unknown \emph{metric} space and we have unlimited access to an oracle that, given two distinct vertices $u$ and $v$ of $T$, can answer queries reporting the cost of the edge $(u,v)$ in constant time. We want to augment $T$ with $k$ shortcuts in order to minimize the diameter of the resulting graph. For $k=1$, $O(n \log n)$ time algorithms are known both for paths [Wang, CG 2018] and trees [Bil\`o, TCS 2022]. In this paper we investigate the case of multiple shortcuts. We show that no algorithm that performs $o(n^2)$ queries can provide a better than $10/9$-approximate solution for trees for $k\geq 3$. For any constant $\varepsilon > 0$, we instead design a linear-time $(1+\varepsilon)$-approximation algorithm for paths and $k = o(\sqrt{\log n})$, thus establishing a dichotomy between paths and trees for $k\geq 3$. We achieve the claimed running time by designing an ad-hoc data structure, which also serves as a key component to provide a linear-time $4$-approximation algorithm for trees, and to compute the diameter of graphs with $n + k - 1$ edges in time $O(n k \log n)$ even for non-metric graphs. Our data structure and the latter result are of independent interest.
Davide Bilò, Luciano Gualà, Stefano Leucci, Luca Pepè Sciarria
2023-05-27T06:35:09Z
http://arxiv.org/abs/2305.17385v1
# Finding Diameter-Reducing Shortcuts in Trees ###### Abstract In the \(k\)_-Diameter-Optimally Augmenting Tree Problem_ we are given a tree \(T\) of \(n\) vertices as input. The tree is embedded in an unknown _metric_ space and we have unlimited access to an oracle that, given two distinct vertices \(u\) and \(v\) of \(T\), can answer queries reporting the cost of the edge \((u,v)\) in constant time. We want to augment \(T\) with \(k\) shortcuts in order to minimize the diameter of the resulting graph. For \(k=1\), \(O(n\log n)\) time algorithms are known both for paths [20] and trees [17]. In this paper we investigate the case of multiple shortcuts. We show that no algorithm that performs \(o(n^{2})\) queries can provide a better than \(10/9\)-approximate solution for trees for \(k\geq 3\). For any constant \(\varepsilon>0\), we instead design a linear-time \((1+\varepsilon)\)-approximation algorithm for paths and \(k=o(\sqrt{\log n})\), thus establishing a dichotomy between paths and trees for \(k\geq 3\). We achieve the claimed running time by designing an ad-hoc data structure, which also serves as a key component to provide a linear-time \(4\)-approximation algorithm for trees, and to compute the diameter of graphs with \(n+k-1\) edges in time \(O(nk\log n)\) even for non-metric graphs. Our data structure and the latter result are of independent interest. Keywords:Tree diameter augmentation Fast diameter computation Approximation algorithms Time-efficient algorithms ## 1 Introduction The \(k\)_-Diameter-Optimally Augmenting Tree Problem_ (\(k\)-doat) is defined as follows. The input consists of a tree \(T\) of \(n\) vertices that is embedded in an _unknown_ space \(c\). The space \(c\) associates a non-negative cost \(c(u,v)\) to each pair of vertices \(u,v\in V(T)\), with \(u\neq v\). The goal is to quickly compute a set \(S\) of \(k\)_shortcuts_ whose addition to \(T\) minimizes the _diameter_ of the resulting graph \(T+S\). The diameter of a graph \(G\), with a cost of \(c(u,v)\) associated with each edge \((u,v)\) of \(G\), is defined as \(\operatorname{diam}(G):=\max_{u,v\in V(G)}d_{G}(u,v)\), where \(d_{G}(u,v)\) is the distance in \(G\) between \(u\) and \(v\) that is measured w.r.t. the edge costs. We assume to have access to an oracle that answers a query about the cost \(c(u,v)\) of any tree-edge/shortcut \((u,v)\) in constant time. When \(c\) satisfies the triangle inequality, i.e., \(c(u,v)\leq c(u,w)+c(w,v)\) for every three distinct vertices \(u,v,w\in V(T)\), we say that \(T\) is embedded in a _metric space_, and we refer to the problem as _metric_\(k\)-doat. \(k\)-doat and metric \(k\)-doat have been extensively studied for \(k=1\). \(k\)-doat has a trivial lower bound of \(\Omega(n^{2})\) on the number of queries needed to find \(S\) even if one is interested in any finite approximation and the input tree is actually a path.3 On the positive side, it can be solved in \(O(n^{2})\) time and \(O(n\log n)\) space [5], or in \(O(n^{2}\log n)\) time and \(O(n)\) space [24]. Interestingly enough, this second algorithm uses, as a subroutine, a linear time algorithm to compute the diameter of a unicycle graph, i.e., a connected graph with \(n\) edges. Footnote 3: As an example, consider an instance \(I\) consisting of two subpaths \(P_{1},P_{2}\) of \(\Theta(n)\) edges each and cost \(0\). The subpaths are joined by an edge of cost \(1\), and the cost of all shortcuts is \(1\). 
Any algorithm that does not examine the cost of some shortcut \((u,v)\) with one endpoint in \(P_{1}\) and the other endpoint in \(P_{2}\) cannot distinguish between \(I\) and the instance \(I^{\prime}\) obtained from \(I\) by setting \(c(u,v)=0\). The claim follows by noticing that there are \(\Theta(n^{2})\) such shortcuts \((u,v)\) and that the optimal diameters of \(I\) and \(I^{\prime}\) are \(1\) and \(0\), respectively. Metric \(k\)-doat has been introduced in [14] where an \(O(n\log^{3}n)\)-time algorithm is provided for the special case in which the input is a path. In the same paper the authors design a less efficient algorithm for trees that runs in \(O(n^{2}\log n)\) time. The upper bound for the path case has then been improved to \(O(n\log n)\) in [23], while in [5] it is shown that the same asymptotic upper bound can also be achieved for trees. Moreover, the latter work also gives a \((1+\varepsilon)\)-approximation algorithm for trees with a running time of \(O(n+\frac{1}{\varepsilon}\log\frac{1}{\varepsilon})\).

Our results. In this work we focus on metric \(k\)-doat for \(k>1\). In such a case one might hope that \(o(n^{2})\) queries are enough for an exact algorithm. Unfortunately, we show that this is not the case for \(3\leq k=o(n)\), even if one is only searching for a \(\sigma\)-approximate solution with \(\sigma<\frac{10}{9}\). Our lower bound is unconditional, holds for trees (with many leaves), and trivially implies an analogous lower bound on the time complexity of any \(\sigma\)-approximation algorithm. Motivated by the above lower bound, we focus on approximate solutions and we show two linear-time algorithms with approximation ratios of \(4\) and \(1+\varepsilon\), for any constant \(\varepsilon>0\). The latter algorithm only works for trees with few leaves and \(k=o(\sqrt{\log n})\). This establishes a dichotomy between paths and trees for \(k\geq 3\): paths can be approximated within a factor of \(1+\varepsilon\) in linear time, while trees (with many leaves) require \(\Omega(n^{2})\) queries (and hence time) to achieve a better than \(10/9\) approximation. Notice that this is not the case for \(k=1\). We leave open the problems of understanding whether exact algorithms using \(o(n^{2})\) queries can be designed for 2-doat on trees and \(k\)-doat on paths. To achieve the claimed linear-time complexities of our approximation algorithms, we develop an ad-hoc data structure which allows us to quickly compute a small set of well-spread vertices with large pairwise distances. These vertices are used as potential endvertices for the shortcuts. Interestingly, our data structure can also be used to compute the diameter of a _non-metric_ graph with \(n+k-1\) edges in \(O(nk\log n)\) time. For \(k=O(1)\), this extends the \(O(n)\)-time algorithm in [24] for computing the diameter of a unicycle graph, with only a logarithmic-factor slowdown. We deem this result of independent interest as it could serve as a tool to design efficient algorithms for \(k\)-doat, as shown for \(k=1\) in [24].

Other related work. The problem of minimizing the diameter of a graph via the addition of \(k\) shortcuts has been extensively studied in the classical setting of optimization problems. This problem is shown to be \(\mathsf{NP}\)-hard [21], not approximable within logarithmic factors unless \(\mathsf{P}=\mathsf{NP}\) [6], and some of its variants - parameterized w.r.t. the overall cost of the added shortcuts and w.r.t. the resulting diameter - are even W[2]-hard [12, 11].
As a consequence, the literature has focused on providing polynomial time approximation algorithms for all these variants [1, 19, 1, 6, 9, 10]. This differs from \(k\)-doat, where the emphasis is on \(o(n^{2})\)-time algorithms. Finally, variants of 1-doat in which one wants to minimize either the radius or the continuous diameter of the augmented tree have been studied [7, 8, 15, 16, 18, 20].

Paper organization. In Section 2 we present our non-conditional lower bound. Section 3 is devoted to the algorithm for computing the diameter of a graph with \(n+k-1\) edges in time \(O(nk\log n)\). This relies on our data structure, which is described in Section 3.2. Finally, in Section 4 we provide our linear-time approximation algorithms for metric \(k\)-doat that use our data structure. The implementation of our data structure and its analysis, along with some proofs throughout the paper, are deferred to the appendix.

## 2 Lower Bound

This section is devoted to proving our lower bound on the number of queries needed to solve \(k\)-doat on trees for \(k\geq 3\). We start by considering the case \(k=3\).

Figure 1: The graph \(G\) of the lower bound construction. The edges of the tree \(T\) are solid and have cost \(2\); the non-tree edges are dashed and their colors reflect the different types of augmenting edges as defined in the proof of Lemma 1. To reduce clutter, only some of the augmenting edges are shown.

Lemma 1: _For any sufficiently large \(n\), there is a class \(\mathcal{G}\) of instances of metric \(3\)-doat satisfying the following conditions:_ 1. _In each instance_ \(\langle T,c\rangle\in\mathcal{G}\)_,_ \(T\) _is a tree with_ \(\Theta(n)\) _vertices and all tree-edge/shortcut costs assigned by_ \(c\) _are positive integers;_ 2. _No algorithm can decide whether an input instance_ \(\langle T,c\rangle\) _from_ \(\mathcal{G}\) _admits a solution_ \(S\) _such that_ \(\operatorname{diam}(T+S)\leq 9\) _using_ \(o(n^{2})\) _queries._ Proof.: We first describe the \(3\)-doat instances. All instances \(\langle T,c\rangle\in\mathcal{G}\) share the same tree \(T\), and only differ in the cost function \(c\). The tree \(T\) is defined as follows: consider \(4\) identical stars, each having \(n\) vertices, and denote by \(x_{i}\) the center of the \(i\)-th star, by \(L_{i}\) the set of its leaves, and let \(X_{i}=L_{i}\cup\{x_{i}\}\). The tree \(T\) is obtained by connecting each pair \(x_{i}\), \(x_{i+1}\) of centers with a path of three edges \((x_{i},v_{i}),(v_{i},u_{i+1})\), and \((u_{i+1},x_{i+1})\), as in Figure 1. All the tree edges \((u,v)\) have the same cost \(c(u,v)=2\). The class \(\mathcal{G}\) contains an instance \(I_{a,b}\) for each pair \((a,b)\in L_{2}\times L_{3}\), and an additional instance \(I\). Fix \((a,b)\in L_{2}\times L_{3}\). The costs of the shortcuts in \(I_{a,b}\) are defined w.r.t. a graph \(G_{a,b}\) obtained by augmenting \(T\) with the following edges: * All edges \((x_{1},y)\) with \(y\in L_{2}\), and all edges \((x_{4},y)\) with \(y\in L_{3}\) with cost \(2\). * The edges \((y,z)\) for every \(y\in L_{2}\) and every \(z\in L_{3}\). The cost of \((a,b)\) is \(1\), while the cost of all other edges is \(2\); * The edges \((y,z)\) for every distinct pair of vertices \(y,z\) that are both in \(L_{2}\) or both in \(L_{3}\). The cost of all such edges is \(3\).
* All edges \((x_{2},y)\) with \(y\in L_{3}\), and all edges \((x_{3},y)\) with \(y\in L_{2}\) with cost \(3\); * All edges \((x_{1},y)\) with \(y\in L_{3}\), and all edges \((x_{4},y)\) with \(y\in L_{2}\) with cost \(3\). We define the cost \(c(u,v)\) of all remaining shortcuts \((u,v)\) as \(d_{G_{a,b}}(u,v)\). We now argue that \(c\) satisfies the triangle inequality. Consider any triangle in \(G_{a,b}\) having vertices \(u,v,\) and \(w\). We show that the triangle inequality holds for the generic edge \((u,v)\). As the costs of all edges of \(G_{a,b}\), except for the shortcut \((a,b)\), are either \(2\) or \(3\) and since \(c(a,b)=1\), we have that \(c(u,v)\leq 3\leq c(u,w)+c(w,v)\). Any other triangle clearly satisfies the triangle inequality as it contains one or more edges that are not in \(G_{a,b}\) and whose costs are computed using distances in \(G_{a,b}\). To define the cost function \(c\) of the remaining instance \(I\) of \(\mathcal{G}\), choose any \((a,b)\in L_{2}\times L_{3}\), and let \(G\) be the graph obtained from \(G_{a,b}\) by changing the cost of \((a,b)\) from \(1\) to \(2\). We define \(c(u,v)=d_{G}(u,v)\). Notice that, in \(G\), all edges \((u,v)\in L_{2}\times L_{3}\) have cost \(2\) and that the above arguments also show that \(c\) still satisfies the triangle inequality. Since our choice of \(I_{a,b}\) and \(I\) trivially satisfies (i), we now focus on proving (ii). We start by showing the following facts: (1) each instance \(I_{a,b}\) admits a solution \(S\) such that \(\operatorname{diam}(T+S)\leq 9\); (2) all solutions \(S\) to \(I\) are such that \(\operatorname{diam}(T+S)\geq 10\); (3) if \(u\neq a\) or \(v\neq b\) then \(d_{G}(u,v)=d_{G_{a,b}}(u,v)\). To see (1), consider the set \(S=\{(x_{1},a),(a,b),(b,x_{4})\}\) of \(3\) shortcuts. We can observe that \(\operatorname{diam}(T+S)\leq 9\). This is because \(d_{T+S}(x_{i},x_{j})\leq 5\) for every two star centers \(x_{i}\) and \(x_{j}\) (see also Figure 1). Moreover, each vertex \(u\in L_{i}\) is at a distance of at most \(2\) from \(x_{i}\). Therefore, for every two vertices \(u\in X_{i}\) and \(v\in X_{j}\), we have that \(d_{T+S}(u,v)\leq d_{T}(u,x_{i})+d_{T+S}(x_{i},x_{j})+d_{T}(x_{j},v)\leq 2+5+2=9\). Concerning (2), let us consider any solution \(S\) of \(3\) shortcuts and define \(B_{i}\) as the set of vertices \(X_{i}\) plus the vertices \(u_{i}\) and \(v_{i}\), if they exist. We show that if there is no edge in \(S\) between \(B_{i}\) and \(B_{i+1}\) for some \(i=1,\ldots,3\), then \(\operatorname{diam}(T+S)\geq 10\). To this aim, suppose that this is the case, and let \(u\in L_{i}\) and \(v\in L_{i+1}\) be two vertices that are not incident to any shortcut in \(S\). Notice that the shortest path in \(T+S\) between \(u\) and \(v\) traverses \(x_{i}\) and \(x_{i+1}\), and hence \(d_{T+S}(u,v)=d_{T+S}(u,x_{i})+d_{T+S}(x_{i},x_{i+1})+d_{T+S}(x_{i+1},v)=4+d_{T+S }(x_{i},x_{i+1})\). We now argue that \(d_{T+S}(x_{i},x_{i+1})\geq 6\). Indeed, we have that all edges in \(T+S\) cost at least \(2\), therefore \(d_{T+S}(x_{i},x_{i+1})<6\) would imply that the shortest path \(\pi\) from \(x_{i}\) to \(x_{i+1}\) in \(T+S\) traverses a single intermediate vertex \(z\). By assumption, \(z\) must belong to some \(B_{j}\) for \(j\not\in\{i,i+1\}\). However, by construction of \(G\), we have \(c(x_{i},z)\geq 3\) and \(c(x_{i+1},z)\geq 3\) for every such \(z\in B_{j}\). Hence, we can assume that we have a single shortcut edge between \(B_{i}\) and \(B_{i+1}\), for \(i=1,\ldots,3\). Let \(x\in L_{1}\) (resp. 
\(y\in L_{4}\)) be such that no shortcut in \(S\) is incident to \(x\) (resp. \(y\)). Notice that every path in \(T+S\) from \(x\) to \(y\) traverses at least \(5\) edges and, since all edges cost at least \(2\), we have \(\operatorname{diam}(T+S)\geq d_{T+S}(x,y)\geq 10\).

We now prove (3). Let \(u,v\) be two vertices such that there is a shortest path \(\pi\) in \(G_{a,b}\) from \(u\) to \(v\) traversing the edge \((a,b)\). We show that there is another shortest path from \(u\) to \(v\) in \(G_{a,b}\) that avoids edge \((a,b)\). Consider a subpath \(\pi^{\prime}\) of \(\pi\) consisting of two edges, one of which is \((a,b)\). Let \(w\neq a,b\) be one of the endvertices of \(\pi^{\prime}\) and let \(w^{\prime}\in\{a,b\}\) be the other endvertex of \(\pi^{\prime}\). Observe that the edges \((a,b)\), \((a,w)\), and \((w,b)\) form a triangle in \(G_{a,b}\) and \(c(a,w),c(w,b)\in\{2,3\}\). Since \(c(a,b)=1\), we have \(c(w,a)\leq c(w,b)+c(b,a)\) and \(c(w,b)\leq c(w,a)+c(a,b)\). This implies that we can shortcut \(\pi^{\prime}\) with the edge \((w,w^{\prime})\), thus obtaining another shortest path from \(u\) to \(v\) that does not use the edge \((a,b)\).

We are finally ready to prove (ii). We suppose towards a contradiction that some algorithm \(\mathcal{A}\) requires \(o(n^{2})\) queries and, given any instance \(\langle T,c\rangle\in\mathcal{G}\), decides whether \(\operatorname{diam}(T+S)\leq 9\) for some set \(S\) of \(3\) shortcuts.4 By (2), \(\mathcal{A}\) with input \(I\) must report that there is no feasible set of shortcuts that achieves diameter at most \(9\). Since \(\mathcal{A}\) performs \(o(n^{2})\) queries, for all sufficiently large values of \(n\), there must be an edge \((a,b)\) with \(a\in L_{2}\) and \(b\in L_{3}\) whose cost \(c(a,b)\) is not inspected by \(\mathcal{A}\). By (3), the costs of all the edges \((u,v)\) with \((u,v)\neq(a,b)\) are the same in the two instances \(I\) and \(I_{a,b}\) and hence \(\mathcal{A}\) must report that \(I_{a,b}\) admits no set of shortcuts \(S\) such that \(\operatorname{diam}(T+S)\leq 9\). This contradicts (1).

Footnote 4: For the sake of simplicity, we consider deterministic algorithms only. However, standard arguments can be used to prove a similar claim also for randomized algorithms.

With some additional technicalities we can generalize Lemma 1 to \(3\leq k=o(n)\), which immediately implies a lower bound on the number of queries needed by any \(\sigma\)-approximation algorithm with \(\sigma<10/9\).

Lemma 2: _For any sufficiently large \(n\) and \(3\leq k=o(n)\), there is a class \(\mathcal{G}\) of instances of metric \(k\)-doat satisfying the following conditions:_

1. _In each instance_ \(\langle T,c\rangle\in\mathcal{G}\)_,_ \(T\) _is a tree with_ \(\Theta(n)\) _vertices and all tree-edge/shortcut costs assigned by_ \(c\) _are positive integers;_
2. _No algorithm can decide whether an input instance_ \(\langle T,c\rangle\) _from_ \(\mathcal{G}\) _admits a solution_ \(S\) _such that_ \(\operatorname{diam}(T+S)\leq 9\) _using_ \(o(n^{2})\) _queries._

Theorem 3.1: _There is no \(o(n^{2})\)-query \(\sigma\)-approximation algorithm for metric \(k\)-doat with \(\sigma<10/9\) and \(3\leq k=o(n)\)._

## 3 Fast diameter computation

In this section we describe an algorithm that computes the diameter of a graph on \(n\) vertices and \(n+k-1\) edges, with non-negative edge costs, in \(O(nk\log n)\) time.
Before describing the general solution, we consider, as a warm-up, the case in which the graph is obtained by augmenting a path \(P\) of \(n\) vertices with \(k\) edges.

### Warm-up: diameter on augmented paths

Given a path \(P\) and a set \(S\) of \(k\) shortcuts, we show how to compute the diameter of \(P+S\). We do that by computing the _eccentricity_ \(\mathcal{E}_{s}:=\max_{v\in V(P)}d_{P+S}(s,v)\) of each vertex \(s\). Clearly, the diameter of \(P+S\) is the maximum among the computed vertex eccentricities, i.e., \(\operatorname{diam}(P+S)=\max_{s\in V(P)}\mathcal{E}_{s}\). Given a subset of vertices \(X\subseteq V(P)\), define \(\mathcal{E}_{s}(X):=\max_{v\in X}d_{P+S}(s,v)\), i.e., \(\mathcal{E}_{s}(X)\) is the eccentricity of \(s\) restricted to \(X\). In the rest of the section we focus on a fixed vertex \(s\).

We begin by computing a condensed weighted (multi-)graph \(G^{\prime}\). To this aim, we say that a vertex \(v\) is _marked as terminal_ if it satisfies at least one of the following conditions: (i) \(v=s\), (ii) \(v\) is an endvertex of some shortcut in \(S\), or (iii) \(v\) is an endpoint of the path. Traverse \(P\) from one endpoint to the other and let \(v_{1},\ldots,v_{h}\) be the marked vertices, in order of traversal. We set the vertex set of \(G^{\prime}\) as the set \(M\) of all vertices marked as terminals, while the edge set of \(G^{\prime}\) contains (i) all edges \(e_{i}=(v_{i},v_{i+1})\) for \(i=1,\ldots,h-1\), where the cost of edge \(e_{i}\) is \(d_{P}(v_{i},v_{i+1})\), and (ii) all edges in \(S\), with their respective costs. The graph \(G^{\prime}\) has \(O(k)\) vertices and edges, and it can be built in \(O(k)\) time after a one-time preprocessing of \(P\) which requires \(O(n)\) time. See Figure 2 for an example. We now compute all the distances from \(s\) in \(G^{\prime}\) in \(O(k\log k)\) time by running Dijkstra's algorithm. Since our construction of \(G^{\prime}\) ensures that \(d_{P+S}(s,v)=d_{G^{\prime}}(s,v)\) for every terminal \(v\), we now know all the distances \(\alpha_{i}:=d_{P+S}(s,v_{i})\) with \(v_{i}\in M\).

For \(i=1,\ldots,h-1\), define \(P_{i}\) as the subpath of \(P\) between \(v_{i}\) and \(v_{i+1}\). In order to find \(\mathcal{E}_{s}(V(P))\), we will separately compute the quantities \(\mathcal{E}_{s}(P_{1}),\ldots,\mathcal{E}_{s}(P_{h-1})\).5 Fix an index \(i\) and let \(u\) be a vertex in \(P_{i}\). Consider a shortest path \(\pi\) from \(s\) to \(u\) in \(P+S\) and let \(x\) be the last marked vertex traversed by \(\pi\) (this vertex always exists since \(s\) is marked). We can decompose \(\pi\) into two subpaths: a shortest path \(\pi_{x}\) from \(s\) to \(x\), and a shortest path \(\pi_{u}\) from \(x\) to \(u\). By the choice of \(x\), \(x\) is the only marked vertex in \(\pi_{u}\). This means that \(x\) is either \(v_{i}\) or \(v_{i+1}\) and, in particular: \[d_{P+S}(s,u)=\min\left\{\alpha_{i}+d_{P_{i}}(v_{i},u),\alpha_{i+1}+d_{P_{i}}(v_{i+1},u)\right\}.\] Hence, the farthest vertex \(u\) from \(s\) among those in \(P_{i}\) is the one that maximizes the right-hand side of the above formula, i.e.: \[\mathcal{E}_{s}(P_{i})=\max_{u\in P_{i}}\min\left\{\alpha_{i}+d_{P_{i}}(v_{i},u),\alpha_{i+1}+d_{P_{i}}(v_{i+1},u)\right\}.\] We now describe how such a maximum can be computed efficiently. Let \(u_{j}\) denote the \(j\)-th vertex encountered when \(P_{i}\) is traversed from \(v_{i}\) to \(v_{i+1}\).
The key observation is that the quantity \(\ell(j)=\alpha_{i}+d_{P_{i}}(v_{i},u_{j})\) is monotonically non-decreasing w.r.t. \(j\), while the quantity \(r(j)=\alpha_{i+1}+d_{P_{i}}(v_{i+1},u_{j})\) is monotonically non-increasing w.r.t. \(j\). Since both \(\ell(j)\) and \(r(j)\) can be evaluated in constant time once \(\alpha_{i}\) and \(\alpha_{i+1}\) are known, we can binary search for the smallest index \(j\geq 2\) such that \(\ell(j)\geq r(j)\) (see Figure 2 (c)). Notice that index \(j\) always exists since the condition is satisfied for \(u_{j}=v_{i+1}\). This requires \(O(\log|P_{i}|)\) time and allows us to return \(\mathcal{E}_{s}(P_{i})=\max\{\ell(j-1),r(j)\}\). After the linear-time preprocessing, the time needed to compute the eccentricity of a single vertex \(s\) is then \(O(k\log k+\sum_{i=1}^{h-1}\log|P_{i}|)\). Since \(\sum_{i=1}^{h-1}|P_{i}|<n+h\) and \(h=O(k)\), this can be upper bounded by \(O(k\log k+k\log\frac{n}{k})=O(k\log\max\{k,\frac{n}{k}\})\). Repeating the above procedure for all vertices \(s\), and accounting for the preprocessing time, we can compute the diameter of \(P+S\) in time \(O(nk\log\max\{k,\frac{n}{k}\})\).

Figure 2: An example showing how to compute the eccentricity of a vertex \(s\) in \(P+S\). (\(a\)) Edges of \(P\) are solid, while edges in \(S\) are dashed. Vertices marked as terminals are white. (\(b\)) The condensed graph \(G^{\prime}\). (\(c\)) Every terminal \(v_{i}\) is labeled with its corresponding \(\alpha_{i}\) and the arrows point to the vertices \(u_{j}\) in each (shaded) path \(P_{i}\) such that \(j\geq 2\) is the smallest index with \(\ell(j)\geq r(j)\).

### Diameter on augmented trees

We are now ready to describe the algorithm that computes the diameter of a tree \(T\) augmented with a set \(S\) of \(k\) shortcuts. The key idea is to use the same framework used for the path. Roughly speaking, we define a condensed graph \(G^{\prime}\) over a set of \(O(k)\) marked vertices, as we did for the path. Next, we use \(G^{\prime}\) to compute in \(O(k\log k)\) time the distance \(\alpha_{v}:=d_{T+S}(s,v)\) for every marked vertex \(v\), and then we use these distances to compute the eccentricity \(\mathcal{E}_{s}\) of \(s\) in \(T+S\). This last step is the tricky one. In order to efficiently manage it, we design an ad-hoc data structure which will be able to compute \(\mathcal{E}_{s}\) in \(O(k\log n)\) time once all values \(\alpha_{v}\) are known. In the following we provide a description of the operations supported by our data structure, and we show how they can be used to compute the diameter of \(T+S\). In Appendix 0.C we describe how the data structure can be implemented.

#### 3.2.1 An auxiliary data structure: description

Given a tree \(T\) that is rooted in some vertex \(r\) and two vertices \(u,v\) of \(T\), we denote by \(\textsc{lca}_{T}(u,v)\) their _lowest common ancestor_ in \(T\), i.e., the deepest node (w.r.t. the hop-distance) that is an ancestor of both \(u\) and \(v\) in \(T\). Moreover, given a subset of vertices \(M\), we define the _shrunk_ version of \(T\) w.r.t. \(M\) as the tree \(T(M)\) whose vertex set consists of \(r\) along with all vertices in \(\{\textsc{lca}_{T}(u,v)\mid u,v\in M\}\) (notice that this includes all vertices \(v\in M\) since \(\textsc{lca}(v,v)=v\)), and whose edge set contains an edge \((u,v)\) for each pair of distinct vertices \(u,v\) such that the only vertices in the unique path between \(u\) and \(v\) in \(T\) that belong to \(T(M)\) are the endvertices \(u\) and \(v\).
We call the vertices in \(M\) _terminal_ vertices, while we refer to the vertices that are in \(T(M)\) but not in \(M\) as _Steiner_ vertices. See Figure 3 for an example. Our data structure can be built in \(O(n)\) time and supports the following operations, where \(k\) refers to the current number of terminal vertices:

**MakeTerminal(\(v\)):** Mark a given vertex \(v\) of \(T\) as a terminal vertex. Set \(\alpha_{v}=0\). This operation requires \(O(\log n)\) time.

**Shrink():** Report the shrunk version of \(T\) w.r.t. the set \(M\) of terminal vertices, i.e., report \(T(M)\). This operation requires \(O(k)\) time.

Figure 3: The rooted binary tree \(T\) with root \(r\) is depicted on the left side. The set of terminal vertices is \(M=\{c,f,g,h,i,j,k\}\) and is represented using white vertices. The square vertices are the Steiner vertices. The corresponding shrunk tree \(T(M)\) is depicted on the right side.

**SetAlpha\((v,\alpha)\):** Given a terminal vertex \(v\), set \(\alpha_{v}=\alpha\). This requires \(O(1)\) time.

**ReportFarthest():** Return \(\max_{u\in T}\min_{v\in M}(\alpha_{v}+d_{T}(v,u))\), where \(M\) is the current set of terminal vertices, along with a pair of vertices \((v,u)\) for which the above maximum is attained. This operation requires \(O(k\log n)\) time.

#### 3.2.2 Our algorithm

In this section we show how to compute the diameter of \(T+S\) in time \(O(nk\log n)\). We can assume w.l.o.g. that the input tree \(T\) is binary since, if this is not the case, we can transform \(T\) into a binary tree having the same diameter once augmented with \(S\), and asymptotically the same number of vertices as \(T\). This transformation requires linear time and is described in Appendix 0.A. Moreover, we perform a linear-time preprocessing in order to be able to compute the distance \(d_{T}(u,v)\) between any pair of vertices in constant time.6

Footnote 6: This can be done by rooting \(T\) in an arbitrary vertex, noticing that \(d_{T}(u,v)=d_{T}(u,\textsc{lca}_{T}(u,v))+d_{T}(\textsc{lca}_{T}(u,v),v)\), and using an _oracle_ that can report lca\((u,v)\) in constant time after an \(O(n)\)-time preprocessing [17].

We use the data structure \(\mathcal{D}\) of Section 3.2, initialized with the binary tree \(T\). Similarly to our algorithm on paths, we compute the diameter of \(T+S\) by finding the eccentricity \(\mathcal{E}_{s}\) in \(T+S\) of each vertex \(s\). In the rest of this section we fix \(s\) and show how to compute \(\mathcal{E}_{s}\). We start by considering all vertices \(x\) such that either \(x=s\) or \(x\) is an endvertex of some edge in \(S\), and we mark all such vertices as terminals in \(\mathcal{D}\) (we recall that these vertices also form the set \(M\) of terminals). This requires time \(O(k\log n)\). Next, we compute the distances from \(s\) in the (multi-)graph \(G^{\prime}\) defined on the shrunk tree \(T(M)\) by (i) assigning weight \(d_{T}(u,v)\) to each edge \((u,v)\) in \(T(M)\), and (ii) adding all edges in \(S\) with weight equal to their cost. This can be done in \(O(k\log k)\) time using Dijkstra's algorithm. We let \(\alpha_{v}\) denote the computed distance from \(s\) to terminal vertex \(v\). We can now find \(\mathcal{E}_{s}\) by assigning cost \(\alpha_{v}\) to each terminal \(v\) in \(\mathcal{D}\) (using the SetAlpha\((v,\alpha_{v})\) operation), and performing a ReportFarthest() query. This requires \(O(k+k\log n)=O(k\log n)\) time.
Finally, we revert \(\mathcal{D}\) to the initial state before moving on to the next vertex \(s\).7

Footnote 7: This can be done in \(O(k\log n)\) time by keeping track of all the memory words modified as a result of the operations on \(\mathcal{D}\) performed while processing \(s\), along with their initial contents. To revert \(\mathcal{D}\) to its initial state it suffices to rollback these modifications.

Theorem 3.1: _Given a graph \(G\) on \(n\) vertices and \(n+k-1\) edges with non-negative edge costs and \(k\geq 1\), we can compute the diameter of \(G\) in time \(O(nk\log n)\)._

Since there are \(\eta=\binom{n}{2}-(n-1)=\frac{(n-1)(n-2)}{2}\) possible shortcut edges, we can solve the \(k\)-doat problem by computing the diameter of \(T+S\) for each of the \(\binom{\eta}{k}=O(n^{2k})\) possible sets \(S\) of \(k\) shortcuts using the above algorithm. This yields the following result:

Corollary 1: _The \(k\)-doat problem on trees can be solved in time \(O(n^{2k+1}\cdot k\log n)\)._

## 4 Linear-time approximation algorithms

In this section we describe two \(O(n)\)-time approximation algorithms for metric \(k\)-doat. The first algorithm guarantees an approximation factor of \(4\) and runs in linear time when \(k=O\big{(}\sqrt{\frac{n}{\log n}}\big{)}\). The second algorithm computes a \((1+\varepsilon)\)-approximate solution for constant \(\varepsilon>0\), but its running time depends on the number \(\lambda\) of leaves of the input tree \(T\) and it is linear when \(\lambda=O\left(n^{\frac{1}{(2k+2)^{2}}}\right)\) and \(k=o(\sqrt{\log n})\). Both algorithms use the data structure introduced in Section 3.2 and are based on the famous idea introduced by Gonzalez to design approximation algorithms for graph clustering [13]. Gonzalez' idea, to which we will refer as Gonzalez algorithm in the following, is to compute \(h\) suitable vertices \(x_{1},\ldots,x_{h}\) of an input graph \(G\) with non-negative edge costs in a simple iterative fashion.8 The first vertex \(x_{1}\) has no constraint and can be any vertex of \(G\). Given the vertices \(x_{1},\ldots,x_{i}\), with \(i<h\), the vertex \(x_{i+1}\) is selected to maximize its distance towards \(x_{1},\ldots,x_{i}\). More precisely, \(x_{i+1}\in\arg\max_{v\in V(G)}\min_{1\leq j\leq i}d_{G}(v,x_{j})\). We now state two useful lemmas, the first of which is proved in [13].

Footnote 8: In the original algorithm by Gonzalez, these \(h\) vertices are used to define the centers of the \(h\) clusters.

Lemma 3: _Let \(G\) be a graph with non-negative edge costs and let \(x_{1},\ldots,x_{h}\) be the vertices computed by Gonzalez algorithm on input \(G\) and \(h\). Let \(D=\min_{1\leq i<j\leq h}d_{G}(x_{i},x_{j})\). Then, for every vertex \(v\) of \(G\), there exists \(1\leq i\leq h\) such that \(d_{G}(v,x_{i})\leq D\)._

Lemma 4: _Given as input a graph \(G\) that is a tree and a positive integer \(h\), Gonzalez algorithm can compute the vertices \(x_{1},\ldots,x_{h}\) in \(O(n+h^{2}\log n)\) time._

Proof: We can implement Gonzalez algorithm in time \(O(n+h^{2}\log n)\) by constructing the data structure \(\mathcal{D}\) described in Section 3.2 and using it as follows. We iteratively (i) mark the vertex \(x_{i}\) as a terminal (in \(O(\log n)\) time), and (ii) query \(\mathcal{D}\) for the vertex \(x_{i+1}\) that maximizes the distance from all terminal vertices (in \(O(h\log n)\) time).
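To make the selection step concrete, the following is a minimal sketch of Gonzalez' farthest-first traversal on an explicit weighted graph. The function names and the adjacency-list representation are ours; this simple variant recomputes a multi-source Dijkstra pass per selected vertex, so it runs in \(O(h(n+m)\log n)\) time rather than within the \(O(n+h^{2}\log n)\) bound of Lemma 4, which requires the data structure of Section 3.2.

```python
import heapq

def multi_source_dijkstra(adj, sources):
    """Distance from the closest vertex in `sources` to every vertex of the
    weighted graph `adj` (a dict mapping u to a list of (v, cost) pairs)."""
    dist = {u: float("inf") for u in adj}
    heap = []
    for s in sources:
        dist[s] = 0.0
        heapq.heappush(heap, (0.0, s))
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def gonzalez_select(adj, h):
    """Farthest-first traversal: start from an arbitrary vertex and repeatedly
    add a vertex maximizing the distance to the already selected ones, i.e.
    x_{i+1} in argmax_v min_{j<=i} d_G(v, x_j)."""
    selected = [next(iter(adj))]
    while len(selected) < h:
        dist = multi_source_dijkstra(adj, selected)
        selected.append(max(dist, key=dist.get))
    return selected
```

The approximation algorithms below use exactly these vertices as candidate endvertices for the shortcuts; in particular, the \(4\)-approximation simply returns the star \(\{(x_{1},x_{i})\mid 2\leq i\leq k+1\}\) over the \(k+1\) selected vertices.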
In the remainder of this section let \(S^{*}\) be an optimal solution for the \(k\)-doat instance consisting of the tree \(T\) embedded in a metric space \(c\), and let \(D^{*}=\operatorname{diam}(T+S^{*})\).

#### 4.0.1 The 4-approximation algorithm

The 4-approximation algorithm we describe has been proposed and analyzed by Li _et al._ [19] for the variant of \(k\)-doat in which we are given a graph \(G\) as input and edge/shortcut costs are uniform. Li _et al._ [19] proved that the algorithm guarantees an approximation factor of \(\left(4+\frac{2}{D^{*}}\right)\); the analysis has been subsequently improved to \(\left(2+\frac{2}{D^{*}}\right)\) in [6]. We show that the algorithm guarantees an approximation factor of \(4\) for the \(k\)-doat problem when \(c\) satisfies the triangle inequality. The algorithm works as follows. We use Gonzalez algorithm on the input \(G=T\) and \(h=k+1\) to compute \(k+1\) vertices \(x_{1},\ldots,x_{k+1}\). The set \(S\) of \(k\) shortcuts is given by the star centered at \(x_{1}\) and having \(x_{2},\ldots,x_{k+1}\) as its leaves, i.e., \(S=\{(x_{1},x_{i})\mid 2\leq i\leq k+1\}\). The following lemma is crucial to prove the correctness of our algorithm.

Lemma 5 ([11, Lemma 3]): _Given \(G=T\) and \(h=k+1\), Gonzalez algorithm computes a sequence of vertices \(x_{1},\ldots,x_{k+1}\) with \(\min_{1\leq i\leq k+1}d_{T}(x_{i},v)\leq D^{*}\) for every vertex \(v\) of \(G\)._

Theorem 4.1: _In a \(k\)-doat input instance in which \(T\) is embedded in a metric space and \(k=O\big{(}\sqrt{\frac{n}{\log n}}\big{)}\), the algorithm computes a \(4\)-approximate solution in \(O(n)\) time._

#### 4.1.1 The \((1+\varepsilon)\)-approximation algorithm

We now describe an algorithm that, for any constant \(\varepsilon>0\), computes a \((1+\varepsilon)\)-approximate solution for metric \(k\)-doat. The running time of the algorithm is guaranteed to be linear when \(k=o(\sqrt{\log n})\) and \(T\) has \(\lambda=O\left(n^{\frac{1}{(2k+2)^{2}}}\right)\) leaves. As usual for polynomial-time approximation schemes, we will consider only large enough instances, i.e., we will assume \(n\geq n_{0}\) for some constant \(n_{0}\) depending only on \(\varepsilon\). We can solve the instances with \(n<n_{0}\) in constant time using, e.g., the exact algorithm of Section 3. In particular, we will assume that \[n>\left(\frac{12\lambda(k+2)^{2}}{\varepsilon}\right)^{2k+2}. \tag{1}\] Notice that, for any constant \(\varepsilon>0\), when \(k=o(\sqrt{\log n})\) and \(\lambda=O\left(n^{\frac{1}{(2k+2)^{2}}}\right)\), the right-hand side of (1) is in \(o(n)\). As a consequence, it is always possible to choose a constant \(n_{0}\) such that (1) is satisfied by all \(n\geq n_{0}\). The main idea is an extension of a similar result proved in [5] for the special case \(k=1\). However, we also benefit from the fast implementation we provided for Gonzalez algorithm to obtain a linear running time. The idea borrowed from [5] is that of reducing the problem instance into a smaller instance formed by a tree \(T^{\prime}\) induced by few vertices of \(T\) and by a suitable cost function \(c^{\prime}\). Next, we use the exact algorithm of Corollary 1 to compute an optimal solution \(S^{\prime}\) for the reduced instance. Finally, we show that \(S^{\prime}\) is a \((1+\varepsilon)\)-approximate solution for the original instance. The quasi-optimality of the computed solution comes from the fact that the reduced instance is formed by a suitably selected subset of vertices that are close to the unselected ones.
The reduced instance is not exactly a \(k\)-doat problem instance, but an instance of a variant of the \(k\)-doat problem in which each edge \((u,v)\) of the tree \(T^{\prime}\) has a known non-negative cost associated with it, we have a shortcut for each pair of distinct vertices, and the function \(c^{\prime}\) determines the cost of each shortcut \((u,v)\) that can be added to the tree. Therefore, in our variant of \(k\)-doat we are allowed to add the shortcut \((u,v)\) of cost \(c^{\prime}(u,v)\) even if \((u,v)\) is an edge of \(T^{\prime}\) (i.e., the cost of the shortcut \((u,v)\) may be different from the cost of the edge \((u,v)\) of \(T^{\prime}\)). We observe that all the results discussed in the previous sections hold even for this generalized version of \(k\)-doat.9 Footnote 9: This is because we can reduce the generalized \(k\)-doat problem instance into a \(k\)-doat instance in linear time by splitting each edge of \(T^{\prime}\) of cost \(\chi\) into two edges, one of cost \(0\) and the other one of cost \(\chi\), to avoid the presence of shortcuts that are parallel to the tree edges. All the shortcuts that are incident to the added vertex used to split an edge of \(T^{\prime}\) have a sufficiently large cost which renders them useless. Let \(B\) be the set of _branch_ vertices of \(T\), i.e., the internal vertices of \(T\) having a degree greater than or equal to \(3\). It is a folklore result that a tree with \(\lambda\) leaves contains at most \(\lambda-1\) branch vertices. Therefore, we have \(|B|\leq\lambda-1\). The reduced instance \(T^{\prime}\) has a set \(V^{\prime}\) of \(\eta=2n^{\frac{1}{2k+2}}\) vertices defined as follows. \(V^{\prime}\) contains all the branch vertices \(B\) plus \(\eta-|B|\) vertices \(x_{1},\ldots,x_{\eta-|B|}\) of \(T\) that are computed using Gonzalez algorithm on input \(G=T\) and \(h=\eta-|B|\). By Lemma 4, the vertices \(x_{1},\ldots,x_{\eta-|B|}\) can be computed in \(O(n+(\eta-|B|)^{2}\log n)=O(n)\) time. As \(|B|\leq\lambda-1=O\left(n^{\frac{1}{(2k+2)^{2}}}\right)\), it follows that \(\eta-|B|>n^{\frac{1}{2k+2}}\). The edges of \(T^{\prime}\) are defined as follows. There is an edge between two vertices \(u,v\in V^{\prime}\) iff the path \(P\) in \(T\) from \(u\) to \(v\) contains no vertex of \(V^{\prime}\) other than \(u\) and \(v\), i.e., \(V(P)\cap V^{\prime}=\{u,v\}\). The cost of an edge \((u,v)\) of \(T^{\prime}\) is equal to \(d_{T}(u,v)\). Then, the cost function \(c^{\prime}\) of \(T^{\prime}\) is defined for every pair of vertices \(u,v\in V^{\prime}\), with \(u\neq v\), and is equal to \(c^{\prime}(u,v)=c(u,v)\). Given the vertices \(V^{\prime}\), the \(\eta-1\) costs of the edges \((u,v)\) of \(T^{\prime}\), that are equal to the values \(d_{T}(u,v)\), can be computed in \(O(n)\) time using a depth-first traversal of \(T\) (from an arbitrary vertex of \(T^{\prime}\)). We use the exact algorithm of Corollary 1 to compute an optimal solution \(S^{\prime}\) for the reduced instance in time \(O(\eta^{2k+1}\cdot k\log\eta)=O\left(2^{2k+2}\cdot n^{1-\frac{1}{2k+2}}\cdot k \log n\right)=O(n)\). The algorithm returns \(S^{\prime}\) as a solution for the original problem instance. We observe that the algorithm runs in \(O(n)\) time. In order to prove that the algorithm computes a \((1+\varepsilon)\)-approximate solution, we first give a preliminary lemma showing that each vertex of \(T\) is not too far from at least one of the vertices in \(\{x_{1},\ldots,x_{\eta-|B|}\}\). 
Lemma 6: _If \(T\) has \(\lambda=O\left(n^{\frac{1}{(2k+2)^{2}}}\right)\) leaves, then, for every \(v\in V(T)\), there exists \(i\in\{1,\ldots,\eta-|B|\}\) such that \(d_{T}(x_{i},v)\leq\frac{\varepsilon}{4(k+2)}D^{*}\)._

Theorem 4.1: _Let \(\varepsilon>0\) be a constant. Given a metric \(k\)-doat instance with \(k=o(\sqrt{\log n})\) and such that \(T\) is a tree with \(\lambda=O\big{(}n^{\frac{1}{(2k+2)^{2}}}\big{)}\) leaves, the algorithm computes a \((1+\varepsilon)\)-approximate solution in \(O(n)\) time._

Proof: We already proved throughout the section that the algorithm runs in \(O(n)\) time. So, it only remains to prove the approximation factor guarantee. We define a function \(\phi:V(T)\to\{x_{1},\ldots,x_{\eta-|B|}\}\) that maps each vertex \(v\in V(T)\) to its closest vertex \(x_{i}\), with \(i\in\{1,\ldots,\eta-|B|\}\), w.r.t. the distances in \(T\), i.e., \(d_{T}(v,\phi(v))=\min_{1\leq i\leq\eta-|B|}d_{T}(v,x_{i})\). We now show that there exists a set \(S\) of at most \(k\) shortcuts such that (i) each edge \(e\in S\) is between two vertices in \(\{x_{1},\ldots,x_{\eta-|B|}\}\) and (ii) \(\operatorname{diam}(T^{\prime}+S)\leq(1+\frac{k\varepsilon}{k+2})D^{*}\). The set \(S\) is defined by mapping each shortcut \(e=(u,v)\in S^{*}\) of an optimal solution for the original \(k\)-doat instance to the shortcut \(\big{(}\phi(u),\phi(v)\big{)}\) (self-loops are discarded). Clearly, (i) holds. To prove (ii), fix any two vertices \(u\) and \(v\) of \(T^{\prime}\). We first show that \(d_{T+S}(u,v)\leq(1+\frac{k\varepsilon}{k+2})D^{*}\) and then prove that \(d_{T^{\prime}+S}(u,v)\leq(1+\frac{k\varepsilon}{k+2})D^{*}\). Let \(P\) be a shortest path in \(T+S^{*}\) between \(u\) and \(v\) and assume that \(P\) uses the shortcuts \(e_{1}=(u_{1},v_{1}),\ldots,e_{t}=(u_{t},v_{t})\in S^{*}\), with \(t\leq k\). Consider the (not necessarily simple) path \(P^{\prime}\) in \(T+S\) that is obtained from \(P\) by replacing each shortcut \(e_{i}\) with a _detour_ obtained by concatenating the following three paths: (i) the path in \(T\) from \(u_{i}\) to \(\phi(u_{i})\); (ii) the shortcut \((\phi(u_{i}),\phi(v_{i}))\); (iii) the path in \(T\) from \(\phi(v_{i})\) to \(v_{i}\). The overall cost of the detour that replaces the shortcut \(e_{i}\) in \(P^{\prime}\) is at most \(c(e_{i})+\frac{\varepsilon}{k+2}D^{*}\). Indeed, using that \(c(\phi(u_{i}),\phi(v_{i}))\leq d_{T}(u_{i},\phi(u_{i}))+c(e_{i})+d_{T}(\phi(v_{i}),v_{i})\) together with Lemma 6, which implies \(d_{T}(u_{i},\phi(u_{i})),d_{T}(\phi(v_{i}),v_{i})\leq\frac{\varepsilon}{4(k+2)}D^{*}\), we obtain \[d_{T}(u_{i},\phi(u_{i}))+c(\phi(u_{i}),\phi(v_{i}))+d_{T}(\phi(v_{i}),v_{i})\leq 2d_{T}(u_{i},\phi(u_{i}))+c(e_{i})+2d_{T}(\phi(v_{i}),v_{i})\leq c(e_{i})+\frac{\varepsilon}{k+2}D^{*}.\] As a consequence, \(c(P^{\prime})\leq c(P)+\frac{t\varepsilon}{k+2}D^{*}\leq c(P)+\frac{k\varepsilon}{k+2}D^{*}\) and, since \(c(P)\leq\operatorname{diam}(T+S^{*})\leq D^{*}\), we obtain \(c(P^{\prime})\leq(1+\frac{k\varepsilon}{k+2})D^{*}\). To show that \(d_{T^{\prime}+S}(u,v)\leq(1+\frac{k\varepsilon}{k+2})D^{*}\), i.e., that \(\operatorname{diam}(T^{\prime}+S)\leq(1+\frac{k\varepsilon}{k+2})D^{*}\), it is enough to observe that \(P^{\prime}\) can be converted into a path \(P^{\prime\prime}\) in \(T^{\prime}+S\) of cost that is upper bounded by \(c(P^{\prime})\).
More precisely, we partition the edges of \(P^{\prime}\) except those that are in \(S\) into subpaths, each of which has two vertices in \(\{x_{1},\ldots,x_{\eta-|B|}\}\) as its two endvertices and no vertex in \(\{x_{1},\ldots,x_{\eta-|B|}\}\) as one of its internal vertices. The path \(P^{\prime\prime}\) in \(T^{\prime}+S\) is defined by replacing each subpath with the edge of \(T^{\prime}\) between its two endvertices. Clearly, the cost of this edge, being equal to the distance in \(T\) between the two endvertices, is at most the cost of the subpath. Therefore, the cost of \(P^{\prime\prime}\) in \(T^{\prime}+S\) is at most the cost of \(P^{\prime}\) in \(T+S\); hence \(d_{T^{\prime}+S}(u,v)\leq(1+\frac{k\varepsilon}{k+2})D^{*}\).

We conclude the proof by showing that the solution \(S^{\prime}\) computed by the algorithm satisfies \(\operatorname{diam}(T+S^{\prime})\leq(1+\varepsilon)D^{*}\). The solution \(S^{\prime}\) is an optimal solution for the reduced instance. As a consequence, \(\operatorname{diam}(T^{\prime}+S^{\prime})\leq\operatorname{diam}(T^{\prime}+S)\leq(1+\frac{k\varepsilon}{k+2})D^{*}\). Let \(u\) and \(v\) be any two vertices of \(T\). We have that \(d_{T^{\prime}+S^{\prime}}(\phi(u),\phi(v))\leq(1+\frac{k\varepsilon}{k+2})D^{*}\). Moreover, by Lemma 6, we have that \(d_{T}(u,\phi(u)),d_{T}(v,\phi(v))\leq\frac{\varepsilon}{4(k+2)}D^{*}\). Therefore, \[d_{T+S^{\prime}}(u,v)\leq d_{T}(u,\phi(u))+d_{T+S^{\prime}}(\phi(u),\phi(v))+d_{T}(v,\phi(v))\] \[\leq\frac{\varepsilon}{4(k+2)}D^{*}+d_{T^{\prime}+S^{\prime}}(\phi(u),\phi(v))+\frac{\varepsilon}{4(k+2)}D^{*}\] \[\leq\frac{2\varepsilon}{4(k+2)}D^{*}+\left(1+\frac{k\varepsilon}{k+2}\right)D^{*}<(1+\varepsilon)D^{*}.\] Hence \(\mathrm{diam}(T+S^{\prime})\leq(1+\varepsilon)D^{*}\). This completes the proof.
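To make the reduction behind the \((1+\varepsilon)\)-approximation concrete, the following is a minimal sketch of how the reduced tree \(T^{\prime}\) can be built with the single depth-first traversal mentioned in the construction. It assumes that the tree is given as an adjacency list and that the vertex set \(V^{\prime}\) (the branch vertices of \(T\) together with the vertices returned by the farthest-first selection) has already been computed; the function name and the representation are ours, and the shortcut costs \(c^{\prime}(u,v)=c(u,v)\) are simply queried from the original cost function when the exact algorithm of Corollary 1 is run on the reduced instance.

```python
from collections import defaultdict

def build_reduced_tree(adj, vprime):
    """Contract a tree onto the vertex subset `vprime`.

    `adj` maps every vertex to a list of (neighbour, edge cost) pairs.  The
    reduced tree T' has an edge (u, v) of cost d_T(u, v) for every pair of
    vertices u, v in `vprime` whose tree path contains no other vertex of
    `vprime`.  Assuming all branch vertices of T belong to `vprime`, the
    result is again a tree with O(|vprime|) vertices and edges.
    """
    vprime = set(vprime)
    root = next(iter(vprime))
    reduced = defaultdict(list)
    # Iterative DFS; for every visited vertex we remember the last `vprime`
    # vertex on its root path (the "anchor") and the tree distance to it.
    stack = [(root, None, root, 0.0)]
    while stack:
        u, parent, anchor, dist = stack.pop()
        if u in vprime and u != anchor:
            # The tree path from `anchor` to u avoids all other vprime vertices.
            reduced[anchor].append((u, dist))
            reduced[u].append((anchor, dist))
            anchor, dist = u, 0.0
        for v, w in adj[u]:
            if v != parent:
                stack.append((v, u, anchor, dist + w))
    return reduced
```

Since \(T^{\prime}\) has only \(\eta=2n^{\frac{1}{2k+2}}\) vertices, brute-forcing over all sets of \(k\) shortcuts on it, as in Corollary 1, stays within the claimed \(O(n)\) budget.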
2310.13355
SILC: Improving Vision Language Pretraining with Self-Distillation
Image-Text pretraining on web-scale image caption datasets has become the default recipe for open vocabulary classification and retrieval models thanks to the success of CLIP and its variants. Several works have also used CLIP features for dense prediction tasks and have shown the emergence of open-set abilities. However, the contrastive objective used by these models only focuses on image-text alignment and does not incentivise image feature learning for dense prediction tasks. In this work, we introduce SILC, a novel framework for vision language pretraining. SILC improves image-text contrastive learning with the simple addition of local-to-global correspondence learning by self-distillation. We show that distilling local image features from an exponential moving average (EMA) teacher model significantly improves model performance on dense predictions tasks like detection and segmentation, while also providing improvements on image-level tasks such as classification and retrieval. SILC models sets a new state of the art for zero-shot classification, few shot classification, image and text retrieval, zero-shot segmentation, and open vocabulary segmentation. We further show that SILC features greatly benefit open vocabulary detection, captioning and visual question answering.
Muhammad Ferjad Naeem, Yongqin Xian, Xiaohua Zhai, Lukas Hoyer, Luc Van Gool, Federico Tombari
2023-10-20T08:44:47Z
http://arxiv.org/abs/2310.13355v2
# SILC: Improving Vision Language Pretraining with Self-Distillation

###### Abstract

Image-Text pretraining on web-scale image caption datasets has become the default recipe for open vocabulary classification and retrieval models thanks to the success of CLIP and its variants. Several works have also used CLIP features for dense prediction tasks and have shown the emergence of open-set abilities. However, the contrastive objective only focuses on image-text alignment and does not incentivise image feature learning for dense prediction tasks. In this work, we propose the simple addition of local-to-global correspondence learning by self-distillation as an additional objective for contrastive pre-training to obtain SILC. We show that distilling local image features from an exponential moving average (EMA) teacher model significantly improves model performance on several computer vision tasks including classification, retrieval, and especially segmentation. We further show that SILC scales better with the same training duration compared to the baselines. Our model SILC sets a new state of the art for zero-shot classification, few shot classification, image and text retrieval, zero-shot segmentation, and open vocabulary segmentation.

## 1 Introduction.

Recent advancements in self-supervised learning (Caron et al., 2021; Oquab et al., 2023; Chen et al., 2020; Grill et al., 2020) and weakly supervised learning on web data (Radford et al., 2021; Jia et al., 2021; Zhai et al., 2023) have spearheaded the development of foundational language (Radford et al., 2018; Chowdhery et al., 2022) and vision-language models (Radford et al., 2021; Jia et al., 2021; Zhai et al., 2023). These methods get around the long-term challenge of obtaining large labelled datasets by developing self-supervision criteria. The development of Transformers (Vaswani et al., 2017; Dosovitskiy et al., 2021) has further facilitated this trend as Transformers scale better with larger datasets, such as the ones available from the internet. Developing open vocabulary computer vision models that can reason beyond a pre-determined set of classes has been a long-term challenge. The introduction of web image-text datasets and the progress in compute have enabled significant advances in this field. Popularized by CLIP (Radford et al., 2021), contrastive pretraining utilizes large datasets with paired image and text from the web and trains a vision-language model (VLM) to embed them to a shared latent space. Since these models are trained on a wide set of concepts, the learned VLM allows for open vocabulary inference (Radford et al., 2021). However, developing open vocabulary dense prediction models for segmentation and detection is still an open challenge, since internet-scale datasets do not have labels available for these tasks. Several works have found that incorporating VLMs in segmentation and detection models can unlock some open vocabulary abilities (Cho et al., 2023; Ding et al., 2022; Xu et al., 2022). Since CLIP is not trained for these tasks, these methods get around its limitations by tuning the learned model with some labelled dense prediction dataset. One set of methods utilizes a normal segmentation / detection model for class agnostic inference and then predicts the class logits with CLIP (Cho et al., 2023; Liang et al., 2023). Another family of methods aims to distill VLMs directly into a dense prediction model and utilize the text transformer to generate the class weights to predict logits Li et al.
(2022b); Ghiasi et al. (2022). These works have been highly impactful towards expanding open vocabulary abilities of dense prediction models. However, since the contrastive pretraining objective does not explicitly encourage learning good local features for dense prediction tasks, these methods are limited by the VLM's intrinsic performance (Oquab et al., 2023), as we also show later in our experiments. In the self-supervised literature, enforcing local-to-global consistency by self-distillation has emerged as a powerful pretraining objective (Caron et al., 2021; Oquab et al., 2023; Zhou et al., 2022b) to learn vision backbones that are competitive on classification as well as dense prediction tasks, e.g. segmentation and detection. However, these backbones cannot directly be used for zero-shot or open vocabulary inference. We take motivation from these two branches of literature and unify image-text contrastive pretraining with local-to-global consistency learning by self-distillation. SILC utilises a web image-text dataset to learn one model that improves VLM performance on existing classification and retrieval tasks while especially improving performance on zero-shot and open vocabulary segmentation. Our contributions are as follows:

1. We propose a novel training framework for VLMs that pairs contrastive pretraining on image-text data with self-distillation on web images.
2. We show that SILC scales better with training duration than baselines and achieves significant improvements on all VLM tasks.
3. We show that our learned model especially improves zero-shot segmentation and open vocabulary segmentation tasks.
4. We contribute a new foundation model that sets a new state of the art on zero-shot classification, few-shot classification, image-to-text and text-to-image retrieval, zero-shot semantic segmentation and open vocabulary semantic segmentation.

## 2 Related Works.

**Image-Text Pretraining.** Vision-language model (VLM) pretraining (Radford et al., 2021; Jia et al., 2021; Li et al., 2022c; Chen et al., 2023) aims to learn generic multimodal representations that generalize to a wide range of downstream tasks. Substantial progress has been made in this field recently towards better pretraining objectives (Jia et al., 2021; Wang et al., 2022b) and better large-scale image-text datasets (Radford et al., 2021; Chen et al., 2023). One of the most popular objective functions is contrastive learning (Radford et al., 2021; Jia et al., 2021) that pulls positive image and text pairs close and pushes negative ones apart in the joint embedding space. It is capable of scaling to a large-scale pretraining dataset and learning highly discriminative image and text features. Many works (Li et al., 2021b; Zhai et al., 2022b; 2023; Yao et al., 2021; Naeem et al., 2022; 2023) in this direction have demonstrated improvements across zero-shot image classification and retrieval benchmarks. Another line of research focuses on generative learning via autoregressive text generation (Wang et al., 2022b;a; Tschannen et al., 2023). Compared to contrastive learning, generative learning usually performs better on text generation tasks e.g., image captioning and VQA. Finally, there are hybrid methods (Alayrac et al., 2022; Li et al., 2021a; Singh et al., 2022; Lu et al., 2019; Yu et al., 2022; Li et al., 2022c) that combine multiple objective functions including generative, contrastive and multi-task losses.
While many VLMs (Radford et al., 2021; Wang et al., 2022b) mainly focus on learning global image-text alignment that benefits image-level downstream tasks, our work aims to develop a new VLM that benefits both image-level and pixel-level tasks. There have been a few attempts (Dou et al., 2022; Luo et al., 2023; Zhong et al., 2022; Dong et al., 2023) to improve VLMs for dense prediction tasks including object detection and semantic segmentation. However, they either model fine-grained patch-text interactions that are not scalable (Dou et al., 2022; Luo et al., 2023) or rely on additional bounding box annotations (Zhong et al., 2022; Li et al., 2022d). In this work we propose to pair image-text contrastive learning with self-distillation to learn a VLM.

**Self-supervised Learning.** Self-supervised learning is another popular pretraining paradigm where features are learned from image data itself. One branch of methods optimizes the network to solve pretext tasks e.g., image coloring (Zhang et al., 2016), inpainting (Pathak et al., 2016), transformation prediction (Gidaris et al., 2018), and patch ordering (Misra and Maaten, 2020). Another family of approaches adopts instance-level discriminative learning via contrastive learning (Chen et al., 2020; He et al., 2020) and clustering (Caron et al., 2018; 2020). Recently, He et al. (2022) show that the masked autoencoder is also a scalable self-supervised learner. Our work is inspired by DINO (Caron et al., 2021) which shows that segmentation emerges from learning local- and global-view consistency. However, DINO cannot be directly used for zero-shot and open-vocabulary inference because it only learns image features. In contrast, our method is trained on image and text data jointly to enable more vision-language applications.

**Zero-shot Semantic Segmentation.** Zero-shot semantic segmentation aims to segment arbitrary visual concepts in the wild without dense annotations (Xu et al., 2022). Methods in this area rely on image-text pairs from a combination of image captioning and web image-text datasets. Since these images do not have dense labels, these methods devise a self-supervised image region to text attention criterion. Group-VIT (Xu et al., 2022) proposes to introduce grouping tokens that cluster similar image patches under each group token. MaskCLIP (Zhou et al., 2022) found that normal CLIP training results in zero-shot segmentation emerging in the last transformer block of the image encoder. ReCo (Shin et al., 2022) proposes a refinement process on top of MaskCLIP by retrieval and co-segmentation. Finally, the most recent state-of-the-art TCL (Cha et al., 2023) learns a decoder to upsample the grounded patch embeddings (values of last block) and learns a region to text attention.

**Open Vocabulary Segmentation.** Open-vocabulary semantic segmentation methods aim to segment images according to a vocabulary of class categories provided at test time, containing additional unseen classes. In contrast to zero-shot segmentation, open-vocabulary semantic segmentation has access to a semantic segmentation dataset with a limited vocabulary for training. Early methods for open-vocabulary semantic segmentation attempt to learn visual embeddings that align with existing text embeddings of the class names (Zhao et al., 2017; Xian et al., 2019; Bucher et al., 2019).
With the emergence of large-scale vision-language pre-training such as CLIP, more recent methods transfer the open-vocabulary capabilities of CLIP from image- to pixel-level predictions. To achieve dense predictions, LSeg (Li et al., 2022) learns pixel-wise visual embeddings that align with CLIP text embeddings while OpenSeg (Ghiasi et al., 2022) learns class-agnostic segmentation proposals to pool visual features for region-text grounding. To better preserve the zero-shot abilities of a pre-trained CLIP, ZegFormer (Ding et al., 2022) and ZSseg (Xu et al., 2022) introduce a two-stage framework, which first learns class-agnostic segmentation mask predictions and classifies the corresponding region crops using a frozen CLIP. OVSeg (Liang et al., 2023) further finetunes CLIP on region-text pairs to compensate for the appearance shift of masked crops. To avoid the overhead of two stages, CAT-Seg (Cho et al., 2023) learns the aggregation of cost volumes between text embeddings and dense image embeddings from CLIP.

Figure 1: **SILC** is a two-tower transformer based VLM. The first component of our training objective uses a global view of an image covering a large area and its paired caption to optimise a batch-wise contrastive loss for images and texts. The second component of our training objective enforces local-to-global consistency by self-distillation between the main model (the student) and an Exponential Moving Average (EMA)-based teacher. This local-to-global correspondence additionally allows the model to learn good visual features. Together the two objectives allow the model to excel at both traditional VLM tasks as well as segmentation.

## 3 Method.

SILC builds on the contrastive pretraining framework of CLIP and consists of a two-tower transformer model with a shared embedding space. We utilize a web-scale paired image-text dataset and rely on large-scale pretraining to learn the weights of the model. The first component of our pretraining objective focuses on aligning matching image-text pairs close together and away from other images and texts in the batch. This objective has been incredibly successful in recent literature. However, the contrastive objective in its current form does not focus on capturing rich local image semantics necessary for dense prediction tasks like segmentation. Therefore, we propose to pair the contrastive pretraining objective with a local-to-global consistency objective that uses self-distillation as shown in Figure 1. **SILC** gets its name from the two training objectives consisting of **S**elf-Distillation from Images and **I**mage-**L**anguage **C**ontrastive Alignment from Image-Text pairs.

### Aligning Image and Text.

The contrastive pretraining objective relies on the Info-NCE framework (Oord et al., 2018). It utilizes large amounts of web-scale image-text data to learn an alignment between paired image and text. Given a minibatch \(\mathcal{B}=\{(I_{1},T_{1}),(I_{2},T_{2}),\dots\}\), where \((I_{i},T_{i})\) denotes a matching pair of image and text, the contrastive objective encourages matching image and text pairs to lie close together in a shared embedding space. The image \(I_{i}\) is processed by a learnable Vision Transformer \(\mathcal{F}\) to get its feature embedding. Similarly, the tokenized text \(T_{i}\) is processed by a learnable Text Transformer \(\mathcal{G}\) to get its feature embedding.
These feature embeddings are normalized by their \(l_{2}\) norm to get \(\mathbf{f}_{i}=\frac{\mathcal{F}(I_{i})}{\|\mathcal{F}(I_{i})\|_{2}}\in\mathbb{R}^{J}\) for the image \(I_{i}\) and \(\mathbf{g}_{i}=\frac{\mathcal{G}(T_{i})}{\|\mathcal{G}(T_{i})\|_{2}}\in\mathbb{R}^{J}\) for the paired text \(T_{i}\), where \(J\) is the feature dimension of the shared embedding space. The dot product of \(\mathbf{f}_{i}\) and \(\mathbf{g}_{i}\) computes their cosine similarity and is optimized with a pair of cross-entropy losses as follows: \[\mathcal{L}_{\mathrm{image-text}}=-\frac{1}{2|\mathcal{B}|}\sum_{i=1}^{|\mathcal{B}|}\left(\overbrace{\log\frac{e^{t\mathbf{f}_{i}\cdot\mathbf{g}_{i}}}{\sum_{j=1}^{|\mathcal{B}|}e^{t\mathbf{f}_{i}\cdot\mathbf{g}_{j}}}}^{\text{image-text softmax}}+\overbrace{\log\frac{e^{t\mathbf{f}_{i}\cdot\mathbf{g}_{i}}}{\sum_{j=1}^{|\mathcal{B}|}e^{t\mathbf{f}_{j}\cdot\mathbf{g}_{i}}}}^{\text{text-image softmax}}\right), \tag{1}\] where \(t\) is the learnable temperature that controls the peakedness of the activation in the loss function. The batch-wise contrastive loss relies on a large batch size to align image-text pairs. This objective, tuned over a large amount of data, learns a shared embedding space between image and text and thus can be used for zero-shot transfer to several computer vision tasks like classification and retrieval.

### Distilling Local Image features.

The image-text contrastive loss has been shown to be very successful in learning zero-shot transfer models (Radford et al., 2021; Jia et al., 2021). Models learned with this objective have also been used to improve dense prediction tasks like open vocabulary segmentation and detection. However, the contrastive objective alone does not explicitly focus on learning good visual features for dense prediction tasks. These tasks require local image semantics to be sufficiently encoded in the output image and patch embeddings. Enforcing local-to-global consistency has emerged as a powerful technique to accomplish this on large unlabelled image data (Caron et al., 2021; Oquab et al., 2023; Zhou et al., 2022b) in the self-supervision literature. However, these methods cannot be directly used for open vocabulary models. In the second component of our training framework, we take inspiration from this subset of literature and additionally add local-to-global consistency as a training objective for images in our image-text dataset. The basic idea of this objective is as follows. A teacher network gets a global view of the image representing the scene as a whole and produces a feature embedding. A student model gets a partial view of the same image and produces a feature embedding. A self-distillation objective is introduced where the student needs to match the prediction of the teacher while only having partial information of the scene. This forces the model to learn local semantics and their relation to the global semantics of the scene. We apply this criterion to the image encoder \(\mathcal{F}\). We add a projection as a learnable MLP on top of the image encoder to map from the original shared embedding space of dimension \(J\) to \(K\) where \(K>J\). The student \(\mathcal{F}_{S}\) is the main image encoder with a learnable projection head. Since we rely on noisy web-scale image-text data, we do not have an oracle teacher for the student to match.
We therefore construct our teacher \(\mathcal{F}_{T}\) as an exponential moving average of the student \(\mathcal{F}_{S}\) from the previous training iterations to realize our self-distillation framework: \[\mathcal{F}_{T}\leftarrow\lambda\mathcal{F}_{T}+(1-\lambda)\mathcal{F}_{S}, \tag{2}\] where \(\lambda\) controls the update step of the teacher. For a given image \(I_{i}\), the teacher processes its global crop to produce \(\mathbf{p}_{i}^{t}\in\mathbb{R}^{K}\) and the student processes its local crop to produce \(\mathbf{p}_{i}^{s}\in\mathbb{R}^{K}\). To prevent the teacher from collapsing to a trivial solution, we apply sharpening on the outputs of the teacher with \(\tau_{t}\) and of the student with \(\tau_{s}\). To encourage each feature dimension to contribute to the output feature, we additionally introduce a centering operation on the prediction of the teacher. The centering term \(\mathbf{c}\in\mathbb{R}^{K}\) is initialized with \(0\) and is updated by a momentum update with a factor of \(m\) with the first-order batch statistics of the teacher's prediction at each step as follows: \(\mathbf{c}\gets m\mathbf{c}+(1-m)\frac{1}{|\mathcal{B}|}\sum_{i=1}^{|\mathcal{B}|}\mathbf{p}_{i}^{t}\).

To learn local-to-global correspondences, the student is faced with an information asymmetry. The student is given a local view of an image which is realized as a random crop over a small region of the image. The teacher, however, has access to a global view of the image containing more information about the scene. The student is tasked with matching the semantics of the teacher while only having partial information. Therefore, for a given image, the model needs to learn local semantics of the image and how they fit in the global context of this image. This is realized as a knowledge-distillation loss where the student's and the teacher's feature vectors are first converted to probability distributions by applying a softmax on the teacher prediction \(\mathcal{P}_{t}(I_{i}^{gl})=\texttt{softmax}((\mathbf{p}_{i}^{t}-\mathbf{c})/\tau_{t})\) and the student prediction \(\mathcal{P}_{s}(I_{i}^{lc})=\texttt{softmax}(\mathbf{p}_{i}^{s}/\tau_{s})\). The student is optimized to match the teacher's output with a cross-entropy loss: \[\mathcal{L}_{\mathrm{self-dist}}=-\mathcal{P}_{t}(I_{i}^{gl})^{\intercal}\log(\mathcal{P}_{s}(I_{i}^{lc})) \tag{3}\] This self-distillation objective incentivises the image encoder to learn local semantics of images over the large web-scale dataset. Since the teacher is constructed with the student's weights, and the image-level features are pooled from patch embeddings in a Vision Transformer, this allows for richer local semantics to be captured in the image-level as well as the patch-level features.

## 4 Experiments.

We compare SILC with several image-text pretraining methods on the same test bench and perform extensive experimentation. We show that SILC sets a new state of the art on a variety of tasks: zero-shot classification, few-shot classification, retrieval, zero-shot segmentation and open vocabulary segmentation.

### Implementation details.
We implement our model in jax in the big_vision codebase (Beyer et al., 2022; 20), following the contrastive pre-training setups from (Zhai et al., 2023), and use the WebLI dataset(Chen et al., \begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{3}{c}{**Zero-Shot Classification**} & \multicolumn{3}{c}{**Few-shot classification**} & \multicolumn{3}{c}{**Retrieval**} \\ \cline{2-10} & **ImageNet** & **CIFAR100** & \multicolumn{3}{c}{**ImageNet**} & \multicolumn{3}{c}{**CIFAR100**} & \multicolumn{3}{c}{**COCO**} \\ \cline{2-10} & **T1** & **T1** & **ishd** & **5shot** & **10shot** & **1shot** & **5shot** & **10shot** & **12T@1** & **T2@1** \\ \hline SLIP (Mu et al., 2022) & 73.8 & 69.3 & 42.3 & 62.7 & 66.9 & 40.8 & 61.0 & 65.8 & 61.6 & 44.3 \\ KCILP (Zhou et al., 2023) & 74.1 & 68.5 & 44.1 & 62.5 & 66.1 & 36.2 & 58.8 & 63.7 & 60.6 & 41.8 \\ SigLIP (Zhai et al., 2023) & 75.1 & 69.8 & 44.0 & 64.2 & 68.4 & 39.0 & 61.7 & 66.3 & 62.6 & 44.9 \\ MaskCLIP (Dong et al., 2023) & 74.4 & 69.0 & 42.6 & 61.0 & 64.7 & 44.1 & 63.2 & 67.6 & 61.4 & 43.6 \\ CLIP (WebLI) (Zhai et al., 2023) & 74.1 & 68.4 & 42.8 & 63.2 & 67.3 & 39.4 & 59.6 & 64.6 & 61.7 & 43.9 \\ **SILC\({}^{\bullet}\) (Ours)** & 75.3 & 71.0 & 44.6 & 64.3 & 67.8 & 42.8 & 64.6 & 69.6 & 62.5 & 44.9 \\ **SILC (Ours)** & **76.2** & **72.3** & **45.3** & **65.0** & **68.5** & **45.2** & **66.9** & **71.3** & **66.1** & **49.1** \\ \hline \hline \end{tabular} \end{table} Table 1: **Comparing SILC* with baseline**, we observe that our pretraining framework results in significant improvement over various VLM pretraining methods. We reproduce all methods on the same WebLI dataset (Chen et al., 2023) to quantify the improvements from the training objective. We further fine-tune SILC* on a cleaner subset to get our final model SILC and see that it unlocks additional performance without significant extra retraining. The best performance at each model size is **bolded**, the second best is underlined. 2023) for our experiments. We utilize two global views cropped between (0.4-1.0) of the original image area and eight local views cropped between (0.05-0.4) of the original image area for the self-distillation loss. The global views are resized to \((256\times 256)\) and the local views are resized to \((96\times 96)\). The teacher momentum \(\lambda\) is kept fixed at 0.966 and the center update momentum \(m\) is kept fixed at 0.9 through the training. The teacher temperature \(\tau_{t}\) is fixed at 0.04 and the student temperature \(\tau_{s}\) is fixed at 0.1. \(K\) is \(65536\). We resize the original image to \((256\times 256)\) for the contrastive loss between image-text pairs. We additionally use each augmented global view for the contrastive loss with the text. However the batch size for negatives is kept the same. We trained with a batch size of 16k on Google TPUs. We use _example-seen_ to represent how many image and text pairs are drawn from the dataset throughout the training. We train all baselines in our main comparisons in Table 1 for 20 Billion example-seen on the WebLI dataset (Chen et al., 2023) following Zhai et al. (2023). Our model trained on WebLI is marked as **SILC***. We use a rsqrt learning scheduler Zhai et al. (2022) with base learning rate of \(0.001\) with \(50000\) warm up and \(50000\) cool down steps. We additionally finetune our model using a smaller but cleaner WebLI subset (Chen et al., 2023) for 1 Billion additional example-seen and represent this model as **SILC**. 
The smaller WebLI subset contains 100 million image-text pairs with finer-grained text filtering. ### State-of-the-art comparison on Classification and Retrieval. **Compared Baselines.** We compare with several popular image-text pretraining methods under the same training and evaluation protocol in Table 1. CLIP (Radford et al., 2021) is the most popular baseline, trained with the contrastive loss introduced in Section 3.1. We refer to CLIP trained on WebLI as CLIP (WebLI) to avoid confusion with other popular variants trained on other datasets. SLIP (Mu et al., 2022) proposes to incorporate a SimCLR loss between two augmented global views of the image in addition to the CLIP loss on the resized image. XCLIP (Zhou et al., 2023) proposes to add a self-distillation loss between the features of the text transformer and the image in addition to the CLIP loss. MaskCLIP (Dong et al., 2023) proposes to use the original image and an additional masked image in the contrastive loss and self-distillation. SigLIP (Zhai et al., 2023) proposes to replace the contrastive loss in CLIP with the Sigmoid loss. SLIP, XCLIP and MaskCLIP only train for short durations with limited data in their respective manuscripts. For a fair comparison, we train all baselines using the same WebLI dataset following the setups described in (Zhai et al., 2023). We compare with these baseline VLMs in Table 1 at ViT-B/16 and see that our model consistently achieves state-of-the-art performance on all metrics. SILC* also consistently improves on CLIP for all metrics. On zero-shot classification on ImageNet, SILC* improves on CLIP (WebLI) by 1.2 points; similarly, we notice an improvement of 2.6 points on CIFAR-100, showing the benefit of local feature self-distillation. Similar improvements are noted for few-shot classification, where SILC* improves over CLIP (WebLI) by 1.8, 1.1 and 0.5 points on ImageNet 1-shot, 5-shot and 10-shot classification respectively. We notice similar improvements on retrieval, where SILC* shows improvements on image-to-text as well as text-to-image retrieval. We observe that SLIP achieves very similar performance to CLIP (WebLI). We noticed improvements similar to those reported by its authors in the early stages of training, but when trained for longer, SLIP converged to the same point as CLIP (WebLI). This shows that the contribution of the SimCLR loss to model performance diminishes when training CLIP at scale. As we compare with XCLIP, we again notice that this baseline performs very similarly to CLIP (WebLI) when trained for 20B example-seen. Comparing with MaskCLIP (Dong et al., 2023), we observe that this baseline adds some performance for zero-shot classification over CLIP (WebLI) but behaves very similarly on other metrics. Finally, we see that SigLIP improves on CLIP (WebLI) performance on all metrics, but SILC* consistently outperforms it on most metrics, indicating that our training framework is the state-of-the-art in zero-shot classification, few-shot classification and retrieval compared to baseline works. Comparing SILC* with SILC, we notice that the finetuning on the cleaner subset unlocks additional performance for the model without significant extra training. We notice another 0.9 point improvement over SILC* on zero-shot ImageNet classification with SILC. We observe improvements of the same magnitude on few-shot classification. Comparing retrieval performance, we see a significant increase on COCO, where SILC achieves improvements of 3.6 and 4.2 points on Image-to-Text and Text-to-Image Recall@1.
### Zero-Shot Semantic Segmentation. Zero-shot semantic segmentation aims to measure the grounding performance of a VLM, usually from its patch embeddings. MaskCLIP (Zhou et al., 2022) (different from MaskCLIP (Dong et al., 2023)) found that for the original CLIP model, this grounding emerges in the values of the last transformer encoder block's MHA. We use a Vision Transformer with a MAP pooling head (Zhai et al., 2022). We observe that grounding for our model emerges in the values of the MAP head instead of the last encoder block. These values are processed by the layer norm and MLP layers of the MAP pooling head to get output patch embeddings. For a given set of possible classes in a segmentation dataset, we obtain the corresponding text embeddings by querying our text encoder with a standard prompt. We compute the cosine similarity between the image patch embeddings and the text features of each class name to generate a segmentation map in zero-shot. We report the mean-IOU (mIOU) performance of our model in Table 2 and compare with baselines at ViT-B/16, similar to previous works. We follow the evaluation protocol of TCL (Cha et al., 2023) without the background class. However, we do not use any post-refinement step, e.g. PAMR, as we argue that the raw segmentation of a VLM is the true depiction of its zero-shot segmentation performance. As we compare SILC* with CLIP (WebLI), we see that our knowledge distillation setup for local-to-global correspondences improves the zero-shot segmentation performance consistently by a few mIOU points on all 5 datasets. In fact SILC* is also superior to ReCo and GroupVIT while being competitive with TCL, without learning an expensive image patch to text attention. SILC* is trained on a noisier web dataset compared to GroupVIT (Xu et al., 2022a), ReCo (Shin et al., 2022) and TCL (Cha et al., 2023), which use relatively cleaner and smaller image-caption datasets. When we fine-tune SILC* with a cleaner subset of data to get SILC, we notice a significant improvement on all datasets. \begin{table} \begin{tabular}{l c c c c c} \hline \hline **Model** & **A-150** & **PC-59** & **Cityscapes** & **VOC-20** & **COCO-Stuff** \\ \hline GroupVIT (Xu et al., 2022) & 9.2 & 23.4 & 11.1 & **79.7** & 11.1 \\ MaskCLIP (Zhou et al., 2022) & 9.8 & 26.4 & 12.6 & 74.9 & 16.4 \\ ReCo (Shin et al., 2022) & 11.2 & 22.3 & 21.1 & 57.7 & 14.8 \\ TCL (Cha et al., 2023) & 14.9 & 30.3 & 23.1 & 77.5 & 19.6 \\ CLIP (WebLI) (Zhai et al., 2023) & 15.0 & 24.0 & 22.6 & 69.5 & 15.0 \\ **SILC* (Ours)** & 17.2 & 29.3 & 25.1 & 73.5 & 18.2 \\ **SILC (Ours)** & **19.3** & **31.6** & **26.9** & 77.5 & **20.8** \\ \hline \hline \end{tabular} \end{table} Table 2: **Comparing Zero-Shot Segmentation performance**, we see that SILC* trained on noisy web image-text data already outperforms several ZS segmentation baselines that use cleaner image-text data. When we tune our model on a cleaner subset of image-text data to get SILC, we see that it sets the absolute state-of-the-art on 4/5 datasets. We would also like to emphasize that SILC achieves this without learning an expensive image patch to text attention that TCL relies on. Figure 2: **Qualitative results on zero-shot segmentation** show that SILC achieves significant improvements over CLIP (WebLI). SILC produces less noisy segmentation and better distinguishes semantic classes. This semantic segmentation emerges without any segmentation supervision.
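For clarity, the zero-shot segmentation procedure described above reduces to the following sketch: every patch embedding is compared to every class text embedding by cosine similarity and assigned the best-matching class. The function below is our own schematic, assuming the patch embeddings come from the MAP-head values and the text embeddings from the prompted text encoder; names are illustrative.

```python
import jax.numpy as jnp

def zero_shot_segment(patch_emb: jnp.ndarray, text_emb: jnp.ndarray) -> jnp.ndarray:
    """patch_emb: (num_patches, D) patch embeddings of one image.
    text_emb:  (num_classes, D) text embeddings of the prompted class names.
    Returns a (num_patches,) array of per-patch class indices."""
    patch_emb = patch_emb / jnp.linalg.norm(patch_emb, axis=-1, keepdims=True)
    text_emb = text_emb / jnp.linalg.norm(text_emb, axis=-1, keepdims=True)
    sim = patch_emb @ text_emb.T          # cosine similarity, (patches, classes)
    return jnp.argmax(sim, axis=-1)       # low-resolution segmentation map
```

The resulting patch-level map would then be reshaped to the patch grid and upsampled to the image resolution for evaluation.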
Compared to the previous state-of-the-art TCL, SILC achieves a remarkable improvement of 4.3 mIOU points on A-150, 2.9 points on PC-59, and 4.9 points on CityScapes. Similar improvements are noted on VOC-20 and COCO-Stuff; however, GroupVIT maintains the best result on VOC-20. In our preliminary experiments we noticed that the improvements in zero-shot segmentation are achievable by finetuning on a cleaner subset of data. We did not observe superior performance by learning an expensive patch-wise attention similar to TCL. We show the improvements of SILC over CLIP (WebLI) qualitatively in Figure 2. We can observe that SILC is better at segmenting and labeling semantic classes in images. We would like to emphasize that SILC has achieved this without any segmentation ground truth. \begin{table} \begin{tabular}{l l l l l l l l} \hline \hline **VLM** & **Method** & **A-847** & **PC-459** & **A-150** & **PC-59** & **VOC-20** & **VOC-21** \\ \hline CLIP-B/16 & ZSegFormer (Ding et al., 2022) & 5.6 & 10.4 & 18.0 & 45.5 & 89.5 & 65.5 \\ CLIP-B/16 & ZSeg (Xu et al., 2022b) & 7.0 & - & 20.5 & 47.7 & 88.4 & - \\ CLIP-B/16 & OVSeg (Liang et al., 2023) & 7.1 & 11.0 & 24.8 & 53.3 & 92.6 & - \\ CLIP-B/16 & CAT-Seg (Cho et al., 2023) & 8.4 & 16.6 & 27.2 & 57.5 & 93.7 & 78.3 \\ **SILC-B/16** & CAT-Seg (Cho et al., 2023) & 13.4 (+5.0) & 22.0 (+5.4) & 36.6 (+9.4) & 61.2 (+3.7) & 95.9 (+2.2) & 80.4 (+2.1) \\ \hline CLIP-L/14 & ZSeg (Xu et al., 2022b) & 7.1 & 10.2 & 21.7 & 52.2 & 92.3 & - \\ CLIP-L/14 & OVSeg (Liang et al., 2023) & 9.0 & 12.4 & 29.6 & 55.7 & 94.5 & - \\ CLIP-L/14 & CAT-Seg (Cho et al., 2023) & 10.8 & 20.4 & 31.5 & 62.0 & 96.6 & 81.8 \\ **SILC-L/16** & CAT-Seg (Cho et al., 2023) & 15.0 (+4.2) & 25.8 (+5.4) & 37.7 (+6.2) & 63.5 (+1.5) & 97.6 (+1.0) & 82.5 (+0.7) \\ \hline CLIP-G/14 & CAT-Seg (Cho et al., 2023) & 13.3 & 21.4 & 36.2 & 61.5 & 97.1 & 81.4 \\ \hline \hline \end{tabular} \end{table} Table 3: **Comparing Open Vocabulary Semantic Segmentation performance**, we observe that SILC improves over CLIP by significant margins on all unseen test sets. SILC particularly improves the performance for challenging test sets with large vocabularies. SILC-L/16 even outperforms the much larger CLIP-G/14. All models are trained on COCO-Stuff. Figure 3: **Comparing qualitative examples for open vocabulary segmentation**, we observe that SILC w/ CAT-Seg better distinguishes semantically similar classes such as field/grass, runway/road, grandstand/chair, sand/water, sign/metal, and tree/stick than CLIP. ### Open-Vocabulary Semantic Segmentation. Open Vocabulary Semantic Segmentation aims to develop segmentation models that can segment novel classes beyond the training vocabulary. Most of the recent methods in this area rely on a pretrained CLIP due to its open-vocabulary capabilities and adapt it for the segmentation task. To evaluate the open vocabulary segmentation potential of SILC, we take the current state-of-the-art model CAT-Seg (Cho et al., 2023) and replace the CLIP model used by the authors with SILC. The models are trained on COCO-Stuff-164k with 172 classes and tested on unseen datasets with different vocabularies: ADE-20k with 847 or 150 classes (A-847/A-150), Pascal Context (PC-459/PC-59), and Pascal VOC (VOC-20/VOC-21). From Table 3, we observe that SILC significantly improves on CLIP. In fact, SILC-B/16 performs on par with the much bigger CLIP-G/14 on the three most challenging test datasets A-847, PC-459 and A-150.
The observed improvements of SILC also transfer to the larger ViT-L variant, where CAT-Seg with SILC-L/16 outperforms CAT-Seg with CLIP-L/14 on all datasets by a significant margin. In particular, it achieves more than +4 mIOU improvement on the challenging A-847, PC-459, and A-150. SILC-L/16 even significantly outperforms the much bigger CLIP-G/14 on all tested datasets. The improvements of SILC over CLIP are also reflected in the qualitative examples in Fig. 3. We observe that SILC better distinguishes semantically similar classes such as grandstand/building, field/grass, runway/road, grandstand/chair, and sign/metal. Further, it improves segmentation in difficult cases (such as the paintings) and better handles transparent segments (such as windowpane). ### Ablation on Model Components. We ablate the various design choices of our model and their impact on various tasks. We train all models for 5 billion example-seen and report the performance in Table 4. Since our method processes additional image augmentations in the contrastive loss, we first test if our improvements are a consequence of processing more augmentations. We observe that the introduction of additional image augmentations (second row) improves the classification and retrieval metrics, but their impact on zero-shot segmentation and open vocabulary segmentation is not as significant. When we add an EMA over this model's weights similar to our model (third row), we notice a slight improvement for the EMA model, as seen in previous SSL literature. Finally, when we add the self-distillation from local crops, we see an improvement across the board on all tasks. This improvement is more pronounced on the segmentation tasks, highlighting our proposal's impact on these tasks. ### How does SILC* scale with example-seen? In Section 4.5 we tested CLIP (WebLI) trained with additional data augmentations in each forward pass. Here we additionally test the examples-seen efficiency of SILC* and compare to CLIP (WebLI). We report the results in Table 5. We notice that SILC* at only 5B examples-seen already achieves superior performance to CLIP (WebLI). When we further train SILC* for 10B and 20B examples-seen, performance improves further. \begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline **Model** & **ImageNet 0 shot** & \multicolumn{3}{c}{**ImageNet Few shot**} & \multicolumn{2}{c}{**COCO Retrieval**} & \multicolumn{3}{c}{**ZS Segmentation**} & \multicolumn{3}{c}{**Open Vocab Seg**} \\ \cline{2-13} & **T1** & **1shot** & **5shot** & **10shot** & **I2T@1** & **T2I@1** & **A-150** & **Stuff** & **PC-59** & **PC-459** & **A-150** & **PC-59** \\ \hline CLIP (WebLI) & 71.7 & 36.4 & 57.7 & 62.5 & 59.1 & 42.9 & 11.8 & 12.9 & 20.1 & 18.6 & 30.5 & 57.7 \\ + additional views & 73.6 & 38.7 & 60.8 & 65.7 & 60.6 & 43.2 & 11.7 & 13.0 & 20.0 & 19.2 & 32.1 & 57.8 \\ + EMA & 73.7 & 38.4 & 60.7 & 65.5 & 61.3 & 43.1 & 11.9 & 13.3 & 20.5 & 19.0 & 32.2 & 57.5 \\ + Self Dist (**SILC\({}^{*}\)**) & **74.3** & **39.9** & **61.2** & **65.7** & **62.7** & **43.9** & **12.2** & **15.3** & **21.1** & **21.0** & **33.3** & **60.7** \\ \hline \hline \end{tabular} \end{table} Table 4: **We ablate over each component** of our model to verify our design choices. The addition of extra image augmentations and an EMA to CLIP (WebLI) improves classification and retrieval metrics but only slightly impacts the segmentation. Adding local-to-global consistency by self-distillation, we observe an improvement across the board, especially on segmentation metrics.
\begin{table} \begin{tabular}{l|c c c c} \hline \hline & **CLIP (WebLI)** & \multicolumn{3}{c}{**SILC\({}^{*}\)**} \\ **Example-Seen** & **20B** & **5B** & **10B** & **20B** \\ \hline ImageNet 0-shot & 74.1 & 74.3 & 75.0 & 75.3 \\ \hline \hline \end{tabular} \end{table} Table 5: SILC* shows greater example-seen efficiency. ## 5 Conclusion. We propose to integrate local-to-global correspondence learning by self-distillation as a complementary objective to the popular VLM contrastive objective originally proposed by CLIP (Radford et al., 2021). We show that the introduction of this objective allows the VLM to scale better with example-seen from the dataset and results in significant performance improvements on multiple computer vision tasks. We see a consistent performance improvement on zero-shot classification, few-shot classification, and retrieval. We further test our VLM on zero-shot segmentation and show that our training framework results in significant improvements without using any dense ground truth. Finally, we show that SILC can significantly improve open vocabulary segmentation performance thanks to our training framework. SILC sets a new state-of-the-art in Vision-Language Foundation Models.
2301.05276
Waring identifiability for powers of forms via degenerations
We discuss an approach to the secant non-defectivity of the varieties parametrizing $k$-th powers of forms of degree $d$. It employs a Terracini type argument along with certain degeneration arguments, some of which are based on toric geometry. This implies a result on the identifiability of the Waring decompositions of general forms of degree kd as a sum of $k$-th powers of degree $d$ forms, for which an upper bound on the Waring rank was proposed by Fr\"oberg, Ottaviani and Shapiro.
Alex Casarotti, Elisa Postinghel
2023-01-12T19:52:49Z
http://arxiv.org/abs/2301.05276v3
# Waring Identifiability for Powers of Forms via Degenerations ###### Abstract. We discuss an approach to the secant non-defectivity of the varieties parametrizing \(k\)-th powers of forms of degree \(d\). It employs a Terracini type argument along with certain degeneration arguments, some of which are based on toric geometry. This implies a result on the identifiability of the Waring decompositions of general forms of degree kd as a sum of \(k\)-th powers of degree \(d\) forms, for which an upper bound on the Waring rank was proposed by Froberg, Ottaviani and Shapiro. Key words and phrases:Identifiability, Waring problems, secant varieties, linear systems, degenerations 2020 Mathematics Subject Classification: Primary: 14N07. Secondary: 14C20, 14D06, 14M25 Both authors are members of INdAM-GNSAGA and that this bound is sharp, i.e. when \(d\) is sufficiently large, \(k^{n}\) computes the generic rank. However the secant defectivity of the varieties parametrizing \(k-\)th powers of forms of degree \(d\) remains an open problem in general. In this paper we address both the secant defectivity and identifiability problems for such Waring decompositions. Denote with \(V^{k}_{n,d}\) the variety parametrizing \(k-\)th powers of homogeneous degree \(d\) forms in \(n+1\) variables: \[V^{k}_{n,d}:=\{[F^{k}]|F\in\mathbb{C}[x_{0},\dots,x_{n}]_{d}\}.\] Our first main result is about secant non-defectivity. **Theorem 1.1**.: _The variety \(V^{k}_{n,d}\) is not \(h\)-defective if \(k\geq 3\) and \(h\leq\frac{1}{N+1}\binom{N+k-3}{N}\), where \(N=\binom{n+d}{d}-1\)._ Our second result is about identifiability. A bridge from non-defectivity to identifiability was proposed in [1] first and then generalised in the recent [13]: whenever \(X\) is a sufficiently regular variety (with non-degenerate Gauss map), then if \(X\) is not \(h-\)defective, then \(X\) is \((h-1)-\)identifiable. Using this and Theorem 1.1 we obtain what follows. **Theorem 1.2**.: _A general form \(F\in\mathbb{C}[x_{0},\dots,x_{n}]_{dk}\) of rank \(h\) with \(k\geq 3\) is identifiable whenever_ \[h\leq\min\{\frac{1}{N+1}\binom{N+k-3}{N}-1,\frac{\binom{n+kd}{n}}{N+1}-1\}\] We remark that in [14], the author showed that the secant defectivity of \(V^{k}_{n,d}\) can be bounded asymptotically, using a direct algebraic computational argument, to \(k^{n}-d^{n}\). In Section 6 we show that when \(d\gg k\) our bound of Theorem 1.1 extends the latter. In order to prove Theorem 1.1, we brought together a Terracini type argument and several different degeneration techniques. By a classical application of Terracini's Lemma, non-defectivity problems for secant varieties translate into the study of particular linear systems of hypersurfaces with prescribed singularities. The first systematic study was used in the proof of the celebrated Alexander and Hirschowitz Theorem for the case of classical Waring problems (1.1), where secant varieties of Veronese embeddings of \(\mathbb{P}^{n}\) correspond with linear systems of hypersurfaces of \(\mathbb{P}^{n}\) with prescribed double points. In the generalized Waring problem setting, as in (1.2) for \(k\geq 2\), a direct translation to linear systems of hypersurfaces with only double point singularities is not possible. In order to prove secant non-defectivity in this case it is necessary to impose a bigger base locus to our linear systems. 
In particular, we will be interested in studying the dimensions of linear systems \(\mathcal{L}:=\mathcal{L}_{N,k}(V,2^{h})\) of hypersurfaces of \(\mathbb{P}^{N}\) of degree \(k\) that are singular at \(h\) general points and that contain the \(d\)-th Veronese embedding of \(\mathbb{P}^{n}\), \(V\subset\mathbb{P}^{N}\). The study of such linear systems is carried out by combining two types of degenerations introduced in [15] and in [1] and [16] respectively. On the one hand, we degenerate the ambient space \(\mathbb{P}^{N}\) to a scheme with two components and, in turn, the linear system \(\mathcal{L}\) to a fibered product of two linear systems, one on each component, which are somewhat easier to deal with than the original one. In fact one of them consists of hypersurfaces containing a linear subspace and a collection of double points, while the other one consists of hypersurfaces containing \(V\) and one fat point of relatively large multiplicity with support on \(V\). In order to study the latter, we perform a toric degeneration of the Veronese \(V\) to a union of \(n\)-dimensional linear spaces, which will have the effect of reducing further the study of the limit linear system. The study of Waring type problems and the identifiability of symmetric tensors has also found applications in applied fields, from chemistry and biology to algebraic statistics. Recently in [1], the problem of identifiability for \(k-\)th powers of forms was linked to the identifiability of centered Gaussian mixture models in applied statistics. ### Organization of the paper Section 2 contains all definitions and our Terracini type result that translates non-defectivity of \(V^{k}_{n,d}\) to the study of \(\mathcal{L}\), Proposition 2.16. In Section 3 we explain in detail the degeneration techniques, both in the classical and in the toric setting. In Section 4 we analyse two auxiliary linear systems arising from the degeneration of \(\mathcal{L}\), Proposition 4.3 and Corollary 4.8. Section 5 is devoted to the proof of the main technical result, i.e. Theorem 5.2. Finally, in Section 6 we show to what extent our bounds are asymptotically better than the ones known before in the literature. ### Acknowledgments The authors would like to thank Giorgio Ottaviani and Alessandro Oneto for several useful discussions during the preparation of this article. ## 2. Powers of forms In order to give a coherent and self-contained treatment of the subject, let us recall some preliminary definitions and results. We will work over the field of complex numbers \(\mathbb{C}\). ### Veronese embeddings Let \(W:=\mathbb{C}^{n+1}\) and \(W^{*}\) the dual vector space. With \(\mathbb{P}^{n}=\mathbb{P}(W)\) we denote the projective space over \(\mathbb{C}\) of dimension \(n\). We set the following integers \[N_{d}:=\binom{n+d}{n}-1,\quad N_{d}^{k}:=\binom{N_{d}+k}{N_{d}}-1.\] When \(d\) is clear from the context we will indicate \(N_{d}\) simply by \(N\). Notice that the following identities hold: \[\mathrm{h}^{0}(\mathbb{P}^{n},\mathcal{O}_{\mathbb{P}^{n}}(d))=N_{d}+1,\qquad\mathrm{h}^{0}(\mathbb{P}^{N_{d}},\mathcal{O}_{\mathbb{P}^{N_{d}}}(k))=N_{d}^{k}+1,\] where \(\mathrm{h}^{0}(\mathbb{P}^{n},\mathcal{O}_{\mathbb{P}^{n}}(d))\) denotes the number of global sections of the twisting sheaf \(\mathcal{O}_{\mathbb{P}^{n}}(d)\) on \(\mathbb{P}^{n}\).
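For instance, with \(n=2\), \(d=2\) and \(k=3\) one has \[N_{2}=\binom{2+2}{2}-1=5,\qquad N_{2}^{3}=\binom{5+3}{5}-1=55,\] so that \(\mathrm{h}^{0}(\mathbb{P}^{2},\mathcal{O}_{\mathbb{P}^{2}}(2))=6=N_{2}+1\) and \(\mathrm{h}^{0}(\mathbb{P}^{5},\mathcal{O}_{\mathbb{P}^{5}}(3))=\binom{8}{5}=56=N_{2}^{3}+1\).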
In other terms, \(N_{d}\) is the dimension of the linear systems of hypersurfaces of degree \(d\) of \(\mathbb{P}^{N}\) which, in turn, is the projectivization of the complex vector spaces of forms of degree \(d\) in \(n+1\) variables. The number \(N_{d}^{k}\) has a similar interpretation. With this in mind, we can make the following identifications: \[\mathbb{P}^{N_{d}}=\mathbb{P}(\mathrm{Sym}^{d}(W^{*})),\quad\mathbb{P}^{N_{d }^{k}}=\mathbb{P}(\mathrm{Sym}^{k}(\mathrm{Sym}^{d}(W^{*}))).\] Now we consider the following _Veronese embeddings_: \[\nu_{d}:\mathbb{P}^{n} \longrightarrow V_{n}^{d}\subset\mathbb{P}(\mathrm{Sym}^{d}(W^{*}))\] \[[L] \longmapsto[L^{d}]\] and \[\nu_{k}:\mathbb{P}^{N_{d}} \longrightarrow V_{N_{d}}^{k}\subset\mathbb{P}(\mathrm{Sym}^{k}( \mathrm{Sym}^{d}(W^{*})))\] \[[F] \longmapsto[F^{k}]\] where \(L\in\mathbb{C}[x_{0},\dots,x_{n}]_{1}\) is a linear form and \(F\in\mathbb{C}[x_{0},\dots,x_{n}]_{d}\) is a form of degree \(d\). The image of the embeddings are called _Veronese varieties_. **Remark 2.1**.: _Note that both \(\nu_{d}\) and \(\nu_{k}\) are the maps corresponding to the complete linear systems associated with the line bundles \(\mathcal{O}_{\mathbb{P}^{n}}(d)\) and \(\mathcal{O}_{\mathbb{P}^{N_{d}}}(k)\). As elements of \(\mathbb{P}(\mathrm{Sym}^{d}(W^{*}))\) (respectively \(\mathbb{P}(\mathrm{Sym}^{k}(\mathrm{Sym}^{d}(W^{*})))\)) the image of \(\nu_{d}(p)\) (respectively \(\nu_{k}(p)\)), with \(p\) a point, corresponds to the hyperplane parametrizing hypersurfaces of degree \(d\) in \(\mathbb{P}^{n}\) (respectively of degree \(k\) in \(\mathbb{P}^{N_{d}}\)) passing through \(p\)._ We want to parametrize forms in \(\mathbb{P}^{n}\) of degree \(dk\), i.e. elements in \(\mathbb{C}[x_{0},\dots,x_{n}]_{dk}\), that can be written as \(k-\)th powers of forms of degree \(d\). **Remark 2.2**.: _Note that the Veronese varieties \(\nu_{dk}(\mathbb{P}^{n})\) are always contained in the set of all \(k-\)th powers of forms of degree \(d\) because, trivially, \(L^{dk}=(L^{d})^{k}\)._ Now, we let \[\phi_{dk}:\mathbb{P}^{N_{d}} \longrightarrow\mathbb{P}^{N_{dk}}=\mathbb{P}(\mathrm{Sym}^{dk}( W^{*}))\] \[[F] \longmapsto[F^{k}]\] be the map that assigns to each form \(F\in\mathbb{C}[x_{0},\dots,x_{n}]_{d}\) its \(k-\)th power. **Definition 2.3**.: _We call the scheme theoretic image_ \[V_{n,d}^{k}=\phi_{dk}(\mathbb{P}^{N_{d}})\subseteq\mathbb{P}^{N_{dk}}\] _the \((d,k)-\)_Veronese variety_._ Under the previous identification the classical Veronese varieties correspond to the \((d,1)-\)Veronese varieties. On the other hand, for \(k>1\), \(V_{n,d}^{k}\) is not a standard Veronese variety, indeed it is easy to see that the target of \(\phi_{dk}\) has dimension \(\binom{n+dk}{dk}\), which is never equal to \(\binom{N_{d}+a}{a}\) for any \(a\). A priori we don't know if the map \(\phi_{dk}\) is an isomorphism, as it happens for classical Veronese varieties, see Lemma 2.11 below. ### Secant varieties and identifiability In this subsection we recall the definition of secant variety and the notion of identifiability, following [10]. Let \(X\subset\mathbb{P}^{N}\) be a non degenerate reduced variety. Let \(X^{(h)}\) be the \(h\)-th symmetric product of \(X\), that is the variety parameterizing unordered sets of \(h\) points of \(X\). Let \(U_{h}^{X}\subset X^{(h)}\) be the smooth locus, given by sets of \(h\) distinct smooth points. **Definition 2.4**.: _A point \(z\in U_{h}^{X}\) represents a set of \(h\) distinct points, say \(\{z_{1},\dots,z_{h}\}\). 
We say that a point \(p\in\mathbb{P}^{N}\) is in the span of \(z\), \(p\in\langle z\rangle\), if it is a linear combination of the \(z_{i}\)'s._ With this in mind we define the following object. **Definition 2.5**.: _The abstract \(h\)-secant variety is the \((hn+h-1)\)-dimensional variety_ \[\text{sec}_{h}(X):=\overline{\{(z,p)\in U_{h}^{X}\times\mathbb{P}^{N}|p\in \langle z\rangle\}}\subset X^{(h)}\times\mathbb{P}^{N}.\] _Let \(\pi:X^{(h)}\times\mathbb{P}^{N}\to\mathbb{P}^{N}\) be the projection onto the second factor. The \(h\)-secant variety is_ \[\mathbb{S}ec_{h}(X):=\pi(sec_{h}(X))\subset\mathbb{P}^{N},\] _and \(\pi_{h}^{X}:=\pi_{|sec_{h}(X)}:sec_{h}(X)\to\mathbb{P}^{N}\) is the \(h\)-secant map of \(X\)._ _If the variety \(X\) is irreducible and reduced we say that \(X\) is \(h\)-defective if_ \[\dim\mathbb{S}ec_{h}(X)<\min\{\dim\text{sec}_{h}(X),N\}.\] The following is a classical result. **Theorem 2.6** (Terracini Lemma).: _Let \(X\subset\mathbb{P}^{N}\) be an irreducible variety. Then the following holds._ * _For any_ \(p_{1},\dots,p_{k}\in X\) _and_ \(z\in\langle p_{1},\dots,p_{k}\rangle\)_, we have_ \[\langle T_{p_{1}}X,\dots,T_{p_{k}}X\rangle\subseteq T_{z}\mathbb{S}ec_{k}(X).\] * _There is a dense open set_ \(U\subset X^{(k)}\) _such that_ \[\langle T_{p_{1}}X,\dots,T_{p_{k}}X\rangle=T_{z}\mathbb{S}ec_{k}(X),\] _for any general point_ \(z\in\langle p_{1},\dots,p_{k}\rangle\) _with_ \((p_{1},\dots,p_{k})\in U\)_._ Notions related to that of secant variety are those of rank and of identifiability. **Definition 2.7**.: _Let \(X\subset\mathbb{P}^{N}\) be a non degenerate subvariety. We say that a point \(z\in\mathbb{P}^{N}\) has rank\(h\) with respect to\(X\) if \(z\in\langle p\rangle\), for some \(p\in U_{h}^{X}\) and \(z\not\in\langle p^{\prime}\rangle\) for any \(p^{\prime}\in U_{h}^{X}\), with \(h^{\prime}<h\)._ **Definition 2.8**.: _A point \(z\in\mathbb{P}^{N}\) is \(h\)-identifiable with respect to\(X\subset\mathbb{P}^{N}\) if \(z\) is of rank \(h\) and \((\pi_{h}^{X})^{-1}(z)\) is a single point. The variety \(X\) is said to be \(h\)-identifiable if the \(h\)-secant map \(\pi_{h}^{X}\) is birational, that is the general point of \(\mathbb{S}ec_{h}(X)\) is \(h\)-identifiable._ It is clear by Theorem 2.6 that when \(X\) is \(h\)-defective, or more generally when \(\pi_{h}^{X}\) is of fiber type, then \(X\) is not \(h\)-identifiable. We now recall the recent result in [10], in which the authors generalize the approach in [11] relating identifiability with the non defectivity of the secant variety. **Theorem 2.9**.: _Let \(X\subset\mathbb{P}^{N}\) be an irreducible and non-degenerate variety of dimension \(n\), \(h\geq 1\) an integer, and assume that:_ * \((h+1)n+h\leq N\)_,_ * \(X\) _has non degenerate Gauss map,_ * \(X\) _is not_ \((h+1)-\)_defective._ _Then \(X\) is \(h-\)identifiable._ In the next sections we will see how to translate this theorem in the setting of powers of forms in order to give identifiability results for \(k-\)th powers of forms of degree \(d\). ### Geometric construction of \((d,k)-\)Veronese varieties Let us recall some facts from _apolarity theory_; the main reference is [1]. 
**Notation 2.10** (Apolarity).: _We consider two polynomial rings in \(n+1\) variables, both endowed with the standard grading:_ \[R =\mathbb{C}[x_{0},\dots,x_{n}]=\bigoplus_{i\in\mathbb{N}}R_{i}:= \mathbb{C}[x_{0},\dots,x_{n}]_{i}\] \[S =\mathbb{C}[y_{0},\dots,y_{n}]=\bigoplus_{i\in\mathbb{N}}S_{i}:= \mathbb{C}[y_{0},\dots,y_{n}]_{i}.\] _Treating elements of \(S\) as partial derivatives in the \(x_{i}\)'s, the pairing \(S_{k}\times R_{l}\to\mathbb{C}\), sends \((F_{k},G_{l})\) to the derivative \(F_{k}\circ G_{l}\in R\) of \(G_{l}\). If \(k=l\) and if \(\mathcal{I}\subset R\) is a homogeneous ideal, the orthogonal \(\mathcal{I}_{k}^{\perp}\subset S_{k}\) is the following space of polynomials_ \[\mathcal{I}_{k}^{\perp}=\{F\in S_{k}|F\circ G=0,\forall G\in R_{k}\}.\] It is a standard fact of representation theory for the linear group \(GL(W)\), see for instance [10], that the space \(\operatorname{Sym}^{k}(\operatorname{Sym}^{d}(W^{*}))\) can be decomposed as direct sum of \(GL(W)-\)modules in the following way: \[\operatorname{Sym}^{k}(\operatorname{Sym}^{d}(W^{*}))=\operatorname{Sym}^{dk }(W^{*})\oplus E\] where \[E=\operatorname{H}^{0}(\mathcal{I}_{V_{n}^{d}}(k))\] is the \(k-\)th homogeneous part of the ideal of forms that vanish on the Veronese variety \(V_{n}^{d}=\nu_{d}(\mathbb{P}^{n})\), cf. notation of Section 2.1. We have the following exact sequence: \[0\to\mathcal{I}_{V_{n}^{d}}(k)\to\mathcal{O}_{\mathbb{P}^{N_{d}}}(k)\to \mathcal{O}_{V_{n}^{d}}(k)\to 0\] After passing to cohomology and using the fact that every Veronese variety is projectively normal, we have: \[0\to\operatorname{H}^{0}(\mathcal{I}_{V_{n}^{d}}(k))\to\operatorname{H}^{0}( \mathcal{O}_{\mathbb{P}^{N_{d}}}(k))\to\operatorname{H}^{0}(\mathcal{O}_{V_{n }^{d}}(k))\to 0\] This shows that \[\dim(E)=\binom{N_{d}+k}{k}-\binom{n+dk}{dk}.\] It is easy to show that \(GL(V)-\)modules \(\operatorname{Sym}^{dk}(W^{*})\) and \(E\) are apolar, i.e. for every pair of forms \(F,G\in\mathbb{C}[x_{0},\dots,x_{N_{d}}]\) with \(F\in\operatorname{Sym}^{dk}(W^{*})\) and \(G\in E\) it holds that \(F\circ G=0\). The constructions above fit into the following commutative diagram: where the map \(\pi_{E}\) is the linear projection from the linear space \(E\). We now prove that the map \(\phi_{dk}\) is in fact an isomorphism. **Lemma 2.11**.: _With the above notations it holds \(\mathbb{S}ec_{2}(V_{N_{d}}^{k})\cap E=\emptyset\). In particular the map \(\phi_{dk}\) is an embedding._ Proof.: Note that an element \(F\in\mathbb{S}ec_{2}(V_{N_{d}}^{k})\) is either of the form \(F=L^{k}\) or \(F=M^{k}+N^{k}\), where \(L,M,N\in\mathbb{C}[x_{0},\dots,x_{N_{d}}]_{1}\) are linear forms in \(\mathbb{P}(\mathrm{Sym}^{k}(\mathrm{Sym}^{d}(W^{*})))\), or equivalently degree \(d\) hypersurfaces in \(\mathbb{P}^{n}\). In particular if there exist such a \(F\) with \(F\in E=\mathrm{H}^{0}(\mathcal{I}_{V_{n,d}}(k))\), then \(V_{n,d}\) would be contained in the vanishing locus of \(F\). Since the zero set of \(F\) is either an hyperplane or a union of an hyperplane with a subscheme of degree \(k-1\) and \(V_{n,d}\) is non-degenerate and irreducible, the claim follows. ### Identifiablity for \((d,k)-\)Veronese varieties From now on we will work with the projective notation, in particular \(E\) has to be intended as the projectivization of the affine \(E\) in the previous section. Let us start by characterizing hyperplanes in \(E\) as particular linear subsystems of hypersurfaces in \(\mathbb{P}^{N_{d}}\). 
Denote with \(\pi_{dk}\) the linear projection from the linear space \(\mathbb{P}(\mathrm{Sym}^{dk}(W^{*}))\) to \(E\), i.e. \[\pi_{dk}:\mathbb{P}(\mathrm{Sym}^{k}(\mathrm{Sym}^{d}(W^{*})))\dashrightarrow E\] **Lemma 2.12**.: _Let \(\mathrm{H}^{0}(\mathcal{O}_{\mathbb{P}^{N_{d}^{k}}_{d}}(1)\otimes\mathcal{I} _{\mathbb{P}(\mathrm{Sym}^{dk}(W^{*}))})\) be the complete linear system of hyperplane sections of \(\mathbb{P}(\mathrm{Sym}^{k}(\mathrm{Sym}^{d}(W^{*})))\) containing the linear space \(\mathbb{P}(\mathrm{Sym}^{dk}(W^{*}))\). Then_ \[\nu_{k}^{*}(\mathrm{H}^{0}(\mathcal{O}_{\mathbb{P}^{N_{d}^{k}}_{d}}(1)\otimes \mathcal{I}_{\mathbb{P}(\mathrm{Sym}^{dk}(W^{*}))})\cong\mathrm{H}^{0}( \mathcal{O}_{\mathbb{P}^{N_{d}}}(k)\otimes\mathcal{I}_{V_{n,d}})\] Proof.: Let \(H\in\mathrm{H}^{0}(\mathcal{O}_{\mathbb{P}^{N_{d}}}(k)\otimes\mathcal{I}_{V_{ n,d}})\) be a degree \(k\) hypersurface in \(\mathbb{P}^{N_{d}}\) that contains the Veronese \(V_{n,d}\). Then the linear span \(\overline{H}\) of \((\nu_{k})_{*}(H)\) is an hyperplane in \(\mathbb{P}^{N_{d}^{k}}\) that contains \(V_{n,dk}:=(\nu_{k})_{*}(V_{n,d})\). Since \(\langle V_{n,dk}\rangle=\mathbb{P}(\mathrm{Sym}^{dk}(W^{*}))\), we have that \[\mathrm{H}^{0}(\mathcal{O}_{\mathbb{P}^{N_{d}}}(k)\otimes\mathcal{I}_{V_{n,d} })\subseteq\nu_{k}^{*}(\mathrm{H}^{0}(\mathcal{O}_{\mathbb{P}^{N_{d}^{k}}}(1) \otimes\mathcal{I}_{\mathbb{P}(\mathrm{Sym}^{dk}(W^{*}))})\] To conclude just observe that \[\mathrm{h}^{0}(\mathcal{O}_{\mathbb{P}^{N_{d}}}(k)\otimes\mathcal{I}_{V_{n,d} })=\dim(E)=\mathrm{codim}(\mathbb{P}(\mathrm{Sym}^{dk}(W^{*}))),\] where \(\mathrm{codim}\) here indicates the codimension in \(\mathbb{P}(\mathrm{Sym}^{k}(\mathrm{Sym}^{d}(W^{*})))\), and \(\nu_{k}^{*}\) induces an isomorphism of global sections. We need the following easy technical lemma: **Lemma 2.13**.: _The linear projection_ \[\pi_{dk}:\mathbb{P}(\mathrm{Sym}^{k}(\mathrm{Sym}^{d}(W^{*})))\dashrightarrow E =\mathrm{H}^{0}(\mathcal{I}_{V_{n,d}}(k))\] _is generically finite when restricted to \(V_{N_{d},k}=\nu_{k}(\mathbb{P}(\mathrm{Sym}^{d}(W^{*})))\)._ Proof.: The map \[\pi_{dk_{|_{\nu_{k}(\mathbb{P}(\mathrm{Sym}^{d}(W^{*})))}}}:V_{N_{d},k} \dashrightarrow E\] is induced by a linear subsystem \(\mathcal{F}\) of the line bundle \(\mathcal{L}=\mathcal{O}_{\mathbb{P}^{N_{d}^{k}}}(1)\) with \[\nu_{k}^{*}(\mathcal{F})=|\mathcal{O}_{\mathrm{Sym}^{d}(W^{*})}(1)\otimes \mathcal{I}_{V_{n,d}}(k)|.\] Now the claim follows easily from the fact that \(\mathrm{H}^{0}(\mathcal{I}_{V_{n,d}}(k))\) defines the Veronese variety \(V_{n,d}\) set-theoretically. Let \(p_{1},\ldots,p_{h}\in X\subset\mathbb{P}^{N}\) be general points. By Lemma 2.6, we have that \(\operatorname{\mathbb{S}ec}_{h}(X)\) has the expected dimension if and only if \(\operatorname{H}^{0}(\mathcal{O}_{X}(1)\otimes\mathcal{I}_{p_{1}^{2},\ldots,p_ {h}^{2}})\) has the _expected_ dimension, i.e. \[\dim\operatorname{H}^{0}(\mathcal{O}_{X}(1)\otimes\mathcal{I}_{p_{1}^{2}, \ldots,p_{h}^{2}})=\max\{0,N-(n+1)h\}\] **Notation 2.14**.: _If \(p_{1},\ldots,p_{h}\in X\) are general points, then we denote \(\mathcal{L}_{h,X}:=|\mathcal{O}_{X}(1)\otimes\mathcal{I}_{p_{1}^{2},\ldots,p_ {h}^{2}}|\)._ Before moving on to the explicit description of the identifiability for \(V_{n,d}^{k}\), let us first prove a general proposition about linear systems of projected varieties: **Proposition 2.15**.: _Let \(X\subset\mathbb{P}^{N}\) be a smooth non-degenerate projective variety. 
Moreover let \(\mathbb{P}^{N}=\langle F,E\rangle\) with \(F,E\) skew linear subspaces. Let \(\pi_{E}:\mathbb{P}^{N}\to F\) and \(\pi_{F}:\mathbb{P}^{N}\to E\) be the natural projections. If the projections restricted to \(X\) are generically finite and \(\mathcal{L}_{h,X}\) has the expected dimension, then \(\mathcal{L}_{h,\pi_{F}(X)}\) has the expected dimension if and only if \(\mathcal{L}_{h,\pi_{E}(X)}\) has the expected dimension._ Proof.: Note that by symmetry of \(E\) and \(F\) it suffices to prove only one of the implications. Let \(q_{1},\ldots,q_{h}\) be general points on \(\pi_{E}(X)\) and call \(p_{i}=\pi_{E}^{-1}(q_{i})\) and \(z_{i}=\pi_{F}(p_{i})\). Since \(\pi_{E_{|X}}:X\mapsto\pi_{E}(X)\) is an isomorphism we have that \(x_{1},\ldots,x_{h}\) are general and so also \(z_{1},\ldots,z_{h}\). Since \(\pi_{F_{X}}\) is generically finite the space of hyperplanes \(\mathcal{L}_{h,\pi_{F}(X)}=|\mathcal{O}_{E}(1)\otimes\mathcal{I}_{z_{1}^{2}, \ldots,z_{h}^{2}}|\) correspond to \(\mathcal{L}_{h,X}\otimes\mathcal{I}_{F}\). Now the splitting \(\mathbb{P}^{N}=\langle F,E\rangle\) induces the linear projection \[\pi:\mathcal{L}_{h,X}\mapsto\mathcal{L}_{h,X}\otimes\mathcal{I}_{E}\] such that \(Ker(\pi)=\mathcal{L}_{h,\pi_{F}(X)}\). Since by assumption both the source and the kernel have the expected dimension by the rank-nullity theorem the assertion follows. We are finally able to characterize the identifiability properties for the case of powers of forms. Consider the following linear system of all hypersurfaces of \(\mathbb{P}^{N_{d}}\) of degree \(k\) containing the Veronese variety \(V_{n,d}\subset\mathbb{P}^{N_{d}}\) and double at the points \(p_{1},\ldots,p_{h}\) that lie in general position in \(\mathbb{P}^{N_{d}}\): \[\mathcal{L}_{N_{d}}(V_{n,d},2^{h}):=\mathcal{O}_{\mathbb{P}^{N_{d}}}(k) \otimes\mathcal{I}_{V_{n,d}}\otimes\mathcal{I}_{p_{1}^{2},\ldots,p_{h}^{2}}.\] **Proposition 2.16**.: _In the above notation, let \(p_{1},\ldots,p_{h}\) be general points in \(\mathbb{P}^{N_{d}}=\mathbb{P}(\operatorname{Sym}^{d}(W^{*}))\). The linear system \(\mathcal{L}_{N_{d}}(V_{n,d},2^{h})\) has the expected dimension if and only if \(\operatorname{\mathbb{S}ec}_{h}(V_{n,d}^{k})\) has the expected dimension._ Proof.: With the notations of Proposition 2.15 we have \(E=\operatorname{H}^{0}(\mathcal{I}_{V_{n,d}}(k))\) and \(F=\operatorname{Sym}^{dk}(W^{*})\). We have that \(\pi_{E}\) restricted to \(X=V_{N_{d},k}\) is an isomorphism by Lemma 2.11, in particular it is generically finite. The same holds for \(\pi_{F}=\pi_{dk}\) by Lemma 2.13. Now \(\mathcal{L}_{h,V_{N_{d},k}}\) has the expected dimension by Alexander-Hirschowitz (see Theorem 4.1 below) and \[\mathcal{L}_{h,\pi_{dk}(V_{N_{d},k})}=|\mathcal{O}_{\mathbb{P}^{N_{d}}}(k) \otimes\mathcal{I}_{V_{n,d}}\otimes\mathcal{I}_{p_{1}^{2},\ldots,p_{h}^{2}}|\] by Lemma 2.12. ## 3. Degeneration techniques In this section we will discuss two types of degenerations that will provide main tools for the proofs of the results of this article. ### The \(\mathbb{F}\mathbb{P}-\)degeneration In this section we recall a degeneration procedure introduced in [10], which consists in degenerating the projective space \(\mathbb{P}^{N}\) to a reducible variety with two components, and then studying degenerations of line bundles on the general fibre. #### 3.1.1. 
Degenerating the ambient space Let \(\Delta\) be a complex disc centred at the origin and consider the product \(\mathcal{Y}=\mathbb{P}^{N}\times\Delta\) with the natural projections \(\pi_{1}^{\mathcal{Y}}:\mathcal{Y}\to\mathbb{P}^{N}\) and \(\pi_{2}^{\mathcal{Y}}:\mathcal{Y}\to\Delta\). The second projection is a flat morphism and we denote by \(Y_{t}:=\mathbb{P}^{N}\times\{t\}\) the fibre over \(t\in\Delta\). We will refer to \(Y_{0}\) and to \(Y_{t}\), with \(t\neq 0\), as the _central fibre_ and the _general fibre_ respectively. Let \(f:\mathcal{X}\to\mathcal{Y}\) denote the blow-up of \(\mathcal{Y}\) at a point \((p,0)\in Y_{0}\) in the central fibre. Consider the following diagram, where \(\pi_{i}^{\mathcal{X}}:=\pi_{i}^{\mathcal{Y}}\circ f\), for \(i=1,2\): The morphism \(\pi_{2}^{\mathcal{X}}:\mathcal{X}\to\Delta\) is flat with fibres denoted by \(X_{t}\), \(t\in\Delta\). For the general fibre we have \(X_{t}\cong Y_{t}=\mathbb{P}^{N}\), while the central fibre \(X_{0}\) is the reduced union of the strict transform of \(Y_{0}\), that we shall denote with \(\mathbb{F}\), and the exceptional divisor \(\mathbb{P}\cong\mathbb{P}^{N}\) of \(f\). The two components \(\mathbb{P}\) and \(\mathbb{F}\) meet transversally and we will denote by \(R\) the intersection: \(R:=\mathbb{F}\cap\mathbb{P}\cong\mathbb{P}^{N-1}\). We will say that \(\mathbb{P}^{N}\)_degenerates_ to \(X_{0}=\mathbb{P}\cup\mathbb{F}\). We will now endow the general fibre \(X_{t}\) with a line bundle and we will describe its limits on \(X_{0}\) via this degeneration. In order to do so, we will give bases for the Picard groups of the components of \(X_{0}\). **Notation 3.1**.: _We denote by \(H_{\mathbb{P}}\) the hyperplane class of \(\mathbb{P}\), so that the Picard group of the exceptional component is generated by \(H_{\mathbb{P}}\). Moreover we denote with \(H_{\mathbb{F}}\) the hyperplane class of \(\mathbb{F}\), pull-back of a general hyperplane of \(Y_{0}\cong\mathbb{P}^{N}\), and with \(E:=\mathbb{P}|_{\mathbb{F}}\) the exceptional class in \(\mathbb{F}\): \(H_{\mathbb{F}}\) and \(E\) generate the Picard group of \(\mathbb{F}\)._ In these bases, \(R\) has class \(H_{\mathbb{P}}\) in \(\mathrm{N}^{1}(\mathbb{P})\) and \(E\) in \(\mathrm{N}^{1}(\mathbb{F})\). A line bundle on \(X_{0}\) will correspond to a line bundle on \(\mathbb{P}\) and a line bundle on \(\mathbb{F}\), which match on the intersection \(R\). 
In other terms, we can describe the Picard group of \(X_{0}\) as a fibre product \[\operatorname{Pic}(X_{0})=\operatorname{Pic}(\mathbb{P})\times_{\operatorname{ Pic}(R)}\operatorname{Pic}(\mathbb{F}).\] Consider the line bundle \(\mathcal{O}_{\mathcal{X}}(k)=(\pi_{1}^{\mathcal{X}})^{*}(\mathcal{O}_{\mathbb{ P}^{N}}(k))\) and the following twist by a negative multiple of the exceptional divisor: \[\mathcal{M}_{\mathcal{X}}(k,a):=\mathcal{O}_{\mathcal{X}}(k)\otimes\mathcal{O}_{ \mathcal{X}}(-a\mathbb{P}).\] The line bundle \(\mathcal{M}_{\mathcal{X}}(k,a)\) will induce a line bundle on each fibre \(X_{t}\) by restriction: \[\mathcal{M}_{t}(k,a):=\mathcal{M}_{\mathcal{X}}(k,a)|_{X_{t}},\ t\in\Delta.\] For \(t\neq 0\), since \(\mathbb{P}\cap X_{t}=\emptyset\), we have \[\mathcal{M}_{t}(k,a)=\mathcal{O}_{X_{t}}(k)\] while on the components of the central fibre we have \[\mathcal{M}_{\mathbb{P}}(k,a) :=\mathcal{M}_{\mathcal{X}}(k,a)|_{\mathbb{P}}=\mathcal{O}_{ \mathbb{P}}(aH_{\mathbb{P}}),\] \[\mathcal{M}_{\mathbb{F}}(k,a) :=\mathcal{M}_{\mathcal{X}}(k,a)|_{\mathbb{F}}=\mathcal{O}_{ \mathbb{F}}(kH_{\mathbb{F}}-aE).\] The resulting line bundle on \(X_{0}\) is a flat limit of the bundle \(\mathcal{O}_{X_{t}}(k)\cong\mathcal{O}_{\mathbb{P}^{n}}(k)\), for \(t\to 0\). #### 3.1.2. Degenerating the Veronese variety We will use the same notation as in Section 3.1.1. Let us set \[N:=N_{d}=\binom{n+d}{n}-1,\] and let \(V:=V_{n,d}=v_{d}(\mathbb{P}^{n})\subset\mathbb{P}^{N}\) denote the \(d\)-th Veronese embedding of \(\mathbb{P}^{n}\) in \(\mathbb{P}^{N}\), and consider the \(1\)-parameter family \(\mathcal{V}=V\times\Delta\subset\mathcal{Y}\) with the natural projections \(\pi_{1}^{\mathcal{Y}}|_{\mathcal{V}}:\mathcal{V}\to\mathbb{P}^{N}\) and \(\pi_{2}^{\mathcal{Y}}|_{\mathcal{V}}:\mathcal{V}\to\Delta\). The second projection is a flat morphism and we denote by \(V_{t}:=V\times\{t\}\) the fibre over \(t\in\Delta\). We pick a general point \((p,0)\in V_{0}\subset Y_{0}\) in the central fibre of \(\pi_{2}^{\mathcal{Y}}:\mathcal{Y}\to\Delta\) supported on the Veronese variety and we consider the blow-up \(f:\mathcal{X}\to\mathcal{Y}\) at \((p,0)\). This induces the blow-up \(f|_{\mathcal{V}}:\widetilde{\mathcal{V}}\to\mathcal{V}\) of \(\mathcal{V}\) at \((p,0)\) and the fibres of \((\pi_{2}^{\mathcal{Y}}\circ f)|_{\widetilde{\mathcal{V}}}\) are as follows: the general fibre is a Veronese variety \[\widetilde{V}_{t}\cong V,\] while the central fibre is the reduced union of two components, \[\widetilde{V}_{0}=\widetilde{V}_{\mathbb{F}}\cup\Lambda,\] where \(\widetilde{V}_{\mathbb{F}}\) is the strict transform of \(V_{0}\) under the blow-up at \(p\), while \(\Lambda\cong\mathbb{P}^{n}\) is the exceptional divisor on \(\widetilde{V}\). Moreover we can write \[\Lambda_{R}:=\widetilde{V}_{\mathbb{F}}\cap\Lambda\subset\Lambda,\] and observe that \(\Lambda_{R}\cong\mathbb{P}^{n-1}\). 
Consider the line bundle \(\mathcal{M}_{\mathcal{X}}(k,a)\) and twist it by the ideal sheaf of \(\widetilde{\mathcal{V}}\): \[\mathcal{M}_{\mathcal{X}}(k,a;\widetilde{\mathcal{V}}):=\mathcal{M}_{\mathcal{ X}}(k,a)\otimes\mathcal{I}_{\widetilde{\mathcal{V}}}=\mathcal{O}_{\mathcal{X}}(k) \otimes\mathcal{O}_{\mathcal{X}}(-a\mathbb{P})\otimes\mathcal{I}_{\widetilde{ \mathcal{V}}}.\] This restricts to the following line bundles on the fibres \(X_{t}\): \[\mathcal{M}_{t}(k,a;V) =\mathcal{O}_{X_{t}}(k)\otimes\mathcal{I}_{V_{t}},\ t\in\Delta \setminus\{0\},\] \[\mathcal{M}_{\mathbb{P}}(k,a;\Lambda) =\mathcal{O}_{\mathbb{P}}(aH_{\mathbb{P}})\otimes\mathcal{I}_{ \Lambda},\] \[\mathcal{M}_{\mathbb{F}}(k,a;\widetilde{V}) =\mathcal{O}_{\mathbb{F}}(kH_{\mathbb{F}}-aE)\otimes\mathcal{I}_{ \widetilde{V}}.\] The resulting line bundle on \(X_{0}\) is a flat limit of the bundle \(\mathcal{O}_{\mathbb{P}^{n}}(k)\otimes\mathcal{I}_{V}\) on the general fibre. #### 3.1.3. Degenerating a collection of points in general position We continue to use the notation of Sections 3.1.1-3.1.2. On the general fibre of \(\mathcal{Y}\to\Delta\), i.e. for \(t\neq 0\), we consider a collection of points \(\{x_{1,t}\dots,x_{h,t}\}\subset Y_{t}\) in general position and that, in particular, lie off the Veronese Variety \(V_{t}\subset Y_{t}\). After choosing a point that lies generically on the Veronese in the central fibre, \(p\in V_{0}\subset Y_{0}\), we degenerate each point \(x_{1,t}\in Y_{t}\) to an infinitely near point to \(p\in V_{0}\) as follows. For every \(i\in\{1,\dots,h\}\), consider the curve \((\pi_{1}^{\mathcal{Y}})^{-1}(x_{i})\subset\mathcal{Y}\) and its pull-back \(C_{i}\) on \(\mathcal{X}\). The union \(\bigcup_{i}C_{i}\) intersects each fibre \(X_{t}\) transversally in \(h\) distinct points. For \(t\neq 0\), the points \(\bigcup_{i}C_{i}\cap X_{t}\) are in general position. Moreover, by the generality assumption, the curves \((\pi_{1}^{\mathcal{Y}})^{-1}(p_{i})\subset\mathcal{Y}\) are not tangent to \(V_{0}\in Y_{0}\), therefore the intersection points \(C_{i}\cap X_{0}\) lie on \(\mathbb{P}\) but not inside \(\Lambda\). Consider the blow-up of \(\mathcal{X}\) along the union of curves \(\bigcup_{i=1}^{h}C_{t}\), \(g_{h}:\widetilde{\mathcal{X}}\to\mathcal{X}\), with exceptional divisors \(E_{C_{i}}\). Since these curves are disjoint, the result does not depend of the order of blow-up. The general fibre, that we continue to call \(X_{t}\) by abuse of notation, is isomorphic to a \(\mathbb{P}^{N}\) blown-up at \(h\) points in general position and its Picard group will be generated by the hyperplane class and by the classes of the exceptional divisors: \[\operatorname{Pic}(X_{t})=\mathbb{Z}\langle H,E_{1,t},\ldots,E_{h,t}\rangle.\] The central fibre has two components, the pull-back of \(\mathbb{F}\) and the strict transform of \(\mathbb{P}\cong\mathbb{P}^{N}\), which is isomorphic to a \(\mathbb{P}^{N}\) blown-up at \(h\) points in general position: abusing notation, we call \(\mathbb{F}\) and \(\mathbb{P}\) the two components, so that \(X_{0}=\mathbb{P}\cup\mathbb{F}\). 
The Picard group of \(\widetilde{\mathbb{P}}\) is \[\operatorname{Pic}(\mathbb{P})=\mathbb{Z}\langle H_{\mathbb{P}},E_{1,t},\ldots,E_{h,t}\rangle.\] For a vector \(\mathbf{m}=(m_{1},\ldots,m_{h})\in\mathbb{N}^{n}\), consider the following sheaf on \(\widetilde{\mathcal{X}}\): \[\mathcal{M}_{\widetilde{\mathcal{X}}}(k,a;\widetilde{\mathcal{V}},\mathbf{m}) :=\mathcal{O}(k)\otimes\mathcal{O}(-a\mathbb{P})\otimes\mathcal{O}(-(m_{1}E_ {C_{1}}+\cdots+m_{h}E_{C_{h}}))\otimes\mathcal{I}_{\widetilde{V}}.\] It restricts to \[\mathcal{M}_{t}(k,a;V,\mathbf{m}) =\mathcal{O}(k)\otimes\mathcal{O}(-(m_{1}E_{1,t}+\cdots+m_{h}E_{h,t}))\otimes\mathcal{I}_{V_{t}},\ t\neq 0,\] on the general fibre, and to \[\mathcal{M}_{\mathbb{P}}(k,a;\Lambda,\mathbf{m}) =\mathcal{O}_{\mathbb{P}}(aH_{\mathbb{P}}-(m_{1}E_{1}+\cdots+m_{h }E_{h}))\otimes\mathcal{I}_{\Lambda},\] \[\mathcal{M}_{\mathbb{F}}(k,a;\widetilde{V},\mathbf{m}) =\mathcal{O}_{\mathbb{F}}(kH_{\mathbb{F}}-aE)\otimes\mathcal{I}_{ \widetilde{V}}.\] on the components of the central fibre. The resulting line bundle on \(X_{0}\) is a flat limit of the bundle on the general fibre. #### 3.1.4. Matching conditions We will abbreviate the notation of the previous sections by setting \[\mathcal{M}_{\widetilde{\mathcal{X}}}:=\mathcal{M}_{\widetilde{\mathcal{X}}}(k,a;\widetilde{\mathcal{V}},\mathbf{m})\] and \[\mathcal{M}_{t} :=\mathcal{M}_{t}(k,a;V,\mathbf{m})\] \[\mathcal{M}_{\mathbb{P}} :=\mathcal{M}_{\mathbb{P}}(k,a;\Lambda,\mathbf{m})\] \[\mathcal{M}_{\mathbb{F}} :=\mathcal{M}_{\mathbb{F}}(k,a;\widetilde{V},\mathbf{m}).\] We are interested in computing the dimension of the space of global sections of the line bundle on the central fibre, which by upper-semicontinuity is an upper bound for the dimension of the space of global sections of the line bundle on the general fibre: \[\operatorname{h}^{0}(X_{0},\mathcal{M}_{0})\geq\operatorname{h}^{0}(X_{t}, \mathcal{M}_{t}). \tag{3.1}\] In order to do so, we consider the natural restrictions of to the intersection \(R=\mathbb{P}\cap\mathbb{F}\) of the central fibre: \[0 \to\hat{\mathcal{M}}_{\mathbb{P}}\to\mathcal{M}_{\mathbb{P}}\to \mathcal{M}_{\mathbb{P}}|_{R}\to 0,\] \[0 \to\hat{\mathcal{M}}_{\mathbb{F}}\to\mathcal{M}_{\mathbb{F}}\to \mathcal{M}_{\mathbb{F}}|_{R}\to 0,\] where \(\hat{\mathcal{M}}_{\mathbb{P}}=\hat{\mathcal{M}}_{\mathbb{P}}(k,a;\Lambda, \mathbf{m})\) and \(\hat{\mathcal{M}}_{\mathbb{F}}=\hat{\mathcal{M}}_{\mathbb{F}}(k,a;\tilde{V}, \mathbf{m})\) denote the kernels of the restriction maps. 
Since \(R=H_{\mathbb{P}}\) on \(\mathbb{P}\) and \(R=E\) on \(\mathbb{F}\), we have \[\hat{\mathcal{M}}_{\mathbb{P}}=\mathcal{O}_{\mathbb{P}}((a-1)H_{\mathbb{P}}-(m_{1}E_{1}+\cdots+m_{h}E_{h}))\otimes\mathcal{I}_{\Lambda},\] \[\hat{\mathcal{M}}_{\mathbb{F}}=\mathcal{O}_{\mathbb{F}}(kH_{\mathbb{F}}-(a+1)E)\otimes\mathcal{I}_{\tilde{V}}.\] Consider the restriction maps of global sections: \[r_{\mathbb{P}}:\mathrm{H}^{0}(\mathbb{P},\mathcal{M}_{\mathbb{P}})\to\mathrm{H}^{0}(R,\mathcal{M}_{\mathbb{P}}|_{R}),\] \[r_{\mathbb{F}}:\mathrm{H}^{0}(\mathbb{F},\mathcal{M}_{\mathbb{F}})\to\mathrm{H}^{0}(R,\mathcal{M}_{\mathbb{F}}|_{R}).\] We notice that the spaces of global sections of the restricted systems are both subspaces of the space of global sections of the degree-\(a\) line bundle on \(R\cong\mathbb{P}^{N-1}\): \[\mathrm{H}^{0}(R,\mathcal{M}_{\mathbb{P}}|_{R}),\mathrm{H}^{0}(R,\mathcal{M}_{\mathbb{F}}|_{R})\subseteq\mathrm{H}^{0}(R,\mathcal{O}_{R}(a)).\] A global section of \(\mathcal{M}_{0}\) consists of an element of \(\mathrm{H}^{0}(\mathbb{P},\mathcal{M}_{\mathbb{P}})\) and an element of \(\mathrm{H}^{0}(\mathbb{F},\mathcal{M}_{\mathbb{F}})\) which match in \(\mathrm{H}^{0}(R,\mathcal{O}_{R}(a))\), i.e. the space of global sections \(H^{0}(X_{0},\mathcal{M}_{0})\) is described as the fibre product of \(\mathrm{H}^{0}(\mathbb{P},\mathcal{M}_{\mathbb{P}})\) and \(\mathrm{H}^{0}(\mathbb{F},\mathcal{M}_{\mathbb{F}})\) over \(\mathrm{H}^{0}(R,\mathcal{O}_{R}(a))\). This yields the formula for the dimension of the spaces of global sections \[\mathrm{h}^{0}(X_{0},\mathcal{M}_{0})=\mathrm{h}^{0}(\mathbb{P},\hat{\mathcal{M}}_{\mathbb{P}})+\mathrm{h}^{0}(\mathbb{F},\hat{\mathcal{M}}_{\mathbb{F}})+\mathrm{h}^{0}(R,\mathcal{M}_{\mathbb{P}}|_{R}\cap\mathcal{M}_{\mathbb{F}}|_{R}),\] which, in terms of dimensions of the corresponding linear systems, reads \[\dim\mathcal{M}_{0}=\dim\hat{\mathcal{M}}_{\mathbb{P}}+\dim\hat{\mathcal{M}}_{\mathbb{F}}+\dim\mathcal{M}_{\mathbb{P}}|_{R}\cap\mathcal{M}_{\mathbb{F}}|_{R}+2. \tag{3.2}\] **Remark 3.2**.: _There is an obvious isomorphism between \(\mathcal{M}_{t}(k,a;V,\mathbf{m})\), line bundle on \(X_{t}\), and the line bundle \(\mathcal{L}_{N,k}(V,\mathbf{m}):=\mathcal{O}_{\mathbb{P}^{N}}(k)\otimes\mathcal{I}_{V}\otimes\mathcal{I}_{Z}\) on \(\mathbb{P}^{N}\), where \(Z\) is a union of fat points in general position in \(\mathbb{P}^{N}\) with multiplicities respectively \(m_{1},\dots,m_{h}\). Such an isomorphism is given by taking strict transforms of elements of \(\mathcal{L}_{N,k}(V,\mathbf{m})\)._ _Similarly, since \(\mathbb{P}\) is the blow-up of \(\mathbb{P}^{N}\) at \(h\) points in general position and \(\Lambda\subset\mathbb{P}\) is a general linear subspace, there is an isomorphism \(\mathcal{M}_{\mathbb{P}}(k,a;\Lambda,\mathbf{m})\cong\mathcal{L}_{N,a}(\Lambda,\mathbf{m}):=\mathcal{O}_{\mathbb{P}^{N}}(a)\otimes\mathcal{I}_{\Lambda}\otimes\mathcal{I}_{Z}\)._ _Finally, since \(\mathbb{F}\) is a \(\mathbb{P}^{N}\) blown-up at a point \(p\) on the Veronese variety \(V\subset\mathbb{P}^{N}\) and since \(\tilde{V}\) is the strict transform of \(V\) via this blow-up, there is an isomorphism \(\mathcal{M}_{\mathbb{F}}(k,a;\tilde{V},\mathbf{m})\cong\mathcal{L}_{N,k}(V,a):=\mathcal{O}_{\mathbb{P}^{N}}(k)\otimes\mathcal{I}_{V}\otimes\mathcal{I}_{p^{a}}\)._ ### Toric degeneration of the Veronese variety We refer to [10] for details on projective toric varieties associated with convex lattice polytopes and to [11] for details on coherent triangulations.
Let \(\Delta_{n}\subset\mathbb{R}^{n}\) be the \(n\)-dimensional simplex obtained as the convex hull of the points \((0,\ldots,0),(1,0,\ldots,0),\ldots,(0,\ldots,0,1)\). Consider \(d\Delta_{n}=\Delta_{n}+\cdots+\Delta_{n}\), where \(+\) here denotes the Minkowski sum of polytopes in \(\mathbb{R}^{n}\). The polytope \(d\Delta_{n}\) defines the \(d\)-th Veronese embedding of \(\mathbb{P}^{n}\) in \(\mathbb{P}^{N}\), with \(N=N_{d}=\binom{n+d}{n}-1\), that we shall call \(V=V_{n,d}\) as in the previous section. Consider the lattice \(\mathbb{Z}^{n}\subset\mathbb{R}^{n}\) and the set of lattice points \(\mathcal{A}:=d\Delta_{n}\cap\mathbb{Z}^{n}\). We have \(\sharp\mathcal{A}=N+1\) and each such point corresponds to a coordinate point of the ambient space \(\mathbb{P}^{N}\). #### 3.2.1. Degenerating the Veronese to a union of linear spaces Take a _regular triangulation_ of \(d\Delta_{n}\), that is a decomposition of \(d\Delta_{n}\) into a finite union of simplices \[\bigcup_{i=1}^{d^{n}}S_{i},\] where * each \(S_{i}\) is obtained as the convex hull of \(n+1\) non-aligned points of \(\mathcal{A}\), * \(\sharp S_{i}\cap\mathbb{Z}^{n}=n+1\), * for \(i\neq j\), \(S_{i}\cap S_{j}\) is a common face of \(S_{i}\) and \(S_{j}\) (possibly empty), * there is a strictly convex piecewise linear function \(\lambda:\mathbb{R}^{n}\to\mathbb{R}\) whose domains of linearity are precisely the \(S_{i}\)'s. We can always assume that \(S_{1}\) is the convex hull of the lattice points \((0,\ldots,0)\), \((1,0,\ldots,0),\ldots,(0,\ldots,0,1)\), so that it lies at a corner of \(d\Delta_{n}\). Consider for example, for \(n=2\) and \(d=3\), the triangulation into \(9\) triangles and the piecewise linear function inducing it shown in Figure 1. In this figure, \(S_{1}\) is the triangle with vertices \((0,0),(1,0),(0,1)\). Each \(S_{i}\) defines a \(\mathbb{P}^{n}\) as a toric variety, which we will call \(\Pi_{i}\), for \(i=1,\ldots,d^{n}\). Since a regular triangulation of \(d\Delta_{n}\) induces a \(1\)-parameter embedded degeneration of \(V_{n,d}\subset\mathbb{P}^{N}\) to the union of toric varieties described by the \(S_{i}\)'s (see [1] and [21] for details on the \(2\)-dimensional case), we have a degeneration of the Veronese variety to a union of \(d^{n}\) \(n\)-planes \[\mathbf{\Pi}:=\bigcup_{i=1}^{d^{n}}\Pi_{i}.\] The intersection table of these planes is encoded in the combinatorial data described by the triangulation, that is: if \(S_{i}\cap S_{j}\) is \(r\)-dimensional, then \(\Pi_{i}\cap\Pi_{j}\cong\mathbb{P}^{r}\), for \(0\leq r\leq n-1\). Figure 1. A regular triangulation of \(3\Delta_{2}\). **Remark 3.3**.: _Because of the choice of \(S_{1}\) made, we will say that \(\Pi_{1}\) is a sink. In practice this means that it is possible to choose a hyperplane of \(\mathbb{P}^{N}\) that contains every \(\Pi_{i}\), \(i>1\), but that does not contain \(\Pi_{1}\)._ Moreover, the union of planes \(\mathbf{\Pi}\subset\mathbb{P}^{N}\) is a torus invariant subscheme. In fact, consider the simplex \(\Delta_{N}\), which defines \(\mathbb{P}^{N}\) as a toric variety, with an action of the algebraic torus \((\mathbb{C}^{*})^{N}\). Each \(r\)-dimensional face of \(\Delta_{N}\) corresponds to a torus invariant linear subspace of dimension \(r\) of \(\mathbb{P}^{N}\). In particular, vertices of \(\Delta_{N}\) are in one-to-one correspondence with \(N+1\) linearly independent points, which we may assume to be the coordinate points of \(\mathbb{P}^{N}\), up to a change of coordinates.
Each \(r\)-dimensional face of \(\Delta_{N}\) corresponds to a \(\mathbb{P}^{r}\) spanned by \(r+1\) coordinate points. Since each \(\Pi_{i}\) is the linear span of \(n+1\) coordinate points of \(\mathbb{P}^{N}\), then the union \(\mathbf{\Pi}\) is embedded in a copy of \(\mathbb{P}^{N}\) and it is invariant under the action of the torus \((\mathbb{C}^{*})^{N}\). In particular, each \(\Pi_{i}\) will correspond to a _marked_\(n\)-dimensional face of \(\Delta_{N}\) and we have \(d^{n}\) such marked faces. #### 3.2.2. Degenerating a linear system intepolating the Veronese We now consider the linear systems on \(\mathbb{P}^{N}\) of degree\(-k\) hypersurfaces containing the Veronese variety on the one hand, and the union of \(n-\)planes \(\mathbf{\Pi}\) on the other hand: \[\mathcal{L}_{N,k}(V_{n,d}) :=\mathcal{O}_{\mathbb{P}^{N}}(k)\otimes\mathcal{I}_{V_{n,d}},\] \[\mathcal{L}_{N,k}(\mathbf{\Pi}) :=\mathcal{O}_{\mathbb{P}^{N}}(k)\otimes\mathcal{I}_{\mathbf{\Pi}}\] **Lemma 3.4**.: _In the above notation, we have_ \[\dim\mathcal{L}_{N,k}(V_{n,d})\leq\dim\mathcal{L}_{N,k}(\mathbf{\Pi}).\] Proof.: Since \(\mathbf{\Pi}\) is a flat degeneration of \(V_{n,d}\), then the statement follows by semi-continuity of the function \(\dim\). ## 4. Some auxiliary linear systems ### Hypersurfaces containing a linear subspace and \(h\) double points in general position It is a well celebrated result of Alexander and Hirschowitz that if we impose \(h\) double points in general position to the hypersurfaces of degree \(d\) of \(\mathbb{P}^{N}\), there is only a finite list of cases where the dimension is larger than that obtained via a parameter count, i.e. \[\operatorname{edim}\mathcal{L}_{N,d}(2^{h})=\max\left\{-1,\binom{N+d}{N}-h(N+ 1)-1\right\}.\] **Theorem 4.1** (Alexander-Hirschowitz Theorem).: _The linear system \(\mathcal{L}_{N,d}(2^{h})\) is non-special except in the following cases:_ * \(d=2\) _and_ \(N\geq 2\)_,_ \(2\leq h\leq N\)_;_ * \(d=3\) _and_ \((N,h)=(4,7)\)_;_ * \(d=4\) _and_ \((N,h)=(2,5),(3,9),(4,14)\)_._ The interested reader may see [1],[1],[1],[1],[1] for the original proof based on specialisation of points (Horace method), and [1] and [1] for a simplified proof. An alternative proof via a different degeneration construction can be found here [21, 22]. This inspired the degeneration approach developed in Section 3.1 that will be used to prove the main result, Theorem 1.1. In this section we want to present an analogous result about linear systems with \(h\) imposed double points in general position and a linear subspace. Let \(\Lambda\subset\mathbb{P}^{N}\) be a general linear subspace of dimension \(n\) and let \(Z\subset\mathbb{P}^{n}\) be a double point scheme with support a set of points in general position. Let \(\mathcal{I}_{\Lambda}\) be the ideal sheaf of \(\Lambda\subset\mathbb{P}^{N}\) and let \(\mathcal{I}_{Z}\) be the ideal sheaf of \(Z\subset\mathbb{P}^{N}\). Consider the sheaf \[\mathcal{L}_{N,k}(\Lambda,2^{h}):=\mathcal{O}_{\mathbb{P}^{N}}(k)\otimes \mathcal{I}_{\Lambda}\otimes\mathcal{I}_{Z}.\] Since the Hilbert polynomial of \(\Lambda\subset\mathbb{P}^{N}\) at degree \(k\) is \[\binom{n+k}{n}\] or, in other terms, \[\mathrm{h}^{0}(\mathcal{O}_{\mathbb{P}^{N}}(k)\otimes\mathcal{I}_{\Lambda})= \binom{N+k}{N}-\binom{n+k}{n}, \tag{4.1}\] and since \(h\) double points in general position of \(\mathbb{P}^{N}\) impose \(h(N+1)\) conditions to the hypersurfaces of \(\mathbb{P}^{N}\) of degree \(k\), we can give the following definitions. 
**Definition 4.2**.: _The virtual dimension of the linear system \(\mathcal{L}_{N,k,\Lambda}(2^{h})\) of hypersurfaces of \(\mathbb{P}^{n}\) that vanish along a linear subspace of dimension \(n\), \(\Lambda\subset\mathbb{P}^{N}\), and double at \(h\) points in general position is the following integer:_ \[\operatorname{vdim}\mathcal{L}_{N,k}(\Lambda,2^{h})=\binom{N+k}{N}-\binom{n+k }{n}-h(N+1)-1.\] _The expected dimension of \(\mathcal{L}_{N,k}(\Lambda,2^{h})\) is_ \[\operatorname{edim}\mathcal{L}_{N,k}(\Lambda,2^{h})=\max\left\{-1, \operatorname{vdim}\mathcal{L}_{N,k}(\Lambda,2^{h})\right\}.\] Since \(\Lambda\) and the scheme of double points are disjoint the virtual dimension, which is obtained by a simple parameter count, provides a lower bound to the actual dimension: \[\dim\mathcal{L}_{N,k}(\Lambda,2^{h})\geq\operatorname{edim}\mathcal{L}_{N,k}( \Lambda,2^{h}). \tag{4.2}\] **Proposition 4.3**.: _Let \(\Lambda\subset\mathbb{P}^{N}\) be linear subspace of dimension \(n\) and let \(Z_{\Lambda}\subset\mathbb{P}^{n}\) be a double point scheme supported on a collection of points in general position in \(\mathbb{P}^{N}\). Then if_ \[h\leq\frac{1}{N+1}\binom{N+k-1}{N}, \tag{4.3}\] _and \((N,k-1,h)\) is not in the list of exceptions of Theorem 4.1, and \(k\geq 2\), then_ \[\dim\mathcal{L}_{N,k}(\Lambda,2^{h})=\operatorname{edim}\mathcal{L}_{N,k}( \Lambda,2^{h}). \tag{4.4}\] Proof.: If \(k=2\) and \(h=0\), the conclusion follows from (4.1). If \(k=2\) and \(h=1\), it is easy to see that all elements of \(\mathcal{L}_{N,2}(\Lambda,2)\) are pointed quadric cones containing \(\Lambda\) and hence we have the isomorphism \(\mathcal{L}_{N,2}(\Lambda,2)\cong\mathcal{L}_{N-1,2}(\Lambda)\). By (4.1), we have that \(\dim\mathcal{L}_{N-1,2}(\Lambda)=\binom{N+1}{2}-\binom{n+2}{2}\). We conclude noticing that the latter equals the expected dimension of \(\mathcal{L}_{N,2}(\Lambda,2)\). Now, assume \(k\geq 4\) and consider the following exact sequence obtained by restricting \(\mathcal{L}_{N,k}(\Lambda,2^{h})\) to a general hyperplane \(H\subset\mathbb{P}^{N}\) such that \(\Lambda\subseteq H\): \[0\to\mathcal{L}_{N,k-1}(2^{h})\to\mathcal{L}_{N,k}(\Lambda,2^{h})\to\mathcal{ L}_{N,k}(\Lambda,2^{h})|_{H}\subseteq\mathcal{L}_{N-1,k}(\Lambda). \tag{4.5}\] Under the assumption (4.3) and using Theorem 4.1, the kernel system \(\mathcal{L}_{N,k-1}(2^{h})\) has dimension equal to its virtual dimension: \[\dim\mathcal{L}_{N,k-1}(2^{h})=\binom{N+k-1}{N}-h(N+1)-1,\] and in particular \(H^{1}(\mathbb{P}^{N},\dim\mathcal{L}_{N,k-1}(2^{h}))=0\), so that we have the following exact sequence in cohomology: \[0\to\operatorname{H}^{0}(\mathbb{P}^{N},\mathcal{L}_{N,k-1}(2^{h}))\to H^{0}( \mathbb{P}^{N},\mathcal{L}_{N,k}(\Lambda,2^{h}))\to\operatorname{H}^{0}(H, \mathcal{L}_{N,k}(\Lambda,2^{h})|_{H})\to 0.\] Moreover by (4.1) \[\operatorname{h}^{0}(\mathbb{P}^{N-1},\mathcal{L}_{N-1,k}(\Lambda))=\binom{N- 1+k}{N-1}-\binom{n+k}{n},\] and so \[\operatorname{h}^{0}(H,\mathcal{L}_{N,k}(\Lambda,2^{h})|_{H})\leq\binom{N-1+k }{N-1}-\binom{n+k}{n}.\] From the exact sequence of global sections we obtain: \[\operatorname{h}^{0}(\mathbb{P}^{N},\mathcal{L}_{N,k}(\Lambda,2^ {h})) =\operatorname{h}^{0}(\mathbb{P}^{N},\mathcal{L}_{N,k-1}(2^{h}))+ \operatorname{h}^{0}(H,\mathcal{L}_{N,k}(\Lambda,2^{h})|_{H})\] \[\leq\binom{N+k}{N}-\binom{n+k}{n}-h(N+1).\] We conclude the proof of this case using (4.2). Finally, assume that \(k=3\). In this case, the bound on the number of points is \(h\leq\frac{N}{2}+1\). 
We consider the restriction to a general hyperplane containing \(\Lambda\) as in (4.5). The kernel system is special by Theorem 4.1, and one can easily check that it has dimension \[\dim\mathcal{L}_{N,2}(2^{h})=\binom{N+2}{2}-h(N+1)+\binom{h}{2}-1,\] see for instance [10, Section 1.2.1]. Moreover, as a simple consequence of Bezout's Theorem, the linear system \(\mathcal{L}_{N,3}(\Lambda,2^{h})\) contains in its base locus the lines spanned by pairs of points, each of which intersects \(H\) in a point. We claim that the base locus of \(\mathcal{L}_{N,3}(\Lambda,2^{h})\) is supported on the union of \(\Lambda\) and these lines. This implies that the restricted system is the complete linear system of cubics containing \(\Lambda\) and passing simply through the \(\binom{h}{2}\) trace points: \[\mathcal{L}_{N,3}(\Lambda,2^{h})|_{H}=\mathcal{L}_{N-1,3}(\Lambda,1^{\binom{ h}{2}}).\] We claim that the linear system on the right hand side of the above expression is non-special, namely that the scheme given by \(\Lambda\) and the simple points impose independent conditions to the cubics of \(\mathbb{P}^{N-1}\). This shows that \[\dim\mathcal{L}_{N,3}(\Lambda,2^{h}) =\dim\mathcal{L}_{N,2}(2^{h})+\dim\mathcal{L}_{N-1,3}(\Lambda,1^ {\binom{h}{2}})+1\] \[=\left(\binom{N+2}{2}-h(N+1)+\binom{h}{2}-1\right)\] \[\quad+\left(\binom{N+2}{3}-\binom{n+3}{3}-\binom{h}{2}-1\right)+1,\] which implies that \(\mathcal{L}_{N,3}(\Lambda,2^{h})\) is non-special. We are left to proving the two claims. For the second claim, first of all notice that there is a hyperplane inside \(H\), containing all \(\binom{h}{2}\) points. This can be taken to be the intersection with \(H\) of a hyperplane of \(\mathbb{P}^{N}\) containing the \(h\) original points. Call \(H_{1}\subset H\) the intersection and restrict the linear system \(\mathcal{L}_{N-1,3}(\Lambda,1^{\binom{h}{2}})\) to it, giving rise to the following exact sequence: \[0\to\mathcal{L}_{N-1,2}(\Lambda)\to\mathcal{L}_{N-1,3}(\Lambda,1^{\binom{h}{2} })\to\mathcal{L}_{N-2,3}(1^{\binom{h}{2}}).\] Since the two external linear systems are non-special, the so is the middle one, concluding the proof of the claim. As for the first claim: we show that \(\mathcal{L}_{N,3}(\Lambda,2^{h})\) has no additional base locus other than \(\Lambda\) and the lines spanned by pairs of points. Let's call \(p_{1},\ldots,p_{h}\) the \(h\) assigned pints in general position. Assume that \(q\) is a point in \(\mathbb{P}^{N}\) in linearly general position with respect to \(p_{1},\ldots,p_{h}\). Since \(h\leq\frac{N}{2}+1<N\), there is a hyperplane \(A\) containing \(p_{1},\ldots,p_{h}\) but not containing \(q\). Since \(n<N\), there is a hyperplane \(B\) containing \(\Lambda\) but not containing \(q\). The cubic \(2A+B\) belongs to \(\mathcal{L}_{N,3}(\Lambda,2^{h})\) proving that \(q\) cannot be a base point. Assume now that \(q\) is a point in \(\mathbb{P}^{N}\) not in linearly general position with respect to \(p_{1},\ldots,p_{h}\), which means that there is a linear space spanned by some of the \(p_{i}\)'s containing \(q\), but such that \(q\) does not belong to any of the lines \(\langle p_{i},p_{j}\rangle\). Let \(\langle p_{i}:i\in I_{q}\rangle\) be the minimum such linear span and choose two distinct indices \(j_{1},j_{2}\in I_{q}\). Let \(A_{1}\) be a hyperplane containing all \(p_{i}\)'s with \(i\neq j_{1}\) and let \(A_{2}\) be a hyperplane containing all \(p_{i}\)'s with \(i\neq j_{2}\). 
Let \(B\) be a hyperplabe containing \(\Lambda\), \(p_{i_{1}}\) and \(p_{j_{2}}\) and not containing \(q\). The cubic \(A_{1}+A_{2}+B\) belongs to \(\mathcal{L}_{N,3}(\Lambda,2^{h})\) proving that \(q\) cannot be a base point. Finally, since the multiplicity of the general element of the linear system of cubics along the line \(q\in\langle p_{i}:i\in I_{q}\rangle\) is exactly \(1\), then the above cases are exhaustive and this conclude the proof of the claim. ### Hypersurfaces containing the Veronese variety and a fat point Let \(V=V_{n,d}\subset\mathbb{P}^{n}\) be the \(d\)-th Veronese embedding of \(\mathbb{P}^{n}\) and let \(\{p^{a}\}\subset V\subset\mathbb{P}^{n}\) be a fat point scheme with support on \(V\). Let \(\mathcal{I}_{V}\) be the ideal sheaf of \(V\subset\mathbb{P}^{N}\) and let \(\mathcal{I}_{p^{a}}\) be the ideal sheaf of \(Z\subset\mathbb{P}^{N}\). Consider the sheaf \[\mathcal{L}_{N,k}(V,a):=\mathcal{O}_{\mathbb{P}^{N}}(k)\otimes\mathcal{I}_{V} \otimes\mathcal{I}_{p^{a}}.\] We are interested in computing the dimension of the space of global sections. The Hilbert polynomial of \(V\subset\mathbb{P}^{N}\) at degree \(k\) is \[\binom{n+kd}{n}\] or, equivalently, we have \[\dim\mathcal{O}_{\mathbb{P}^{N}}(k)\otimes\mathcal{I}_{V}=\binom{N+k}{N}- \binom{n+kd}{n}-1.\] The scheme given by a point of multiplicity \(a\) of \(\mathbb{P}^{N}\) imposes \[\binom{N+a-1}{N} \tag{4.6}\] conditions to the hypersurfaces of \(\mathbb{P}^{N}\) of degree \(k\). Therefore the _virtual dimension_ of \(\mathcal{L}_{N,k}(V,a)\), obtained by a parameter count, is \[\binom{N+k}{N}-\binom{n+kd}{n}-\binom{N+a-1}{N}-1.\] It does not yield a useful notion of expected dimension for the linear system \(\mathcal{L}_{N,k}(V,a)\) due to the fact that the two subschemes \(V\) and \(\{p^{a}\}\) of \(\mathbb{P}^{N}\) have nonempty intersection so that some of the conditions imposed by them individually to the hypersurfaces of degree \(k\) of \(\mathbb{P}^{N}\) will overlap. For instance, if we first impose \(V\) and then \(\{p\}\), clearly the latter will not give any independent condition, because, by the containment relation \(p\in V\), \(p\) is a base point of the linear system \(\mathcal{L}_{N,k}(V)=\mathcal{O}_{\mathbb{P}^{N}}(k)\otimes\mathcal{I}_{V}\). When the support of a fat point subscheme \(Z=\{p^{a}\}\subset\mathbb{P}^{N}\), whose length is given in (4.6), lies on the \(n\)-dimensional subvariety \(V\), the restriction \(Z|_{V}\subset V\) is a subscheme of length \[\binom{n+a-1}{n}.\] Therefore we may define the following notion of expected dimension. #### 4.2.1. A notion of expected dimension We introduce the following refined parameter count. **Definition 4.4**.: _Let \(V\subset\mathbb{P}^{N}\) be the \(d\)-th Veronese embedding of \(\mathbb{P}^{N}\) and let \(\{p^{a}\}\subset\mathbb{P}^{N}\) be a fat point scheme supported on \(V\). The expected dimension of \(\mathcal{L}_{N,k}(V,a)\), denoted by \(\operatorname{edim}\mathcal{L}_{N,k}(V,a)\), is the following integer:_ \[\max\left\{-1,\binom{N+k}{N}-\binom{n+kd}{n}-\left[\binom{N+a-1}{N}-\binom{n+ a-1}{n}\right]-1\right\}.\] That the integer of Definition 4.4 is a lower bound to the actual dimension of \(\mathcal{L}_{N,k}(V,a)\) is not an obvious statement. We will show that it does when \(a\leq k\). **Proposition 4.5**.: _Let \(\nu_{d}:\mathbb{P}^{n}\to\mathbb{P}^{N}\) be the \(d\)-the Veronese embedding. 
Let \(V=V_{n,d}:=\nu_{d}(\mathbb{P}^{n})\subset\mathbb{P}^{N}\) and let \(Z_{V}=\{p^{a}\}\subset\mathbb{P}^{n}\) be a fat point of multiplicity \(a\leq k\) supported on \(V\). Then_ \[\dim\mathcal{L}_{N,k}(V,a)\geq\operatorname{edim}\mathcal{L}_{N,k}(V,a). \tag{4.7}\] Proof.: We consider the linear system \(\mathcal{L}_{N,k}(a)=\mathcal{O}_{\mathbb{P}^{N}}\otimes\mathcal{I}_{Z_{V}}\) of the degree-\(k\) hypersurfaces of \(\mathbb{P}^{N}\) with a point of multiplicity \(a\) with support on \(V\). Restriction to \(V\) gives the following Castelnuovo sequence: \[0\to\mathcal{L}_{N,k}(V,a)\to\mathcal{L}_{N,k}(a)\to\mathcal{L}_{N,k}(a)|_{V}.\] It is an easy observation that a fat point of multiplicity \(a\) imposes independent conditions to the hypersurfaces of fixed degree of \(\mathbb{P}^{N}\), as long as the multiplicity does not exceed the degree. Therefore we can obtain the dimension of the linear system \(\mathcal{L}_{N,k}(a)\) by a parameter count: \[\dim\mathcal{L}_{N,k,V}(a)=\binom{N+k}{N}-\binom{N+a-1}{N}-1\] In particular \(\operatorname{h}^{1}(\mathbb{P}^{N},\mathcal{L}_{N,k}(a))=0\), so that we have the following sequence in cohomology: \[0 \to\operatorname{H}^{0}(\mathcal{L}_{N,k}(V,a))\to \operatorname{H}^{0}(\mathcal{L}_{N,k}(a))\to\operatorname{H}^{0}(\mathcal{L} _{N,k}(a)|_{V})\] \[\to\operatorname{H}^{1}(\mathcal{L}_{N,k}(V,a))\to 0.\] Since the Veronese morphism \(\nu_{d}:\mathbb{P}^{n}\to\mathbb{P}^{N}\) gives an isomorphism of \(\mathbb{P}^{n}\) to its image \(V\), then the pull-back of \(\mathcal{L}_{N,k}(a)|_{V}\) is a linear system of degree-\(kd\) hypersurfaces of \(\mathbb{P}^{n}\): \[\nu_{d}^{*}(\mathcal{L}_{N,k}(a)|_{V})\subseteq\mathcal{O}_{\mathbb{P}^{n}}(kd) \otimes\mathcal{I}_{Z^{\prime}}=:\mathcal{L}_{n,kd}(a)\] where \(Z^{\prime}\subset\mathbb{P}^{n}\) is a fat point of multiplicity \(a\) with support a general point of \(V\). Since \(nk\geq a\) by the assumption, the linear system \(\mathcal{L}_{n,kd}(a)\) has dimension \[\dim\mathcal{L}_{n,kd}(a)=\binom{n+kd}{n}-\binom{n+a-1}{n}-1.\] From this we obtain \[\dim\mathcal{L}_{N,k}(a)|_{V}\leq\binom{n+kd}{n}-\binom{n+a-1}{n}-1.\] Putting everything together: \[\mathrm{h}^{0}(\mathcal{L}_{N,k}(V,a)) =\mathrm{h}^{0}(\mathcal{L}_{N,k}(a))-\mathrm{h}^{0}(\mathcal{L}_ {N,k}(a)|_{V})+\mathrm{h}^{1}(\mathcal{L}_{N,k}(V,a))\] \[\geq\mathrm{h}^{0}(\mathcal{L}_{N,k}(a))-\mathrm{h}^{0}( \mathcal{L}_{N,k}(a)|_{V})\] \[\geq\left(\binom{N+k}{N}-\binom{N+a-1}{N}\right)-\left(\binom{n+ kd}{n}-\binom{n+a-1}{n}\right),\] which concludes the proof. #### 4.2.2. Dimensionality via apolarity and toric geometry Let \(V=V_{n,d}\subset\mathbb{P}^{N}\) be the Veronese variety and let \(\mathbf{\Pi}\subset\mathbb{P}^{N}\) be a union of \(n\)-planes, degeneration of \(V\), as in Section 3.2. Let \(p\in V\) and \(p_{0}\in\Pi_{1}\subset\mathbf{\Pi}\) be a general points. Consider the linear systems on \(\mathbb{P}^{N}\) \[\mathcal{L}_{N,k}(V,a) :=\mathcal{O}_{\mathbb{P}^{N}}(k)\otimes\mathcal{I}_{V}\otimes \mathcal{I}_{p^{a}},\] \[\mathcal{L}_{N,k}(\mathbf{\Pi},a) :=\mathcal{O}_{\mathbb{P}^{N}}(k)\otimes\mathcal{I}_{v}\otimes \mathcal{I}_{p_{0}^{a}}.\] Building from Lemma 3.4, we obtain the following result. **Proposition 4.6**.: _In the above notation, we have_ \[\dim\mathcal{L}_{N,k}(V,a)\leq\dim\mathcal{L}_{N,k}(\mathbf{\Pi},a).\] Proof.: Since \(p\) is a general point on \(V\), we may assume that it degenerates to a general point \(p_{0}\in S_{1}\). 
Since \(\mathbf{\Pi}\cup\{p_{0}^{a}\}\) is a flat degeneration of the scheme \(V\cup\{p^{a}\}\), then the Hilbert functions of the former is at most that of latter, by semi-continuity. This concludes the proof. **Proposition 4.7**.: _In the above notation and for any \(1\leq a\leq k\), then_ \[\dim\mathcal{L}_{N,k}(\mathbf{\Pi},a)=\binom{N+k}{N}-\binom{n+kd}{n}-\binom{N +a-1}{N}+\binom{n+a-1}{n}-1.\] Proof.: Given the union \(\mathbf{\Pi}:=\bigcup_{i=1}^{d^{n}}\Pi_{i}\) of torus invariant \(n\)-planes of \(\mathbb{P}^{n}\), with \(\Pi_{1}\) a sink and \(p_{0}\) supported generically on \(\Pi_{1}\), there is a torus invariant hyperplane \(H\) such that \(\Pi_{1}\cap H\) is an \((n-1)\)-plane and \(\Pi_{i}\subset H\) for \(2\leq i\leq d^{n}\) (cf. Remark 3.3). We can always assume that \(p_{0}\) is a coordinare point of \(\mathbb{P}^{N}\) and we can call \(p_{1},\dots,p_{N}\) the other coordinate (torus invariant) points of \(\mathbb{P}^{N}\). Hence we can choose, without loss of generality, that \(\Pi_{1}=\langle p_{0},\dots,p_{n}\rangle\) and \(H=\langle p_{1},\dots,p_{N}\rangle\), so that \(p_{0}\in\Pi_{1}\) and \(p_{i}\notin\Pi_{i}\) for \(i\geq 2\). Let \(R=\mathbb{C}[x_{0},\ldots,x_{N}]\) be the homogeneous polynomial ring of \(\mathbb{P}^{N}\) and consider the ideals \(\mathcal{I}_{p_{0}}\subset\mathbb{C}[x_{0},\ldots,x_{N}]\) and \(\mathcal{I}_{\Pi_{i}}\subset\mathbb{C}[x_{0},\ldots,x_{N}]\) \[\mathcal{I}_{p_{0}} =\langle x_{1},\ldots,n_{N}\rangle,\] \[\mathcal{I}_{\Pi_{1}} =\langle x_{n+1},\ldots,n_{N}\rangle,\] \[\mathcal{I}_{\Pi_{i}} =\langle x_{i_{n+1}},\ldots,n_{i_{N}}\rangle,\ i\geq 2,\] \[\mathcal{I}_{H} =\langle x_{0}\rangle.\] By construction, for \(i\geq 2\), we have \(0\in\{i_{n+1},\ldots,i_{N}\}\). Using Notation 2.10, we compute: \[\left[\mathcal{I}_{p_{0}^{\ast}}^{-1}\right]_{k} =\{y_{0}^{k-l}F_{l}(y_{1},\ldots,y_{N}):F_{l}\in S_{l},0\leq l\leq a -1\},\] \[\left[\mathcal{I}_{\Pi_{1}}^{-1}\right]_{k} =\{F_{k}(y_{0},\ldots,y_{n}):F_{k}\in S_{k}\},\] \[\left[\mathcal{I}_{\Pi_{i}}^{-1}\right]_{k} =\{F_{k}(y_{i_{0}},\ldots,y_{i_{n}}):F_{k}\in S_{k}\},i\geq 2,\] where for \(i\geq 2\), the index set \(\{i_{0},\ldots,i_{n}\}\) is the complement of \(\{i_{n+1},\ldots,i_{N}\}\subset\{0,\ldots,N\}\). We have the following intersections \[\left[\mathcal{I}_{p_{0}^{\ast}}^{-1}\right]_{k}\cap\left[\mathcal{ I}_{\Pi_{1}}^{-1}\right]_{k} =\emptyset,i\geq 2,\] \[\left[\mathcal{I}_{p_{0}^{\ast}}^{-1}\right]_{k} \cap\left[\mathcal{I}_{\Pi_{1}}^{-1}\right]_{k} =\{y_{0}^{k-l}F_{l}(y_{1},\ldots,y_{n}):F_{l}\in S_{l}\}.\] We compute the dimension of the latter intersection: \[\dim\left[\mathcal{I}_{p_{0}^{\ast}}^{-1}\right]_{k}\cap\left[ \mathcal{I}_{\Pi_{1}}^{-1}\right]_{k} =\dim\{y_{0}^{k-l}F_{l}(y_{1},\ldots,y_{n}):F_{l}\in S_{l},0\leq l \leq a-1\}\] \[=\sum_{l=0}^{a-1}\dim\{F_{l}(y_{1},\ldots,y_{n}):F_{l}\in S_{l}\}\] \[=\sum_{l=0}^{a-1}\binom{n-1+l}{n-1}\] \[=\binom{n+a-1}{n},\] where the last equality follows a standard relation of Newton coefficients, commonly known as the _hockey stick identity_. 
The number of conditions imposed to the linear system of degree-\(k\) hypersurfaces of \(\mathbb{P}^{N}\) by the scheme \(\{p_{0}^{a}\}\cup\mathbf{\Pi}\) is the dimension of the linear span of \(\left[\mathcal{I}_{p_{0}^{\ast}}^{-1}\right]_{k}\) and \(\left[\mathcal{I}_{\Pi_{i}}^{-1}\right]_{k}\), for \(i=1\ldots,d^{n}\), which is the following integer: \[\dim\left[\mathcal{I}_{p_{0}^{\ast}}^{-1}\right]_{k}+\dim\left\langle\left[ \mathcal{I}_{\Pi_{i}}^{-1}\right]_{k},i=1\ldots,d^{n}\right\rangle-\dim \left[\mathcal{I}_{q_{0}^{\ast}}^{-1}\right]_{k}\cap\left[\mathcal{I}_{\Pi_{ 1}}^{-1}\right]_{k}\] \[=\binom{N+a-1}{N}+\binom{n+kd}{n}-\binom{n+a-1}{n}.\] **Corollary 4.8**.: _The linear system \(\dim\mathcal{L}_{N,k}(V,a)\) has the expected dimension according to Definition 4.4._ Proof.: It follows from Propositions 4.5 and 4.7. ## 5. Proof of the main theorem We are ready to prove our main theorem, Theorem 1.1. Thanks to Proposition 2.16, computing the dimension of the \(h\)-secant varieties of the \((d,k)\)-Veronese variety \(V^{k}_{n,d}\subset\mathbb{P}^{N_{dk}}\) is equivalent to computing the dimension of the linear system in \(\mathbb{P}^{N}\) of all hypersurfaces containing the standard Veronese variety \(V=V_{n,d}\subset\mathbb{P}^{N}\) and double at \(h\) general points. Let \(V_{n,d}\subset\mathbb{P}^{N}\) be the \(d\)-th Veronese embedding of \(\mathbb{P}^{n}\) and let \(Z\subset\mathbb{P}^{n}\) be a double point scheme with support a set of points in general position. Let \(\mathcal{I}_{V}\) be the ideal sheaf of \(V\subset\mathbb{P}^{N}\) and let \(\mathcal{I}_{Z}\) be the ideal sheaf of \(Z\subset\mathbb{P}^{N}\). Consider the sheaf \[\mathcal{L}_{N,k}(V,2^{h}):=\mathcal{O}_{\mathbb{P}^{N}}(k)\otimes\mathcal{I} _{V}\otimes\mathcal{I}_{Z}.\] Since the Hilbert polynomial of \(V\subset\mathbb{P}^{N}\) in degree \(k\) is \(\binom{n+kd}{n}\) or, in other terms, \[\dim\mathcal{O}_{\mathbb{P}^{N}}(k)\otimes\mathcal{I}_{V}=\binom{N+k}{N}- \binom{n+kd}{n}-1, \tag{5.1}\] and since \(h\) double points in general position of \(\mathbb{P}^{N}\) impose \(h(N+1)\) conditions to the hypersurfaces of \(\mathbb{P}^{N}\) of degree \(k\), we can give the following definitions. **Definition 5.1**.: _The virtual dimension of the linear system \(\mathcal{L}_{N,k,V}(2^{h})\) of hypersurfaces of \(\mathbb{P}^{n}\) that vanish along the Veronese variety \(V=V_{n,d}\subset\mathbb{P}^{N}\) and double at \(h\) points in general position is the following integer:_ \[\operatorname{vdim}\mathcal{L}_{N,k}(V,2^{h})=\binom{N+k}{N}-\binom{n+kd}{n}- h(N+1)-1.\] _The expected dimension is_ \[\operatorname{edim}\mathcal{L}_{N,k}(V,2^{h})=\max\left\{-1,\operatorname{ vdim}\mathcal{L}_{N,k}(V,2^{h})\right\}.\] Since \(V\) and the scheme of double points are disjoint, the virtual dimension, which is obtained by a simple parameter count, provides a lower bound to the actual dimension: \[\dim\mathcal{L}_{N,k}(V,2^{h})\geq\operatorname{edim}\mathcal{L}_{N,k}(V,2^{h }). \tag{5.2}\] Using a degeneration argument, we shall show that if the number of points \(h\) is not too large, then the linear system \(\mathcal{L}_{N,k,V}(V,2^{h})\) has dimension equal to the expected dimension. **Theorem 5.2**.: _Let \(\nu_{d}:\mathbb{P}^{n}\to\mathbb{P}^{N}\) be the \(d\)-the Veronese embedding. Let \(V=V_{n,d}:=\nu_{d}(\mathbb{P}^{n})\subset\mathbb{P}^{N}\) and let \(Z_{V}\subset\mathbb{P}^{n}\) be a double point scheme supported on \(h\) points in general position of \(\mathbb{P}^{N}\). 
Then if \(k\geq 3\) and_ \[h\leq\frac{1}{N+1}\binom{N+k-3}{N} \tag{5.3}\] _then_ \[\dim\mathcal{L}_{N,k}(V,2^{h})=\operatorname{edim}\mathcal{L}_{N,k}(V,2^{h}). \tag{5.4}\] Proof.: Using (5.2), it is enough to prove that the inequality \(\dim\mathcal{L}_{N,k}(V,2^{h})\leq\operatorname{edim}\mathcal{L}_{N,k}(V,2^{h})\) holds. If \(k=3\), then \(h=0\) so the statement follows from (5.1). For \(k\geq 4\), we will prove the statement by means of the \(\mathbb{F}\mathbb{P}-\)degeneration introducedin Section 3.1.2, applied to the line bundle \[\mathcal{L}_{\widetilde{\mathcal{X}}}:= \mathcal{M}_{\widetilde{\mathcal{X}}}(k,k-1;\widetilde{\mathcal{ V}},2,\ldots,2).\] By Remark 3.2, the line bundle on the general fibre is isomorphic to \[\mathcal{L}_{t}:= \mathcal{L}_{N,k}(V,2^{h}),\] while on the central fibre the linear systems on the two components are the following: \[\mathcal{L}_{\mathbb{P}}:= \mathcal{L}_{N,k-1}(\Lambda,2^{h}),\] \[\mathcal{L}_{\mathbb{F}}:= \mathcal{L}_{N,k}(V,k-1).\] We consider the restriction to \(R=\mathbb{P}\cap\mathbb{F}\): the kernels on the two components are, respectively: \[\hat{\mathcal{L}}_{\mathbb{P}}:= \mathcal{L}_{N,k-2}(\Lambda,2^{h}),\] \[\hat{\mathcal{L}}_{\mathbb{F}}:= \mathcal{L}_{N,k}(V,k).\] Since \(R\cong\mathbb{P}^{N-1}\), the two restricted systems satisfy the following: \[\mathcal{R}_{\mathbb{P}}:= \mathcal{L}_{\mathbb{P}}|_{R}\subset\mathcal{L}_{N-1,k-1}( \Lambda_{R}),\] \[\mathcal{R}_{\mathbb{F}}:= \mathcal{L}_{\mathbb{F}}|_{R}\subset\mathcal{L}_{N-1,k-1}( \Lambda_{R}),\] where we recall that \(\Lambda_{R}=\Lambda\cap R\cong\mathbb{P}^{n-1}\). We first look at the exceptional component \(\mathbb{P}\). By Proposition 4.3, since \(k-2\geq 2\) and \[h\leq\frac{1}{N+1}\binom{N+k-3}{N},\] both linear systems \(\mathcal{L}_{\mathbb{P}}\) and \(\hat{\mathcal{L}}_{\mathbb{P}}\) have the expected dimension, that is \[\dim\mathcal{L}_{\mathbb{P}} =\binom{N+k-1}{N}-\binom{n+k-1}{n}-h(N+1)-1 \tag{5.5}\] \[\dim\hat{\mathcal{L}}_{\mathbb{P}} =\binom{N+k-2}{N}-\binom{n+k-2}{n}-h(N+1)-1.\] Moreover, we have a short exact sequence of spaces of global sections: \[0\to\operatorname{H}^{0}(\mathbb{P},\hat{\mathcal{L}}_{\mathbb{P}})\to \operatorname{H}^{0}(\mathbb{P},\mathcal{L}_{\mathbb{P}})\to\operatorname{H} ^{0}(R,\mathcal{R}_{\mathbb{P}})\to 0.\] In particular, we can compute \[\dim\mathcal{R}_{\mathbb{P}} =\dim\mathcal{L}_{\mathbb{P}}-\dim\hat{\mathcal{L}}_{\mathbb{P}}+1\] \[=\binom{N+k-2}{N-1}-\binom{n+k-2}{n-1}-1\] \[=\dim\mathcal{L}_{N-1,k-1}(\Lambda_{R}).\] We conclude that \(\mathcal{R}_{\mathbb{P}}\) is the complete linear system \[\mathcal{R}_{\mathbb{P}}=\mathcal{L}_{N-1,k-1}(\Lambda_{R}).\] On the component \(\mathbb{F}\), using Corollary 4.8, we have that both \(\mathcal{L}_{\mathbb{F}}\) and \(\hat{\mathcal{L}}_{\mathbb{F}}\) have the expected dimension, that is \[\dim\mathcal{L}_{\mathbb{F}} =\binom{N+k}{N}-\binom{n+kd}{n}-\left(\binom{N+k-2}{N}-\binom{n+k- 2}{n}\right)-1 \tag{5.6}\] \[\dim\hat{\mathcal{L}}_{\mathbb{F}} =\binom{N+k}{N}-\binom{n+kd}{n}-\left(\binom{N+k-1}{N}-\binom{n+k -1}{n}\right)-1.\] We claim that \[\mathcal{R}_{\mathbb{F}}=\mathcal{L}_{N-1,k-1}(\Lambda_{R}),\] so that, together with the above argument, we have \[\mathcal{R}_{\mathbb{P}}\cap\mathcal{R}_{\mathbb{F}}=\mathcal{O}_{\mathbb{P}^{ N-1}}(k-1)\otimes\mathcal{I}_{\Lambda_{R}}. 
\tag{5.7}\] In order to prove the claim, we observe that by semicontinuity, and precisely Formula (3.1), and by (5.2), we have \[\dim(\mathcal{L}_{0})\geq\dim(\mathcal{L}_{t})\geq\operatorname{edim} \mathcal{L}_{t}=\binom{N+k}{N}-\binom{n+kd}{n}-h(N+1)-1. \tag{5.8}\] Using Formula (3.2), i.e., \[\dim\mathcal{L}_{0}=\dim\hat{\mathcal{L}}_{\mathbb{P}}+\dim\hat{\mathcal{L}}_ {\mathbb{F}}+\mathcal{R}_{\mathbb{P}}\cap\mathcal{R}_{\mathbb{F}}+2 \tag{5.9}\] and observing that \(\mathcal{R}_{\mathbb{P}}\cap\mathcal{R}_{\mathbb{F}}=\mathcal{R}_{\mathbb{F}}\), we obtain \[\dim\mathcal{R}_{\mathbb{F}}\geq\operatorname{edim}\mathcal{L}_{t}-\dim\hat{ \mathcal{L}}_{\mathbb{P}}-\dim\hat{\mathcal{L}}_{\mathbb{F}}-2=\binom{N+k-2} {N-1}-\binom{n+k-2}{n-1}-1;\] the proof of the latter equality is easy and left to the reader. Since \[\dim\mathcal{R}_{\mathbb{F}}\leq\dim\mathcal{O}_{\mathbb{P}^{N-1}}(k-1) \otimes\mathcal{I}_{\Lambda_{R}}=\binom{N+k-2}{N-1}-\binom{n+k-2}{n-1}-1,\] the claim follows and we have \[\dim\mathcal{R}_{\mathbb{P}}\cap\mathcal{R}_{\mathbb{F}}=\binom{N+k-2}{N-1} -\binom{n+k-2}{n-1}-1, \tag{5.10}\] Using (5.5), (5.6), (5.9) and (5.10), we obtain \(\dim\mathcal{L}_{0}=\operatorname{edim}\mathcal{L}_{t}\). We conclude using (5.8). Theorem 1.1 is now just a corollary of what we just proved. **Corollary 5.3**.: _For \(k\geq 3\), if (5.3) holds, then the \((d,k)-\)Veronese variety \(V^{k}_{n,d}\subset\mathbb{P}^{N_{dk}}\) is non-defective._ Proof.: It follows from Theorem 5.2 and Proposition 2.16. Theorem 1.2 is an easy consequence of Theorem 1.1 and Theorem 2.9. **Corollary 5.4**.: _For \(k\geq 3\), if_ \[h\leq\min\{\frac{1}{N+1}\binom{N+k-3}{N}-1,\frac{\binom{n+kd}{n}}{N+1}-1\}\] _then the \((d,k)-\)Veronese variety \(V^{k}_{n,d}\subset\mathbb{P}^{N_{dk}}\) is \(h\)-identifiable._ Proof.: It follows from Corollary 5.3 and Theorem 2.9. **Remark 5.5**.: _Note that the bound given in Theorem 1.1 can be strictly bigger then the expected generic rank for the \((d,k)-\)Veronese \(V_{n,d}^{k}\), see for example the computations in Section 6. In this case Theorem 1.1 proves the non defectivity of \(V_{n,d}^{k}\) up to the generic rank. The identifiability property instead is more subtle and in order to being able to use Theorem 2.9 we must ensure that the embedded secant variety \(\mathbb{S}ec_{h}(V_{n,d}^{k})\) has the same dimension as the abstract secant variety \(sec_{h}(V_{n,d}^{k})\), see definition 2.5. Equivalently we require that the \(h-\)secant map is generically finite. Because of this assumption we are not able to prove generic identifiability and we must restrict ourselves to the range given in Theorem 1.2._ ## 6. Asymptotical Bound In this section we relate our bound in Theorem 1.1 with the one given in [11]. We first of all state Nenashev's result for the sake of completeness. **Theorem 6.1**.: _[_11_, Theorem 1]_ _Let \(I\) be a homogeneous ideal generated by \(h\in N_{0}\) generic elements of some nonempty variety \(\mathcal{D}\subseteq\operatorname{Sym}^{r}(\mathbb{C}^{n})\) of \(r\)-forms that is closed under linear transformations. Fix an integer \(s\geq 0\). If_ \[h\leq\bigg{(}\binom{r+s+n-1}{n-1}/\binom{s+n-1}{n-1}\bigg{)}-\binom{s+n-1}{n-1}\] _then the dimension of \(I\) in degree \((r+s)\) is maximal, i.e. 
it equal \(s\)\(h\binom{s+n-1}{n-1}\)._ Note that when \(r=d(k-1)\) and \(s=r\) the degree \(r+s=dk\) component of \(I\) gives us exactly the dimension of the \(h-\) secant variety \(\mathbb{S}ec_{h}(V_{n,d}^{k})\), where \(\mathcal{D}\) is the tangential variety of \(V_{n,d}^{k}\), i.e. \[\mathcal{D}=\{F^{k-1}G|F,G\in\mathbb{C}[x_{0},\dots,x_{n}]_{d}\}.\] As a consequence we have the following. **Corollary 6.2**.: _The dimension of \(\mathbb{S}ec_{h}(V_{n,d}^{k})\) is the expected one, i.e._ \[\dim\mathbb{S}ec_{h}(V_{n,d}^{k})=h\binom{n+d}{d}-1\] _for \(h\leq\frac{\binom{n+dk}{dk}}{\binom{n+d}{d}}-\binom{n+d}{d}\)._ Note that if we fix \(k,n\) and let \(d\gg 0\) we have that \[\frac{\binom{n+dk}{dk}}{\binom{n+d}{d}}-\binom{n+d}{d}\sim k^{n}-d^{n}\] and if \(d\gg k\) the bound is trivial. In Theorem 1.1 and under the same assumptions we get \[\frac{1}{N+1}\binom{N+k-3}{N}\sim d^{n(k-4)}\] which gives non trivial bounds for \(d\gg k\) when \(k>4\). The figure shows in red the bound given by Nenashev, in blue the bound of Theorem 1.2 as a function of \(d\) and in green the expected generic rank \(\binom{n+kd}{kd}/\binom{n+d}{d}\) of \(V_{n,d}^{k}\). In this case we have set the values \(k=5\) and \(n=2\). For \(d>3\) our bound is better and continues to give informations also in the range \(d>4\). Note that for \(d\) big enough our bound in Theorem 1.1 exceeds the expected generic rank \(\binom{n+kd}{kd}/\binom{n+d}{d}\) of \(V_{n,d}^{k}\), as predicted in Remark 5.5. This shows that under the hypothesis of a very high degree reembedding of \(\mathbb{P}^{n}\) the variety \(V_{n,d}^{k}\) is never defective for every \(k\geq 3\).
2308.11708
Layering and subpool exploration for adaptive Variational Quantum Eigensolvers: Reducing circuit depth, runtime, and susceptibility to noise
Adaptive variational quantum eigensolvers (ADAPT-VQEs) are promising candidates for simulations of strongly correlated systems on near-term quantum hardware. To further improve the noise resilience of these algorithms, recent efforts have been directed towards compactifying, or layering, their ansatz circuits. Here, we broaden the understanding of the algorithmic layering process in three ways. First, we investigate the non-commutation relations between the different elements that are used to build ADAPT-VQE ans\"atze. Doing so, we develop a framework for studying and developing layering algorithms, which produce shallower circuits. Second, based on this framework, we develop a new subroutine that can reduce the number of quantum-processor calls by optimizing the selection procedure with which a variational quantum algorithm appends ansatz elements. Third, we provide a thorough numerical investigation of the noise-resilience improvement available via layering the circuits of ADAPT-VQE algorithms. We find that layering leads to an improved noise resilience with respect to amplitude-damping and dephasing noise, which, in general, affect idling and non-idling qubits alike. With respect to depolarizing noise, which tends to affect only actively manipulated qubits, we observe no advantage of layering.
Christopher K. Long, Kieran Dalton, Crispin H. W. Barnes, David R. M. Arvidsson-Shukur, Normann Mertig
2023-08-22T18:00:02Z
http://arxiv.org/abs/2308.11708v1
# Layering and subpool exploration for adaptive Variational Quantum Eigensolvers: ###### Abstract Adaptive variational quantum eigensolvers (ADAPT-VQEs) are promising candidates for simulations of strongly correlated systems on near-term quantum hardware. To further improve the noise resilience of these algorithms, recent efforts have been directed towards compactifying, or _layering_, their ansatz circuits. Here, we broaden the understanding of the algorithmic layering process in three ways. First, we investigate the non-commutation relations between the different elements that are used to build ADAPT-VQE ansatze. Doing so, we develop a framework for studying and developing layering algorithms, which produce shallower circuits. Second, based on this framework, we develop a new subroutine that can reduce the number of quantum-processor calls by optimizing the selection procedure with which a variational quantum algorithm appends ansatz elements. Third, we provide a thorough numerical investigation of the noise-resilience improvement available via layering the circuits of ADAPT-VQE algorithms. We find that layering leads to an improved noise resilience with respect to amplitude-damping and dephasing noise, which, in general, affect idling and non-idling qubits alike. With respect to depolarizing noise, which tends to affect only actively manipulated qubits, we observe no advantage of layering. ## I Introduction Quantum chemistry simulations of strongly correlated systems are challenging for classical computers [1]. While approximate methods often lack accuracy [1; 2; 3; 4; 5], exact methods become infeasible when the system sizes exceed more than 34 spin orbitals--the largest system for which a full configuration interaction (FCI) calculation has been conducted [5]. For this reason, simulations of many advanced chemical systems, such as enzyme active sites and surface catalysts, rely on knowledge-intense, domain-specific approximations [6]. Therefore, developing general chemistry simulation methods for quantum computers could prove valuable. Variational quantum eigensolvers (VQEs) [1; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16] are a class of quantum-classical methods intended to perform chemistry simulations on near-term quantum hardware. More specifically, VQEs calculate upper bounds to the ground state energy \(E_{0}\) of a molecular Hamiltonian \(H\) using the Rayleigh-Ritz variational principle \[E_{0}\leq E(\vec{\theta})\equiv\mathrm{Tr}\left(H\Lambda(\vec{\theta})[\rho_{ 0}]\right). \tag{1}\] A quantum processor is used to apply a parametrized quantum circuit to an initial state. In the presence of noise the quantum circuit can, in general, be represented by the parameterized completely positive trace-preserving (CPTP) map \(\Lambda(\vec{\theta})\) and the initial state can be represented by the density matrix \(\rho_{0}\). We will use square brackets to enclose a state acted upon by a CPTP map. This generates a parametrized trial state \(\rho(\vec{\theta})\equiv\Lambda(\vec{\theta})[\rho_{0}]\) that is hard to represent on classical computers. The energy expectation value \(E(\vec{\theta})\) of \(\rho(\vec{\theta})\) gives a bound on \(E_{0}\), which can be accurately sampled using polynomially few measurements [1; 8]. A classical computer then varies \(\vec{\theta}\) to minimize \(E(\vec{\theta})\) iteratively. Provided that the ansatz circuit is sufficiently expressive, \(E(\vec{\theta})\) converges to \(E_{0}\) and returns the ground state energy. 
Initial implementations of VQEs on near-term hardware have been reported in [7; 17; 18; 19; 20; 21]. Despite these encouraging results, several refinements are needed to alleviate trainability issues [22; 23; 24; 25] and to make VQEs feasible for molecular simulations with larger numbers of orbitals. Moreover, recent results indicate that the noise resilience of VQE algorithms must be improved to enable useful simulations [26; 14; 25]. Adaptive VQEs (ADAPT-VQEs) [10] are promising VQE algorithms, which partially address the issues of trainability and noise resilience. They operate by improving the ansatz circuits in \(t_{\mathrm{max}}\) consecutive steps \[\Lambda_{t}(\theta_{t},\ldots,\theta_{1})=A_{t}(\theta_{t})\circ\Lambda_{t-1} (\theta_{t-1},\ldots,\theta_{1}), \tag{2}\] starting from the identity map \(\Lambda_{0}=\mathrm{id}\). Here, \(t=1,...,t_{\mathrm{max}}\) indexes the step and \(\circ\) denotes functional composition of the CPTP maps. An ansatz element \(A_{t}(\theta_{t})\) is added to the ansatz circuit in each step. The ansatz element \(A_{t}(\theta_{t})\) is chosen from an ansatz-element pool \(\mathcal{P}\) by computing the energy gradient for each ansatz element and picking the ansatz element with the steepest gradient. Numerical evidence suggests that such ADAPT-VQEs are readily trainable and can minimize the energy landscape [27]. In the original proposal of ADAPT-VQE, the ansatz-element pool was physically motivated, comprising single and double fermionic excitations. Since then, different types of ansatz-element pools have been proposed to minimize the number of CNOT gates in the ansatz circuit and thus improve the noise resilience of ADAPT-VQE [11; 12; 13; 28]. ADAPT-VQEs still face issues. Compared with other VQE algorithms, ADAPT-VQEs make more calls to quantum processors. This is because in every iteration, finding the ansatz element with the steepest energy gradient requires at least \(\mathcal{O}\left(|\mathcal{P}|\right)\) quantum processor calls. This makes more efficient pool-exploration strategies desirable. Moreover, noise poses serious restrictions on the maximum depth of useful YQE ansatz circuits [14]. This makes shallower ansatz circuits desirable. A recent algorithm called TETRIS-ADAPT-VQE compresses VQE ansatz circuits into compact layers of ansatz elements [15]. This yields shallower ansatz circuits. However, it has not yet been demonstrated that shallower ansatz circuits lead to improved noise resilience. It is, therefore, important to evaluate whether such shallow ansatz circuits boost the noise resilience of ADAPT-VQEs. In this paper, we broaden the understanding of TETRIS-like layering algorithms. First, we show how non-commuting ansatz elements can be used to define a topology on the ansatz-element pool. Based on this topology, we present Subpool Exploration: a pool-exploration strategy to reduce the number of quantum-processor calls when searching for ansatz elements with large energy gradients. We then investigate several flavors of algorithms to layer and shorten ansatz circuits. Benchmarking these algorithms, we find that alternative layering strategies can yield equally shallow ansatz circuits as TETRIS-ADAPT-VQE. Finally, we investigate whether shallow VQE circuits are more noise resilient. We do this by benchmarking both standard and layered ADAPT-VQEs in the presence of noise. 
For amplitude damping and dephasing noise, which globally affect idling and non-idling qubits alike, we observe an increased noise resilience due to shallower ansatz circuits. On the other hand, we find that layering is unable to mitigate depolarizing noise, which acts locally on actively manipulated qubits. The remainder of this paper is structured as follows: In Sec. II, we introduce notation and the ADAPT-VQE algorithm. In Sec. III and Sec. IV, subpool exploration and layering for ADAPT-VQE are described and benchmarked, respectively. We study the runtime advantage of layering in Sec. V. In Sec. VI, we investigate the effect of noise on layered VQE algorithms. Finally, we conclude in Sec. VII. ## II Preliminaries: Notation and the Adapt-VQE In what follows, we consider second-quantized Hamiltonians on a finite set of \(N\) spin orbitals: \[H=\sum_{p,q=1}^{N}h_{pq}a_{p}^{\dagger}a_{q}\ +\sum_{p,q,r,s=1}^{N}h_{pqrs}a_{p}^{ \dagger}a_{q}^{\dagger}a_{r}a_{s}. \tag{3}\] \(a_{p}^{\dagger}\) and \(a_{p}\) denote fermionic creation and annihilation operators of the \(p\)th spin-orbital, respectively. The coefficients \(h_{pq}\) and \(h_{pqrs}\) can be efficiently computed classically--we use the Psi4 package [29]. The _Jordan-Wigner transformation_[1] is used to represent creation and annihilation operators by \[a_{p}^{\dagger}\mapsto Q_{p}^{\dagger}\mathcal{Z}_{p}\qquad a_{p}\mapsto Q_{p }\mathcal{Z}_{p}, \tag{4}\] respectively. Here, \[Q_{p}^{\dagger}\coloneqq\frac{1}{2}\left(X_{p}-iY_{p}\right)\qquad Q_{p} \coloneqq\frac{1}{2}\left(X_{p}+iY_{p}\right), \tag{5}\] are the qubit creation and annihilation operators and \(X_{p},Y_{p},Z_{p}\) denote Pauli operators acting on qubit \(p\). The fermionic phase is represented by \[\mathcal{Z}_{p}\coloneqq\bigotimes_{q<p}Z_{q}. \tag{6}\] Anti-Hermitian operators \(T\) generate _ansatz elements_ that form Stone's-encoded unitaries parametrized by one real parameter \(\theta\): \[A(\theta)[\rho]\coloneqq\exp(\theta T)\rho\exp(-\theta T). \tag{7}\] Different ADAPT-VQE algorithms choose \(T\) from different types of operator pools. There are three common types of operator pools. The fermionic pool \(\mathcal{P}^{\text{Fermi}}\)[10] contains fermionic single and double excitations generated by anti-Hermitian operators: \[T_{q}^{p} \coloneqq a_{p}^{\dagger}a_{q}-a_{q}^{\dagger}a_{p}, \tag{8}\] \[T_{rs}^{pq} \coloneqq a_{p}^{\dagger}a_{q}^{\dagger}a_{r}a_{s}-a_{s}^{\dagger}a_{r}^{ \dagger}a_{q}a_{p}, \tag{9}\] where \(p,q,r,s=1,...,N\). The QEB pool \(\mathcal{P}^{\text{QEB}}\)[11] contains single and double qubit excitations generated by anti-Hermitian operators: \[T_{q}^{p} \coloneqq Q_{p}^{\dagger}Q_{q}-Q_{q}^{\dagger}Q_{p}, \tag{10}\] \[T_{rs}^{pq} \coloneqq Q_{p}^{\dagger}Q_{q}^{\dagger}Q_{r}Q_{s}-Q_{s}^{\dagger}Q_{r}^{ \dagger}Q_{q}Q_{p}. \tag{11}\] The qubit pool \(\mathcal{P}^{\text{qubit}}\)[13] contains parameterized unitaries generated by strings of Pauli-operators \(\sigma_{p}\in\{X_{p},Y_{p},Z_{p}\}\): \[T_{pq} \coloneqq i\sigma_{p}\sigma_{q}, \tag{12}\] \[T_{pqrs} \coloneqq i\sigma_{p}\sigma_{q}\sigma_{r}\sigma_{s}. \tag{13}\] Further definitions and discussions of all three pools are given in Appendix F. It is worth noting that all ansatz elements have quantum-circuit representations composed of multiple standard single- and two-qubit gates [28]. All three pools contain \(\mathcal{O}\left(N^{4}\right)\) elements. ADAPT-VQEs optimize several objective functions. 
At iteration step \(t\), the energy landscape is defined by \[E_{t}(\theta_{t},...,\theta_{1})\equiv\operatorname{Tr}\left[HA_{t}(\theta_{t}) \circ\ldots\circ A_{1}(\theta_{1})[\rho_{0}]\right]. \tag{14}\] A global optimizer may repeatedly evaluate \(E_{t}\) and its partial derivatives at the end of the \(t\)th iteration to return a set of optimal parameters: \[(\theta_{t}^{*},\ldots,\theta_{1}^{*})=\operatorname*{arg\,min}_{(\theta_{t}, \ldots,\theta_{1})\in\mathbb{R}^{t}}E_{t}(\theta_{t},...,\theta_{1}). \tag{15}\] These parameters set the upper energy bound of the \(t\)th iteration: \[\mathcal{E}_{t}=E_{t}(\theta_{t}^{*},\ldots,\theta_{1}^{*}). \tag{16}\] A loss function \(L_{t}\colon\mathcal{P}\to\mathbb{R}\) is used to pick an ansatz element from the operator pool \(\mathcal{P}\) at each iteration \(t\): \[A_{t}=\operatorname*{arg\,min}_{A\in\mathcal{P}}L_{t}(A). \tag{17}\] Throughout this paper, we use the standard gradient loss of ADAPT-VQEs, defined in Eq. (20). We denote the state after \(t-1\) iterations with optimized parameters \(\theta_{t-1}^{*},\ldots,\theta_{1}^{*}\) by \[\rho_{t-1}=\Lambda_{t-1}(\theta_{t-1}^{*},\ldots,\theta_{1}^{*})[\rho_{0}]. \tag{18}\] Further, we define the energy expectation after adding the ansatz element \(A\in\mathcal{P}\) as \[E_{t,A}(\theta)=\operatorname{Tr}\left(HA(\theta)[\rho_{t-1}]\right). \tag{19}\] Then, the loss is defined by \[L_{t}(A)=-\left|\frac{\partial E_{t,A}(\theta)}{\partial\theta}\right|_{ \theta=0}=-\left|\operatorname{Tr}\left([H,T]\rho_{t-1}\right)\right|. \tag{20}\] We consider alternative loss functions in Appendix D. The ADAPT-VQE starts by initializing a state \(\rho_{0}\). Often, \(\rho_{0}\) is the Hartree-Fock state \(\rho_{\text{RF}}\). The algorithm then builds the ansatz circuit \(\Lambda_{t}\) by first adding ansatz elements \(A_{t}\in\mathcal{P}\) of minimal loss \(L_{t}\), according to Eq. (17). Then, the algorithm optimizes the ansatz circuit parameters according to Eq. (15). This generates a series of upper bounds, \[\mathcal{E}_{0}>\mathcal{E}_{1}>...>\mathcal{E}_{t}, \tag{21}\] until the improvement of consecutive bounds drops below a threshold \(\varepsilon\) such that \(\mathcal{E}_{t-1}-\mathcal{E}_{t}<\varepsilon\), or the maximum iteration number \(t_{\max}\) is reached. The final bound (\(\mathcal{E}_{t}\) or \(\mathcal{E}_{t_{\max}}\)) is then returned to approximate \(E_{0}\). A pseudo-code of the ADAPT-VQE is given in Algorithm 1. ``` 1:Initialize state \(\rho_{0}\leftarrow\rho_{\text{HF}}\), circuit \(\Lambda_{0}\gets 1\), and pool \(\mathcal{P}\). 2:Initialize energy bound \(\mathcal{E}_{0}\leftarrow\infty\) and accuracy \(\varepsilon\). 3:for\(t=1,...,t_{\max}\)do 4: Select ansatz element: \(A_{t}\leftarrow\operatorname*{arg\,min}_{A\in\mathcal{P}}L_{t}(A)\) 5: Set circuit: \(\Lambda_{t}(\theta_{t},...,\theta_{1})\gets A_{t}(\theta_{t})\circ \Lambda_{t-1}(\theta_{t-1},...,\theta_{1})\) 6: Optimize circuit: \((\theta_{t}^{*},...,\theta_{1}^{*})\leftarrow\operatorname*{arg\,min}E_{t}( \theta_{t},...,\theta_{1})\) 7: Update energy bound: \(\mathcal{E}_{t}\gets E_{t}(\theta_{t}^{*},...,\theta_{1}^{*})\) 8: Update state: \(\rho_{t}\leftarrow\Lambda_{t}(\theta_{t}^{*},...,\theta_{1}^{*})[\rho_{0}]\) 9:if\(\mathcal{E}_{t-1}-\mathcal{E}_{t}<\varepsilon\)then 10:return energy bound \(\mathcal{E}_{t}\) 11:return energy bound \(\mathcal{E}_{t_{\max}}\) ``` **Algorithm 1** ADAPT-VQE [10] ## III Subpoul exploration and layering for ADAPT-VQEs In this section, we present two subroutines to improve ADAPT-VQEs. 
The first subroutine optimally layers ansatz elements, as depicted in Fig. 1. We call the process of producing dense (right-hand side) ansatz circuits instead of sparse (left-hand side) ansatz circuits "layering". This subroutine can be used to construct shallower ansatz circuits, which may make ADAPT-VQEs more resilient to noise. The second subroutine is subpoul exploration. It searches ansatz-element pools in successions of non-commuting ansatz elements. Subpoul exploration is essential for layering and can reduce the number of calls an ADAPT-VQE makes to a quantum processor. Combining both subroutines results in algorithms similar to TETRIS-ADAPT-VQE [15]. Our work focuses on developing and understanding layering algorithms from the perspective of non-commuting sets of ansatz elements. ### Commutativity and Support Commutativity of ansatz elements is a central notion underlying our subroutines: **Definition 1** (Operator Commutativity): _Two ansatz elements \(A,B\in\mathcal{P}\) are said to "operator commute" iff \(A\left(\theta\right)\) and \(B\left(\phi\right)\) commute for all \(\theta\) and \(\phi\):_ \[\left\{A,B\right\}_{O}=0\iff\forall\theta,\phi\in\mathbb{R}\;:\;\left[A\left( \theta\right),B\left(\phi\right)\right]=0. \tag{22}\] _Conversely, two ansatz elements \(A,B\in\mathcal{P}\) do not operator-commute iff there exist parameters for which the corresponding operators do not commute:_ \[\left\{A,B\right\}_{O}\neq 0\iff\exists\theta,\phi\in\mathbb{R}:\left[A\left( \theta\right),B\left(\phi\right)\right]\neq 0. \tag{23}\] Figure 1: Layering: A sparse ansatz circuit (left), as produced by standard ADAPT-VQEs, can be compressed to a dense structure (right) by layering. Boxes denote ansatz elements. Each line represents a single qubit. Note that ansatz circuit elements entangle two or four qubits. **Definition 2** (Operator non-commuting set): _Given an ansatz-element pool \(\mathcal{P}\) and an ansatz element \(A\in\mathcal{P}\), we define its operator non-commuting set as follows_ \[\mathcal{N}_{O}(\mathcal{P},A)\coloneqq\left\{B\in\mathcal{P}:\left\{A,B\right\} _{O}\neq 0\right\}. \tag{24}\] Operator commutativity is central to layering. The operator non-commuting set is central to subpool exploration. Structurally similar and more-intuitive notions can be defined using qubit support: **Definition 3** (Qubit support): _Let \(\mathcal{B}\left(\mathcal{H}\right)\) denote the set of superoperators on a Hilbert space \(\mathcal{H}\). Let \(\mathcal{H}_{\mathcal{Q}}\coloneqq\bigotimes_{q\in\mathcal{Q}}\mathcal{H}_{q}\) denote the Hilbert space of a set of all qubits \(\mathcal{Q}\equiv\left\{1,\ldots,N\right\}\) where \(\mathcal{H}_{q}\) is the Hilbert space corresponding to the \(q\)th qubit. Consider a superoperator \(A\in\mathcal{B}\left(\mathcal{H}_{\mathcal{Q}}\right)\). First, we define the superoperator subset that acts on a qubit subset \(\mathcal{W}\subseteq\mathcal{Q}\) as_ \[\mathcal{B}_{\mathcal{W}}\coloneqq\left\{B\otimes\mathrm{id}\colon B\in \mathcal{B}\left(\mathcal{H}_{\mathcal{W}}\right)\right\}\subseteq\mathcal{B }\left(\mathcal{H}_{\mathcal{Q}}\right). \tag{25}\] _Then, we define the qubit support of a superoperator \(A\) as its minimal qubit subset \(\mathcal{W}\):_ \[\mathrm{Supp}\left(A\right)\coloneqq\underset{\mathcal{W}\subseteq\mathcal{Q }\colon\,A\in\mathcal{B}_{\mathcal{W}}}{\bigcap}\mathcal{W}\,. 
\tag{26}\] _The notion of support extends to parameterized ansatz elements:_ \[\mathrm{Supp}\left(B\right)\coloneqq\bigcup_{\theta}\mathrm{Supp}\left(B \left(\theta\right)\right), \tag{27}\] _where \(B\) is a parameterized ansatz element._ Intuitively, the qubit support of an ansatz element \(A\) is the set of all qubits the operator \(A\) acts on nontrivially Fig. 2. The concept of qubit support allows one to define support commutativity of ansatz elements as follows. **Definition 4** (Support commutativity): _Two ansatz elements \(A,B\in\mathcal{P}\) are said to "support-commute" iff their qubit support is disjoint._ \[\left\{A,B\right\}_{S}=0\iff\mathrm{Supp}(A)\cap\mathrm{Supp}(B)=\emptyset. \tag{28}\] _Conversely, two ansatz elements \(A,B\in\mathcal{P}\) do not support-commute iff their supports overlap_ \[\left\{A,B\right\}_{S}\neq 0\iff\mathrm{Supp}(A)\cap\mathrm{Supp}(B)\neq\emptyset. \tag{29}\] **Definition 5** (Support non-commuting set): _Given an ansatz-element pool \(\mathcal{P}\) and an ansatz element \(A\in\mathcal{P}\), we define the set of ansatz elements with overlapping support as_ \[\mathcal{N}_{\mathcal{S}}(\mathcal{P},A)\coloneqq\left\{B\in\mathcal{P}:\left\{ B,A\right\}_{S}\neq 0\right\}. \tag{30}\] Operator commutativity and support commutativity are not equivalent--see Fig. 2. However, the following properties hold. Elements supported on disjoint qubit sets operator commute: \[\forall A,B\in\mathcal{P}\quad\left\{A,B\right\}_{\mathrm{S}}=0\implies\{A,B \}_{\mathrm{O}}=0. \tag{31}\] Conversely, operator non-commuting ansatz elements act on, at least, one common qubit which implies they are support non-commuting: \[\forall A,B\in\mathcal{P}\quad\left\{A,B\right\}_{\mathrm{O}}\neq 0\implies \{A,B\}_{\mathrm{S}}\neq 0. \tag{32}\] The last relation also implies that the operator non-commuting set of \(A\) is contained in its support non-commuting set \[\mathcal{N}_{\mathrm{O}}(\mathcal{P},A)\subseteq\mathcal{N}_{\mathrm{S}}( \mathcal{P},A). \tag{33}\] We further generalize the notions of operator and support commutativity in Appendix J. Henceforth, we will use generalized commutativity to denote either operator or support commutativity or any other type of commutativity specified in Appendix J. Further, \(\mathcal{N}_{\mathrm{G}}\) and \(\{\bullet,\bullet\}_{\mathrm{G}}\) will be used to denote the generalized non-commuting set and the generalized commutator, respectively. For later reference, we note that generalized non-commuting sets induce a topology on \(\mathcal{P}\) via the following discrete metric. **Definition 6** (Pool metric): _Let \(d\colon\mathcal{P}\times\mathcal{P}\to\left\{0,1,2\right\}\) define a discrete metric such that: (i) \(\forall A\in\mathcal{P}\), set \(d(A,A)=0\). (ii) \(\forall A,B\in\mathcal{P}\) with \(A\neq B\) and \(\left\{A,B\right\}_{G}\neq 0\), set \(d(A,B)=1\). (iii) \(\forall A,B\in\mathcal{P}\) with \(A\neq B\) and \(\left\{A,B\right\}_{G}=0\), set \(d(A,B)=2\)._ With this metric, the generalized non-commuting elements \(\mathcal{N}_{\mathrm{G}}(\mathcal{P},A)\) form a ball of distance one around each ansatz element \(A\in\mathcal{P}\). The metric is represented diagrammatically in Fig. 3. This allows us to identify an element \(A\in\mathcal{P}\) as a local minimum if there is no element with lower loss within \(A\)'s generalized non-commuting set. **Property 1** (Local minimum): _Let \(\mathcal{P}\) be an ansatz-element pool with the pool metric of Definition 6, and Figure 2: A diagram comparing support and operator commutativity. 
(a) The elements both support and operator commute. Note that \(R_{y}(\theta)\) only has qubit support on the top qubit line. (b) The elements operator commute but do not support commute. (c) The elements neither support nor operator commute. _let \(L:\mathcal{P}\rightarrow\mathbb{R}\) denote a loss function. Then, any element \(A\in\mathcal{P}\) for which_ \[L\left(A\right)=\min_{B\in\mathcal{N}_{\mathrm{G}}\left(\mathcal{P},A\right)}L \left(B\right), \tag{34}\] _is a local minimum on \(\mathcal{P}\) with respect to \(L\)._ This property is important as we will later show that subpool exploration always returns local minima. To gain intuition about the previously defined notions, we consider the ansatz elements of the QEB and the Pauli pools, Eqs. (10) to (13). The ansatz elements of these pools have qubit support on either two or four qubits, as is illustrated in Fig. 1. Commuting ansatz elements with disjoint support can be packed into an ansatz-element layer, which can be executed on the quantum processor in parallel. This is the core idea of layering, which helps to reduce the depths of ansatz circuits. Moreover, as generalized non-commuting ansatz elements must share at least one qubit, we conclude that the generalized non-commuting set \(\mathcal{N}_{\mathrm{G}}\left(\mathcal{P},A\right)\) has at most \(\mathcal{O}\left(N^{3}\right)\) ansatz elements. This is a core component of subpool exploration. Analytic expressions for the cardinalities of the generalized non-commuting sets are given in Appendix H. In Appendix G, we prove that two different fermionic excitations operator commute iff they act on disjoint or equivalent sets of orbitals. The same is true for qubit excitations. Pauli excitations operator commute iff the generating Pauli strings differ in an even number of places within their mutual support. ### Subpool exploration In this section, we introduce _subpool exploration_, a strategy to explore ansatz-element pools with fewer quantum-processor calls. Subpool exploration differs from the standard ADAPT-VQE as follows. Standard ADAPT-VQEs evaluate the loss of every ansatz element in the ansatz-element pool \(\mathcal{P}\) in every iteration of ADAPT-VQE, (Algorithm 1, Line 4). This leads to \(\mathcal{O}\left(\left|\mathcal{P}\right|\right)\) quantum-processor calls to identify the ansatz element of minimal loss. Instead, subpool exploration evaluates the loss of a reduced number of ansatz elements by exploring a sequence of generalized non-commuting ansatz-element subpools. This can lead to a reduced number of quantum-processor calls and returns an ansatz element which is a local minimum of the pool \(\mathcal{P}\). The details of subpool exploration are as follows. _Algorithm:_--Let \(\mathcal{P}\) denote a given pool and \(L\) a given loss function. Instead of naively computing the loss of every ansatz element in \(\mathcal{P}\), our algorithm explores \(\mathcal{P}\) iteratively by considering subpools, \(\mathcal{S}_{m}\subsetneq\mathcal{P}\), in consecutive steps. During this process, the algorithm successively determines the ansatz element with minimal loss within subpool \(\mathcal{S}_{m}\) as \[A_{m}=\operatorname*{arg\,min}_{A\in\mathcal{S}_{m}}L(A). \tag{35}\] Meanwhile, the corresponding loss value is stored: \[\mathcal{L}_{m}=L(A_{m}). \tag{36}\] Iterations are halted when loss values stop decreasing. 
The key point of subpool exploration is to update the subpools \(\mathcal{S}_{m}\) using the generalized non-commuting set generated by \(A_{m}\): \[\mathcal{S}_{m+1}=\mathcal{N}_{\mathrm{G}}\left(\mathcal{P}\backslash \mathcal{S}_{\leq m},A_{m}\right)\subseteq\mathcal{N}_{\mathrm{G}}\left( \mathcal{P},A_{m}\right),\quad\forall m\geq 0, \tag{37}\] where \(\mathcal{S}_{\leq m}\coloneqq\cup_{l=0}^{m}\mathcal{S}_{l}\). A pseudo-code summary of subpool exploration is given in Algorithm 2, and a visual summary is displayed in Fig. 4. We now discuss aspects of subpool exploration. ``` 1:Input: Pool \(\mathcal{P}\) and loss function \(L\). 2:Initialize subpool \(\mathcal{S}_{0}\) and loss value \(\mathcal{L}_{0}\leftarrow\infty\). 3:for\(m=0,...\)do 4: Select ansatz element \(A_{m}\leftarrow\operatorname*{arg\,min}_{A\in\mathcal{S}_{m}}L(A)\). 5: Update loss value \(\mathcal{L}_{m}\gets L(A_{m})\). 6:if\(\mathcal{L}_{m}<\mathcal{L}_{m-1}\)then 7: Update subpool \(\mathcal{S}_{m+1}=\mathcal{N}_{\mathrm{G}}\left(\mathcal{P}\backslash \mathcal{S}_{\leq m},A_{m}\right)\). 8:else 9:return\(A_{m},\mathcal{S}_{m}\). ``` **Algorithm 2** Subpool Exploration _Efficiency:_-- Let \(m_{s}\) denote the index of the final iteration and define the set of searched ansatz elements as \[\mathcal{S}\coloneqq\cup_{m=0}^{m_{s}}\mathcal{S}_{m}. \tag{38}\] Figure 3: A diagram of the distance \(d\) from an element \(A\in\mathcal{P}\) under the pool metric, Definition 6. The ansatz element \(A\) (black dot) is surrounded by ansatz elements of the non-commuting set \(\mathcal{N}_{\mathrm{G}}(\mathcal{P},A)\) of distance 1 (white circle). All other elements \(\mathcal{P}\setminus\mathcal{N}_{\mathrm{G}}(\mathcal{P},A)\) have distance 2 (gray shaded region). As loss values of ansatz elements that have been explored are stored in a list, it follows that subpool exploration requires only \(|\mathcal{S}|\) loss function calls. On the other hand, exploring the whole pool in ADAPT-VQE requires \(|\mathcal{P}|\) loss-function calls. Since \(\mathcal{S}\) is a subset of \(\mathcal{P}\), subpool exploration may reduce the number of quantum-processor calls: \[\mathcal{S}\subseteq\mathcal{P}\implies|\mathcal{S}|\leq|\mathcal{P}|. \tag{39}\] To give a specific example, consider the QEB and qubit pools. Those pools contain \(\mathcal{O}\left(N^{4}\right)\) ansatz elements. On the other hand, generalized non-commuting sets have \(\mathcal{O}\left(N^{3}\right)\) ansatz elements. Thus, by choosing an appropriate initial subpool, we can ensure that \(|\mathcal{S}_{m}|=\mathcal{O}\left(N^{3}\right)\) for all subpools. Especially if the number of searched subpools is \(m_{s}=\mathcal{O}\left(1\right)\), subpool exploration can return ansatz elements of low loss while exploring only \(\mathcal{O}\left(N^{3}\right)\) ansatz elements. We note that this pool-exploration strategy ignores certain ansatz elements. In particular, it may miss the optimal ansatz element with minimal loss. Nevertheless, as explained in the following paragraphs, it will always return ansatz elements which are _locally optimal_. This ensures that the globally optimal ansatz element can always be added to the ansatz circuit later in the algorithm. 
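A minimal Python sketch of Algorithm 2, assuming the loss is available as a black-box callable and reusing a `noncommuting_set` helper such as the one sketched above, could look as follows. It is an illustration of the procedure, not the authors' implementation; in line with Eq. (40), it returns the best element found over all searched subpools.

```python
import math

def subpool_exploration(pool, loss, initial_subpool, noncommuting_set):
    """Sketch of Algorithm 2 (subpool exploration).

    `pool` is a set of ansatz elements, `loss` maps an element to a float,
    `initial_subpool` is a non-empty subset of `pool`, and
    `noncommuting_set(P, A)` returns the generalized non-commuting set of A in P.
    """
    cache = {}                       # losses are stored, so only |S| loss calls occur

    def L(a):
        if a not in cache:
            cache[a] = loss(a)
        return cache[a]

    searched = set()                 # S_{<=m}: union of all subpools visited so far
    subpool = set(initial_subpool)   # S_0
    best, best_loss = None, math.inf

    while subpool:
        a_m = min(subpool, key=L)    # Line 4: arg min over the current subpool
        searched |= subpool
        if L(a_m) < best_loss:       # Line 6: loss still decreasing
            best, best_loss = a_m, L(a_m)
            # Line 7 / Eq. (37): next subpool from the so-far unexplored pool
            subpool = noncommuting_set(set(pool) - searched, a_m)
        else:
            break                    # Line 9: halt once the loss stops decreasing
    return best, best_loss, searched
```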
_Optimality:_--As the set of explored ansatz elements \(\mathcal{S}\) is a subset of \(\mathcal{P}\), the ansatz element returned by subpool exploration \[A_{\mathcal{S}}^{*}\coloneqq\operatorname*{arg\,min}_{A\in\mathcal{S}}L(A), \tag{40}\] may be sub-optimal to the ansatz element returned by exploring the whole pool \[A_{\mathcal{P}}^{*}\coloneqq\operatorname*{arg\,min}_{A\in\mathcal{P}}L(A). \tag{41}\] That is, \[\mathcal{S}\subset\mathcal{P}\implies\operatorname*{arg\,min}_{A\in\mathcal{P} }L(A)\leq\operatorname*{arg\,min}_{A\in\mathcal{S}}L(A). \tag{42}\] Yet, there are a couple of useful properties that pertain to the output \(A_{\mathcal{S}}^{*}\) of subpool exploration. At first, the outputs of subpool exploration are local minima. **Property 2** (Local Optimality): _Any ansatz element \(A_{\mathcal{S}}^{*}\) returned by subpool exploration is a local minimum._ The proof of this property is immediate. Moreover, as subpool exploration constructs subpools from generalized non-commuting sets, the only ansatz elements \(B\in\mathcal{P}\) with \(L(B)<L(A_{\mathcal{S}}^{*})\) must necessarily generalized commute with \(A_{\mathcal{S}}^{*}\in\mathcal{P}\). **Property 3** (Better ansatz elements generalized commute): _Let \(\mathcal{P}\) denote a pool and \(L\) denote a loss function. Let \(A_{\mathcal{S}}^{*}\in\mathcal{P}\) denote the final output of subpool exploration. Then,_ \[\forall B\in\mathcal{P}\text{ with }L(B)<L(A_{\mathcal{S}}^{*}) \implies\{A_{\mathcal{S}}^{*},B\}_{G}=0 \tag{43}\] \[\implies\{A_{\mathcal{S}}^{*},B\}_{O}=0. \tag{44}\] Proof.: We prove this property by contradiction. Assume that there is an ansatz element \(L(B)<L(A_{\mathcal{S}}^{*})\) such that \(\{A_{\mathcal{S}}^{*},B\}_{G}\neq 0\). This implies that \(B\) is in the generalized non-commuting set \(\mathcal{N}_{\mathrm{G}}(\mathcal{P},A_{\mathcal{S}}^{*})\) and exploring the corresponding subpool would have produced \(L(B)<L(A_{\mathcal{S}}^{*})\) leading to the exploration of \(\mathcal{N}_{\mathrm{G}}(\mathcal{P},B)\). This, in turn, can only return an ansatz element with a loss \(L(B)\) or smaller. This would contradict \(A_{\mathcal{S}}^{*}\) having been the final output of the algorithm. Finally, we use Eq. (31) to show Eq. (44) Figure 4: Visualization of our strategy for subpool exploration. The pool \(\mathcal{P}\) is successively explored in subpools \(\mathcal{S}_{m}\), with ansatz elements \(A_{m}\) of minimal loss generating future subpools through their generalized non-commuting set. Property 3 is useful as it ensures that subpool exploration can find better ansatz elements, which first were missed, in subsequent iterations. To see this, suppose a first run of subpool exploration returns a local minimum \(A\in\mathcal{P}\). Further, suppose there is another local minimum \(B\in\mathcal{P}\) such that \(L(B)<L(A)\). Property 3 ensures that \(A\) and \(B\) generalized commute. Hence, by running subpool exploration repeatedly on the remaining pool, we are certain to discover the better local minimum eventually. Ultimately, this will allow for restoring the global minimum. _Initial subpool:--_So far, we have not specified any strategy for choosing the initial set \(\mathcal{S}_{0}\). This can, for example, be done by taking the subpool of a single random ansatz element \(A_{0}\in\mathcal{P}\). Alternatively, one can compose \(\mathcal{S}_{0}\) of random ansatz elements enforcing an appropriate pool size, e.g., \(|\mathcal{S}_{0}|=\mathcal{O}\left(N^{3}\right)\) for QEB and qubit pools. 
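Building on the sketch above, the restart strategy implied by Property 3 can be illustrated as follows (again our own illustration, with names and defaults chosen for readability): after each run, the searched elements are removed and exploration is restarted from a random element of the remaining pool, so that better local minima, which necessarily generalized-commute with earlier outputs, remain reachable.

```python
import random

def explore_with_restarts(pool, loss, noncommuting_set, n_restarts=3, seed=0):
    """Repeated subpool exploration on the remaining pool (cf. Property 3)."""
    rng = random.Random(seed)
    remaining = set(pool)
    best, best_loss = None, float("inf")
    for _ in range(n_restarts):
        if not remaining:
            break
        start = {rng.choice(sorted(remaining))}              # S_0: one random element
        cand, cand_loss, searched = subpool_exploration(remaining, loss,
                                                        start, noncommuting_set)
        if cand_loss < best_loss:
            best, best_loss = cand, cand_loss
        remaining -= searched                                # keep only unexplored elements
    return best, best_loss
```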
We will refer to the ADAPT-VQE with subpool exploration as the Explore-ADAPT-VQE. This algorithm is realized by replacing Line 4 in Algorithm 1 with subpool exploration, Algorithm 2, with \(L\to L_{t}\). ### Layering Below, we describe two methods for arranging generalized non-commuting ansatz elements into ansatz-element layers. **Definition 7** (Ansatz-element layer): _Let \(\mathcal{A}\) be a subset of \(\mathcal{P}\). We say that \(\mathcal{A}\) is an ansatz-element layer iff_ \[\{A,B\}_{G}=0\quad\forall A,B\in\mathcal{A}\text{ such that }A\neq B. \tag{45}\] _We denote the operator corresponding to the action of the mutually generalized-commuting ansatz elements of an ansatz-element layer \(\mathcal{A}\) with_ \[\mathcal{A}^{\mathrm{o}}(\vec{\theta})=\prod_{A\in\mathcal{A}}A(\theta_{A}). \tag{46}\] _Here, \(\vec{\theta}\) is the parameter vector for the layer:_ \[\vec{\theta}\equiv\left\{\theta_{A}\colon A\in\mathcal{A}\right\}. \tag{47}\] _We note that for support commutativity, the product can be replaced by the tensor product._ Since ansatz-element layers depend on parameter vectors, the update rule is \[\Lambda_{t}(\vec{\vartheta}_{t})=\mathcal{A}^{\mathrm{o}}(\vec{\theta}_{t}) \circ\Lambda_{t-1}(\vec{\vartheta}_{t-1}),\quad\vec{\vartheta}_{t}=(\vec{ \theta}_{t},\vec{\vartheta}_{t-1}), \tag{48}\] As before, the algorithm is initialized with \(\Lambda_{0}=\mathrm{id}\) and \(\vec{\vartheta}=()\). To make the dependence on the ansatz circuit explicit, we denote the energy landscape as \[E_{\Lambda}(\vec{\vartheta}):=\mathrm{tr}\left[H\Lambda(\vec{\vartheta}) \left[\rho_{0}\right]\right]. \tag{49}\] The energy landscape of the \(t\)th iteration is denoted as \[E_{t}(\vec{\vartheta}_{t})\equiv E_{\Lambda_{t}}(\vec{\vartheta}_{t}), \tag{50}\] and its optimal parameters are \[\vec{\vartheta}_{t}^{*}=\operatorname*{arg\,min}_{\vec{\vartheta}}E_{t}(\vec{ \vartheta}). \tag{51}\] Further, the gradient loss (c.f. Eq. (20)) is \[L_{\Lambda(\vec{\vartheta})}(A)\equiv-\left|\mathrm{tr}\left[\left[H,T_{A} \right]\Lambda(\vec{\vartheta})\left[\rho_{0}\right]\right]\right|, \tag{52}\] where the definitions in Eqs. (20) and (52) satisfy the following relation \[L_{t}(A)=L_{\Lambda_{t-1}(\vec{\vartheta}_{t-1}^{*})}(A). \tag{53}\] With this notation in place, we proceed to describe two methods to construct ansatz-element layers. ### Static layering Our algorithm starts by initializing an empty ansatz-element layer and the remaining pool \(\mathcal{P}^{\prime}\) to be the entire pool \(\mathcal{P}\). Further, the loss is set such that \(L\gets L_{\Lambda_{t-1}}\) for the \(t\)th layer. The algorithm proceeds to fill the ansatz-element layer by successively running subpool exploration to pick an ansatz element \(A_{n}\) in \(n=0,\ldots,n_{\max}\) iterations. This naturally induces an ordering on the layer. At every step of the iteration, the corresponding generalized non-commuting set \(\mathcal{N}_{\mathrm{G}}(\mathcal{P}^{\prime},A_{n})\) is removed from the remaining pool \(\mathcal{P}^{\prime}\). If the loss of the selected ansatz element \(A_{n}\) is smaller than a predefined threshold \(L(A)<\ell\), it is added to the ansatz-element layer \(\mathcal{A}\). The layer is completed once the pool is exhausted (\(\mathcal{P}^{\prime}=\emptyset\)) or the maximal iteration count \(n_{\max}\) is reached. A pseudocode summary of static layering is given in Algorithm 3. ``` 1:Input: Pool \(\mathcal{P}\), loss \(L\), max. loss \(\ell\), max. 
iteration \(n_{\max}\) 2:Initialize remaining pool \(\mathcal{P}^{\prime}\leftarrow\mathcal{P}\) 3:Initialize ansatz layer \(\mathcal{A}\leftarrow\emptyset\) 4:for\(n=0,\ldots,n_{\max}\)do 5: Set \(A\leftarrow\mathrm{SubpoolExploration}(\mathcal{P}^{\prime},L)\) 6:if\(L(A)<\ell\)then 7: Update layer \(\mathcal{A}\leftarrow\mathcal{A}\cup\{A\}\). 8: Reduce pool \(\mathcal{P}^{\prime}\leftarrow\mathcal{P}^{\prime}\setminus\mathcal{N}_{ \mathrm{G}}(\mathcal{P}^{\prime},A)\) 9:if\(\mathcal{P}^{\prime}=\emptyset\)thenbreak return\(\mathcal{A}\) ``` **Algorithm 3** Build Static Layer In Static-ADAPT-VQE, static layering is used to grow an ansatz circuit iteratively. In each iteration, the layer is appended to the ansatz circuit, and the ansatz-circuit parameters are re-optimized. Iterations halt once the decrease in energy falls below \(\varepsilon\), the energy accuracy per ansatz element. A summary of Static-ADAPT-VQE is given in Algorithm 4. We establish the close relationship between static layering and TETRIS-ADAPT-VQE in the following property. **Property 4**: _Assume that all ansatz elements \(A,B\in\mathcal{P}\) have distinct loss \(L(A)\neq L(B)\). Using support commutativity and provided that \(\ell=0\) and \(n_{\text{max}}\) are sufficiently large to ensure that the whole layer is filled, Static-ADAPT-VQE and TETRIS-ADAPT-VQE will produce identical ansatz-element layers._ This property is proven by induction. Assume that the previous iterations of ADAPT-VQE have yielded a specific ansatz circuit \(\Lambda_{t-1}(\vec{\vartheta}^{*})\). The next layer of ansatz elements \(\mathcal{A}_{t}\) can be constructed either by Static-ADAPT-VQE or TETRIS-ADAPT-VQE. For both algorithms, the equivalence of \(\Lambda_{t-1}(\vec{\vartheta}^{*})\) implies that the loss function, Eq. (53), of any ansatz element is identical throughout the construction of the layer \(\mathcal{A}_{t}\). First, by picking \(\ell=0\), we ensure that both TETRIS-ADAPT-VQE and Static-ADAPT-VQE only accept ansatz elements with a non-zero gradient. Next, we note that if an ansatz element is placed on a qubit by Static-ADAPT-VQE, then by Property 3, there exists no ansatz element that acts actively on this qubit and generates a lower loss. Moreover, there exists no ansatz element with identical loss that acts nontrivially on qubit, as we assume that all ansatz elements have a distinct loss. Similarly, TETRIS-ADAPT-VQE places ansatz elements from lowest to highest loss and ensures no two ansatz elements have mutual support. Thus, if an ansatz element is placed on a qubit by TETRIS-ADAPT-VQE, there exists no ansatz element with a lower loss that acts nontrivially on this qubit. Again, there also exists no ansatz element with identical loss supported by this qubit, as we assume that all ansatz elements have a distinct loss. Combining these arguments, both Static- and TETRIS-ADAPT-VQE will fill the ansatz-element layer \(\mathcal{A}_{t}\) with equivalent ansatz elements. The ansatz elements may be chosen in a different order. By induction, the equivalence of \(\Lambda_{t-1}\) and \(\mathcal{A}_{t}\) implies the equivalence of the ansatz circuit \(\Lambda_{t}\). #### iii.3.2 Dynamic layering In static layering, ansatz-circuit parameters are optimized after appending a whole layer with several ansatz elements. In dynamic layering, on the other hand, ansatz-circuit parameters are re-optimized every time an ansatz element is appended to a layer. The motivation for doing so is to simplify the optimization process. 
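Before turning to dynamic layering in more detail, the following sketch illustrates the static-layer construction of Algorithm 3 in Python, reusing the `subpool_exploration` routine sketched earlier. Function names and thresholds are illustrative rather than taken from the authors' code. Note that, since the loss of Eq. (20) is non-positive, a threshold \(\ell=0\) accepts only elements with a non-zero gradient.

```python
def build_static_layer(pool, loss, noncommuting_set, max_loss=0.0, n_max=None):
    """Sketch of Algorithm 3: greedily assemble a layer of mutually
    generalized-commuting ansatz elements, each found by subpool exploration."""
    remaining = set(pool)
    layer = []
    n_max = len(remaining) if n_max is None else n_max
    for _ in range(n_max):
        if not remaining:
            break
        start = {next(iter(remaining))}                      # a simple choice of S_0
        a, a_loss, _ = subpool_exploration(remaining, loss, start, noncommuting_set)
        if a_loss >= max_loss:                               # accept only L(A) < max_loss
            break
        layer.append(a)
        # drop A and its generalized non-commuting set from the remaining pool (cf. Fig. 5)
        remaining -= noncommuting_set(remaining, a) | {a}
    return layer
```

By construction, all elements in the returned layer mutually generalized-commute, so they can be appended to the ansatz circuit as a single ansatz-element layer in the sense of Definition 7.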
The price is having to run the global optimization more times. We now describe how to perform dynamic layering. The starting point is a given ansatz circuit \(\Lambda\), a set of optimal parameters \(\vec{\vartheta}^{*}\) [Eq. (51)] and their corresponding energy bound \(\mathcal{E}\equiv E_{\Lambda}(\vec{\vartheta}^{*})\). The remaining pool \(\mathcal{P}^{\prime}\) is initiated to be the entire pool \(\mathcal{P}\). Starting from an empty layer \(\mathcal{A}^{\prime}\) and a temporary ansatz circuit \(\Lambda^{\prime}=\Lambda\), a layer is constructed dynamically by iteratively adding ansatz elements \(A\) to \(\mathcal{A}^{\prime}\) and \(\Lambda^{\prime}\) while simultaneously re-optimizing the ansatz-circuit parameters \(\vec{\vartheta}^{*}\). Based on the loss \(L^{\prime}\) induced by the currently optimal ansatz circuit \(\Lambda^{\prime}(\vec{\vartheta}^{*})\), subpool exploration is used to select ansatz elements \(A\). Simultaneously, the pool of remaining ansatz elements \(\mathcal{P}^{\prime}\) is shrunk by the successive removal of the generalized non-commuting sets \(\mathcal{N}_{\text{G}}(\mathcal{P}^{\prime},A)\). Finally, ansatz elements are only added to the layer \(\mathcal{A}\) if their loss is below a threshold \(\ell\) and the updated energy bound \(\mathcal{E}^{\prime}\) exceeds a gain threshold of \(\varepsilon\). A pseudocode summary is given in Algorithm 5 ``` 1:Initialize state \(\rho_{0}\leftarrow\rho_{\text{HF}}\), ansatz circuit \(\Lambda_{0}\gets 1\), pool \(\mathcal{P}\). 2:Initialize energy bound \(\mathcal{E}_{0}\leftarrow\infty\) and accuracy \(\varepsilon\). 3:Initialize maximal loss \(\ell\) and iteration count \(n_{\text{max}}\). 4:for\(t=1,...,t_{\text{max}}\)do 5: Get layer: \(\mathcal{A}_{t}\leftarrow\text{BuildStaticLayer}(\mathcal{P},L_{\Lambda_{t-1}}, \ell,n_{\text{max}})\) 6: Set ansatz circuit: \(\Lambda_{t}(\vec{\vartheta}_{t})\leftarrow\mathcal{A}_{t}^{*}(\vec{\theta}_{t })\circ\Lambda_{t-1}(\vec{\vartheta}_{t-1})\) 7: Optimize ansatz circuit: \(\vec{\vartheta}_{t}^{*}\leftarrow\arg\min E_{t}(\vec{\vartheta}_{t})\) 8: Set ansatz circuit: \(\Lambda_{t}\leftarrow\Lambda_{t}(\vec{\vartheta}_{t}*)\) 9: Update energy bound: \(\mathcal{E}_{t}\leftarrow E_{t}(\theta_{t}^{*},...,\theta_{1}^{*})\) 10:if\(\mathcal{E}_{t-1}-\mathcal{E}_{t}<\varepsilon\,|\mathcal{A}_{t}|\)then 11:return energy bound \(\mathcal{E}_{t}\) 12:return energy bound \(\mathcal{E}_{t_{\text{max}}}\) ``` **Algorithm 4** Static-ADAPT-VQE Dynamic-ADAPT-VQE iteratively builds dynamic layers \(\mathcal{A}_{t}\) and appends those to the ansatz circuit \(\Lambda_{t-1}\). The procedure is repeated until an empty layer is returned; that is, no ansatz element is found that reduces the energy by more than \(\varepsilon\). Alternatively, the algorithm halts when the (user-specified) maximal iteration count \(t_{\text{max}}\) is reached. A pseudocode summary is given in Algorithm 6. Figure 5: Visualization of layer construction and successive pool reduction. Gray areas indicate the removal of generalized non-commuting sets corresponding to ansatz elements \(A_{n}\) added to the layer \(\mathcal{A}\). Parameters can either be optimized once the whole layer is fixed (static layering) or after adding each ansatz element (dynamic layering). ## IV Benchmarking Noiseless Performance In this section, we benchmark various aspects of sub-pool exploration and layering in noiseless settings. 
To this end, we use numerical state-vector simulations to study a wide variety of molecules summarized in Table 1. While BeH\({}_{2}\) and H\({}_{2}\)O are among the larger molecules to be benchmarked, H\({}_{4}\) and H\({}_{6}\) are prototypical examples of strongly correlated systems. Our simulations demonstrate the utility of subpool exploration in reducing quantum-processor calls. Further, we show that when compared to standard ADAPT-VQE, both Static- and Dynamic-ADAPT-VQE reduce the ansatz circuit depths to similar extents. All simulations use the QEB pool because it gives a higher resilience to noise than the fermionic pool and performs similarly to the qubit pool [14]. Moreover, unless stated otherwise, we use support commutativity to ensure that Static-ADAPT-VQE produces ansatz circuits equivalent to TETRIS-ADAPT-VQE. \begin{table} \begin{tabular}{l|c c c c c} \hline \hline Name & H\({}_{4}\) & LiH & H\({}_{6}\) & BeH\({}_{2}\) & H\({}_{2}\)O \\ \hline Orbitals \(N\) & 8 & 12 & 12 & 14 & 14 \\ Structure & \multicolumn{5}{c}{(molecular structure diagrams not reproduced)} \\ Bond length & \(d=3\,\mathrm{\AA}\) & \(d=1.546\,\mathrm{\AA}\) & \(d=0.735\,\mathrm{\AA}\) & \(d=1.316\,\mathrm{\AA}\) & \(d=1.0285\,\mathrm{\AA}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Table of molecular conformations and the corresponding number of spin-orbitals \(N\) used in numerical simulations.
### Efficiency of subpool exploration
We begin by illustrating the ability of subpool exploration to reduce the number of loss-function calls when searching for a suitable ansatz element \(A\) to append to an ansatz circuit. To this end, we present Explore-ADAPT-VQE (ADAPT-VQE with subpool exploration) using the QEB pool, Eq. (10), and operator commutativity. We set the initial subpool, \(\mathcal{S}_{0}\), such that it consists of a single ansatz element selected uniformly at random from the pool. To provide evidence of a reduction in the number of loss-function calls, we track the number of subpools searched, \(m_{s}\), to find a local minimum. The results are depicted in Fig. 6. There is a tendency to terminate subpool exploration after visiting two or three subpools. This should be compared with the maximum possible QEB-pool values of \(m_{s}\): \(N-2=6,10,10,12,12\) for H\({}_{4}\), LiH, H\({}_{6}\), BeH\({}_{2}\), and H\({}_{2}\)O, respectively. Thus, Fig. 6 shows that subpool exploration reduces the number of loss-function calls in the cases tested. Figure 6: Histograms of the relative frequencies of the number of subpools searched for identifying a suitable ansatz element \(m_{*}\) with Explore-ADAPT-VQE. The mean and uncertainty in the mean are indicated by solid and dashed lines, respectively.
### Reducing ansatz-circuit depth
Next, we compare the ability of Static-(TETRIS)- and Dynamic-ADAPT-VQE to reduce the depth of the ansatz circuits as compared to standard and Explore-ADAPT-VQE. The data is depicted in Fig. 7. Here, we depict the energy error, \[\Delta_{t}=\mathcal{E}_{t}-E_{FCI}, \tag{54}\] given as the distance of the VQE predictions \(\mathcal{E}_{t}\) from the FCI ground state energy \(E_{FCI}\) as a function of (left) the ansatz-circuit depths and (right) the number of ansatz-circuit parameters. The left column shows that layered ADAPT-VQEs achieve lower energy errors with shallower ansatz circuits. Meanwhile, the right column demonstrates that all ADAPT-VQEs achieve similar energy accuracy with respect to the number of ansatz-circuit parameters.
### Reducing runtime In this section, we provide numerical evidence that subpool exploration and layering reduce the runtime of ADAPT-VQE. A mathematical analysis of asymptotic runtimes will follow in Section V. To provide evidence of a runtime reduction in numerical simulations, we show that layered ADAPT-VQEs require fewer expectation value evaluations (and thus shots and quantum processor runtime) to reach a given accuracy. Our numerical results are depicted in Figs. 8 and 9 for expectation-value evaluations related to calculating losses and parameter optimizations, respectively. We now discuss our results. To convert data accessible in numerical simulations (such as loss function and optimizer calls) into runtime data (such as expectation values and shots), we proceed as follows. For our numerical data, we evaluate runtime in terms of the number of expectation value evaluations rather than processor calls or shots. This is justified as the number of shots (or processor calls) is directly proportional to the number of expectation values in our simulations, as detailed in Appendix B. Next, we evaluate the runtime requirements associated with loss-function evaluations by tracking the number of times a loss function is called. The evaluation of the loss function over a subpool \(\mathcal{S}\) is recorded as \(|\mathcal{S}|+1\) expectation-value evaluations, assuming the use of a finite difference rule. Thus we produce the data presented in Fig. 8. Finally, we evaluate the runtime requirements of the optimizer by tracking the number of energy expectation values or gradients it requests. The gradient of \(P\) variables is then Figure 7: The energy accuracy is plotted as a function of ansatz-circuit depth (left) and the number of ansatz-circuit parameters (ansatz elements, right), for QEB standard, Explore-, Static-(TETRIS)-, and Dynamic-ADAPT-VQE. These simulations use support commutation. Each row shows data for a specific molecule. The region of chemical accuracy is shaded. recorded as \(P+1\) energy expectation value evaluations, assuming the use of a finite difference rule. This gives the data in Fig. 9. In Fig. 8, we show that layered ADAPT-VQEs require fewer loss-related expectation-value evaluations to reach a given energy accuracy. We attribute this advantage to subpools gradually shrinking during layer construction. They thus require fewer loss function evaluations per ansatz element added to the ansatz-element circuit. We further notice that Explore-ADAPT-VQE does not reduce the loss-related expectation values required for standard ADAPT-VQE. We attribute this result to our examples' small pool sizes, with only 8 to 14 qubits. As qubit sizes increase, we expect a more noticeable advantage for Explore-ADAPT-VQE, as discussed in Sec. V. In Fig. 9 (left), we show that Static-ADAPT-VQE reduces the number of optimizer calls needed to reach a given accuracy. As expected, the left column shows that Static-ADAPT-VQE calls the optimizer \(\mathcal{O}\left(N\right)\) times less than any other algorithm. This is expected, as standard, Explore-, and Dynamic-ADAPT-VQE calls the optimizer each time a new ansatz element is added to the ansatz Figure 8: Energy accuracy against the number of loss function calls. Using the QEB pool and support commutation, we compare standard-, Explore-, Static-(TETRIS)-, and Dynamic-ADAPT-VQE. Each row shows data for a specific molecule, with the number of orbitals increasing up the page. Energy accuracies better than chemical accuracy are shaded in cream. 
element circuit. Meanwhile, Static-ADAPT-VQE calls the optimizer only after adding a whole layer of \(\mathcal{O}\left(N\right)\) ansatz elements to the ansatz-element circuit. Figure 9: Energy accuracy against the number of times the ansatz is optimized (left column); and the number of expectation values calculated during optimizer calls (right column). Using the QEB pool and support commutation, we compare standard-, Explore-, Static-(TETRIS)-, and Dynamic-ADAPT-VQE. Each row shows data for a specific molecule, with the number of orbitals increasing up the page. Energy accuracies better than chemical accuracy are shaded in cream. In Fig. 9 (right), we analyze how the reduced number of optimizer calls translates to the number of optimizer-related expectation values required to reach a given accuracy. The data was obtained using a BFGS optimizer with a gradient norm tolerance of \(10^{-12}\) Ha and a relative step tolerance of zero. Compared to the optimizer calls on the left of the figure, we notice two trends. Dynamic-ADAPT-VQE, while being on par with standard and Explore-ADAPT-VQE for optimizer calls, tends to use a higher number of expectation value evaluations. Similarly, Static-ADAPT-VQE, while having a clear advantage over standard and Explore-ADAPT-VQE for optimizer calls, tends to have a reduced advantage (and for LiH, even a disadvantage) when it comes to optimizer-related expectation value evaluations. These observations hint towards an increased optimization difficulty for layered ADAPT-VQEs. These observations may be highly optimizer dependent and should be further investigated in the future.
### Additional benchmarks
We close this section by referring the reader to additional benchmarking data presented in the appendices. In Appendix C, we compare support to operator commutativity for the qubit pool. In Appendix D, we compare the steepest-gradient loss to the largest-energy-reduction loss. We also compare the QEB pool to the qubit pool in Appendix D.
## V Runtime analysis
In this section, we analyze the asymptotic runtimes of standard, Explore-, Dynamic-, and Static-ADAPT-VQE. We find that under reasonable assumptions, Static-ADAPT-VQE can run up to \(\mathcal{O}\left(N^{2}\right)\) faster than standard ADAPT-VQE. In what follows, we quantify asymptotic runtimes using \(\mathcal{O}\left(x\right)\), \(\Omega\left(x\right)\), or \(\Theta\left(x\right)\) to state that a quantity scales at most, at least, or exactly with \(x\), respectively. For definitions, see Appendix A. We begin our runtime analysis by listing some observations, assumptions, and approximations.
1. Each algorithm operates on \(N\) qubits.
2. Ansatz circuits are improved by successively adding ansatz elements with a single parameter to the ansatz circuit. This results in iterations \(p=1,...,P\), where the \(p\)th ansatz circuit has \(p\) parameters.
3. In each iteration \(p\), the algorithm spends runtime on evaluating \(N_{L}(p)\) loss functions.
4. In each iteration \(p\), the algorithm spends runtime on optimizing \(p\) circuit parameters.
5. Using the finite difference method, we approximate each of the \(N_{L}(p)\) loss functions in (c) by using two energy-expectation values. This results in evaluating at most \(2N_{L}(p)\) energy expectation values on the quantum computer in the \(p\)th iteration.
6. We assume that the optimizer in (d) performs a heuristic optimization of ansatz circuits with \(p\) parameters in polynomial time.
Thus, in the \(p\)th iteration, a quantum computer must conduct \(N_{O}(p)=\Theta\left(p^{\alpha}\right)\) evaluations of the energy landscape and \(N_{O}(p)=\Theta\left(p^{\alpha}\right)\) evaluations of energy expectation values. 7. For each energy expectation value in (e) and (f), we assume that a constant number of shots \(N_{S}=\Theta\left(1\right)\) is needed to reach a given accuracy. This is a standard assumption in VQE [1; 8], and further details justifying this assumption are discussed in Appendix B. 8. For each shot in (g), one must execute an ansatz circuit with \(p\) ansatz elements. Here, we assume that the runtime \(C(p)\) of an ansatz circuit with \(p\) ansatz elements is proportional to its depth \(d(p)\), i.e., \(C(p)=\Theta\left(d(p)\right)\). Combining (e,g,h) and (f,g,h), we can estimate the runtime each algorithm spends on evaluating losses and performing the optimization, respectively: \[R_{L} =\sum_{p=1}^{P}2N_{L}(p)N_{S}C(p)=\sum_{p=1}^{P}N_{L}(p)\Theta \left(d(p)\right), \tag{55a}\] \[R_{O} =\sum_{p=1}^{P}N_{O}(p)N_{S}C(p)=\sum_{p=1}^{P}\Theta\left(p^{ \alpha}\right)\Theta\left(d(p)\right). \tag{55b}\] Below we analyze further these runtime estimates for standard, Explore-, Dynamic-, and Static-ADAPT-VQE. In standard ADAPT-VQE, we re-evaluate the loss of each ansatz element in every iteration. Thus, \(N_{L}(p)=|\mathcal{P}|\). Moreover, the circuit depth \(d(p)\) is upper bounded by \(d(p)=\mathcal{O}\left(p\right)\). In the best-case scenario, ADAPT-VQE may arrange ansatz elements into layers accidentally. (An effect more likely for large \(N\).) This can compress the circuit depths down to \(d=\Omega\left(p/N\right)\). We summarize this range of possible circuit depths using the compact expression \(d(p)=\Theta\left(pN^{-\gamma}\right)\), with \(\gamma\in[0,1]\). In numerical simulations, we typically observe that \(\gamma\approx 0\), i.e., the depth of an ansatz circuit, is proportional to the number of ansatz elements. These expressions for \(N_{L}(p)\) and \(d(p)\) allow us to estimate the runtime of standard ADAPT-VQE algorithms: \[R_{L}^{A}=|\mathcal{P}|\Theta\left(P^{2}N^{-\gamma}\right), \tag{56a}\] \[R_{O}^{A}=\Theta\left(P^{2+\alpha_{A}}N^{-\gamma}\right). \tag{56b}\] Explore-ADAPT-VQE results in circuits of the same depths as ADAPT-VQE, i.e., \(d=\Theta\left(pN^{-\gamma}\right)\). However, the use of subpool exploration in Explore-ADAPT-VQE may reduce the number of loss-function evaluations \(N_{L}(p)\). As discussed in Section III B (paragraph on _Efficiency_), in the best case scenario, the number of loss function evaluations per iteration is lower bounded by \(N_{L}(p)=\Omega\left(|\mathcal{P}|/N\right)\). In the worst case scenario, subpool exploration may explore the whole pool of ansatz elements, such that \(N_{L}(p)=\mathcal{O}\left(|\mathcal{P}|\right)\). Based on these relations, we can estimate the runtime of Explore-ADAPT-VQE: \[R_{L}^{E} =\left\{\begin{aligned} &|\mathcal{P}|\Omega\left(P^{2}N^{-(1+ \gamma)}\right)\\ &|\mathcal{P}|\mathcal{O}\left(P^{2}N^{-\gamma}\right)\end{aligned} \right.\quad, \tag{57a}\] \[R_{O}^{E} =\Theta\left(P^{2+\alpha_{E}}N^{-\gamma}\right). \tag{57b}\] Dynamic-ADAPT-VQE has the same scaling of the number of loss function evaluations per iteration, \(N_{L}(p)\), as Explore-ADAPT-VQE. Thus, \(N_{L}(p)=\Omega\left(|\mathcal{P}|/N\right)\) in the best case and \(N_{L}(p)=\mathcal{O}\left(|\mathcal{P}|\right)\) in the worst case. The circuit depth of Dynamic-ADAPT-VQE scales as \(d(p)=\Theta\left(p/N\right)\). 
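As a rough, self-contained illustration of the counting behind Eq. (55) (a toy calculation of ours, using the best-case scalings quoted above rather than data from the paper; Static-ADAPT-VQE, which also reduces the number of optimizer calls per added parameter, is not captured by this per-parameter model), the runtime sums can be evaluated numerically:

```python
def toy_runtimes(N, P, pool_size, alpha=2.0, layered=False, explore=False):
    """Evaluate the sums of Eq. (55) for illustrative scalings:
    d(p) ~ p (standard/Explore) or p/N (layered), and
    N_L(p) ~ |pool| (full search) or |pool|/N (best-case subpool exploration)."""
    R_L = R_O = 0.0
    for p in range(1, P + 1):
        depth = p / N if layered else p
        n_loss = pool_size / N if explore else pool_size
        R_L += 2 * n_loss * depth        # loss-related expectation values, Eq. (55a)
        R_O += (p ** alpha) * depth      # optimizer-related expectation values, Eq. (55b)
    return R_L, R_O

# e.g. N = 12 qubits, P = 60 ansatz elements, |pool| ~ N**4 (QEB-like pool)
N, P, pool_size = 12, 60, 12 ** 4
standard = toy_runtimes(N, P, pool_size)
dynamic = toy_runtimes(N, P, pool_size, layered=True, explore=True)
print(dynamic[0] / standard[0], dynamic[1] / standard[1])    # ~1/N**2 and ~1/N
```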
One can observe a clear benefit from layering. The upper bound, \(d(p)=\mathcal{O}\left(p\right)\), in standard and Explore-ADAPT-VQE becomes \(d(p)=\mathcal{O}\left(p/N\right)\) in Dynamic-ADAPT-VQE. Using these relations for \(N_{L}(p)\) and \(d(p)\), we can estimate the runtime of Dynamic-ADAPT-VQE: \[R_{L}^{D} =\left\{\begin{aligned} &|\mathcal{P}|\Omega\left(P^{2}N^{-2} \right)\\ &|\mathcal{P}|\mathcal{O}\left(P^{2}N^{-1}\right)\end{aligned} \right.\quad, \tag{58a}\] \[R_{O}^{D} =\Theta\left(P^{2+\alpha_{D}}N^{-1}\right). \tag{58b}\] The analysis of Static-ADAPT-VQE's runtime is more straightforward with respect to the layer count \(t\) than to the parameter count \(p\). Therefore, we revisit and modify our previous observations, assumptions, and approximations. 1. Static-ADAPT-VQE operates on \(N\) qubits. 2. Static-ADAPT-VQE builds ansatz circuits in layers indexed by \(t=1,...,t_{\text{max}}\). The \(t\)th layer contains \(n_{\text{tot}}(t)=\Theta\left(N\right)\) ansatz elements. Since each ansatz element depends on a single parameter, a layer contains \(n_{\text{tot}}(t)=\Theta\left(N\right)\) circuit parameters. Summing the parameters in each layer gives the total number of parameters in the circuit: \(P=\sum_{t=1}^{t_{\text{max}}}n_{\text{tot}}(t)\). 3. For each layer \(t\), Static-ADAPT-VQE spends runtime on evaluating the loss of \(N_{L}(t)=|\mathcal{P}|\) ansatz elements. 4. For each layer \(t\), Static-ADAPT-VQE spends runtime on optimizing \(p(t)=\sum_{t^{\prime}=1}^{t}n_{\text{tot}}(t^{\prime})=\Theta\left(N\right)t\) circuit parameters. 5. Using the finite difference method, we approximate each of the \(N_{L}(t)\) loss functions in (c) by using two energy expectation values. This results in evaluating at most \(2N_{L}(t)=2|\mathcal{P}|\) energy expectation values on the quantum computer in the \(t\)th iteration. 6. Again, we assume that the optimizer in (d) performs a heuristic optimization of ansatz circuits with \(p(t)\) parameters in polynomial time. Thus, in the \(t\)th layer a quantum computer must conduct \(N_{O}(t)=\Theta\left(p(t)^{\alpha_{S}}\right)\) evaluations of the energy landscape and \(N_{O}(t)=\Theta\left(p(t)^{\alpha_{S}}\right)\) evaluations of energy expectation values. Using \(p(t)=\Theta\left(N\right)t\) from (d), this implies that \(N_{O}(t)=\Theta\left(N^{\alpha_{S}}t^{\alpha_{S}}\right)\). 7. As before, for each energy expectation value in (e) and (f), we assume that a constant number of shots \(N_{S}=\Theta\left(1\right)\) is needed to reach a given accuracy. 8. For each shot in (g), one must execute an ansatz circuit with \(p(t)\) ansatz elements. Again, we assume that the runtime \(C(t)\) of an ansatz circuit with \(p(t)\) ansatz elements is proportional to its depth \(d(p(t))\), i.e., \(C(t)=\Theta\left(d(p(t))\right)\). Due to layering, the circuit depth of Static-ADAPT-VQE scales as \(d(p)=\Theta\left(p/N\right)\). (This scaling is identical for Dynamic-ADAPT-VQE.) This results in \(C(t)=\Theta\left(p(t)/N\right)\). Further, using \(p(t)=\Theta\left(N\right)t\) from (d), we find that each shot in (g) requires a circuit runtime of \(C(t)=\Theta\left(t\right)\). Combining the updated (e,g,h) and (f,g,h), we find the loss- and optimization-related runtimes of Static-ADAPT-VQE, respectively: \[R_{L}^{S} =\sum_{t=1}^{t_{\text{max}}}2N_{L}(t)N_{S}C(t)=|\mathcal{P}| \Theta\left(t_{\text{max}}^{2}\right), \tag{59a}\] \[R_{O}^{S} =\sum_{t=1}^{t_{\text{max}}}N_{O}(t)N_{S}C(t)=\Theta\left(N^{ \alpha_{S}}t_{\text{max}}^{\alpha_{S}+2}\right). 
\tag{59b}\] Since \(P=\Theta\left(N\right)t_{\text{max}}\) implies \(t_{\text{max}}=P\Theta\left(N^{-1}\right)\), we can simplify these runtime estimates: \[R_{L}^{S} =|\mathcal{P}|\Theta\left(P^{2}N^{-2}\right), \tag{60a}\] \[R_{O}^{S} =\Theta\left(P^{2+\alpha_{S}}N^{-2}\right). \tag{60b}\] We summarize this section by listing the ratios of asymptotic runtimes for Explore-, Dynamic-, and Static-ADAPT-VQE divided by the asymptotic runtime of standard ADAPT-VQE in Table 2. Here, we assume equal polynomial scaling (\(\alpha_{A}=\alpha_{E}=\alpha_{D}=\alpha_{S}\)) of the optimization runtime for standard, Explore, Dynamic-, and Static-ADAPT-VQE. As expected from our numerical runtime analysis in Section IV C, for typical ADAPT-VQE circuit depth (where \(\gamma=0\)), Static-ADAPT-VQE can provide the largest runtime reduction. This reduction is quadratic in the number of qubits: \(\Theta\left(N^{-2}\right)\). Further improvements to bounding the number of losses in Explore- and Dynamic-ADAPT-VQE are discussed in Appendix I. ## VI Noise In this section, we explore the benefits of reducing ADAPT-VQEs' ansatz-circuit depths with respect to noise. Our main finding is that the use of layering to reduce ansatz-circuit depths mitigates global amplitude-damping and global dephasing noise, where idling and non-idling qubits are affected alike. However, reduced ansatz-circuit depths do not mitigate the effect of local depolarizing noise, which exclusively affects qubits operated on by noisy (two-qubit, CNOT) gates. The explanation for this, we show, is that the ansatz-circuit depth is a good predictor for the effect of global amplitude-damping and dephasing noise. On the other hand, we show that the errors induced by local depolarizing noise are approximately proportional, not to the depth, but to the number of (CNOT) gates. For this reason, a shallower ansatz circuit with the same number of noisy two-qubit gates will not reduce the sensitivity to depolarizing noise. ### Noise models Our noise models focus on superconducting architectures, where the native gates are arbitrary single-qubit rotations and two-qubit CZ or iSWAP gates [30]. Further, we assume all-to-all connectivity. We tune our analysis towards the ibmq_quito (IBM Quantum Falcon r4T) processor. For this processor, the quoted two-qubit gate times are for CNOT gates. Thus, we will take CNOT gates to be our native two-qubit gate. To this end, our simulations use one- and two-qubit gate-execution times of \(35.5\,\mathrm{ns}\) and \(295.1\,\mathrm{ns}\), respectively.1 Similar native gates and execution times apply to silicon quantum processors [31]. Footnote 1: These values were taken from the IBM Quantum services. In our simulations, we model _amplitude damping_ of a single qubit by the standard amplitude-damping channel. (For detailed expressions of the amplitude-damping channel and the other noise channels we use, see Appendix K.1.) Its decay constant is determined by the inverse \(T_{1}\) time: \(\omega_{1}=1/T_{1}\). Similarly, we model _dephasing_ of a single qubit by the standard dephasing channel. The \(T_{1}\) and \(T_{2}^{*}\) times determine its phase-flip probability via the decay constant \(\omega_{z}=2/T_{2}^{*}-1/T_{1}\). Finally, we model _depolarization_ of a single qubit by a symmetric depolarizing channel with depolarization strength \(p\in[0,1]\), where \(p=0\) leaves a pure qubit pure and \(p=1\) brings it to the maximally mixed state. 
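To make these single-qubit channels concrete, they can be written as density-matrix maps. The sketch below is our own illustration; in particular, the exponential mapping from an idling time \(\tau\) and \(T_{1}\) to a decay probability, and the numerical values of \(T_{1}\) and \(\tau\), are assumptions made for illustration only (the paper's exact expressions are given in Appendix K.1).

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def amplitude_damping(rho, gamma):
    """Standard amplitude-damping channel with decay probability gamma."""
    K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex)
    K1 = np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

def dephasing(rho, pz):
    """Phase-flip (dephasing) channel with flip probability pz."""
    return (1 - pz) * rho + pz * Z @ rho @ Z

def depolarizing(rho, p):
    """Symmetric depolarizing channel: p = 0 leaves rho unchanged, p = 1 is maximally mixed."""
    return (1 - p) * rho + p * I2 / 2

# Example: decay probability after idling for a two-qubit gate time tau, assuming
# exponential relaxation with an illustrative T1 (values here are not from the paper).
T1, tau = 100e-6, 295.1e-9
gamma = 1 - np.exp(-tau / T1)
rho1 = np.array([[0, 0], [0, 1]], dtype=complex)   # qubit prepared in |1>
print(np.round(amplitude_damping(rho1, gamma), 6))
```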
In our simulations, we model the effects of amplitude damping, dephasing, and depolarizing noise on the ansatz circuits \(\Lambda_{t}\) in a layer-by-layer approach. This is illustrated in Fig. 10. We decompose the ansatz circuit \(\Lambda_{t}\) into \(l=1,...,L_{t}\) layers of support-commuting ansatz-element layers \(\{\mathcal{A}_{l}\}\): \[\Lambda_{t}=\mathcal{A}_{L_{t}}^{\mathrm{o}}\circ\cdots\circ\mathcal{A}_{l}^{ \mathrm{o}}\circ\cdots\circ\mathcal{A}_{1}^{\mathrm{o}}. \tag{61}\] For amplitude damping and dephasing noise, each ansatz-element layer \(\mathcal{A}_{l}^{\mathrm{o}}\) is transpiled into columns of native gates that can be implemented in parallel (see Ref. [28] for more details). The native gate with the longest execution time of each native-gate column sets the column execution time. The sum of the column execution times then gives the execution time \(\tau_{l}\) of the ansatz-element layer \(\mathcal{A}_{l}^{\mathrm{o}}\). After each ansatz-element layer \(\mathcal{A}_{l}^{\mathrm{o}}\), amplitude damping is implemented by applying an amplitude-damping channel to every qubit \(r=1,...,N\) in an amplitude-damping layer. This results in an amplitude-damped ansatz circuit \(\Lambda_{t}(\omega_{1})\). Similarly, after each ansatz-element layer \(\mathcal{A}_{l}^{\mathrm{o}}\), dephasing is implemented by applying a dephasing channel to every qubit \(r=1,...,N\) in a dephasing layer. This results in a dephased ansatz circuit \(\Lambda_{t}(\omega_{z})\). Finally, for depolarizing noise, we apply the whole ansatz-element layer and then a depolarizing channel to each qubit. The strength of a qubit's depolarizing channel is determined by the exact number of times that the qubit was the target in a CNOT gate in the preceding layer. This results in a depolarized ansatz circuit \(\Lambda_{t}(p)\). For a visualization of our layer-by-layer-based noise model, see Fig. 10. For detailed mathematical expressions, see Appendix K.2. We note that applying the noise channels after each ansatz-element layer could be refined by applying the noise channels after each gate in the ansatz-element layer. However, as shown in Ref. [14], such a gate-by-gate noise model, as opposed to our layer-by-layer based noise model, would increase computational costs and has limited effect on the results. In what follows, we collectively refer to amplitude-damped ansatz circuits \(\Lambda_{t}(\omega_{1})\), dephased ansatz circuits \(\Lambda_{t}(\omega_{z})\), and depolarized ansatz circuits \(\Lambda_{t}(p)\) as \(\Lambda_{t}(\alpha)\). Here, \(\alpha\) refers to the key noise parameter \(\omega_{1}\), \(\omega_{z}\), or \(p\) of each respective noise model.
### Energy error and noise susceptibility
Going forward, we analyze the effect of noise on the energy error [c.f. Eq. (54)] \[\Delta_{t}(\alpha)=\mathcal{E}_{t}(\alpha)-E_{FCI}. \tag{62}\] \begin{table} \begin{tabular}{l l l} Algorithm & \(R_{L}/R_{L}^{\mathrm{ADAPT}}\) & \(R_{O}/R_{O}^{\mathrm{ADAPT}}\) \\ \hline Explore & \(\Omega\left(N^{-1}\right)\), \(\mathcal{O}\left(1\right)\) & \(\Theta\left(1\right)\) \\ Dynamic & \(\Omega\left(N^{-2+\gamma}\right)\), \(\mathcal{O}\left(N^{-1+\gamma}\right)\) & \(\Theta\left(N^{-1+\gamma}\right)\) \\ Static & \(\Theta\left(N^{-2+\gamma}\right)\) & \(\Theta\left(N^{-2+\gamma}\right)\) \\ \end{tabular} \end{table} Table 2: The ratio of the runtimes of the listed algorithms to the runtime of standard ADAPT-VQE. \(N\) is the qubit number, and \(P\) is the number of parameters in the final ansatz circuit. See text for further explanation.
\(\Delta_{t}(\alpha)\) now depends, not only on the iteration step \(t\), but also the noise parameter \(\alpha\), via the noise-dependent expectation value \[\mathcal{E}_{t}(\alpha)=\operatorname{tr}\left[H\Lambda_{t}(\alpha)\left[\rho_{0 }\right]\right]. \tag{63}\] To analyze the energy error, we expand the methodology of Ref. [14]. More specifically, we decompose the energy error into two contributions: \[\Delta_{t}(\alpha)=\Delta_{t}(0)+\left[\Delta_{t}(\alpha)-\Delta_{t}(0) \right]. \tag{64}\] The first term, \(\Delta_{t}(0)\), is the energy error of the noiseless ansatz circuit. The second term, \(\Delta_{t}(\alpha)-\Delta_{t}(0)\), is the energy error due to noise. Subsequently, we Taylor expand the energy error due to noise to first order: \[\left[\Delta_{t}(\alpha)-\Delta_{t}(0)\right]=\chi_{t}\alpha+\mathcal{O} \left(\alpha^{2}\right). \tag{65}\] As depicted in Fig. 11, in the regime of small noise parameters \(\alpha\) (where energy errors are below chemical accuracy), the linear approximation is an excellent predictor for the energy error. Conveniently, this allows us to summarize the effect of noise on the energy error through the noise susceptibility \(\chi_{t}\), defined as \[\chi_{t}\coloneqq\left.\frac{\partial\mathcal{E}_{t}(\alpha)}{\partial\alpha }\right|_{\alpha=0}=\operatorname{tr}\left[H\left.\frac{\partial\Lambda_{t}( \alpha)}{\partial\alpha}\right|_{\alpha=0}\left[\rho_{0}\right]\right]. \tag{66}\] In Appendix L, we calculate the noise susceptibility \(\chi_{t}\) of amplitude damping \(\mathcal{F}\), dephasing \(\mathcal{C}\), and depolarizing \(\mathcal{D}\) noise: \[\chi_{t}^{\mathcal{F}} \coloneqq\left.\frac{\partial\mathcal{E}_{t}(\omega_{1})}{ \partial\omega_{1}}\right|_{\omega_{1}=0}=L_{t}N\times d\mathcal{E}(\Lambda_{ t},\mathcal{F}), \tag{67a}\] \[\chi_{t}^{\mathcal{C}} \coloneqq\left.\frac{\partial\mathcal{E}_{t}(\omega_{2})}{ \partial\omega_{z}}\right|_{\omega_{z}=0}=L_{t}N\times d\mathcal{E}(\Lambda_{ t},\mathcal{C}),\] (67b) \[\chi_{t}^{\mathcal{D}} \coloneqq\left.\frac{\partial\mathcal{E}_{t}(p)}{\partial p} \right|_{p=0}=N_{II}\times d\mathcal{E}(\Lambda_{t},\mathcal{D}), \tag{67c}\] respectively. Here, \(N\) denotes the number of qubits; \(L_{t}\) is the number of ansatz-element layers in the ansatz circuit \(\Lambda_{t}\); \(N_{II}\) is the number of noisy (two-qubit, CNOT) gates in the ansatz circuit \(\Lambda_{t}\); and the \(d\mathcal{E}\)'s denote the average energy fluctuations, defined in Eqs. (11) of Appendix L. As discussed further in Appendix L, the average energy fluctuations can be calculated from noiseless expectation values. This allows us to compute the noise susceptibility with faster state-vector simulations rather than computationally demanding density-matrix simulations. Figure 11: Energy error \(\Delta_{t}(\alpha)\) for an ansatz circuit \(\Lambda_{t}\) of H\({}_{4}\) as a function of noise strength \(\alpha\). Connected dots are calculated using full density-matrix simulations. Dashed lines show the corresponding extrapolation using noise susceptibility. Figure 10: Circuit diagram visualizing the layer-by-layer noise model: on the left, a noiseless ansatz-element layer with two support-commuting ansatz elements is decomposed into columns of native gates. On the right, noise is added to the ansatz-element layer. For global amplitude damping (or dephasing) noise, the channel \(\mathcal{F}\) (or \(\mathcal{C}\)) is applied to each qubit. 
For local depolarizing noise, the channels \(\mathcal{D}\) are applied to the target of the noisy (two-qubit, CNOT) gate.
### Benchmarking layered circuits with noise
In this section, we compare the noise susceptibility of standard, Static- (TETRIS-), and Dynamic-ADAPT-VQE in the presence of noise. As before, we showcase these algorithms on a range of molecules (summarized in Table 1) using the QEB pool with support commutativity. When performing our comparison, we grow the ansatz circuits \(\Lambda_{t}\) and optimize their parameters in noiseless settings, as previously discussed in [14]. We then compute the noise susceptibility of \(\Lambda_{t}\) as described in the previous section. The results for amplitude damping, dephasing, and depolarizing noise are depicted in Fig. 12, Fig. 13, and Fig. 14, respectively. In all three figures, we plot the noise susceptibility as a function of (left) the noiseless energy accuracy \(\Delta_{t}(0)\) or (right) the number of parameters. The rows of each plot depict different molecules in order of increasing spin orbitals from bottom to top: H\({}_{4}\), H\({}_{6}\), LiH, BeH\({}_{2}\), and H\({}_{2}\)O. For amplitude-damping and dephasing noise (Figs. 12 and 13), the layered algorithms generally show a reduced noise susceptibility, although this advantage is less consistent across different ansatz circuits and molecules. Finally, in Fig. 14, we observe that for depolarizing noise, all algorithms tend to produce similar noise susceptibilities. Sometimes one shows an advantage over the other, and vice versa, depending on the ansatz circuit and molecule. Our simulations indicate no clear disadvantage of using layering in the presence of depolarizing noise. In summary, our numerical simulations suggest that layering is useful for mitigating global amplitude damping and dephasing noise. Moreover, layering seems to have neither a beneficial nor a detrimental effect in the presence of local depolarizing noise. In order to explain these findings, we further investigate the dependence of noise susceptibility on several circuit parameters in Sec. VI D. _Gate-fidelity requirements:--_ We now use the noise susceptibility data in Fig. 12, Fig. 13, and Fig. 14 to estimate the fidelity requirements for operating ADAPT-VQEs. For this estimation, recall that quantum chemistry simulations of energy eigenvalues target an accuracy of 1.6 mHa. To achieve this chemical accuracy, we require the energy error due to noise to be smaller than \(\approx 1\) milliHartree: \(\chi_{t}\alpha\lesssim 1\)mHa. Applying this condition to amplitude damping (where \(\alpha=1/T_{1}\)), dephasing (where \(\alpha\approx 1/T_{2}^{*}\)), and depolarizing noise (where \(\alpha=p\)), we find a set of gate fidelity requirements: \[T_{1}\gtrsim\frac{\chi_{t}^{\mathcal{F}}}{1\text{mHa}},\quad T_{2}^{*}\gtrsim \frac{\chi_{t}^{\mathcal{C}}}{1\text{mHa}},\quad p\lesssim\frac{1\text{mHa}}{ \chi_{t}^{\mathcal{D}}}. \tag{68}\] The data presented in Figs. 12, 13, and 14 suggests the following requirements for the gate operations to enable chemically accurate simulations: \[T_{1}\gtrsim 1\text{s},\quad T_{2}^{*}\gtrsim 100\text{ms},\quad p\lesssim 10^{ -6}. \tag{69}\] A more detailed breakdown of the maximal \(p\) and minimal \(T_{1}\) and \(T_{2}^{*}\) times for each algorithm and molecule is presented in Fig. 15. These requirements are beyond the current state-of-the-art quantum processors [31; 32]. How much these requirements can be improved by error-mitigation techniques [33] remains an open question for future research. Figure 14: Same as Fig. 12, but for depolarizing noise.
In addition, black crosses correspond to density matrix simulations corroborating noise susceptibility via finite differences. The black crosses are discussed further in Appendix E. Figure 15: The ratio of the minimal \(T_{1}\) and \(T_{2}^{*}\) times and maximal depolarizing probability \(p\) for Dynamic- and Static-ADAPT-VQE to standard ADAPT-VQE required to reach an accuracy of \(\Delta=10^{-7}\) mHa for each molecule. ## VI D. Noise-susceptibility scalings In this section, we investigate the dependence of noise susceptibility on basic circuit parameters, such as the number of qubits \(N\), circuit depth \(d\propto L_{t}\), or the number of noisy (two-qubit, CNOT) gates \(N_{II}\). Our analysis will help in understanding why layering can mitigate global amplitude damping and dephasing noise but not local depolarizing noise. We study numerically how noise susceptibility scales with circuit depth and the number of noisy (two-qubit, CNOT) gates \(N_{II}\). The data is presented Fig. 16. The top panels show the noise susceptibility in the presence of amplitude damping (left), dephasing (center), and depolarizing noise (right) for various algorithms and molecules. The noise-susceptibility data is presented on a log-log plot as a function of circuit depths \(d\) (left and center) as well as \(N_{II}\) (right), respectively. From Fig. 16, we find that the noise susceptibility scales roughly linearly with the plotted parameters. To further analyze this rough linearity, we produce a log-log plot in the bottom panels of \(\chi_{t}^{\mathcal{F}}/d\) (left), \(\chi_{t}^{\mathcal{C}}/d\) (center), and \(\chi_{t}^{\mathcal{D}}/N_{II}\) (right) as a function of \(d\), \(d\), and \(N_{II}\), respectively. Had the scalings of interest been linear, the bottom panels would have depicted constant curves. This is not entirely the case. But, the curves' deviations from constants are sufficiently sublinear to support our claim that the curves in the upper plots are roughly linear. The scalings observed in Fig. 16 confirm our previous intuition. Based on Eq. (67), and using the assumption that \(d\mathcal{E}\) is roughly constant, we would expect that the noise susceptibility in the presence of amplitude damping or dephasing noise is proportional to the circuit depth and the number of qubits: \[\chi_{t}^{\mathcal{F}}\underset{\sim}{\propto}Nd\quad\text{and}\quad\chi_{t}^ {\mathcal{C}}\underset{\sim}{\propto}Nd. \tag{70}\] This claim is supported by Fig. 16. Moreover, previous studies [14] have found that the noise susceptibility scales linearly with the number of depolarizing two-qubit gates: \[\chi_{t}^{\mathcal{D}}\underset{\sim}{\propto}N_{II}. \tag{71}\] Also, this claim is supported by Fig. 16. Thus, for global (amplitude damping and dephasing) noise, which affects idling and non-idling qubits alike, our analysis indicates that circuit depth is a good predictor of noise susceptibility. On the other hand, for local (depolarizing) noise, which affects only the qubits which are nontrivially operated on, \(N_{II}\) is a good predictor of the noise susceptibility. Consequently, we expect that compressing the depth of an ansatz circuit by layering can mitigate noise in the former, but not the latter, of these settings. Figure 16: Noise susceptibility for (a) amplitude-damping, (b) dephasing, and (c) depolarizing noise as a function of (a, b) ansatz-circuit depths \(d\) and (c) the number of noisy CNOT-gates \(N_{II}\). The top panels show noise susceptibility as a function of \(d\) (a, b) and \(N_{II}\) (c). 
The bottom panels of (a) and (b) show noise susceptibility divided by \(d\), as a function \(d\). The bottom panel of (c) shows noise susceptibility divided by \(N_{II}\) as a function of \(N_{II}\). The colored lines correspond to the simulation of Dynamic-ADAPT-VQE using support commutation for a range of molecules. The black lines represent simulations based on QEB-ADAPT-VQE. ## VII Summary and conclusion In this paper, we introduced layering and subpool-exploration strategies for ADAPT-VQEs that reduced circuit depth, runtime, and susceptibility to noise. In noiseless numerical simulations, we demonstrate that layering reduces the depths of an ansatz circuit when compared to standard ADAPT-VQE. We further showed that our layering algorithms achieve circuits that are as shallow as TETRIS-ADAPT-VQE. The reduction in ansatz circuit depth is achieved without increasing the number of ansatz elements, circuit parameters, or CNOT gates in the ansatz circuit. The noiseless numerical simulations further provide evidence that layering and subpool-exploration can reduce the runtime of ADAPT-VQE by up to \(\mathcal{O}\left(N^{2}\right)\), where \(N\) is the number of qubits in the simulation. Finally, we benchmarked the effect of reducing the depth of ADAPT-VQEs on the algorithms' noise susceptibility. For global noise models, which affect idling and non-idling qubits alike (such as our amplitude-damping and dephasing model), we show that the noise susceptibility is approximately proportional to the ansatz-circuit depth. For these noise models, reduced circuit depth due to layering is beneficial in reducing the noise susceptibility of ADAPT-VQEs. For local noise models, where only non-idling qubits are affected by noise (as with our depolarizing noise model), we show that the noise susceptibility is approximately proportional to the number of noisy (two-qubit, CNOT) gates. For these noise models, layering strategies are neither useful nor harmful, as they hardly change the CNOT count of ADAPT-VQEs. We finish our paper by stating three conclusions from our work. _To layer or not to layer?:--_Depending on the dominant noise source of a quantum processor, layering may or may not lead to improved noise resilience. For processors where global noise dominates, we recommend layering. _Static or dynamic layering?:--_Our paper considered static and dynamic layering. Which of the two should be used? Static layering optimizes each layer once, while dynamic layering optimizes the ansatz after adding each ansatz element. Both layering strategies lead to ansatz circuits of similar depths and require a similar number of parameters and CNOT gates to reach a certain energy accuracy. However, static layering calculates significantly fewer energy expectation values on the quantum processor. Therefore, we recommend static layering for the small molecules studied in this work. For larger molecules, dynamic layering could be preferable. _How useful is subpool exploration?:--_Our paper introduced a new pool-exploration strategy, that reduces the number of loss-function evaluations and, thereby, the number of calls to the quantum processor. However, in the examples studied in this work, the number of loss-function evaluations was exceeded by the energy-expectation-value calls. Thus, subpool exploration had little impact on the algorithms. Again, this could change when larger molecules are studied. ## Acknowledgements The authors thank Yordan S. 
Yordanov for the use of his codebase for VQE protocols and Wilfred Salmon for insightful discussions. We further thank Sophia E Economou, Nicholas J Mayhall, Edwin Barnes, Panagiotis G Anastasiou and the Virginia Tech group for fruitful discussions. We acknowledge the use of IBM Quantum services for this work. The views expressed are those of the authors, and do not reflect the official policy or position of IBM or the IBM Quantum team.
2308.16067
Consensus of state of the art mortality prediction models: From all-cause mortality to sudden death prediction
Worldwide, many millions of people die suddenly and unexpectedly each year, either with or without a prior history of cardiovascular disease. Such events are sparse (once in a lifetime), many victims will not have had prior investigations for cardiac disease and many different definitions of sudden death exist. Accordingly, sudden death is hard to predict. This analysis used NHS Electronic Health Records (EHRs) for people aged $\geq$50 years living in the Greater Glasgow and Clyde (GG\&C) region in 2010 (n = 380,000) to try to overcome these challenges. We investigated whether medical history, blood tests, prescription of medicines, and hospitalisations might, in combination, predict a heightened risk of sudden death. We compared the performance of models trained to predict either sudden death or all-cause mortality. We built six models for each outcome of interest: three taken from state-of-the-art research (BEHRT, Deepr and Deep Patient), and three of our own creation. We trained these using two different data representations: a language-based representation, and a sparse temporal matrix. We used global interpretability to understand the most important features of each model, and compare how much agreement there was amongst models using Rank Biased Overlap. It is challenging to account for correlated variables without increasing the complexity of the interpretability technique. We overcame this by clustering features into groups and comparing the most important groups for each model. We found the agreement between models to be much higher when accounting for correlated variables. Our analysis emphasises the challenge of predicting sudden death and emphasises the need for better understanding and interpretation of machine learning models applied to healthcare applications.
Yola Jones, Fani Deligianni, Jeff Dalton, Pierpaolo Pellicori, John G F Cleland
2023-08-30T14:44:04Z
http://arxiv.org/abs/2308.16067v1
Consensus of state of the art mortality prediction models: From all-cause mortality to sudden death prediction ###### Abstract Worldwide, many millions of people die suddenly and unexpectedly each year, either with or without a prior history of cardiovascular disease. Such events are sparse (once in a lifetime), many victims will not have had prior investigations for cardiac disease and many different definitions of sudden death exist. Accordingly, sudden death is hard to predict. This analysis used NHS Electronic Health Records (EHRs) for people aged \(\geq\)50 years living in the Greater Glasgow and Clyde (GG&C) region in 2010 (n = 380,000) to try to overcome these challenges. We investigated whether medical history, blood tests, prescription of medicines, and hospitalisations might, in combination, predict a heightened risk of sudden death. We compared the performance of models trained to predict either sudden death or all-cause mortality. We built six models for each outcome of interest: three taken from state-of-the-art research (BEIRT, Deepr and Deep Patient), and three of our own creation. We trained these using two different data representations: a language-based representation, and a sparse temporal matrix. We used global interpretability to understand the most important features of each model, and compare how much agreement there was amongst models using Rank Biased Overlap. It is challenging to account for correlated variables without increasing the complexity of the interpretability technique. We overcame this by clustering features into groups and comparing the most important groups for each model. We found the agreement between models to be much higher when accounting for correlated variables. Our analysis emphasises the challenge of predicting sudden death and emphasises the need for better understanding and interpretation of machine learning models applied to healthcare applications. Electronic health records, sudden death, all-cause mortality ## I Introduction Worldwide each year, millions of people die suddenly and unexpectedly [1, 2, 3, 4, 5, 6, 7, 8]. Sudden death (SD) often occurs in the setting of chronic disease long before the condition has become terminal but may also be the first and only manifestation of disease, most often cardiovascular. For many individuals, SD is not preceded by any warning symptoms that could prompt clinical investigations or an admission to hospital. Current estimates of the incidence of SD vary widely because of heterogeneity in cohort inclusion criteria and how SD is defined. Moreover, in clinical practice, SD occurring out of hospital is commonly certified as due to myocardial infarction. Probably only a minority of SD are reported as such. Accordingly, it is not surprising that predicting SD is challenging. The availability of a large volume of routinely collected, longitudinal, electronic health records (EHR), including blood results, prescriptions, imaging investigations and diagnoses provides opportunities to try to predict SD, which might offer insights into underlying mechanisms and potential prevention or management strategies. However, EHRs are complex including thousands of clinical variables and many data types collected at different times and for different reasons. Without prior expert knowledge, conventional statistical models cannot cope. Despite the recent increase in the use of machine learning in healthcare, developing and deploying deep learning for risk prediction with EHRs is still an emerging field. 
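The abstract above uses Rank Biased Overlap (RBO) to quantify agreement between the ranked feature-importance lists produced by different models. As a rough illustration of that metric only — a generic, truncated RBO sketch, not the authors' implementation, and with made-up feature names — one could compute:

```python
from typing import Sequence, Hashable

def rbo_truncated(s: Sequence[Hashable], t: Sequence[Hashable], p: float = 0.9) -> float:
    """Truncated Rank Biased Overlap (Webber et al., 2010) between two ranked lists.

    Computes (1 - p) * sum_d p^(d-1) * |S_:d intersect T_:d| / d up to the length
    of the shorter list; the extrapolated variant is not computed here.
    Assumes each list contains no duplicate items.
    """
    depth = min(len(s), len(t))
    seen_s, seen_t = set(), set()
    overlap = 0        # size of the intersection of the two prefixes at depth d
    score = 0.0
    for d in range(1, depth + 1):
        x, y = s[d - 1], t[d - 1]
        if x == y:
            overlap += 1
        else:
            if x in seen_t:
                overlap += 1
            if y in seen_s:
                overlap += 1
        seen_s.add(x)
        seen_t.add(y)
        score += (p ** (d - 1)) * overlap / d
    return (1 - p) * score

# Illustrative feature rankings from two hypothetical models (names are made up).
ranking_model_a = ["age", "creatinine", "prior_mi", "loop_diuretic", "haemoglobin"]
ranking_model_b = ["age", "prior_mi", "creatinine", "haemoglobin", "statin"]
print(f"RBO = {rbo_truncated(ranking_model_a, ranking_model_b, p=0.9):.3f}")
```

Because every term in the sum is non-negative, truncating at the shorter list length gives a lower bound on the full (extrapolated) RBO; values near 1 indicate strongly agreeing rankings under the top-weighting parameter p.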
The nature of Electronic Health Records (EHRs) accumulated over more than a decade adds further challenges, since risk prediction relies on recognising long-term dependencies between sparse, irregular events such as hospitalisations, lab tests and combinations of prescriptions. Steyerberg et al. highlighted the importance of clinical utility and probability calibration for evaluating clinical prediction models [9]. Although Steyerberg et al. set guidelines for model development and validation in 2014 [10], only recently have these recommendations been extended to risk prediction models using artificial intelligence and machine learning [9, 11]. Hond et al. [9] highlighted interpretability and model transparency as prerequisites for healthcare applications [12, 13]. Interpretability highlights the importance of input features and helps identify model biases inherent in healthcare datasets [14, 15, 16, 17]. This should enhance clinicians' trust in machine learning by providing them with a model they can understand, and allow extraction of useful insights from complex data. In this paper, we use EHRs to adapt and compare five state-of-the-art learning models along with a baseline logistic regression model for the prediction of SD and other catastrophic cardiovascular (CV) events (myocardial infarction (MI), stroke, survived cardiac arrest), and all-cause mortality in people aged 50 years or older, served by NHS Greater Glasgow and Clyde, the largest health board and healthcare provider in Scotland. Three of these models are taken from existing state-of-the-art open-source models for mining EHRs [18, 19, 20], whereas the others are inspired by research in deep learning and have been developed in-house. These six models cover two different data representations: a sparse temporal matrix, and a language representation. Our population of around 380,000 comprises people aged 50 or older as of the 1st of January 2010 with a Greater Glasgow and Clyde postcode. In order to
2303.14502
VERN: Vegetation-aware Robot Navigation in Dense Unstructured Outdoor Environments
We propose a novel method for autonomous legged robot navigation in densely vegetated environments with a variety of pliable/traversable and non-pliable/untraversable vegetation. We present a novel few-shot learning classifier that can be trained on a few hundred RGB images to differentiate flora that can be navigated through, from the ones that must be circumvented. Using the vegetation classification and 2D lidar scans, our method constructs a vegetation-aware traversability cost map that accurately represents the pliable and non-pliable obstacles with lower, and higher traversability costs, respectively. Our cost map construction accounts for misclassifications of the vegetation and further lowers the risk of collisions, freezing and entrapment in vegetation during navigation. Furthermore, we propose holonomic recovery behaviors for the robot for scenarios where it freezes, or gets physically entrapped in dense, pliable vegetation. We demonstrate our method on a Boston Dynamics Spot robot in real-world unstructured environments with sparse and dense tall grass, bushes, trees, etc. We observe an increase of 25-90% in success rates, 10-90% decrease in freezing rate, and up to 65% decrease in the false positive rate compared to existing methods.
Adarsh Jagan Sathyamoorthy, Kasun Weerakoon, Tianrui Guan, Mason Russell, Damon Conover, Jason Pusey, Dinesh Manocha
2023-03-25T15:46:28Z
http://arxiv.org/abs/2303.14502v1
# VERN: Vegetation-aware Robot Navigation in Dense Unstructured Outdoor Environments ###### Abstract We propose a novel method for autonomous legged robot navigation in densely vegetated environments with a variety of pliable/traversable and non-pliable/ultraversable vegetation. We present a novel few-shot learning classifier that can be trained on a few hundred RGB images to differentiate flora that can be navigated through, from the ones that must be circumvented. Using the vegetation classification and 2D lidar scans, our method constructs a vegetation-aware traversability cost map that accurately represents the pliable and non-pliable obstacles with lower, and higher traversability costs, respectively. Our cost map construction accounts for misclassifications of the vegetation and further lowers the risk of collisions, freezing and entrapment in vegetation during navigation. Furthermore, we propose holonomic recovery behaviors for the robot for scenarios where it freezes, or gets physically entrapped in dense, pliable vegetation. We demonstrate our method on a Boston Dynamics Spot robot in real-world unstructured environments with sparse and dense tall grass, bushes, trees, etc. We observe an increase of 25-90% in success rates, 10-90% decrease in freezing rate, and up to 65% decrease in the false positive rate compared to existing methods. ## I Introduction In recent times, mobile robots have been used for many outdoor applications in agriculture [1, 2, 3] (automatic seeding, harvesting, and measuring plant and soil health), gardening [4], forest exploration, search and rescue [3], etc. Operating in such environments entails navigating in the presence of vegetation with varying height, density, and rigidity. In such dense and unstructured vegetation, the robot may not always find free space to circumvent the flora. This causes the robot to freeze [5]; a phenomenon where its planner cannot compute any collision-free velocity to reach its goal. The robot either halts or starts oscillating indefinitely, leading to collisions and not progressing to its goal. Additionally, a small wheeled or legged robot could get physically entrapped in vegetation when its wheels or legs get intertwined in dried tall grass, bushes, etc. In such cases, the robot's dynamics determines whether it can autonomously recover itself [6]. To effectively navigate such environments, the robot must assess the traversability of the various flora around it. Firstly, it must differentiate flora based on their _pliability_[7]. We define a plant's pliability as its degree of flexibility or ability to bend such that a robot can navigate through it. For instance, robots can traverse _through_ tall grass (high pliability) whereas trees are non-pliable and should be avoided. Secondly, the robot must detect the height and density of pliable vegetation since they affect the resistance offered to the robot's motion. A major challenge in differentiating pliable/traversable from non-pliable/ultraversable flora around the robot stems from sensors (e.g., laser scans, point clouds, and ultrasound) detecting all plants as solid obstacles [8]. Furthermore, vegetation such as tall grass scatters laser beams from lidars and leads to poor characterization of their shape and structure [9]. In RGB images, the shape and structure of various plants are accurately represented. However, there is a lack of research on differentiating flora based on their pliability from images. 
There are several works in computer vision to detect and segment simple vegetation such as trees, and short grass [2, 12, 13]. However, such works require extensive datasets with intricate annotations to train, requiring significant human effort. Additionally, segmenting traversable versus untraversable regions in dense vegetation is still an unsolved problem [14]. Navigation methods for outdoor domains have dealt with flora such as tall grass [8], trees, etc. in isolation. Such methods do not consider scenarios where plants of all pliability/traversability are in close proximity. Therefore, such methods cause the robot to collide or get entrapped, and do not have autonomous mechanisms to recover in such environments. Navigation methods based on end-to-end learning [8, 15] require real-world negative experiences such as collisions with non-pliable obstacles for training. Fig. 1: Comparison of VERN with other methods navigating a Spot robot through a complex environment (scenario 3) with traversable (e.g. tall grass), and untraversable (e.g. tree, bush) vegetation. In this trial, we observe that only VERN successfully reaches its goal due to its vegetation classification, height estimation, and novel cost map clearing scheme. Other methods either collide (GrASPE [10], Spot’s in-built autonomy), or freeze (DWA [11], GA-Nav [12]) due to the density of vegetation. This would be impractical for highly unstructured regions where rates of collision and entrapment are high. **Main Contributions:** To address these limitations and navigate in densely vegetated environments, we present VERN (**VE**getation-aware **R**obot **N**avigation) using multi-modal perception sensors. The novel components of our work include: * A novel classifier to differentiate vegetation of various pliability/traversability from RGB images. We propose a novel few-shot learning approach and a siamese network framework to train our classifier with only a few hundred images of different kinds of vegetation (tall grass, bushes/shrubs, and trees) with minimal human labeling. This is a significant decrease in the required dataset size and human effort compared to existing methods for outdoor terrain perception. * A novel method to compute a _vegetation-aware_ traversability cost map by accounting for estimated vegetation height, pliability classification, and the classification confidence. Our method accounts for misclassifications in the vegetation classifier and leads to a more accurate representation of the traversable vegetation in the robot's surroundings in the cost map. This leads to an improvement of 25-90% in terms of success rate and a decrease of up to 65% false positive rate. * A local planner that produces cautious navigation behaviors when in highly unstructured vegetation or regions with low confidence pliability classification. Additionally, the planner initiates novel holonomic behaviors to recover the robot if it freezes or gets physically entrapped in dense vegetation. This improves the success rate by up to 50%, and the freezing rate by 75%. ## II Related Work In this section, we discuss previous works in outdoor navigation and perception of unstructured vegetation. ### _Perception in Dense Vegetation_ Early works on vegetation detection were typically chlorophyll detectors [16, 17, 18], or basic classification models [19]. Nguyen et al. [17] proposed a sensor setup to detect near-IR reflectance and used it to define a novel vegetation index. 
[18] extended this detection by using an air compressor device to create strong winds and estimating the levels of resistance to robot motion using the movement of vegetation. However, it needed the robot to be static. Multiple sensor configurations have also been explored for obstacle detection within vegetation [20] of which thermal camera and RADAR stand out as effective modalities. Modeling the frictional/lumped-drag characteristics of vegetation [21, 7, 22] as a measure of the resistance offered to motion, or modeling plant stems using their rotational stiffness and damping characteristics [23] has also been proposed. However, the method does not use visual feedback, requiring the robot to drive through vegetation first to gauge its pliability. Wurm et al. [24, 25] presented methods to detect short grass from lidar scans based on its reflective properties on near IR light. Astolfi et al. [26] demonstrated accurate SLAM and navigation through sensor fusion in a vineyard. However, such methods operated in highly structured environments. There are several works that utilize semantic segmentation to understand terrain traversability [12, 13, 27]. [27] applied semantic segmentation classifier trained on RGB images to oct-tree maps obtained from RGB-D point cloud. Maturana et al. [13] took a similar approach by training a segmenter and augmenting a 2.5D grid map with it to distinguish tall grass from regular obstacles. However, these existing methods are not suitable for perception in dense, unstructured vegetation. ### _Navigation in Unstructured Vegetation_ Although there are many works on outdoor, off-road navigation to handle slopes [28] and different terrain types [29], there have been only a few methods for detecting and navigating through vegetation. [30] proposed using a terrain gradient map along with the A* algorithm to navigate a large wheeled robot and demonstrated moving over tall grass and short bushes. [31] extended this by adding a segmentation layer to the map to detect soft obstacles. However, these methods mostly operate in structured, isolated vegetation and may not work well in unstructured scenarios. There are several works that have addressed navigating through pliable vegetation such as tall grass [8, 15]. Kahn et al. [8] demonstrated a model that learns from a robot's real-world experiences such as collisions, bumpiness, and its position to navigate outdoor environments. The robot learned from RGB images and associated experiences (labels) to consider tall grass as traversable. However, it does not account for the presence of non-traversable bushes or trees alongside traversable vegetation. Polevoy et al. [15] proposed a model that regresses the difference between the robot's dynamics model and its actual realized trajectory in unstructured vegetation. This acts as a measure of the surrounding vegetation and terrain's traversability. However, both these methods require "negative" examples such as collisions during training, which may be impractical or dangerous to collect in the real world. With the advent of legged robots, several recent works have been focused on developing robust controllers for locomotion purely using proprioception [32] or fusing it with exteroceptive perception such as elevation maps [33]. A few navigation works focus on estimating the underlying support surface that is hidden by vegetation [34] by fusing haptic feedback from the robot, depth images, and detecting the height of the vegetation. 
Our method is complementary to these existing works. ## III Background In this section, we provide our problem formulation, briefly explain siamese networks used for vegetation classification, and the Dynamic Window Approach (DWA) [11] used as a basis for our navigation. Our overall system architecture is shown in Fig. 2. ### _Setup and Conventions_ We assume a legged robot setup with holonomic dynamics equipped with a 3D lidar (that provides point clouds and 2D laser scans) and an RGB camera. Certain quantities are measured relative to a global coordinate frame (referred to as odom hereafter) with the robot's starting location as its origin. Some quantities are relative to a cost map grid moving with and centered at the robot. The quantities can be transformed between these frames using transformation matrices \(T^{O}_{C}\) or \(T^{C}_{O}\). The reference frame for each quantity is indicated in the superscript (\(O\) or \(C\)) of its symbol. We denote vectors in bold, and use symbols such as i, j, row, col as indices. Apart from its pose w.r.t odom, the robot does not have access to any global information. ### _Problem Domain_ We consider navigation in environments containing three kinds of vegetation together: 1. Tall grass, 2. Bushes/shrubs, and 3. Trees. We make the following assumptions about the three types. #### Iii-B1 Properties of Vegetation Tall grass is pliable (therefore traversable), with height \(>0.3m\) (could be taller than the height the robot's sensors are mounted at), and could be of varying density in the environment. Bushes/shrubs (height \(0.1-0.5m\)), and trees (height \(>2m\)) are non-pliable/untraversable and must be circumvented. #### Iii-B2 Adverse Phenomena Navigating through such vegetation, the robot could encounter three adverse phenomena: 1. Freezing, 2. Physical entrapment in vegetation, and 3. Collisions. The conditions for these phenomena are, 1. Freezing: \(V_{r}=\varnothing\), 2. Entrapment: \((v^{*},\omega^{*})\neq(0,0)\) but \(\Delta x^{O}_{rob}=\Delta y^{O}_{rob}=\Delta\theta^{O}_{rob}=0\), 3. Collisions: \(dist(\text{robot},\{\text{bushes/trees}\})=0\). Here, \(V_{r}\) refers to the robot's feasible, collision-free velocity space. \((v^{*},\omega^{*})\) is the velocity commanded by the robot's planner, and \(\Delta x^{O}_{rob},\Delta y^{O}_{rob},\Delta\dot{\theta}^{O}_{rob}\) refer to the robot's change in position and yaw orientation relative to the global odom frame. Freezing occurs in densely vegetated environments if the planner cannot find any collision-free velocity to reach its goal. To avoid this, the robot must accurately identify pliable vegetation to navigate through. The robot could also get physically entrapped in dense vegetation if its legs are caught in grass, small branches of bushes, etc. Finally, the robot could collide if its planner misclassifies a non-pliable obstacle as pliable. With these definitions, VERN's problem formulation can be stated as follows: **Formulation III.1**.: _To compute a robot velocity \((v^{*},\omega^{*})\) such that \(traj^{O}(v^{*},\omega^{*})\in Free^{O}\cup Grass^{O}\), \(traj^{O}(v^{*},\omega^{*})\notin Tree^{O}\cup Bush^{O}\), assuming that \(Tree^{O}\cap Grass^{O}=Tree^{O}\cap Bush^{O}=Bush^{O}\,\cap\,Grass^{O}=\varnothing\)._ Here, \(Tree^{O},Bush^{O},Grass^{O}\) represent the sets of locations containing trees, bushes, and grass, respectively that were detected by the robot w.r.t the odom frame. \(Free^{O}\) represents free space locations. 
\(traj^{O}()\) returns the forward-simulated trajectory of a \((v,\omega)\) pair relative to the odom frame. We assume that the robot can balance and walk on all the terrains we consider. We also assume that the robot never encounters a scene with no free space and only non-pliable vegetation (an unresolvable scenario). ### _Siamese Networks_ Siamese networks [35] are a class of neural networks used for one-shot or few-shot learning (learning with very few samples). They use two sub-networks with identical structures, parameters, and weights, and are connected at the end using an energy function (e.g., L2 norm). They accept two distinct images as inputs for the two sub-networks and are trained to map similar images close to each other in a low-dimensional latent space. Dissimilar images have latent representations that are mapped far away from each other. ### _Dynamic Window Approach_ Our planner adapts the Dynamic Window Approach (DWA) [11] to perform navigation. We represent the robot's actions as linear and angular velocity pairs \((v,\omega)\). Let \(V_{s}=[[0,v_{max}],[-\omega_{max},\omega_{max}]]\) be the space of all the possible robot velocities based on the maximum velocity limits. DWA considers two constraints to obtain dynamically feasible and collision-free velocities: (1) The dynamic window \(V_{d}\) contains the reachable velocities during the next \(\Delta t\) time interval based on acceleration constraints; (2) The admissible velocity space \(V_{a}\) includes the collision-free velocities. The resulting velocity space \(V_{r}=V_{s}\cap V_{d}\cap V_{a}\) is utilized to calculate the optimal velocity pair \((v^{*},\omega^{*})\) by minimizing the objective function below, \[Q(v,\omega)=\sigma\big{(}\gamma_{1}.head(.)+\gamma_{2}.obs(.)+\gamma_{3}.vel(. )\big{)}, \tag{1}\] where \(head(.)\), \(obs(.)\), and \(vel(.)\) are the cost functions [11] to quantify a velocity pair's ((\(v,\omega\)) omitted on RHS for readability) heading towards the goal, distance to the closest obstacle in the trajectory, and the forward velocity of the robot, respectively. \(\sigma\) is a smoothing function and \(\gamma_{i},(i=1,2,3)\) are adjustable weights. ## IV VERN: Vegetation Classification Our vegetation classifier uses an RGB image (\(I^{RGB}_{t}\)) obtained from a camera on the robot at a time \(t\) as input. Although a plant's structure is well preserved in an image, using the entire image for classification is infeasible because the scene in \(I^{RGB}_{t}\) typically contains two or more types of vegetation together. Therefore, we split \(I^{RGB}_{t}\) into four quadrants \(Q_{1}\) (top-left), \(Q_{2}\) (top-right), \(Q_{3}\) (bottom-left), \(Q_{4}\) (bottom-right) as shown in Figs. 3, 4. A quadrant predominantly contains a single type of vegetation typically (see Fig. 4). We classify vegetation in the quadrants into four classes: 1. sparse grass, 2. dense grass, 3. bush, and 4. tree. We separate sparse and dense grass due to their visual dissimilarity. Fig. 2: VERN’s overall system architecture for vegetation classification to estimate plausibility/traversability. Our model uses classification, its confidence, and vegetation height to create a vegetation-aware cost map \(C_{VA}\) used for planning. In high traversability cost regions, our planner executes cautious behaviors by limiting the robot’s velocity space. If the robot freezes or gets entrapped, it executes holonomic recovery behaviors. 
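As a concrete illustration of the DWA-style search behind Eq. (1) above, the following is a minimal, self-contained velocity-sampling loop. The cost terms and all constants are simplified stand-ins chosen for readability, not the exact \(head(.)\), \(obs(.)\), \(vel(.)\) functions or weights used by VERN:

```python
import numpy as np

def dwa_select_velocity(pose, goal, obstacles, v_now, w_now,
                        v_max=1.0, w_max=1.0, a_v=0.5, a_w=1.0,
                        dt=0.1, horizon=2.0, gammas=(0.8, 0.2, 0.1)):
    """Pick (v*, w*) by minimising a DWA-style cost g1*head + g2*obs + g3*vel.

    'pose' is (x, y, theta); 'obstacles' is an (M, 2) array of obstacle points.
    All cost terms and constants here are illustrative simplifications of Eq. (1).
    """
    g1, g2, g3 = gammas
    # Dynamic window V_d: velocities reachable within dt under acceleration limits,
    # intersected with the admissible velocity space V_s.
    v_lo, v_hi = max(0.0, v_now - a_v * dt), min(v_max, v_now + a_v * dt)
    w_lo, w_hi = max(-w_max, w_now - a_w * dt), min(w_max, w_now + a_w * dt)

    best, best_cost = (0.0, 0.0), np.inf
    for v in np.linspace(v_lo, v_hi, 7):
        for w in np.linspace(w_lo, w_hi, 15):
            # Forward-simulate the trajectory for this (v, w) pair.
            x, y, th = pose
            traj = []
            for _ in range(int(horizon / dt)):
                x += v * np.cos(th) * dt
                y += v * np.sin(th) * dt
                th += w * dt
                traj.append((x, y))
            traj = np.asarray(traj)
            # Heading error at the trajectory end point, wrapped to [-pi, pi].
            heading_err = np.arctan2(goal[1] - y, goal[0] - x) - th
            head = np.abs((heading_err + np.pi) % (2 * np.pi) - np.pi)
            # Minimum clearance to any obstacle point along the trajectory.
            clearance = np.min(np.linalg.norm(obstacles[None, :, :] - traj[:, None, :], axis=-1))
            if clearance < 0.2:  # treat near-collisions as inadmissible
                continue
            cost = g1 * head + g2 * (1.0 / clearance) + g3 * (v_max - v)
            if cost < best_cost:
                best, best_cost = (v, w), cost
    return best

# Toy usage with made-up numbers: start at the origin facing +x, goal ahead-left,
# one obstacle point directly in front of the robot.
pose, goal = (0.0, 0.0, 0.0), (3.0, 1.0)
obstacles = np.array([[1.5, 0.0]])
print(dwa_select_velocity(pose, goal, obstacles, v_now=0.5, w_now=0.0))
```

If no sampled velocity survives the clearance check, the sketch returns (0, 0), which corresponds to the freezing condition described in Sec. III-B2.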
### _Data Preparation_ To create the training dataset, images collected from different environments are split into quadrants and grouped together manually based on the vegetation type predominant in a quadrant. Next, pairs of images are created either from the same group or from different groups. The pair with similar images (same group) is automatically labeled 1, or 0 otherwise. The entire data preparation process takes about 30-45 minutes of manual effort. The obtained image pairs and the corresponding labels are passed into the classification network for training. ### _Network Architecture_ Our vegetation classifier network consists of two identical feature extraction branches to identify the similarity between input image pairs. Our feature extraction branches are based on the MobileNetv3 [36] backbone. We choose MobileNetv3 because it incorporates depth-wise separable convolutions (i.e., fewer parameters), leading to a comparatively lightweight and fast neural network. The outputs of the MobileNetv3 branches (\(h_{1}\) and \(h_{2}\)) are one-dimensional latent feature vectors of the corresponding input images. The euclidean distance between the two feature vectors is calculated and passed through a \(\mathit{sigmoid}\) activation layer to obtain the predictions. We utilize the _contrastive loss_ function, which is capable of learning discriminative features to evaluate our model during training. Let \(\hat{y}\) be the prediction output from the model. The contrastive loss function \(J\) is: \[J=\hat{y}\cdot d^{2}+(1-\hat{y})\cdot max(margin-d,0)^{2}, \tag{2}\] where \(d\) is the euclidean distance between the feature vectors \(h_{1}\) and \(h_{2}\). \(Margin\) is used to ensure that dissimilar image pairs are at least \(margin\) distance apart. ### _Network Outputs_ During run-time, the quadrants \(Q_{1},Q_{2},Q_{3},\) and \(Q_{4}\) are each paired with several reference images of the four classes. The several reference images per class have different viewpoints and lighting conditions. They are fed into the classifier model \(\mathcal{F}\) as a batch to obtain predictions as, \[\mathcal{F}(Q_{1},Q_{2},Q_{3},Q_{4})=\tilde{V}_{4\times 4}|\ (\tilde{V}_{ij} \in[0,1])\ i,j\in\{1,2,3,4\}. \tag{3}\] \(\tilde{V}_{4\times 4}\) is the output prediction matrix whose values \(\tilde{V}_{i,j}\) correspond to the _least_ euclidean distance of the quadrant \(Q_{i}\) from any of the reference images for the \(j^{th}\) class. The closer \(\tilde{V}_{i,j}\) is to 0, the higher the similarity between \(Q_{i}\) and class \(j\). We extract two outputs from \(\tilde{V}_{4\times 4}\). Namely, for each quadrant \(Q_{i}\): 1. the vegetation class that is most similar to it (\(\tilde{v}_{i}\)), and 2. the corresponding similarity score (\(d_{i}\)): \[\tilde{v}_{i}=\operatorname*{argmin}_{j}\tilde{V}_{i,j},\text{ and }d_{i}= \tilde{V}_{i,j}. \tag{4}\] For readability, hereafter we denote \(\tilde{v}_{i}\) as belonging to the Pliable Vegetation (PV) set if \(\tilde{v}_{i}=1\) or \(2\) (sparse/dense grass), and to the Non-Pliable Vegetation (NPV) set if \(\tilde{v}_{i}=3\) or \(4\) (bush/tree). We define the confidence of classification \(\kappa_{i}\) as, \[\kappa_{i}=e^{-\alpha\cdot d_{i}}, \tag{5}\] where \(\alpha\) is a tunable parameter. We observe that \(d_{i}\to 0\implies\kappa_{i}\to 1\), and vice versa. ## V VERN: Navigation in Dense Vegetation We use occupancy grids/cost maps generated using 2D lidar scans to detect obstacles around the robot. 
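Before turning to the cost-map construction below, here is a small numerical sketch of the classifier-side quantities defined above: the pairwise contrastive loss of Eq. (2) and the nearest-reference class assignment with confidence \(\kappa_{i}=e^{-\alpha d_{i}}\) of Eqs. (4)-(5). The embedding network itself (the MobileNetv3 branches) is omitted; the embeddings, class names, and constants below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def contrastive_loss(y_pair: float, d: float, margin: float = 1.0) -> float:
    """Form of Eq. (2): y*d^2 + (1 - y)*max(margin - d, 0)^2 for one image pair.

    y_pair is 1 for a similar pair and 0 for a dissimilar pair; d is the
    Euclidean distance between the two latent feature vectors h1 and h2.
    """
    return y_pair * d**2 + (1 - y_pair) * max(margin - d, 0.0) ** 2

def classify_quadrant(h_quadrant: np.ndarray, reference_embeddings: dict, alpha: float = 2.0):
    """Assign a quadrant to the class of its nearest reference embedding (Eq. (4))
    and compute the confidence kappa = exp(-alpha * d) of Eq. (5).

    'reference_embeddings' maps a class name to an array of latent vectors for
    that class (several reference views per class, as described above).
    """
    best_class, best_d = None, np.inf
    for cls, refs in reference_embeddings.items():
        d = np.min(np.linalg.norm(refs - h_quadrant, axis=1))  # least distance to any reference
        if d < best_d:
            best_class, best_d = cls, d
    kappa = np.exp(-alpha * best_d)
    return best_class, kappa

# Toy example with made-up 4-dimensional embeddings.
rng = np.random.default_rng(0)
refs = {c: rng.normal(size=(5, 4)) for c in ["sparse_grass", "dense_grass", "bush", "tree"]}
h_q1 = refs["dense_grass"][0] + 0.05 * rng.normal(size=4)
print(classify_quadrant(h_q1, refs))
print(contrastive_loss(1, 0.2), contrastive_loss(0, 0.2))
```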
Our cost map can be defined as, \[C_{z}(row,col)=\{p\ |\ p\in\{0,100\}\}. \tag{6}\] Here, z is the height (along the robot's Z-axis) at which the 2D scan is recorded on a plane parallel to the ground. \(p\) is a binary variable representing if an obstacle is present at \((row,col)\) of the cost map. In a densely vegetated environment, \(C_{z}\) contains obstacles in all the locations where the lidar's scans have a finite proximity value. However, it does not account for the pliability of certain types of vegetation, which makes them passable and therefore not true obstacles. Therefore, we use the classification results, its confidence (Section IV-C), and estimated vegetation height to augment our cost map prior to navigation. ### _Multi-view Cost Maps_ To estimate the height of the environmental obstacles, we use three cost maps of equal dimensions \(C_{low},C_{mid}\), and \(C_{high}\) corresponding to 2D scans from three different heights in 3D lidar's point clouds. The cost maps correspond to a height lower, equal to, and higher than the height at which the robot's 3D lidar is mounted, respectively. \(C_{low}\) contains obstacles of all heights around the robot. \(C_{mid}\) and \(C_{high}\) contain taller obstacles such as tall grass, trees, walls, humans, etc. Using only three views instead of projecting the entire point cloud onto a 2D plane reduces the computation burden. It also helps identify truly tall obstacles and avoids misclassifying overhanging leaves from trees as tall obstacles. We consider tall obstacles as _critical obstacles_ due to the high probability of them being solid obstacles such as trees and walls. To identify critical obstacles, we perform the element-wise sum operation as follows, \[C_{crit}=C_{high}\bigoplus C_{mid}\bigoplus C_{low}. \tag{7}\] \(C_{crit}(row,col)\in\{0,100,200,300\}\), and regions with higher costs contain critical obstacles. Fig. 3: VERN’s classification network architecture. Quadrants of the real-time camera image are paired with several reference images for each class and fed into the two identical branches of our network. Example reference images for each class are shown at the bottom. ### _Vegetation-aware Cost Map_ To combine \(\tilde{v}_{i}\), \(\kappa_{i}\) and \(C_{crit}\), we must correlate the regions viewed by quadrants \(Q_{1,2,3,4}\) in \(I_{t}^{RGB}\) with the corresponding regions in \(C_{crit}\). #### V-B1 Homography To this end, we apply a homography transformation \(H\) to project \(I_{t}^{RGB}\) onto the cost map. Let \(reg()\) denote a function that returns the real-world \((x,y)\) coordinates corresponding to a region of a cost map or image. We obtain the image quadrant relative to the cost map (see green/red rectangles in Fig. 4 bottom) as, \[Q_{i}^{C}=\{(row,col)|reg(C(row,col))=reg(H(Q_{i}))\}. \tag{8}\] #### V-B2 Cost Map Clearing We now use \(\tilde{v}_{i}\), \(\kappa_{i}\), \(C_{crit}\), and \(Q_{i}^{C}\) to clear/modify the costs of the grids in \(C_{low}\). We choose to modify and plan over \(C_{low}\) since it detects obstacles of all heights. First, we calculate the normalized height measure (between 0 to \(\pi/2\)) of the obstacles in each quadrant \(i\) using \(C_{crit}\) as, \[h_{i}=mean(C_{crit}(Q_{i}^{C}))/c_{max}\cdot\frac{\pi}{2}, \tag{9}\] where, \(c_{max}=300\) the maximum value in \(C_{crit}\). 
Next, in each quadrant \(Q_{i}^{C}\) of \(C_{low}\) we modify the cost as, \[C_{VA}(Q_{i}^{C})=C_{low}(Q_{i}^{C})\cdot\frac{clear(\kappa_{i},h _{i})}{max(clear(\kappa_{i},h_{i}))} \tag{10}\] \[clear()=\begin{cases}w_{s\,\text{\scriptsize$\sigma$\,d$}}\cdot( 1-\kappa_{i})+\frac{2\,h_{i}}{\pi}&\text{if}\,\tilde{v}_{i}\,\text{is PV}\\ (w_{NPV}\cdot\kappa_{i}+b_{NPV})+\sin(h_{i})&\text{if}\,\tilde{v}_{i}\,\text{ is NPV}.\end{cases} \tag{11}\] Here, \(C_{VA}\) is a vegetation-aware cost map, \(w_{s},w_{d},w_{NPV}\), and \(b_{NPV}\) are positive constants satisfying the condition \(w_{d}>w_{s}\), and \(b_{NPV}>w_{d}+1\). The weights \(w_{s}\) and \(w_{d}\) are used when \(\tilde{v}_{i}\) is sparse and dense grass respectively. We incorporate \(sin(.)\) for NPV to ensure that the significantly tall obstacles have higher costs and differentiable cost values from short obstacles. In contrast, we consider a linearly varying cost function w.r.t. the vegetation height for PV since the resistance to the robot from pliable tall objects is correlated with their height. We note that \(\sin(h_{i})\geq\frac{2\cdot h_{i}}{\pi}\) for \(h_{i}\in[0,\pi/2]\). For PV classifications, low confidence and tall vegetation lead to higher navigation costs. Intuitively, this leads to the robot preferring to navigate through high-confidence, and short pliable vegetation whenever possible. Conversely, for NPV classifications, high confidence, and tall vegetation lead to higher costs since such regions should definitely be avoided. Accounting for low-confidence classifications in the formulation helps handle misclassifications and assign costs accordingly. **Proposition V.1**.: _The clear() function modifies \(C_{low}\) such that traversability costs in PV regions are always lower than in NPV regions._ Proof.: The maximum cost for PV is \(w_{d}+1\) (when \(\kappa_{i}\to 0\) and \(h_{i}\rightarrow\pi/2\)), and the minimum cost for NPV is \(b_{NPV}\) (when \(\kappa_{i}\to 0\) and \(h_{i}\to 0\)). Conditions \(w_{s}<w_{d}\), and \(b_{NPV}>w_{d}+1\Longrightarrow\max\) cost of PV \(<\min\) cost of NPV. Therefore, in the absence of free space, the robot always navigates through PV and avoids NPV regions. Therefore, regions with PV will always be preferred for navigation. ### _Cautious Navigation_ We adapt DWA (section III-D) to use our vegetation-aware cost map for robot navigation. We calculate the obstacle cost (obs(.)) associated with every \((v,\omega)\) pair by projecting its predicted trajectory \(traj^{C}(v,\omega)\) relative to the cost map over \(C_{VA}\) and summing as, \[\mathit{obs}(v,\omega)=\sum_{(row,col)\in traj^{C}(v,\omega)}C_{VA}(row,col). \tag{12}\] Next, we compute the total cost \(Q(v,\omega)\) (equation 1). The \((v,\omega)\) that minimizes this cost is used for navigation. In some cases, the robot might have to navigate a region with high traversability cost (represented say in \(Q_{i}^{C}\)) which typically occurs with NPV. To imbibue cautious navigation behaviors for such scenarios, we stunt the robot's complete velocity space as \(V_{s}=[[0,\kappa_{i}\cdot v_{max}],\kappa_{i}\cdot[-\omega_{max},\omega_{ max}]]\). ### _Recovery Behaviors_ In highly dense vegetation or in regions with low confidence classifications, the robot could still freeze or get physically entrapped. We observe that Spot's high degrees of freedom and superior dynamics allows it to recover itself from such situations if it avoids rotational motions. 
Therefore, we propose holonomic recovery behaviors to recover Spot from such situations. To this end, the robot periodically stores \(\mathit{safe}^{O}\) locations relative to the odom frame whenever the conditions for freezing or entrapment (section III-B2) are not satisfied. If the conditions are satisfied, the robot stores its current location into an \(\mathit{unsafe}^{O}\) location list and chooses a safe location such that, \[\mathit{unsafe}^{C}=T_{C}^{C}\cdot\mathit{unsafe}^{O}, \tag{13}\] \[C_{VA}(unsafe^{C})=\infty, \tag{14}\] \[\textbf{p}_{rec}^{O}=\underset{\textbf{p}^{O}\in safe}{\text{ argmin}}\left(dist(\textbf{g}^{O},\textbf{p}^{O})\right),\] \[s.t\ (\textbf{p}^{O}-\textbf{p}_{rob}^{O})\in Free^{O}\cap Grass^{O}, \tag{15}\] \[[v_{x},v_{y}]=k_{p}\cdot[\textbf{p}_{rec}-\textbf{p}_{rob}]. \tag{16}\] Here, \(\textbf{p}_{rec}^{O}\), \(\textbf{g}^{O}\), and \(\textbf{p}_{rob}^{O}\) are the safe location chosen to recover to, the robot's goal and current location. The condition in equation 15 ensures that the line connecting the robot and the recovery point only lies in traversable regions. Once the robot recovers, it proceeds to its goal after marking the \(\mathit{unsafe}\) location with high costs in \(C_{VA}\) to circumvent it. ## VI Results and Evaluations We detail our method's implementation and experiments conducted on a real robot. Then, we perform ablation studies and comparisons to highlight VERN's benefits. ### _Implementation and Dataset_ VERN's classifier is implemented using Tensorflow 2.0. It is trained in a workstation with an Intel Xeon 3.6 GHz processor and an Nvidia Titan GPU using real-world data collected from a manually teleoperated Spot robot. The robot is equipped with an Intel NUC 11 (a mini-PC with Intel i7 CPU and NVIDIA RTX 2060 GPU), a Velodyne VLP16 LiDAR, and an Intel RealSense L515 camera. The training images (\(\sim 600\) per class) were collected from various environments in the University of Maryland, College Park campus. Our model takes about 6 hours to train for 55 epochs. ### _Evaluations_ We use the following evaluation metrics to compare VERN's performance against several navigation methods: 1. Boston Dynamics' in-built autonomy on Spot, 2. DWA [11], 3. GA-Nav [12], and 4. GrASPE [10]. Spot's in-built autonomy uses its stereo cameras in four directions around the robot, estimates obstacles, and navigates to a goal. DWA is a local planner that utilizes a 2D LiDAR scan for obstacle avoidance. GA-Nav combines semantic segmentation for terrain traversability estimation with elevation maps for outdoor navigation. It is trained on publically available image datasets (RUGD [37] and RELLIS-3D [38]). GrASPE is a multi-modal fusion framework that estimates perception sensor reliability to plan paths through unstructured vegetation. We further compare VERN without height estimation and recovery behaviors. The metrics we use for evaluations are, * The number of successful goal-reaching attempts (while avoiding non-pliable vegetation and collisions) over the total number of trials. * The number of times the robot got stuck or started oscillating for more than 5 seconds while avoiding obstacles over the total number of attempts. Lower values are better. * The ratio between the robot's trajectory length and the straight-line distance to the goal in all the successful trajectories. 
* The ratio between the number of false positive predictions (i.e., actually untraversable/non-pliable obstacles predicted as traversable) and the total number of actual negative (untraversable) obstacles encountered during a trial. We report the average over all the trials. If the sum of the success and freezing rates do not equal 100, it indicates that the robot has collided in those cases. We also compare these methods qualitatively using the trajectories pursued while navigating. For perception comparisons, we quantitatively evaluate the accuracy and F-score of MobileNetv3 [36], EfficientNet [39], and Vision Transformer [40] when trained and evaluated on our dataset. We also show VERN's classification results and cost map clearing in Fig. 4. ### _Navigation Test Scenarios_ We compare the performance of all the navigation methods in the following real-world outdoor scenarios (see Figs. 1 and 5) that differ from the training environments. The scenarios are described in increasing order of difficulty. Ten trials are conducted in each scenario for each method. * Contains sparse tall grass, and trees. * Contains trees and dense tall grass in close proximity. The robot must identify grass and pass through it to reach its goal successfully. * Contains trees, bushes, and dense grass in close proximity. The robot must identify grass and pass through it. See Fig. 1 Fig. 4: Snapshots of the vegetation classification results on RGB images (top), and the corresponding cost maps \(C_{VA}\) marked with critical obstacles (bottom) from trials in scenarios 2 (left), 3 (center), and 4 (right). A green/red rectangle in a quadrant \(Q_{1,2,3,4}\) in the RGB image represents a PV/NPV classification respectively. The same quadrants are projected onto the cost maps (\(Q_{1,2,3,4}^{C}\)) along with their classifications for visualization. The cost for the vegetation within the green regions is reduced significantly using equation 11. The encircled regions in each cost map correspond to an obstacle in the RGB image. [**Left**]: White - Tree, Blue - Tall grass, [**Center**]: White - Tree, Blue - Bush, [**Right**]: White - Tree, Blue - Human. Our classifier accurately detects trees, and bushes as non-pliable/untraversable, and tall grass as traversable. In scenario 4, it accurately detects the human as untraversable due to its dissimilarity with all training classes and tall height. Fig. 5: Spot robot navigating in: [**Left**] Scenario 1, [**Center**] Scenario 2, and [**Right**] Scenario 4. We observe that VERN navigates through pliable/traversable vegetation in the absence of free space. This leads to significantly higher success rates, low freezing rates, and trajectory lengths. Other methods either freeze or take long, meandering trajectories to the goal. - Contains dense grass, trees, and an obstacle (human) unseen during training. ### _Analysis and Discussion_ **Classification Accuracy:** We observe from Table II that MobileNetv3 has the best accuracy and F-score compared to other methods. EfficientNet is difficult to train and tune especially due to longer training time requirements. Similarly, ViT requires a lot more data for training and is not compatible with the siamese network framework. **Navigation Comparison:** The quantitative navigation results are presented in Table I. We observe that VERN's (with height estimation used in equation 11 and recovery behaviors) success rates are significantly higher compared to the other methods. Spot's in-built planner and DWA consider all vegetation as obstacles. 
This leads to Spot becoming unstable or crashing when using its in-built planner and leads to excessive freezing and oscillations when using DWA. GA-Nav considers all vegetation (except trees) as partially traversable because it segments most flora as grass. It also cannot differentiate vegetation due to the lack of such precisely human-annotated datasets. Additionally, in cases where vegetation like tall grass partially occludes the RGB image (see scenario 2 in Fig. 5 [center]), GA-Nav struggles to produce segmentation results leading to the robot freezing or oscillating. In regions with tall grass, GA-Nav's elevation maps also become erroneous. GrASPE performs the best out of all the existing methods. It is capable of passing through sparse grass and avoiding untraversable obstacles. However, in highly dense grass that is in close proximity to trees, GrASPE is unable to accurately detect and react quickly to avoid collisions with trees. Since some existing methods do not reach the goal even once, their trajectory lengths are less than 1. The values are still reported to give a measure of their progress toward the goal. Spot's in-built planner progress the least before the robot froze or crashed. In some cases, the methods take meandering trajectories (e.g. DWA) in scenario 1 before reaching the goal. Notably, VERN deviates the least from the robot's goal and that reflects in the low trajectory length. **FPR:** We compare VERN's vegetation classifier's false positive rate (FPR) with GA-Nav and the EfficientNet-based classifier using manual ground truth labeling of the vegetation in the trials. We observe that GA-Nav leads to a significantly high FPR in all four scenarios primarily because GA-Nav's terrain segmentation predictions are trained on the RUGD dataset. Moreover, GA-Nav's incorrect segmentation under varying lighting conditions and occlusions increases the false positive predictions. GrASPE and the EfficientNet-based classifier are trained using the same images we used to train VERN. However, we observe that their accuracy is comparatively lower than our VERN model. This is primarily due to the fine-grained feature learning capabilities of VERN's MobileNetv3-based backbone. Additionally, GrASPE's predictions lead to erroneous results when its 3D point cloud cannot identify the geometry of the vegetation such as trees and bushes under visually cluttered instances. **Ablation study**: We compared VERN and its variants without using height estimation, and recovery behaviors. We observe that when the height estimation is not used for cost map clearing, the robot's success rate drops. This is mainly because height helps differentiate short and tall pliable vegetation (where the robot could freeze). Additionally, estimating the critical regions based on the height helps avoid obstacles such as humans (Scenario 4) which are not part of our classifier. Using recovery behaviors helps reduce freezing in the presence of dense obstacles (especially in scenarios 2, 3, and 4). Additionally, moving the robot to a safe location, and marking and remembering the unsafe region allows the robot to preemptively avoid the region in subsequent trials. ## VII Conclusions, Limitations and Future Work We present a novel algorithm to navigate a legged robot in densely vegetated environments with various traversability. We utilize a few-shot learning classifier trained on a few hundred quadrants of RGB images to distinguish vegetation with different pliability with minimal human annotation. 
This classification model is combined with estimated vegetation height (from multiple local cost maps), and classification confidence using a novel cost map clearing scheme. Using the resulting vegetation-aware cost map, we deploy a local planner with recovery behaviors to save the robot if it freezes or gets entrapped in dense vegetation. Our algorithm has a few limitations. Our primary navigation assumes non-holonomic robot dynamics to utilize DWA's dynamic feasibility guarantees. However, the legged robot dynamics are holonomic and further investigation is required to extend our method to relax its action constraints. We assume that the different kinds of vegetation are not intertwined with one another. This may not be the case in highly forested environments. VERN could lead to collisions in the presence of thin obstacles such as branches that are \begin{table} \begin{tabular}{|c|c|c|} \hline **Methods** & **Accuracy** & **F-score** \\ \hline MobileNetv3 [36] & **0.957** & **0.833** \\ \hline EfficientNet [39] & 0.758 & 0.632 \\ \hline Vision Transformer (ViT) [40] & 0.689 & 0.546 \\ \hline \end{tabular} \end{table} TABLE II: The accuracy and F-scores (higher values are better) of three feature-extracting backbones used to train our dataset. We observe that our MobileNetv3 has the best accuracy and F-score because of its depth-wise separable convolutions. This leads to faster and better learning. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **Metrics** & **Method** & **Scenario** & **Scenario** & **Scenario** \\ \cline{2-5} & **1** & **2** & **3** & **4** \\ \hline \multirow{4}{*}{\begin{tabular}{c} **Success** \\ **Rate (\%)** \\ (biter)** \\ \end{tabular} } & 30 & 0 & 0 & 0 \\ & DWA [11] & 80 & 0 & 0 \\ & GA-Nav[12] & 60 & 20 & 30 & 30 \\ & GA-Nav[12] & 60 & 60 & 50 & 40 \\ & VERNWeb height estimation & 80 & 60 & 40 & 30 \\ & VERNWeb discovery behavior & 100 & 60 & 40 & 40 \\ & VERNWeb(ours) & **100** & **90** & **79** & **79** \\ \hline \multirow{4}{*}{\begin{tabular}{c} **Freeling** \\ **Rate (\%)** \\ (hour is)** \\ \end{tabular} } & 30 & 100 & 100 & 90 \\ & GA-Nav[12] & 40 & 80 & 70 & 70 \\ & GA-Nav[12] & 10 & 20 & 20 & 50 \\ & GASPNet[10] & 10 & 20 & 40 & 70 \\ & VERNWebWeb estimation & 20 & 40 & 40 & 70 \\ & VERNWeb discovery behavior & 0 & 40 & 60 & 60 \\ & VERNWeb(ours) & **0** & **10** & **29** & **30** \\ \hline \multirow{4}{*}{ \begin{tabular}{c} **Norm.** \\ **Tra.** \\ **Cluster to it better)** \\ \end{tabular} } & 30 & 134 & 12.37 & 0.24 & 0.22 \\ & DWA [11] & 1.52 & 0.46 & 0.34 & 0.62 \\ & GA-Nav[12] & 1.31 & 1.39 & 1.32 & 1.49 \\ & GASNet[10] & 1.18 & 1.09 & 1.46 & 1.37 \\ & VERNWeb neighbor estimation & 1.39 & 1.41 & 1.35 & 1.30 \\ & VERNWeb discovery behavior & 1.42 & 1.48 & 1.46 & 1.39 \\ & VERNWeb discovery behavior & 1.41 & 1.19 & 1.28 & 1.23 \\ & GA-Nav[12] & 0.33 & 0.39 & 0.30 & 0.51 \\ **Fake** & GA-Nav[12] & 0.25 & 0.31 & 0.28 & 0.44 \\ **Positive** & EI-Gancet[39] & 0.28 & 0.23 & 0.32 & 0.35 \\ **Rate** & VEEN(ours) & **0.15** & **0.18** & **0.12** & **0.21** \\ \hline \end{tabular} \end{table} TABLE I: Navigation performance of our method compared to other methods on various metrics. VERN outperforms other methods consistently in terms of success rate, freezing rate, false positive rate, and normalized trajectory length in different unstructured outdoor vegetation scenarios. not detected in the cost maps. In the future, we would like to augment our current method with proprioceptive sensing for navigating more complex terrains with occluded surfaces.
2309.01632
Representing Edge Flows on Graphs via Sparse Cell Complexes
Obtaining sparse, interpretable representations of observable data is crucial in many machine learning and signal processing tasks. For data representing flows along the edges of a graph, an intuitively interpretable way to obtain such representations is to lift the graph structure to a simplicial complex: The eigenvectors of the associated Hodge-Laplacian, respectively the incidence matrices of the corresponding simplicial complex then induce a Hodge decomposition, which can be used to represent the observed data in terms of gradient, curl, and harmonic flows. In this paper, we generalize this approach to cellular complexes and introduce the flow representation learning problem, i.e., the problem of augmenting the observed graph by a set of cells, such that the eigenvectors of the associated Hodge Laplacian provide a sparse, interpretable representation of the observed edge flows on the graph. We show that this problem is NP-hard and introduce an efficient approximation algorithm for its solution. Experiments on real-world and synthetic data demonstrate that our algorithm outperforms state-of-the-art methods with respect to approximation error, while being computationally efficient.
Josef Hoppe, Michael T. Schaub
2023-09-04T14:30:20Z
http://arxiv.org/abs/2309.01632v3
# Representing Edge Flows on Graphs via Sparse Cell Complexes ###### Abstract Obtaining sparse, interpretable representations of observable data is crucial in many machine learning and signal processing tasks. For data representing flows along the edges of a graph, an intuitively interpretable way to obtain such representations is to lift the graph structure to a simplicial complex: The eigenvectors of the associated Hodge-Laplacian, respectively the incidence matrices of the corresponding simplicial complex then induce a Hodge decomposition, which can be used to represent the observed data in terms of gradient, curl, and harmonic flows. In this paper, we generalize this approach to cellular complexes and introduce the flow representation learning problem, i.e., the problem of augmenting the observed graph by a set of cells, such that the eigenvectors of the associated Hodge Laplacian provide a sparse, interpretable representation of the observed edge flows on the graph. We show that this problem is NP-hard and introduce an efficient approximation algorithm for its solution. Experiments on real-world and synthetic data demonstrate that our algorithm outperforms state-of-the-art methods with respect to approximation error, while being computationally efficient. Graph Signal Processing Topological Signal Processing Cell Complexes Topology Inference ## 1 Introduction In a wide range of applications, we are confronted with data that can be described by flows supported on the edges of a graph [1, 2]. Some particularly intuitive and important examples include traffic flows within a street network [3], flows of money between economic agents [4, 5], or flows of data between routers in a computer network [6]. However, many other scenarios in which some energy, mass, or information flows along the edges of a graph may be abstracted in a similar way [7]. As is the case for many other setups in machine learning and signal processing [8, 9, 10], finding a compact and interpretable approximate representation of the overall pattern of such flows is an important task to assess qualitative features of the observed flow data. In the context of flows on graphs, the so-called (discrete) Hodge-decomposition [11, 12, 13, 14, 15, 16] has recently gained prominence to process such flow signals, as it can be employed to represent any flow on a graph (or more generally, cellular complex) as a sum of a gradient, curl and harmonic components, which can be intuitively interpreted. This representation of the data may then be used in a variety of downstream tasks [17], such as prediction of flow patterns [18, 19, 20], classification of trajectories [21, 22, 23], or to smooth or interpolate (partially) observed flow data [24, 25]. Furthermore, deep learning approaches on cell complexes that need or infer cells are currently gaining traction [26, 27, 28, 29]. Commonly considered mathematical problem formulations to find a compact representation of data are (variants of) sparse dictionary learning [10], in which the aim is to find a sparse linear combination of a set of fundamental atoms to approximate the observed data. Accordingly, such types of dictionary learning problems have also been considered to learn representations of flows on the edges of a graph [14, 30, 31, 32, 33]. 
Since the Hodge-decomposition yields an orthogonal decomposition of the flows into non-cyclic (gradient) flows and cyclic flows, these signal components can be approximated via separate dictionaries, and as any gradient flow component can be induced by a potential function supported on the vertices of the graph, the associated problem of fitting the gradient flows can be solved via several standard techniques, e.g., by considering the associated eigenvectors of the graph Laplacian. To find a corresponding representation of the cyclic flows, in contrast, it has been proposed to lift the observed graph to a simplicial or cellular complex, and then to identify which simplices (or more generally, cells) need to be included in the complex to obtain a good sparse approximation of the circular components of the observed flows [14, 30, 31, 32]. Such inferred cell complexes can moreover be useful for a variety of downstream tasks, e.g., data analysis based on neural networks [26, 27]. In fact, even augmenting graphs by adding randomly selected simplices can lead to significant improvements [34] for certain learning tasks. Arguably, having more principled selection methods available could thus be beneficial. For instance, the approach of [28] could potentially be improved by replacing the selection from a pre-defined set of cells with our approach, reducing its input data demands. Simplicial and cellular complex based representations have also gained interest in neuroscience recently to describe the topology of interactions in the brain [35]. However, previous approaches for the inference of cells from edge flow data limit themselves to simplicial complexes or effectively assume that the set of possible cells to be included is known beforehand. Triangles are not sufficient to model all cells that occur in real-world networks: for example, grid-like road networks contain virtually no triangles, necessitating the generalization to cell complexes. In other applications, triangles may be able to approximate longer 2-cells, but this still results in an unnecessarily complex representation. In this work, we consider a general version of this problem, which might be called the flow representation learning problem: given a set of edge-flows on a graph, find a lifting of this graph into a regular cell-complex with a small number of 2-cells, such that the observed (cyclic) edge-flows can be well-approximated by potential functions associated to the 2-cells. As the solution of this problem naturally leads to the construction of an associated cell complex, we may alternatively think of the problem as inferring an (effective) cell-complex from observed flow patterns. Our main contributions are as follows: * We provide a formal introduction of the flow representation learning problem and its relationships to other problem formulations. * We prove that the general form of flow representation learning we consider here is NP-hard. * We provide heuristics to solve this problem and characterize their computational complexity. * We demonstrate that our algorithms outperform current state-of-the-art approaches in this context. ### Related work **Finding cycle bases.** The cycle space of an undirected graph \(\mathcal{G}\) is the set of all even-degree subgraphs of \(\mathcal{G}\). Note that the cycle space is orthogonal to the space of gradient flows and (for unweighted graphs) isomorphic to the space of cyclic flows. A lot of research has been conducted on finding both general and specific cycle bases [36, 37, 38].
Our algorithm uses the central idea that the set of all cycles induced by combining a spanning tree with all non-tree edges is a cycle basis. However, since we aim for a sparse representation instead of a complete cycle basis, this paper has a different focus. In this paper, one may therefore think of a cycle basis as a set of simple cycles that covers all edges. **Graph Signal Processing and Topological Signal Processing.** The processing of signals defined on graphs has received considerable attention over the last decade [39, 40, 41]. The extension of these ideas to topological spaces defined via simplicial or cellular complexes has recently gained attention [14, 15, 17, 25, 28, 30, 31, 33], with a particular focus on the processing of flows on graphs [25]. The problem we consider here is closely related to a sparse dictionary learning problem [10] for edge flows. In contrast to previous formulations [14, 28, 30, 31], we do not assume that the set of cells (the dictionary) is given, which results in a computationally more difficult problem to tackle. **Compressive Sensing.** Compressive Sensing (CS) [42, 43] may be interpreted as a variant of sparse dictionary learning that finds a sparse approximation from an underdetermined system of equations. Although different in both methodology and goals, it is noteworthy that it has been successfully applied in the context of graphs [44]. A CS application of the graph Laplacian [45] indicates that the lifting to a higher-dimensional Hodge Laplacian could also be used in this context. ### Outline The remainder of this article is structured as follows. In Section 2, we provide a brief recap of notions from algebraic topology, as well as ideas from graph and topological signal processing. Section 3 then provides a formal statement of the problem considered, followed by our proposed algorithmic solution (see Section 4). Our theoretical hardness results are given in Section 5. We demonstrate the utility of our approach with numerical experiments in Section 6, before providing a brief conclusion. ## 2 Background and Preliminaries In this section, we recap common concepts from algebraic topology and set up some notation. In this paper we only consider cell complexes with cells of dimension two or lower, so we will only introduce the required parts of the theory. However, in general, cell complexes have no such limitation, and our methodology can be adapted to also work on cells of higher dimensions [46]. **Cell Complexes.** At an intuitive level, cell complexes are extensions of graphs that not only have vertices (0-dimensional cells) and edges (1-dimensional cells), but also (polygonal) faces (2-dimensional cells). Such faces can be defined by a closed, non-intersecting path (or _simple cycle_) along the graph, such that the path forms the boundary of the polygonal cell. Simplicial complexes may be seen as a special case of cell complexes, in which only triangles are allowed as 2-dimensional cells. We refer to [46] for a general introduction to algebraic topology. Our exposition of the background on cell complexes in the whole section below is adapted from [33]. Within the scope of this paper, a cell complex (CC) \(\mathscr{C}\) consists of a set of so-called cells of different dimensions \(k\in\{0,1,2\}\). For our complex \(\mathscr{C}\), we denote the set of \(k\)-cells by \(\mathscr{C}_{k}\). The \(k\)-skeleton of \(\mathscr{C}\) is the cell complex consisting of all \(l\)-cells in \(\mathscr{C}\) with dimension \(l\leq k\). 
Akin to graphs, we call the elements of \(\mathscr{C}_{0}\) the nodes and denote them by \(v_{i}\) for \(i\in 1,\ldots,|\mathscr{C}_{0}|\). Analogously, the elements \(e_{i}\) for \(i\in 1,\ldots,|\mathscr{C}_{1}|\) included in \(\mathscr{C}_{1}\) are called the edges of the cell complex, and we call the elements \(\theta_{i}\) for \(i\in 1,\ldots,|\mathscr{C}_{2}|\) included in \(\mathscr{C}_{2}\) the polygons of the complex. **Oriented Cells.** To facilitate computations we assign a reference orientation to each edge and polygon within a cell complex. We use the notation \(\vec{e}_{k}=[v_{i},v_{j}]\) to indicate the \(k\)-th oriented edge from node \(v_{i}\) to node \(v_{j}\), and denote its oppositely oriented counterpart as \(\widetilde{e}_{k}=[v_{j},v_{i}]\). The \(k\)-th oriented \(2\)-cell, labeled as \(\vec{\theta}_{k}\), is defined by the ordered sequence \(\vec{e}_{i},\ldots,\vec{e}_{j}\) of oriented edges, forming a non-intersecting closed path. Note that within the sequence \(\vec{\theta}_{k}\) some edges \(\widetilde{e}_{\ell}\) may appear opposite their reference orientation. Any cyclic permutation of the ordered tuple \(\vec{\theta}_{k}\) defines the same \(2\)-cell; a flip of both the orientation and ordering of all the edges defining \(\vec{\theta}_{k}\) corresponds to a change in the orientation of the \(2\)-cell, i.e., \(\widetilde{\theta}_{k}=[\widetilde{e}_{j},\ldots,\widetilde{e}_{i}]\). **Chains and cochains.** Given a reference orientation for each cell, for each \(k\), we can define a finite-dimensional vector space \(\mathcal{C}_{k}\) with coefficients in \(\mathbb{R}\) whose basis elements are the oriented \(k\)-cells. An element \(\varphi_{k}\in\mathcal{C}_{k}\) is called a \(k\)-_chain_ and may be thought of as a formal linear combination of these basis elements. For instance, a \(1\)-chain may be written as \(\varphi_{1}=\sum_{i}a_{i}\vec{e}_{i}\) for some \(a_{i}\in\mathbb{R}\). We further impose that an orientation change of a basis element corresponds to a change in the sign of the coefficient \(a_{i}\). Hence, flipping the orientation of a basis element corresponds to multiplying the corresponding coefficient \(a_{i}\) by \(-1\), e.g., \(a_{1}\widetilde{e}_{1}=-a_{1}\vec{e}_{1}\). As for any \(k\) the space \(\mathcal{C}_{k}\) is isomorphic to \(\mathbb{R}^{|\mathscr{C}_{k}|}\), we may compactly represent each element \(\varphi_{k}\in\mathcal{C}_{k}\) by a vector \(\mathbf{c}=(a_{1},...,a_{|\mathscr{C}_{k}|})^{\top}\). Further, we endow each space \(\mathcal{C}_{k}\) with the standard \(\ell_{2}\) inner product \(\langle\mathbf{c}_{1},\mathbf{c}_{2}\rangle=\mathbf{c}_{1}^{\top}\mathbf{c}_{2}\), and thus give \(\mathcal{C}_{k}\) the structure of a finite-dimensional Hilbert space. The space of \(k\)_-cochains_ is the dual space of the space of \(k\)-chains and denoted as \(\mathcal{C}^{k}:=\mathcal{C}^{*}_{k}\). In the finite case, these spaces are isomorphic and so we will not distinguish between those two spaces in the following for simplicity. (Co-)chains may also be thought of as assigning a scalar value to each cell, representing a signal supported on the cells. In the following, we concentrate on edge-signals on CCs, which we will think of as flows. These can be conveniently described by cochains and represented by a vector. **Boundary maps.** Chains of different dimensions can be related via boundary maps \(\partial_{k}:\mathcal{C}_{k}\rightarrow\mathcal{C}_{k-1}\), which map a chain to a sum of its boundary components. 
In terms of their action on the respective basis elements, these maps are defined as: \(\partial_{1}(\vec{e})=\partial_{1}([v_{i},v_{j}])=v_{i}-v_{j}\) and \(\partial_{2}(\vec{\theta})=\partial_{2}([\vec{e}_{i_{1}},\ldots,\vec{e}_{i_{m}}])=\sum_{j=1}^{m}\vec{e}_{i_{j}}\). Since all the spaces involved are finite dimensional, we can represent these boundary maps via matrices \(\mathbf{B}_{1}\) and \(\mathbf{B}_{2}\), respectively, which act on the corresponding vector representations of the chains. Figure 7 shows a simple CC and its boundary maps. The dual co-boundary maps \(\partial_{k}^{\top}:\mathcal{C}^{k-1}\rightarrow\mathcal{C}^{k}\) map cochains of lower to higher dimensions. Given the inner-product structure of \(\mathcal{C}_{k}\) defined above, these are simply the adjoint maps to \(\partial_{k}\) and their matrix representation is thus given by \(\mathbf{B}_{1}^{\top}\) and \(\mathbf{B}_{2}^{\top}\), respectively. **The Hodge Laplacian and the Hodge decomposition.** Given a regular CC \(\mathscr{C}\) with boundary matrices as defined above, we define the \(k\)-th combinatorial Hodge Laplacian [12, 13, 15] by: \[\mathbf{L}_{k}=\mathbf{B}_{k}^{\top}\mathbf{B}_{k}+\mathbf{B}_{k+1}\mathbf{B}_{k+1}^{\top} \tag{1}\] Specifically, the \(0\)-th Hodge Laplacian operator is simply the graph Laplacian \(\mathbf{L}_{0}=\mathbf{B}_{1}\mathbf{B}_{1}^{\top}\) of the graph corresponding to the \(1\)-skeleton of the CC (note that \(\mathbf{B}_{0}:=0\) by convention). Using the fact that the boundary of a boundary is empty, i.e., \(\partial_{k}\circ\partial_{k+1}=0\), and the definition of \(\mathbf{L}_{k}\), it can be shown that the space of \(k\)-cochains on \(\mathscr{C}\) admits a so-called Hodge decomposition [12, 13, 15]: \[\mathcal{C}^{k}=\operatorname{Im}(\partial_{k+1})\oplus\operatorname{Im}(\partial_{k}^{\top})\oplus\ker(\mathbf{L}_{k}). \tag{2}\] In the context of \(1\)-cochains, i.e., flows, this decomposition is the discrete equivalent of the celebrated Helmholtz decomposition for continuous vector fields [15]. Specifically, we can create any gradient signal via a \(0\)-cochain \(\phi\in\mathcal{C}^{0}\) assigning a potential \(\phi_{i}\) to each node \(i\) in the complex, and then applying the co-boundary map \(\partial_{1}^{\top}\). Likewise, any curl flow can be created by applying the boundary map \(\partial_{2}\) to a \(2\)-chain \(\eta\in\mathcal{C}^{2}\) of \(2\)-cell potentials. Importantly, it can be shown that each of the three subspaces discussed above is spanned by a set of eigenvectors of the Hodge Laplacian. Namely, the eigenvectors of the _lower Laplacian_ \(\mathbf{L}_{k}^{low}=\mathbf{B}_{k}^{\top}\mathbf{B}_{k}\) precisely span \(\operatorname{Im}(\mathbf{B}_{k}^{\top})\) (the gradient space); the eigenvectors of the _upper Laplacian_ \(\mathbf{L}_{k}^{up}=\mathbf{B}_{k+1}\mathbf{B}_{k+1}^{\top}\) span \(\operatorname{Im}(\mathbf{B}_{k+1})\) (the curl space); and the eigenvectors associated with zero eigenvalues span the harmonic subspace. We denote the projection of any edge flow \(\mathbf{f}\in\mathcal{C}^{1}\) into the gradient, curl, or harmonic subspace of \(\mathscr{C}\) by \(\text{grad}_{\mathscr{C}}(\mathbf{f})=\mathbf{B}_{1}^{\top}(\mathbf{B}_{1}^{\top})^{\dagger}\mathbf{f}\), \(\text{curl}_{\mathscr{C}}(\mathbf{f})=\mathbf{B}_{2}(\mathbf{B}_{2})^{\dagger}\mathbf{f}\), or \(\text{harm}_{\mathscr{C}}(\mathbf{f})=(\mathbf{I}-\mathbf{L}_{1}\mathbf{L}_{1}^{\dagger})\mathbf{f}\), respectively. Here \((\cdot)^{\dagger}\) denotes the Moore-Penrose pseudoinverse. 
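For concreteness, the following minimal numerical sketch (our own illustration, not code from this paper) computes the three projections for a single filled triangle on nodes \(\{0,1,2\}\) with edges \(e_{0}=[0,1]\), \(e_{1}=[1,2]\), \(e_{2}=[0,2]\) and one \(2\)-cell with boundary \(e_{0}+e_{1}-e_{2}\); the incidence matrices are written out by hand.

```python
import numpy as np

B1 = np.array([[ 1,  0,  1],
               [-1,  1,  0],
               [ 0, -1, -1]])     # node-to-edge incidence, convention d1([vi, vj]) = vi - vj
B2 = np.array([[1], [1], [-1]])   # edge-to-cell incidence of the single triangle
assert not np.any(B1 @ B2)        # the boundary of a boundary is empty

f = np.array([2.0, -1.0, 3.0])    # an arbitrary edge flow

grad = B1.T @ np.linalg.pinv(B1.T) @ f    # projection onto Im(B1^T)
curl = B2 @ np.linalg.pinv(B2) @ f        # projection onto Im(B2)
L1 = B1.T @ B1 + B2 @ B2.T                # first Hodge Laplacian
harm = f - L1 @ np.linalg.pinv(L1) @ f    # projection onto ker(L1)

assert np.allclose(grad + curl + harm, f) # the Hodge decomposition is exact
print(grad, curl, harm)                   # harm is ~0 here: the triangle is filled
```

For this small complex the harmonic space is trivial, so the harmonic component vanishes; the loss function introduced in the next section is precisely the norm of this harmonic component for a candidate cell complex.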
## 3 Problem Formulation Consider a given graph \(\mathcal{G}\) with \(N\in\mathbb{N}\) nodes and \(E\in\mathbb{N}\) edges, which are each endowed with an (arbitrary but fixed) reference orientation, as encoded in a node-to-edge incidence matrix \(\mathbf{B}_{1}\). We assume that we can observe \(s\in\mathbb{N}\) sampled flow vectors \(\mathbf{f}_{i}\), for \(i=1,\ldots,s\), defined on the edges. We assemble these vectors into the matrix \(\mathbf{F}=[\mathbf{f}_{1},\ldots,\mathbf{f}_{s}]\in\mathbb{R}^{E\times s}\). Our task is now to find a good approximation of \(\mathbf{F}\) in terms of a (sparse) set of gradient and curl flows, respectively. Leveraging the Hodge decomposition, this problem can be decomposed into two orthogonal problems. The problem of finding a suitably sparse set of gradient flows can be formulated as a (sparse) regression problem that aims to find a suitable set of node potentials \(\mathbf{\phi}\) such that \(\mathbf{B}_{1}^{\top}\mathbf{\phi}\) approximates the observed flows under a suitably chosen norm (or more general cost function). This type of problem has been considered in the literature in various forms [40]. We will thus focus here on the second aspect of the problem, i.e., we aim to find a sparse set of circular flows that approximate the observed flows \(\mathbf{F}\). Without loss of generality we will thus assume in the following that the \(\mathbf{f}_{i}\) are gradient-free flows (otherwise, we may simply project out the gradient component using \(\mathbf{B}_{1}\)). This task may be phrased in terms of the following dictionary learning problem: \[\min_{\mathbf{\xi},\mathbf{B}_{2}}\sum_{i=1}^{s}\|\mathbf{f}_{i}-\mathbf{B}_{2} \mathbf{\xi}\|_{2}^{2}\quad\text{s.t.}\quad\|\mathbf{\xi}\|_{0}<k_{1},\|\mathbf{B}_{2 }\|_{0}<k_{2}\text{ and }\mathbf{B}_{2}\in\mathcal{B}_{2}, \tag{3}\] where \(\mathcal{B}_{2}\) is the set of valid edge-to-cell incidence matrices of cell complexes \(\mathscr{C}\) whose \(1\)-skeleton is equivalent to \(\mathcal{G}\), and \(k_{1},k_{2}\) are some chosen positive integers. Note that the above problem may be seen as trying to infer a cellular complex with a sparse set of polygonal cells, such that the originally observed flows have a small projection into the harmonic space of the cell complex -- in other words, we want to infer a cellular complex that leads to a good sparse representation of the edge flows. In the following, we thus adopt a problem formulation concerned with the following loss function \[\text{loss}(\mathscr{C},\mathbf{F})=\left\|\text{harm}_{\mathscr{C}}(\mathbf{ F})\right\|_{F}=\sqrt{\sum_{i=1}^{s}\left\|\text{harm}_{\mathscr{C}}(\mathbf{f}_{i}) \right\|_{2}^{2}},\quad\text{s.t.}\quad\mathscr{C}\text{ has }\mathcal{G}\text{ as }1\text{-skeleton} \tag{4}\] There are two variants of the optimization problem we look at. First, we investigate a variant with a constraint on the approximation loss: \[\min_{\mathscr{C}}|\mathscr{C}_{2}|\qquad\text{s.t.}\qquad\text{loss}(\mathscr{ C},\mathbf{F})<\varepsilon\] ( \[\mathscr{P}_{1}\] ) Second, we consider a variant with a sparsity constraint on the number of \(2\)-cells: \[\min_{\mathscr{C}}\text{loss}(\mathscr{C},\mathbf{F})\qquad\text{s.t.}\qquad| \mathscr{C}_{2}|\leq n\] ( \[\mathscr{P}_{2}\] ) Finally, to assess the computational complexity of the problem, we introduce the decision problem: Given a graph and edge flows, can the graph be augmented with \(n\) cells such that the loss is below a threshold \(\varepsilon\)? 
\[\text{DCS}(\mathcal{G},\mathbf{F},n,\varepsilon):=\exists\mathcal{C}\supseteq \mathcal{G}:|\mathcal{C}_{2}|\leq n,\quad\text{loss}(\mathcal{C},\mathbf{F})<\varepsilon?\] (DCS) ## 4 Algorithmic Approach We now present a greedy algorithm that approximates a solution for both minimization problems (see Figure 1). It starts with a cell complex \(\mathcal{C}^{(0)}\) equivalent to \(\mathcal{G}\) and iteratively adds a new \(2\)-cell \(\theta_{i}\): \[\mathcal{C}^{(i)}:=\mathcal{C}^{(i-1)}\cup\{\theta_{i}\},\quad\theta_{i}:= \operatorname*{arg\,min}_{\theta\in\text{cs}(\mathcal{C}^{(i-1)},\mathbf{F},m)}\text{loss}( \mathcal{C}^{(i-1)}\cup\{\theta\},\mathbf{F})\] until \(i=n\) or \(\text{loss}(\mathcal{C}^{(i)},\mathbf{F})<\varepsilon\), respectively. Here, \(\text{cs}(\cdot)\) denotes a _candidate search heuristic_, a function that, given a CC and corresponding flows, returns a set of up to \(m\in\mathbb{N}\) cell candidates. Figure 1: Overview of cell complex inference: Our approach starts with a graph and flows supported on its edges. It iteratively adds 2-cells until it fulfills the termination condition. In each iteration, the algorithm projects the flows into the (new) harmonic subspace, selects cell candidates using a spanning-tree-based heuristic, and adds the cell that minimizes the harmonic flow. ### Candidate search heuristics Our algorithm requires a heuristic to select cell candidates because the number of valid cells, i.e., simple cycles, can be in \(\Omega(e^{\left\lvert\mathcal{C}_{0}\right\rvert})\) in the worst case (see Appendix B.1). To reduce the number of cells considered, the heuristics we introduce here consider one or a small number of cycle bases instead of all cycles. Since each cycle basis contains \(|\mathcal{C}_{1}|-|\mathcal{C}_{0}|+1\) cycles, it would be inefficient to construct and evaluate all cycles in the cycle basis. Instead, we approximate the change in loss via the harmonic flow around a cycle, normalized by the length of the cycle. Recall that a cycle basis can be constructed from any spanning tree \(T\): Each edge \((u,v)\notin T\) induces a simple cycle by closing the path from \(u\) to \(v\) through \(T\). Both heuristics efficiently calculate the flow using the spanning tree and select the \(m\) cycles with the largest flow as candidates. Since the selection of the spanning tree is crucial, we introduce two different selection criteria, as discussed below. Appendix B.2 discusses the heuristics in more detail and provides an example of one iteration for each heuristic. **Maximum spanning tree.** The maximum spanning tree heuristic is based on the idea that cycles with large overall flows also have large flows on most edges (when projected into the harmonic subspace). Since harmonic flows are cyclic flows, the directions tend to be consistent. However, there may be variations in the signs of the sampled flows \(\mathbf{f}_{i}\). Therefore, the maximum spanning tree heuristic constructs a spanning tree that is maximal w.r.t. the sum of absolute harmonic flows. See Algorithm 1 for pseudocode. **Similarity spanning trees.** The maximum spanning tree heuristic does not account for the fact that there might be similar patterns within the different samples. Given flows \(\mathbf{F}\in\mathbb{R}^{E\times s}\), we can represent an edge \(e\) using its corresponding row vector \(\mathbf{F}_{e,:}\). To account for orientation, we insert an edge in both orientations, i.e., both \(\mathbf{F}_{e,:}\) and \(-\mathbf{F}_{e,:}\). 
This makes it possible to detect common patterns using \(k\)-means clustering. Our similarity spanning trees heuristic exploits this by constructing one spanning tree per cluster center, using the most similar edges. See Algorithm 2 for pseudocode. ## 5 Theoretical Considerations ### NP-Hardness of Cell Selection **Theorem 1**.: _The decision variant of cell selection is NP-hard._ We give a quick sketch of the proof here; the complete proof can be found in Appendix D. For the proof, we reduce 1-in-3-SAT to DCS. 1-in-3-SAT is a variant of the satisfiability problem in which all clauses have three literals, and exactly one of these literals must be true. The high-level idea is to represent each clause \(c_{j}\) with a cycle \(\gamma_{j}\), and each variable \(x_{i}\) with two possible cells \(\chi_{i}\) and \(\overline{\chi_{i}}\) containing a long path \(\pi_{i}\) and the clauses that contain \(x_{i}\) and \(\overline{x_{i}}\), respectively. Through constructed flows, we ensure that every solution with an approximation error below \(\varepsilon\) has to 1. add either \(\chi_{i}\) or \(\overline{\chi_{i}}\) for every \(x_{i}\), and 2. contain cells that, combined, cover all clauses exactly once. This is possible if and only if there is a valid truth value assignment for the 1-in-3-SAT instance. Consequently, if an algorithm can decide DCS, it can be used to decide 1-in-3-SAT. Theorem 1 then follows from the NP-hardness [47] of 1-in-3-SAT. ### Worst-case time complexity of our approach The time complexity of one maximum spanning tree candidate search is \(O(m\log m)\); for a detailed analysis see Appendix E. For the similarity spanning trees, having \(k\) spanning trees multiplies the time complexity by \(k\). \(k\)-means also adds an additive component that depends on the number of iterations required for convergence, but is otherwise in \(O(nk)\). Furthermore, \(k\)-means is efficient in practical applications. To select a cell from the given candidates, we construct \(\mathbf{B}_{2}\) and project the flows into the harmonic subspace. This computation can be performed efficiently by LSMR [48] since the matrix is sparse. However, due to its iterative and numerical nature, a uniform upper bound for its runtime complexity is difficult to obtain. Instead, we examine the runtime empirically in Section 6.4. ## 6 Numerical Experiments We evaluated our approach on both synthetic and real-world data sets. To compare our approach to previous work, we adapt the simplicial-complex-based approach from [14]. For this, we exchange our heuristic based on spanning trees with a heuristic that returns the most significant triangles according to the circular flow around their edges. Wherever used, this approach is labeled _triangles_. All code for the evaluation and plotting is available at [https://github.com/josefiboppe/edge-flow-cell-complexes](https://github.com/josefiboppe/edge-flow-cell-complexes). When evaluating the sparsity of an approximation, there are conflicting metrics. Our algorithm optimizes for the definition used in Section 3, i.e., for a small number of 2-cells, \(|\mathcal{C}_{2}|\). However, cells with more edges have an inherent advantage over cells with fewer edges simply because the corresponding column in the incidence matrix \(\mathbf{B}_{2}\) has more non-zero entries. Therefore, we also consider \(\|\mathbf{B}_{2}\|_{0}\), the number of non-zero entries in \(\mathbf{B}_{2}\), where appropriate. On synthetic data sets, we also have a ground truth of cells. 
We use this information to create a third heuristic (_true_cells_) that always returns all ground-truth cells as candidates. Since our approach aims to recover ground-truth cells, we expect true_cells to outperform it. If our approach works the way we intend, the difference between it and true_cells should be relatively small. For the cell inference problem, we use the ground-truth cells to measure the accuracy of recovering cells. We construct the cell complexes for the synthetic dataset in the following way: 1. Draw a two-dimensional point cloud uniformly at random. 2. Construct the Delaunay triangulation to get a graph of triangles. 3. Add 2-cells according to parameters by finding cycles of appropriate length. 4. Select edges and nodes that do not belong to any 2-cell uniformly at random and delete them. We construct edge flows \(f_{i}=\mathbf{B}_{2}X_{i}+Y_{i}\) from cell flows \(X_{i}\in\mathcal{C}_{2}\sim\mathcal{N}_{\mathscr{C}_{2}}(0,I\sigma_{c})\) and edge noise \(Y_{i}\in\mathcal{C}_{1}\sim\mathcal{N}_{\mathscr{C}_{1}}(0,I\sigma_{n})\) sampled i.i.d. from multivariate normal distributions with mean \(\mu=0\), standard deviation \(\sigma_{c}=1\), and varying standard deviation \(\sigma_{n}\in[0,2]\). ### Evaluation of cell inference heuristic To evaluate the interpretability of results, we compare them to the ground-truth cells we also used to generate the flows: The cells represent underlying patterns we expect to see in real-world applications. Before looking at the inference performance of the complete algorithm, we will check that our heuristic works as expected. Figure 2 shows that, unsurprisingly, the heuristics work better for shorter cells and if more flows are available. However, it is not necessary to detect all cells at once, as adding one cell results in a new projection into the harmonic space, making it easier to detect further cells. To evaluate the inference accuracy of the complete algorithm, we determined the percentage of cells detected after \(5\) iterations (with five ground-truth \(2\)-cells to detect). Figure 3 confirms that the overall algorithm works significantly better than the heuristic alone. Even for noise with \(\sigma_{n}=1\), the similarity spanning tree heuristic detects the vast majority of ground-truth cells. Overall, the experiments on synthetic data indicate that our approach detects underlying patterns, leading to a meaningful and interpretable cell complex. ### Evaluation of flow approximation quality As explained before, the triangles heuristic serves as a benchmark representing previous work, whereas true_cells is an idealized version of our approach. Figure 3(a) shows that our approach with the similarity spanning trees heuristic performs close to true_cells, slightly outperforming the maximum spanning trees, with both significantly outperforming triangles. Notably, triangles cannot form a complete cycle basis, so only our approach reaches an approximation error of 0. However, since we are interested in sparse representations, retrieving a complete cycle basis is not our goal. Instead, we will focus on the behavior for greater sparsity, where the qualitative results depend on the parameter selection. In general, the longer the cells are, the more significant the difference between the three heuristics becomes. The approach tends to detect cells with fewer edges than the correct ones in this experiment. However, smaller cells can be combined to explain the data well for the approximation. 
We argue that this is the case with the cells that are found by the algorithm when using the max heuristic: Compared to true_cells and similarity, it requires a larger number of cells, but the resulting incidence matrix \(\mathbf{B}_{2}\) has a similar sparsity. However, it still outperforms the triangle heuristic, likely because it may take many triangles to approximate a 2-cell. The amount of noise fundamentally changes the observed behavior, as shown in Figure 3(b), especially when the incidence matrix \(\mathbf{B}_{2}\) is less sparse. To explain this, we need to look at both the sparsity and the dimension of the flows. The vector space of the (harmonic) edge noise has the same dimension as the harmonic space. Since our approach results in cells with longer boundaries, it reaches the same sparsity with fewer cells than the triangles approach. With its higher-dimensional approximation, the triangles approach is able to approximate even high-dimensional noise. If we instead consider the dimension of the approximation \(|\mathcal{C}_{2}|\), our approach outperforms triangles in nearly any configuration with either heuristic (compare Figure 14). In conclusion, with both sparsity measures, our approach has an advantage for sparse representations. This observation is consistent with our expectation that the approach can detect the 2-cells of the original cell complex1. After detecting the ground-truth cells, the error decreases at a significantly lower rate. We also expected this change in behavior, as the approach now starts to approximate the patterns in the noise, which is bound to be less effective. Footnote 1: Or at least similar cells if the noise makes those more relevant. ### Experiments on real-world data For our evaluation on real-world data, we considered traffic patterns from TransportationNetworks [3], where we extract a single flow per network by calculating the net flow along a link. For an experiment with multiple flows, we grouped trips of New York City taxis [49, 50] and counted the difference in transitions between neighborhoods. We observe a similar, but less pronounced, behavior as on synthetic data. On the Anaheim network in Figure 4(a), we see that our approach consistently outperforms the triangle-based simplicial complex inference. For the taxi dataset, Figure 4(b) shows that, as on synthetic data, the triangle-based inference performs well as the sparsity decreases. Note that the apparent effect that more cell candidates lead to a worse performance only exists when measuring sparsity by \(\|\mathbf{B}_{2}\|_{0}\), whereas a comparison based on \(|\mathcal{C}_{2}|\) shows a significantly smaller error when considering more candidates in all experiments on real-world data. Similarly, our approach significantly outperforms a triangle-based cell-search heuristic when considering \(|\mathcal{C}_{2}|\). In addition to its better performance, we believe that general cell-based representations are easier to interpret when analyzing patterns. Indeed, the relative success in recovering the correct cells in synthetic data (for real data we don't have a ground truth) and the generally good approximation of the flows may be seen as an indication that cells detected by our approach are more representative of real underlying patterns. Figure 4: Comparison of our approach, triangles, and ground-truth cells. Figure 5: Comparison of our approach and our approach with the triangles heuristic. See Figure 15 in the appendix for more examples. 
For the taxi example, at 300 entries in \(\mathbf{B}_{2}\), our heuristic has added \(55\) polygonal \(2\)-cells in the best case, whereas a triangle-based inference approach adds one hundred \(2\)-cells. Similar to what we observed on synthetic data, the triangles heuristic can lead to a higher-dimensional approximation that is also inherently better at approximating noise. Conversely, our approximation is lower-dimensional, which may also make it more suitable for de-noising data. ### Runtime complexity Finally, we considered the runtime of our algorithm on graphs of different sizes and generation methods. Firstly, we randomly generated cell complexes, with four 2-cells each, as described before (_triangulation_). Secondly, we also generated cell complexes similar to the Watts-Strogatz small-world network construction [51], but with a fixed probability of \(1\%\) for any additional edge and without removing edges on the circle (_smallworld_). For the recovery, we generated five synthetic flows and let the algorithm run until it had detected four 2-cells, with five candidates considered in each step. From our theoretical analysis, we expect the runtime to grow in \(O(m\log m)\) for the number of edges \(m\). Figure 6 indicates a slightly superlinear time complexity. We hypothesize that this stems from the runtime complexity of LSMR, which is hard to assess due to its iterative nature. In the triangulation graphs, the number of edges is linear in the number of vertices. In the small-world graphs, the number of edges grows quadratically in the number of vertices, corresponding to faster growth in execution time. Our algorithm took less than \(100\) s for a small-world graph with \(10000\) vertices and a triangulation graph with \(100000\) vertices, respectively. ## 7 Conclusion We formally introduced the flow representation learning problem and showed that the inherent cell selection problem is NP-hard. We therefore proposed a greedy algorithm to approximate it efficiently. Our evaluation showed that our approach surpasses the current state of the art on both synthetic and real-world data while being computationally feasible on large graphs. Apart from further investigation of the inference process and improvements of its accuracy, we see multiple avenues for future research. A current limitation is that our approach infers cells with a shorter boundary with reasonable accuracy, while cells with a longer boundary have lower inference accuracy. We may improve this, for example, by introducing another spanning-tree-based heuristic or by de-noising the flows before running the algorithm a second time. On a higher level, the algorithm could be adapted to optimize for the sparsity of the boundary map \(||\mathbf{B}_{2}||_{0}\) instead of the number of 2-cells \(|\mathcal{C}_{2}|\). Finally, an analysis of the expressivity of the results on real-world data warrants further investigation. Given our improvement in approximation over the state of the art, we also expect better expressivity. However, since such an analysis does not currently exist, the actual applicability is hard to assess. The downstream tasks and improvements to related methods discussed in the introduction could serve as a proxy for this, showing the usefulness of the inferred cells beyond the filtered flow representation. ## Acknowledgements Funded by the European Union (ERC, HIGH-HOPeS, 101039827). 
Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them.
2307.12484
Use and evaluation of simulation for software process education: a case study
Software Engineering is an applied discipline and concepts are difficult to grasp only at a theoretical level alone. In the context of a project management course, we introduced and evaluated the use of software process simulation (SPS) based games for improving students' understanding of software development processes. The effects of the intervention were measured by evaluating the students' arguments for choosing a particular development process. The arguments were assessed with the Evidence-Based Reasoning framework, which was extended to assess the strength of an argument. The results indicate that students generally have difficulty providing strong arguments for their choice of process models. Nevertheless, the assessment indicates that the intervention of the SPS game had a positive impact on the students' arguments. Even though the illustrated argument assessment approach can be used to provide formative feedback to students, its use is rather costly and cannot be considered a replacement for traditional assessments.
Nauman bin Ali, Michael Unterkalmsteiner
2023-07-24T02:33:22Z
http://arxiv.org/abs/2307.12484v1
# Use and evaluation of simulation for software process education: a case study ###### Abstract Software Engineering is an applied discipline and its concepts are difficult to grasp at a theoretical level alone. In the context of a project management course, we introduced and evaluated the use of software process simulation (SPS) based games for improving students' understanding of software development processes. The effects of the intervention were measured by evaluating the students' arguments for choosing a particular development process. The arguments were assessed with the Evidence-Based Reasoning framework, which was extended to assess the strength of an argument. The results indicate that students generally have difficulty providing strong arguments for their choice of process models. Nevertheless, the assessment indicates that the intervention of the SPS game had a positive impact on the students' arguments. Even though the illustrated argument assessment approach can be used to provide formative feedback to students, its use is rather costly and cannot be considered a replacement for traditional assessments. Keywords: Software Engineering education, Software process simulation, project management, argument evaluation ## 1 Introduction The Software Engineering (SE) discipline spans from technical aspects, such as developing techniques for automated software testing, through defining new processes for software development improvement, to people-related and organizational aspects, such as team management and leadership. This is evident in the software development process, which is _"the coherent set of policies, organizational structures, technologies, procedures, and artifacts that are needed to conceive, develop, deploy, and maintain a software product"_ [11]. The breadth of topics encompassed here makes education in SE challenging, as the interaction of the different disciplines cannot be exclusively taught on a theoretical level, but must also be experienced in practice. As such, SE education needs to identify means to prepare students better for their tasks in industry [16]. However, the complexity and dynamism of software processes make it difficult to illustrate the implications of the chosen development process on the outcomes of a project. Students would have to undertake multiple iterations of developing the same project using different software development processes to understand the various processes and their implications for the project attributes [26]. Such repetitions are, however, impractical because of the time and cost involved. To overcome this shortcoming, software process simulation (SPS) has been proposed as a means of SE education. SPS is the numerical evaluation of a computerized mathematical model that imitates the real-world software development process behavior [13]. It has been found to be useful in SE education as a complement to other teaching methods, e.g., in combination with lectures, lab sessions, and projects [17, 26]. In this paper we motivate, illustrate, and evaluate how a targeted change was introduced in the graduate-level _Applied Software Project Management (ASPM)_ course. The course aims to convey to students in a hands-on manner how to prepare, execute, and finalize a software project. In previous instances of the course, we have observed that students encounter difficulties in choosing an appropriate software development process and in motivating their choice. 
We hypothesize that the students lack experience with different software development processes, and therefore lack the analytical insight required to choose a process appropriate for the characteristics of the course project. We study our hypothesis by exposing students to software process simulations (SPS) and by thereafter evaluating the argumentative strength of their reasons for choosing/discarding a particular process. This paper makes three major contributions. First, a review of frameworks for evaluating argumentative reasoning was updated to cover more recent research. Second, the framework most relevant for evaluating arguments in the context of SE was selected and adapted. Third, independent of the creators of SimSE, we used it in the context of an active course instead of a purely experimental setting, and evaluated its effect indirectly, in terms of students' understanding of software development processes. The remainder of the paper is structured as follows: Section 2 summarizes the relevant work on the topic of SPS in SE education. Section 3 presents the context of the study, research questions, data collection, and analysis methods. Section 4 presents the results, Section 5 revisits the research questions based on the findings, and Section 6 concludes the paper. ## 2 Background and Related Work In this section, we briefly discuss the two overarching themes in this study: SPS-based education and the evaluation of scientific argumentation. ### SPS in SE education SPS provides an alternative to manipulation of the actual software process by providing a test-bed for experimentation with realistic considerations. Compared to static and analytical models, SPS achieves this because of its ability to capture the underlying complexity in software development by representing uncertainty, dynamic behavior, and feedback/feed-forward mechanisms [13]. Since the initial proposal of SPS, its potential as a means of education and training was recognized [13]. Some of the claimed benefits of SPS for SE education include: increased interest in SE project management [18], motivation of students [8], and effective learning [20]. It can facilitate understanding by experiencing different processes in certain roles (e.g., as a software manager making decisions in software development, which would not have been possible in an academic context without SPS [27]). Navarro and Hoek [17] evaluated the experience of students playing SPS-based games for SE education. They found that SPS-based teaching is applicable to various types of learners as it aligns well with the objectives of a multitude of learning theories. For example, it encourages exploratory learning by experimenting, emphasizes learning by doing and through failure, and embeds learning in a context that resembles the real-world use of the phenomenon of interest. Wangenheim and Shull [26], in a systematic literature review of studies using SPS for SE education, found that the two most frequent aims in such studies are _"SE Project Management"_ and _"SE process"_ knowledge [26]. They also found that in most of the existing research, subjective feedback was collected after the students had used the game [26]. Similarly, they reported that it was difficult to evaluate the effectiveness of SPS interventions because a majority of the articles do not report the _"expected learning outcome and the environment in which students used the game"_ [26]. 
These findings motivated our choice to have a simulation-based intervention in the course, as the two major learning objectives of the course are related to project and process management. The context is described in Section 3.1. Furthermore, adhering to the recommendation based on empirical studies [26], we used SPS to target a _"specific learning need"_ of the students, i.e., to improve the understanding of a software development lifecycle process and its implications. SimSE was the chosen platform due to its stable release, good graphical user interface, and positive feedback from earlier evaluations [17]. Unlike the existing evaluations of SimSE, in this study we took an indirect approach to see if the simulation-based intervention had the desired impact. We looked at the quality of arguments for the choice of the lifecycle process in the student reports without explicitly asking them to reflect on the SPS game. ### Evaluating scientific argumentation Argumentation is a fundamental driver of the scientific discourse, through which theories are constructed, justified, challenged, and refuted [10]. However, scientific argumentation also has cognitive value in education, as the process of externalizing one's thinking fosters the development of knowledge [10]. As students mature and develop competence in a subject, they pass through the levels of understanding described in the SOLO taxonomy [5]. In the taxonomy's hierarchy, the quantitative phase (unistructural and multistructural levels) is where students increase their knowledge, whereas in the qualitative phase (relational and extended abstract levels) students deepen their knowledge [4]. The quality of scientific argumentation, which comprises skills residing in higher levels of the SOLO taxonomy, is therefore a reflection of the degree of understanding and competence in a subject. As argumentation capability and subject competence are intrinsically related, it is important to find means by which scientific argumentation in the context of education can be evaluated. Sampson and Clark [21] provide a review of frameworks developed for the purpose of assessing the nature and quality of arguments. They analyze the studied frameworks along three dimensions of an argument [21]: 1. Structure (i.e., the components of an argument) 2. Content (i.e., the accuracy/adequacy of an argument's components when evaluated from a scientific perspective) 3. Justification (i.e., how ideas/claims are supported/validated within an argument) We used the same criteria to update their review with newer frameworks for argument evaluation. This analysis was used to select the framework appropriate for use in this study. ## 3 Research design ### Context The objective of the Applied Software Project Management (ASPM) course is to provide students with an opportunity to apply and sharpen their project management skills in a sheltered but still realistic environment. Students participating in ASPM typically1 have completed a theory-building course on software project management, which includes an introduction to product management, practical project management guided by the Project Management Body of Knowledge [1], and an excursion to leadership in project teams [12]. Footnote 1: ASPM is also an optional course in the curriculum for students from computer science and civil engineering programs. Figure 1 shows the student characteristics of the two course instances that were studied. 
In 2012, without SPS intervention, 16 students participated in total, having accumulated on average 18 ECTS points at the start of the course. In 2013, with the SPS intervention, 15 students participated in total, having accumulated on average 84 ECTS points at the start of the course. In both course instances, three students did not take the theory course on software project management (Advanced SPM). The major difference between the two student groups is that in 2013, considerably more students did not successfully complete the Advanced SPM course. The higher ECTS average in 2013 can be explained by the participation of three Civil Engineering students who chose Applied SPM at the end of their study career, while SE and Computer Science students chose the course early in their studies. The course follows the three-month schedule shown in Figure 2, which also illustrates the planned interactions between students and instructors. The introduced modifications are shown in italics and further discussed in Section 3.3. Students are expected to work 200 hours for this course, corresponding to a 20 hours/week commitment. The course has five assignments, but only Assignments 1 and 5 are relevant for this study (see Figure 2). Assignment 1 consists of delivering a project management plan (PMP) in which students also report the choice of, and rationale for, the software process they will use. The teams receive oral feedback and clarifications on the PMP during the same week. The course concludes with a presentation where project teams demo their developed products. In Assignment 5, the students are asked to individually answer a set of questions that, among other aspects, inquire about their experience with the software process used in the project. Figure 1: Student demographics from 2012 (without intervention) and 2013 (with SPS intervention) of the Applied SPM course. Figure 2: ASPM course time-line with events. ### Research questions The posed research questions in this study are: * RQ1: How can improvement in software development process understanding be assessed? * RQ2: To what extent can process simulation improve students' understanding of software development processes? With RQ2, we investigate whether process simulation has a positive impact on students' understanding of development processes. Even though studies with a similar aim have already been conducted (cf. [18]), experiments in general are prone to the Hawthorne effect [7], where subjects under study modify their behavior knowing that they are being observed. Similar limitations can be observed in earlier evaluations of SimSE, where _"the students were given the assignment to play three SimSE models and answer a set of questions concerning the concepts the models are designed to teach"_ [17]. Hence we introduce process simulation as an additional teaching and learning activity into a course whose main purpose is _not_ to teach software development processes. Furthermore, we do not modify the requirements for the graded deliverables. Formally, we stated the following hypotheses: * H0: There is no difference in the students' understanding of process models in course instances 2012 and 2013. * H1: There is a difference in the students' understanding of process models in course instances 2012 and 2013. Due to the subtle modifications in the course, we needed to identify new means to evaluate the intervention, measuring the impact of introducing process simulation on students' understanding of development processes. 
In order to answer RQ1, we update the review by Sampson and Clark [21] with two more recent frameworks proposed by Reznitskaya et al. [19] and Brown et al. [6], select the framework that provides the strongest argument evaluation capabilities, and adapt it to the software engineering context. In order to answer RQ2, we apply the chosen argument evaluation framework on artifacts delivered by project teams and individual students who received the treatments shown in Figure 2, and on artifacts delivered in the previous year. ### Instrumentation and data collection _Assignment 5: Post mortem_, as shown in Figure 2, is an individual assignment where students had to motivate and reflect on the software process model selected in their projects. This assignment is used to evaluate the influence of SimSE on the students' understanding of software processes. A baseline for the typical process understanding of students in the course was established by evaluating _Assignment 5_ from 2012, which was then compared to the evaluation results of _Assignment 5_ from 2013. To supplement the analysis we also used _Assignment 1: Project Management Plan (PMP)_ (which is a group assignment) from both years. The design for the study is shown in Figure 3, where deltas _'a'_ and _'b'_ are changes in understanding between Assignments 1 and 5 within a year, while deltas _'c'_ and _'d'_ represent changes across the years for Assignments 1 and 5, respectively. For the evaluation of assignments, we used the EBR framework [6]. Other frameworks considered and the reasons for this choice are summarised in Section 4.2. Once the framework had been adapted, it was first applied on one assignment using a _"think-aloud protocol"_, where the authors expressed their thought process while applying the evaluation instrument. This helped to identify ambiguities in the instrument and also helped to develop a shared understanding of it. A pilot of the instrument was done on two assignments, where the authors applied it separately and then compared and discussed the results. Both authors individually coded all the assignments and then the codes were consolidated with consensus. The results of this process are presented in Section 4.3. ### Limitations The assignments were retrieved from the Learning Management System and personal identification of students was replaced with a unique identifier to hide their identity from the authors. This was done to avoid any personal bias that the authors may have towards the students as their teachers, in this and other courses. Furthermore, to ensure an unbiased assessment, both the overall grades of students and their grades in the assignments were hidden from the authors when the assessment instrument was applied in this study. To avoid any bias introduced by asking questions directly about the intervention of process simulation, and to have a relative baseline for assignments from 2012, we did not change the assignment descriptors for the year 2013. Thus we tried to measure the effect of the intervention indirectly by observing the quality of argumentation without explicitly asking students to reflect on the experience from simulation-based games. The intervention was applied in a post-graduate course, rendering experiment-like control of settings and variables impossible. Among other factors, any difference in results could purely be because of the different set of students taking the course in the years 2012 and 2013. 
However, as discussed in Section 3.1, the groups of students were fairly similar; thus, the results are comparable. The small number of students is also a limitation of this study, especially for drawing statistical inferences with high confidence. Similarly, by having the students fill out questions about the various simulation-based games, we tried to ensure that students had indeed played the games. However, we have no way of ensuring that the students did indeed play the games individually and did not share the answers with each other. This limitation could weaken the observable effect of the intervention. Figure 3: Design to evaluate the impact of the SPS intervention. ## 4 Results In this section we report the two main results of our study. In Section 4.1 we review two argument evaluation approaches and classify them according to the framework presented in Section 2.2. Then we choose one argument evaluation approach and adapt it to our goals (Section 4.2), and apply it to students' arguments for choosing a particular process model for their project (Section 4.3). The data (argument coding and quantification) is available in the supplementary material to this paper, available at [http://www.bth.se/com/mun.nsf/pages/simeval](http://www.bth.se/com/mun.nsf/pages/simeval). ### Review update In Table 1 we summarize Sampson and Clark's [21] analysis w.r.t. the support various frameworks provide to assess the structure, content, and justification of an argument. In the rest of the section we report the classification of two newer frameworks as an extension to their review. With the Evidence-Based Reasoning (EBR) framework, Brown et al. [6] propose an approach to evaluate scientific reasoning that combines Toulmin's argumentation pattern [25] with Duschl's framework of scientific inquiry [9]. This combination frames scientific reasoning as a two-step process in which a scientific approach to gathering and interpreting data results in rules that are applied within a general framework of argumentation [6]. Figure 4(a) shows the structure of the framework, consisting of components that should be present in a strong scientific argument. The strength of an argument can be characterized by the components present in the argument. For example, in an unsupported claim (Figure 4(b)), there are no rules, evidence, or data that objectively support the claim. An analogy (Figure 4(c)) limits the argumentation to supporting a claim only with instances of data, without analyzing and interpreting the data. In an overgeneralization (Figure 4(d)), the analysis and formation of a body of evidence is omitted and data is interpreted directly to formulate rules (theories, relationships) that are not supported by evidence. The EBR framework was not designed for a specific scientific context [6]; we therefore classify it as a domain-general framework. It provides strong support for evaluating arguments along the structure (i.e., the components of an argument) and justification (i.e., how claims are supported within an argument) dimensions. \begin{table} \begin{tabular}{l c c c} \hline Framework & Structure & Content & Justification \\ \hline \hline Domain-general & & & \\ Toulmin [25] & strong & weak & weak \\ Schwarz et al. 
[23] & strong & moderate & moderate \\ \hline Domain-specific & & & \\ Zohar and Nemet [28] & weak & moderate & strong \\ Kelly and Takao [14] & strong & weak & strong \\ Lawson [15] & strong & weak & strong \\ Sandoval [22] & weak & strong & strong \\ \hline \hline \end{tabular} \end{table} Table 1: Strengths and weaknesses of argument assessment frameworks. However, solely identifying rule and evidence components in an argument does not provide an assessment of the argument's content, i.e., the adequacy of the argument's components. As such, the framework provides the tools to identify the components of content (rules and evidence), but no direct means to assess the content's quality. Therefore, we rate the framework's support for the content dimension as moderate. Reznitskaya et al. [19] propose a domain-specific framework for the evaluation of written argumentation. They propose and compare two methods to implement this framework: 1. The analytical method is a data-driven approach in which individual statements are coded and their relevance to the main topic is judged; categories are derived from these codes (the decision about these categories is based on their theoretical and practical significance in the domain). Next, the report is evaluated on five sub-scales covering: the number of arguments made, the types of previously identified argument categories covered in the report, the opposing perspectives considered, the number of irrelevant arguments, and the use of formal aspects of discourse. 2. The holistic method takes a rubric-based approach, attempting to provide a macro-level assessment of the arguments. In essence, the framework has no explicit focus on the components of an argument and only indirectly covers the aspects of structure while creating the instrument. The fundamental building block of the evaluation framework is the analytical coding process, where both the content and justification are considered. Content (accuracy and adequacy) is only assessed by identifying the relevance of the argument to the topic. Justification is covered indirectly in the variety of argument categories identified in the reports. However, all argument categories are given equal weight in scoring. Therefore, we rate the framework's support for structure as weak, and for content and justification as moderate. Figure 4: The Evidence-Based Reasoning framework (a) and different degrees of argument sophistication (b-d) (adapted from Brown et al. [6]). ### Selection and adaptation of an argument evaluation framework Based on the analysis of the reviewed frameworks, summarized in Table 1, and the updated review presented in Section 4.1, we decided to use the EBR framework [6]. We chose a domain-general over a domain-specific framework due to our preference for customizing generic principles to a specific context. The alternative, to construct an assessment instrument completely inductively from the particular domain and data, as for example in Reznitskaya et al. [19], would imply a relative assessment approach, weakening the evaluation of the intervention. Furthermore, a domain-generic approach allows us to re-use the assessment instrument with minor adaptations, lowering the application cost by keeping the general approach intact. Table 2 shows an example argument with the EBR components one can identify in written statements. 
\begin{table} \begin{tabular}{l l} \hline \hline Component & Statement \\ \hline \hline Premise & The requirements for our product are not that clear and likely to change. \\ Claim & eXtreme Programming (XP) was the best choice for our project. \\ Rule & XP embraces continuous change by having multiple short development cycles. \\ Evidence & Reference to Beck [3]; customer changed user interaction requirements six weeks before the delivery deadline but we still delivered a working base product; \\ Data & Seven change requests to initially stated requirements; four requirements were dropped since customer was satisfied already; \\ \hline \hline \end{tabular} \end{table} Table 2: Example application of the EBR on an ideal argument

This example illustrates what we would expect in a strong argument: a general rule on a process model is applied to a premise that refers to the specific circumstances of the students' project, justifying their claim that the selected model was the best choice. The rule is supported by evidence (a reference to a scientific study) and by an experience from the project that creates a relationship between short development cycles and late changes. The evidence is supported by data from the project. The EBR framework enables a fine-grained deconstruction of arguments into components. The price for this strength on the structural and justification dimensions is a moderate support for assessing content (see Section 4.1). Even though the overall argument content can be judged to some extent by the interplay between the argument components, domain-specific knowledge is nevertheless required to judge whether individual statements are accurate. Looking at Table 2, the rule component in particular requires knowledge of XP in order to decide whether the statement is accurate or not. Hence we assess the argument content by qualifying a rule as sound/unsound, given the stated premise, claim, evidence and data, based on our domain knowledge of process models. Concretely, we declare an argument for a claim as:

* Strong: if the stated rule is backed by evidence/data _and_ is pertinent for the premise.
* Weak: if the argument has no corroborative evidence, but the stated rule is in compliance with the literature and/or the assessor's understanding of the topic _and_ is pertinent for the premise.
* Unsound: if an argument does not fulfill either of the above two criteria.

### Application of the chosen framework

In this section we illustrate the results of applying the EBR framework to the students' arguments for choosing/rejecting a particular process model for their project. We coded statements according to the EBR framework's components of an argument: premise (P), rule (R), evidence (E), data (D). We also noted when components are used atomically or are combined into pairs or triples of components to form a coherent argument. Based on this codification, we evaluated the overall content of the argument (unsound / weak / strong) by following the rules established in Section 4.2. The claim of the argument, in principle constant and only changing in sign, was that a particular process model is / is not the best choice for the project.
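As an illustration of how these coding rules operate, the classification of Section 4.2 can be expressed in a few lines of Python. This is only a minimal sketch, not part of the study's instrumentation; the pertinence and literature-consistency judgments remain manual inputs from the assessor.

```python
# Minimal sketch (not from the study's tooling) of the content-strength rules of Section 4.2.
# Component names follow the EBR framework; the boolean judgments are supplied by the assessor.
from dataclasses import dataclass

@dataclass
class CodedArgument:
    has_rule: bool                 # a rule (R) component was identified
    has_evidence_or_data: bool     # an evidence (E) or data (D) component backs the rule
    rule_pertinent: bool           # assessor judgment: the rule is pertinent for the premise
    rule_consistent: bool          # assessor judgment: the rule agrees with the literature

def content_strength(arg: CodedArgument) -> str:
    """Classify argument content as strong, weak or unsound."""
    if arg.has_rule and arg.has_evidence_or_data and arg.rule_pertinent:
        return "strong"
    if arg.has_rule and arg.rule_consistent and arg.rule_pertinent:
        return "weak"              # no corroborative evidence, but a plausible, pertinent rule
    return "unsound"

# Example: a premise-rule pair without evidence, judged pertinent and consistent -> "weak"
print(content_strength(CodedArgument(True, False, True, True)))
```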
Table 3 shows the results in terms of the argument component frequencies encountered in the students' project plans from 2012 (without intervention) and 2013 (with SPS intervention). For example, in Plan #1 we identified 8 premises (P), 4 premise-rule (PR) pairs and 1 premise-evidence (PE) pair. We expected to find some premise-rule-evidence (PRE) triples, as they would indicate that students can motivate their choice by examples, e.g. by referring to scientific literature, to experience from previous projects or from playing SimSE. However, the results clearly indicate a tendency for students to create overgeneralizing arguments (premise-rule pairs). Looking at the argument content, we identified no strong arguments (lack of evidence components) and, in proportion to weak arguments, a rather large number of unsound arguments, indicating a lack of understanding of process models and their properties.

\begin{table} \begin{tabular}{c c c c c c c c c c c} \hline \hline Year & Plan\# & P & PR & PE & PRE & RE & R & Unsound & Weak & Strong \\ \hline \hline \multirow{4}{*}{2013} & 1 & 8 & 4 & 1 & 0 & 0 & 0 & 2 & 2 & 0 \\ & 2 & 9 & 7 & 0 & 0 & 0 & 3 & 4 & 6 & 0 \\ & 3 & 3 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 0 \\ & **Sum** & **20** & **12** & **1** & **0** & **1** & **3** & **7** & **9** & **0** \\ \hline \multirow{5}{*}{2012} & 4 & 10 & 3 & 0 & 0 & 0 & 1 & 1 & 3 & 0 \\ & 5 & 5 & 7 & 0 & 0 & 1 & 1 & 3 & 6 & 0 \\ & 6 & 6 & 3 & 0 & 0 & 0 & 0 & 0 & 4 & 0 \\ & 7 & 2 & 1 & 0 & 0 & 0 & 2 & 1 & 1 & 0 \\ & **Sum** & **23** & **14** & **0** & **0** & **1** & **4** & **5** & **14** & **0** \\ \hline \hline \end{tabular} \end{table} Table 3: Frequencies of identified argument components and argument content strength in project plans for choosing a particular process model

After assessing the project plans, which were a group assignment, we applied the EBR framework to the project post-mortems that were handed in individually by the students (13 in 2013 and 12 in 2012). Table 4 illustrates the results, showing the frequencies of identified argument components and component combinations. Observe that, in contrast to the project plans, we identified more combinations of components, e.g. premise-data (PD) or premise-evidence-data (PED) triples. For both years it is evident that students reported more justification components (evidence and data) in the post-mortem than in the project plan. This is expected as we explicitly asked them to provide supporting experience from the conducted project.

## 5 Analysis and revisiting research questions

### Assessing software development process understanding (RQ1)

With support from the EBR framework we decomposed students' arguments to a degree that allowed us to pinpoint the weaknesses of their development process model understanding. Looking at the frequencies in Table 4, we can observe that:

* For both years, a relatively large number of standalone argument components were identified (e.g. single premise, rule and data components in 2013 and evidence components in 2012). A standalone argument component indicates a lack of a coherent discussion, a concatenation of information pieces that does not create a valid argument for a specific claim. There are exceptions, e.g. PM#8 and PM#24 (see supplementary material), which is also expressed in a strong argument content rating.
* Looking at the argument component combinations that indicate a strong argument (i.e. that contain a premise, a rule, and evidence or data), we can observe that assignments containing weak arguments outnumber assignments with strong arguments in both years.

Table 4: Frequencies of identified argument components in the project post-mortems (table not reproduced here; see the supplementary material)

The detailed analysis of arguments with the EBR framework could also help to create formative feedback to students.
For example, in PM#5, the student reported 7 data points on the use of SCRUM but failed to analyze this data, i.e. to create evidence (describing relationships between observations) and connect it to rules. On the other hand, we identified several assignments with the premise that the requirements in the project are unknown or change constantly (using this premise to motivate the selection of an Agile process). However, none of the assignments reports data on change frequency or volatility of requirements, which therefore weakens the argument for choosing an Agile process. Given the detailed observations one can make by applying the EBR framework, we think it is a valuable tool both for assessing arguments and for providing feedback to students. However, this power comes at a price. We recorded coding times between 10 and 30 minutes per question, excluding any feedback formulation that the student could use for improvement. Even if coding and feedback formulation efficiency could be increased by tools and routine, one has to consider the larger effort this type of assessment requires compared to other means, e.g. rubrics [2].

### Impact of SPS on students' understanding of software development processes (RQ2)

The only difference in how the course was conducted in 2013 compared to 2012 was the use of the SimSE simulation-based software process games. Within the limitations of this study (as discussed in Section 3.4), improvements in the students' understanding can be seen as indications of the usefulness of SimSE-based games for software process education. In order to evaluate students' understanding, we measured the quality of argumentation for the choice of the software process. Concretely, we evaluated the content of the student reports by using the strength of an argument (classified as strong, weak or unsound) as an indicator. To test the hypotheses stated in Section 3.2, we used the chi-square test of independence [24] and rejected \(H_{0}\) at a significance level of \(\alpha=0.05\). For the project management plan (Table 3), which was a group assignment and was delivered at the beginning of the course, the observed frequencies of strong, weak and unsound arguments did not differ significantly between 2012 and 2013. Hence we cannot reject \(H_{0}\) for the project management plan. For the project post-mortem (Table 4), which was an individual assignment and was delivered at the end of the course, the observed frequencies of strong, weak and unsound arguments did differ significantly between 2012 and 2013 (\(\chi^{2}=8.608\), \(df=2\), \(p=0.009\)). Hence we can reject \(H_{0}\) for the project post-mortem and accept that there is a difference in the process model understanding of students between the 2012 and 2013 course instances. However, since this difference only materialized at the end of the course, after the project had been conducted, the improved understanding cannot be attributed to the software process simulation game alone, as discussed in the limitations of this study (Section 3.4).
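For reference, the test can be reproduced with a few lines of Python. This is only a minimal sketch assuming SciPy is available; it is not part of the original study materials. Since the Table 4 post-mortem counts are only in the supplementary material, the sketch uses the Table 3 project-plan frequencies, dropping the all-zero "strong" column because zero expected counts are not allowed.

```python
# Minimal sketch of the chi-square test of independence used above (assumes SciPy).
# Rows are the 2013 and 2012 cohorts; columns are the unsound/weak counts from Table 3.
from scipy.stats import chi2_contingency

observed = [
    [7, 9],    # 2013: unsound, weak
    [5, 14],   # 2012: unsound, weak
]

chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(f"chi2 = {chi2:.3f}, df = {dof}, p = {p:.3f}")  # not significant, as reported for the plans
```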
Another indirect observation that shows a better understanding of the process models is reflected in the choice of process model in the context of the course project (with small collocated teams, a short fixed project duration, etc.). Compared to 2012, where most of the groups chose plan-driven, document-intensive process models (two groups chose an incremental development model, one group chose Waterfall and only one chose Scrum), in 2013 all groups chose a light-weight, people-centric process model that is more pertinent to the given context.

## 6 Conclusion

The EBR framework enabled the decomposition of arguments into distinct parts, which ensured an objective evaluation of the strength of the arguments in student reports. This assessment allowed us to gauge students' understanding of software development processes. The indications reported in this study (from the use of software process simulation in an active course) add to the confidence in the evidence reported in earlier empirical studies in controlled settings. Given the potential gains seen in this study, and the relative maturity, user interface and decent documentation of SimSE, the minor additional cost of including it in a course to reinforce concepts already learned was well justified. As future work, we intend to do a longitudinal study where more data is collected over the next few instances of the course.
2310.18200
Invariants of Vanishing Brauer Classes
A specialization of a K3 surface with Picard rank one to a K3 with rank two defines a vanishing class of order two in the Brauer group of the general K3 surface. We give the B-field invariants of this class. We apply this to the K3 double plane defined by a cubic fourfold with a plane. The specialization of such a cubic fourfold whose group of codimension two cycles has rank two to one which has rank three induces such a specialization of the double planes. We determine the Picard lattice of the specialized double plane as well as the vanishing Brauer class and its relation to the natural "Clifford" Brauer class. This provides more insight in the specializations. It allows us to explicitly determine the K3 surfaces associated to infinitely many of the conjecturally rational cubic fourfolds obtained as such specializations.
Federica Galluzzi, Bert van Geemen
2023-10-27T15:20:04Z
http://arxiv.org/abs/2310.18200v2
# Invariants of vanishing Brauer classes ###### Abstract. A specialization of a \(K3\) surface with Picard rank one to a \(K3\) with rank two defines a vanishing class of order two in the Brauer group of the general \(K3\) surface. We give the \(B\)-field invariants of this class. We apply this to the \(K3\) double plane defined by a cubic fourfold with a plane. The specialization of such a cubic fourfold whose group of codimension two cycles has rank two to one which has rank three induces such a specialization of the double planes. We determine the Picard lattice of the specialized double plane as well as the vanishing Brauer class and its relation to the natural 'Clifford' Brauer class. This provides more insight in the specializations. It allows us to explicitly determine the \(K3\) surfaces associated to infinitely many of the conjecturally rational cubic fourfolds obtained as such specializations. ## Introduction In this paper, \(S\) will be a complex projective \(K3\) surface. An element \(\alpha\) in the Brauer group \(\operatorname{Br}(S)\) defines \(\alpha\)-twisted sheaves on \(S\) which generate a twisted derived category ([11]). A locally free \(\alpha\)-twisted sheaf of rank \(n\) defines a projective space bundle over \(S\). Conversely, a projective bundle over \(S\) is the projectivization of an \(\alpha\)-twisted locally free sheaf. A class \(\alpha\in\operatorname{Br}(S)\) also defines a Hodge substructure \(T_{\alpha}(S)\) of the transcendental lattice \(T(S)\). The two-torsion subgroup \(\operatorname{Br}(S)_{2}\) allows one to describe the conic bundles over \(S\) that are the exceptional divisors in \(K3^{[2]}\)-type hyperkahler manifolds ([12]). In the case that \(S\) is a general \(K3\) surface of degree two and \(\alpha\in\operatorname{Br}(S)_{2}\) is a non-trivial class, the Hodge substructure \(T_{\alpha}(S)\) is Hodge isometric to either the transcendental lattice of a general cubic fourfold with a plane or a general \(K3\) surface of degree eight or to neither of these. In this paper we will be particularly concerned with specializations of a \(K3\) surface \(S\) and their impact on \(\operatorname{Br}(S)_{2}\). That is, we consider a family of \(K3\) surfaces over a disc with general fiber \(S\) and special fiber \(S_{2d}\). We focus on the case where the rank of the Picard groups are one and two respectively, see [14] for more general cases. Here the index \(2d\) refers to the degree of the generator of \(\operatorname{Pic}(S)\). There are then natural identifications of the second cohomology groups of the general and the special fiber. This easily implies that there is a restriction map of Brauer groups from \(\operatorname{Br}(S)_{2}\) to \(\operatorname{Br}(S_{2d})_{2}\) which has a kernel of order two. The generator of this kernel is called the vanishing Brauer class (of the specialization) and we denote it by \(\alpha_{van}\in\operatorname{Br}(S)_{2}\), \[\langle\alpha_{van}\rangle\,=\,\ker\left(\operatorname{Br}(S)_{2}\,\longrightarrow \,\operatorname{Br}(S_{2d})_{2}\right)\,.\] We work out the invariants of this Brauer class. In the case of a \(K3\) of degree two these invariants determine whether the Brauer class corresponds to a point of order two in \(J(C)\), where \(C\) is the ramification curve defined by \(S\), or to an even or odd theta characteristic on \(C\). 
These results were in in a sense anticipated in the papers [1], [20] where the restriction map \(\operatorname{Pic}(S)\to\operatorname{Pic}(C)\) is studied in relation to \(\operatorname{Br}(S)_{2}\). In the remainder of the introduction we discuss an application of vanishing Brauer classes to cubic fourfolds. A well-known conjecture states that a (complex) cubic fourfold \(X\) is rational if and only if it has an associated \(K3\) surface \(S\), that is the transcendental lattice \(T(X)\) of \(X\) is Hodge isometric to \(T(S)(-1)\), the transcendental lattice of \(S\) with the opposite intersection form ([14],[15],[16]). If it exists, \(S\) is called a \(K3\) surface associated to \(X\). The general cubic fourfold \(X^{\prime}\) does not have an associated \(K3\) surface since \(T(X^{\prime})\) then has rank \(22\) whereas \(T(S)\) has rank at most \(21\). A much studied case is the one of a cubic fourfold \(X\) containing a plane \(P\). In that case \(X\) defines a \(K3\) double plane \(S=S_{X}\) with an odd theta characteristic, corresponding to a Brauer class \(\alpha_{X}\in\operatorname{Br}(S_{X})_{2}\). In case \((X,P)\) is general, with group of codimension two cycles generated by the square of the hyperplane class and \(P\), the \(K3\) surface is _not_ an associated \(K3\) surface since \(T(X)\not\cong T(S_{X})(-1)\). Instead there is a Hodge isometry \(T(X)\cong T_{\alpha_{X}}(S_{X})(-1)\), where \(T_{\alpha_{X}}(S_{X})\) is the index two sublattice defined by the class \(\alpha_{X}\) ([13]). We consider now a specialization of a general \((X,P)\) to a fourfold where the group of algebraic codimension two cycles \(N^{2}(X)\) has rank three. These rank three lattices have been classified and they are isomorphic to lattices \(M_{\tau,n}\) for a pair \((\tau,n)\) of integers, with \(\tau\in\{0,\dots,4\}\), \(n\geq 2\), with a few cases that actually do not occur ([11]). We denote the specialization of \(X\) by \(X_{\tau,n}\) and the double plane it defines by \(S_{\tau,n}\). The \((\tau,n)\) such that \(X_{\tau,n}\) has an associated \(K3\) surface are given in ([11, Cor. 8.14]). The \(K3\) surface \(S_{\tau,n}\) is a specializiation of \(S_{X}\) with Picard rank of \(S_{\tau,n}\) equal to two, hence this specialization defines a vanishing Brauer class \(\alpha_{van}\in\operatorname{Br}(S_{X})_{2}\). In \(\operatorname{Br}(S_{X})_{2}\) we now have two Brauer classes, \(\alpha_{X}\) and \(\alpha_{van}\). A complete description of the specialization from \(S_{X}\) to \(S_{\tau,n}\) requires taking into account not only the Picard lattice of \(S_{\tau,n}\) and invariants of \(\alpha_{van}\) but also the relation between \(\alpha_{X}\) and \(\alpha_{van}\). Our main application of vanishing Brauer classes, Theorem 4.4.1, gives all this information. The rather long proof consists of explicit computations with lattices. A well-known case that we recover is the case that \(\alpha_{X}=\alpha_{van}\) which was studied by Hassett ([14]). Then \(\tau=1,3\) and \(S_{\tau,n}\)_is_ a \(K3\) surface associated to \(X_{\tau,n}\), and these are the only cases in which \(S_{\tau,n}\) is associated to \(X_{\tau,n}\). The quadratic surface bundle over \(\mathbb{P}^{2}\) defined by \((X_{\tau,n},P)\) then has a rational section and this implies that it is a rational fourfold. Since this fourfold is birational to \(X_{\tau,n}\), also \(X_{\tau,n}\) is rational, thus verifying the conjecture. 
In case \(\tau=0,4\), the vanishing Brauer class corresponds to a point of order two on the ramification curve \(C\) of the double plane \(S_{X}\). The sum \(\beta_{X}:=\alpha_{X}+\alpha_{van}\) corresponds to a theta characteristic on \(C\), which is even if and only if \(n\) is odd. A cubic fourfold \(X_{\tau,n}\) has an associated \(K3\) surface if and only if \(n\) is odd. So it is of some interest to have a concrete description of this associated \(K3\) surface. Let \(S_{\tau,n}\) be the \(K3\) double plane defined by \(X_{\tau,n}\) and let \(C_{\tau,n}\) be the branch curve of the double cover \(S_{\tau,n}\to\mathbb{P}^{2}\). Let \(\beta\) be the even theta characteristic on the branch curve \(C_{\tau,n}\) which is the specialization of \(\beta_{X}\) on \(C\). Then \(\beta\) defines a \(K3\) surface \(S_{\beta}\) which has a natural degree eight polarization and there is a Hodge isometry \(T(S_{\beta})\cong T_{\beta}(S_{\tau,n})\). In Proposition 5.1.4 we show that \(S_{\beta}\) is a \(K3\) surface associated to \(X_{\tau,n}\). We also discuss the example in [1], which has \((\tau,n)=(4,5)\), in our context in the Section 5.2. ## 1. Brauer Groups of \(K3\) surfaces ### Brauer classes and B-fields Let \(S\) be a \(K3\) surface. The _Brauer group_ of \(S\) is (cf. [12, 18.1]) \[\operatorname{Br}(S)=H^{2}(S,\mathcal{O}_{S}^{*})_{tors}\.\] The exponential sequence in this case gives \[0\longrightarrow H^{2}(S,\mathbb{Z})/\mathrm{Pic}(S)\longrightarrow H^{2}(S, \mathcal{O}_{S})\longrightarrow H^{2}(S,\mathcal{O}_{S}^{*})\longrightarrow 0\.\] A two-torsion class \(\alpha\in\operatorname{Br}(S)_{2}\) has a lift \(\tilde{\alpha}\) to the one dimensional complex vector space \(H^{2}(S,\mathcal{O}_{S})\) with \(2\tilde{\alpha}\in H^{2}(S,\mathbb{Z})/\mathrm{Pic}(S)\). Any class \(B=B_{\alpha}\in\frac{1}{2}H^{2}(S,\mathbb{Z})\subset H^{2}(S,\mathbb{Q})\) mapping to \(\tilde{\alpha}\) is called a \(B\)-field representative of \(\alpha\) (see [12]). A \(B\)-field \(B_{\alpha}\) is unique up to \((1/2)\mathrm{Pic}(S)+H^{2}(S,\mathbb{Z})\) : \[B_{\alpha}^{\prime}=B_{\alpha}+\frac{1}{2}p+c,\qquad p\in\mathrm{Pic}(S),\quad c \in H^{2}(S,\mathbb{Z})\,\] (see [12, SS4], see [13, SS6]). Assume now that \(S\) is a general polarized \(K3\) surface. There is the following **Lemma 1.2.1**.: _([13, Lemma 6.1], [14, Lemma 2.1]) Let \(S\) be a \(K3\) surface such that \(\mathrm{Pic}(S)=\mathbb{Z}h\), \(h^{2}=2d>0\). Let \(\alpha\in\operatorname{Br}(S)_{2}\) and \(B_{\alpha}\in\frac{1}{2}H^{2}(S,\mathbb{Z})\subset H^{2}(S,\mathbb{Q})\) a \(B\)-field representing \(\alpha\). The intersection numbers_ 1. \(B_{\alpha}h\mod\mathbb{Z}\)_,_ 2. \(B_{\alpha}^{2}\mod\mathbb{Z}\)_, only in the case that_ \(4B_{\alpha}h+h^{2}\equiv 0\mod 4\)_,_ _are invariants of \(\alpha\)._ ### Brauer Groups and \(K3\) Lattices There is an isomorphism, with \(\rho(S)\) the Picard number of \(S\), \[\operatorname{Br}(S)\cong\big{(}H^{2}(S,\mathbb{Z})/\mathrm{Pic}(S)\big{)} \otimes\mathbb{Q}/\mathbb{Z}\,\cong\,(\mathbb{Q}/\mathbb{Z})^{22-\rho(S)}\.\] The lattice \(H^{2}(S,\mathbb{Z})\) is selfdual since it has a unimodular intersection form. 
A class \(\alpha\in\operatorname{Br}(S)_{2}\) can thus be identified with a homomorphism \[\alpha:\,T(S)\,\longrightarrow\,\mathbb{Z}/2\mathbb{Z}\.\] If \(B_{\alpha}\) represents \(\alpha\), this homomorphism is given by \[\alpha:\,x\,\longmapsto\,x\cdot B_{\alpha}\mod\mathbb{Z}\quad(\text{in }\tfrac{1}{2}\mathbb{Z}/\mathbb{Z})\.\] The class \(\alpha\) defines a sublattice \(T_{\alpha}(S):=\ker(\alpha)\) in \(T(S).\) For a \(K3\) surface with Picard rank one the invariants of \(\alpha\) given in Lemma 1.2.1 are invariants of the lattice \(T_{\alpha}(S)\). In fact, index two sublattices \(T_{\alpha},T_{\beta}\) of \(T(S)\) are isometric if and only if \(\alpha\) and \(\beta\) have the same invariants ([12, Thm.2.3]). ## 2. Vanishing Brauer Classes and Invariants **Definition 2.1.1**.: Let \((S,h)\) be a general polarized \(K3\) surface with \(\mathrm{Pic}(S)=\mathbb{Z}h\), \(h^{2}=2d>0.\) Consider a specialization \(S_{2d}\) of \(S\) where the Picard rank of \(S_{2d}\) is two, so \(\mathrm{Pic}(S_{2d})=\mathbb{Z}h\oplus\mathbb{Z}k\), for some divisor class \(k\) which is primitive in \(H^{2}(S_{2d},\mathbb{Z})\). (By a specialization of \(S\) we mean a family of quasi-polarized \(K3\)'s over a complex disc \(\Delta\) such that the fiber over \(0\in\Delta\) is \(S_{2d}\) and the fiber over some non-zero \(a\in\Delta\) is \(S\)). We may then identify \[H^{2}(S,\mathbb{Z})\,=\,H^{2}(S_{2d},\mathbb{Z})\,\] and we have inclusions \[\mathrm{Pic}(S)\,\subset\,\mathrm{Pic}(S_{2d}),\qquad T(S)\,\supset\,T(S_{2d}).\] Thus there is a restriction map \(\mathrm{Br}(S)\to\mathrm{Br}(S_{2d})\) given by restriction of the homomorphism \(\alpha\) to \(T(S_{2d})\). Since \(\mathrm{Br}(S)_{2}\cong(\mathbb{Z}/2\mathbb{Z})^{21}\) and \(\mathrm{Br}(S_{2d})_{2}\cong(\mathbb{Z}/2\mathbb{Z})^{20}\), there is a unique order two Brauer class that becomes trivial in \(\mathrm{Br}(S_{2d})\), that is, \(\alpha\) generates the kernel of the restriction map. This class \(\alpha_{van}\) is the _vanishing Brauer class_ (in this specialization). **Proposition 2.1.2**.: _A \(B\)-field representative of \(\alpha_{van}\) is provided by \(B=k/2\ (\in\tfrac{1}{2}H^{2}(S,\mathbb{Z}))\)._ Proof.: Since \(k/2\not\in(1/2)\mathrm{Pic}(S)+H^{2}(S,\mathbb{Z})\), it defines a non-trivial class in \(\mathrm{Br}(S)_{2}\). On the other hand, obviously \(k/2\in(1/2)\mathrm{Pic}(S_{2d})\), hence \(k/2\) defines the trivial class in \(\mathrm{Br}(S_{2d})_{2}\). Therefore \(B=k/2\) is a B-field representative of \(\alpha_{van}\). This allows us to read off the invariants of \(\alpha_{van}\) from the intersection matrix of \(\mathrm{Pic}(S_{2d})=\mathbb{Z}h\oplus\mathbb{Z}k\) which can be written as \[\begin{pmatrix}h^{2}&hk\\ hk&k^{2}\end{pmatrix}=\begin{pmatrix}2d&b\\ b&2c\end{pmatrix}\quad\text{for some }\ b,c\in\mathbb{Z}\.\] Using the B-field representative \(k/2\) of \(\alpha_{van}\) given in Proposition 2.1.2 one finds the corollary below. **Corollary 2.1.3**.: _The invariants of \(\alpha_{van}\in\operatorname{Br}(S)\) are_ \[B_{van}h\,\equiv\,(1/2)b\mod\mathbb{Z},\quad B_{van}^{2}\,\equiv\,(1/2)c\mod \mathbb{Z},\] _where \(B_{van}\) is any B-field representing \(\alpha_{van}\). In particular, \(2B_{van}h\equiv\operatorname{disc}(\operatorname{Pic}(S_{2d}))\mod 2\)._ ### Vanishing Brauer Classes of Double Planes We now consider the case where we specialize a double plane \(S\), that is a double cover of the plane branched over a smooth sextic curve \(C_{6}\) with Picard rank one, to a \(K3\) surface \(S_{2}\). 
In this case \[\operatorname{Pic}(S_{2})=\left(\mathbb{Z}h\oplus\mathbb{Z}k,\begin{pmatrix}2& b\\ b&2c\end{pmatrix}\right). \tag{1}\] The change of basis which fixes \(h\) and maps \(k\mapsto k-mh\) where \(b=2m,2m+1\), shows that we may assume \(b=0,1\). The two-torsion Brauer classes on a double plane with Picard rank one correspond to the points of order two and the theta characteristics on the genus \(10\) branch curve \(C_{6}\). We find the following characterization of the class \(\alpha_{van}\) in terms of these line bundles on \(C_{6}\). **Proposition 2.2.1**.: _Let \((S,h)\) be a general double plane specializing to \(S_{2}\) with \(\operatorname{Pic}(S_{2})=\mathbb{Z}h\oplus\mathbb{Z}k\) and intersection matrix of the form (1). Then,_ 1. _If_ \(B_{\alpha_{van}}h\equiv 0\) _(_\(b\) _is even),_ \(\alpha_{van}\) _corresponds to a point of order two_ \(p\in Jac(C_{6})\)_._ 2. _If_ \(B_{\alpha_{van}}h\equiv\frac{1}{2}\) _and_ \(B_{\alpha_{van}}^{2}\equiv\frac{1}{2}\) _(_\(b\) _and_ \(c\) _odd),_ \(\alpha_{van}\) _corresponds to an odd theta characteristic on_ \(C_{6}\)_._ 3. _If_ \(B_{\alpha_{van}}h\equiv\frac{1}{2}\) _and_ \(B_{\alpha_{van}}^{2}\equiv 0\) _(_\(b\) _odd and_ \(c\) _even),_ \(\alpha_{van}\) _corresponds to an even theta characteristic on_ \(C_{6}\)_._ Proof.: This follows from Corollary 2.1.3 and [11], [10]. Notice that [10, Theorem 1.1] shows that the vanishing Brauer class is obtained from the restriction of a line bundle on \(S\) to the ramification curve \(C_{6}\subset\mathbb{P}^{2}\). ## 3. Cubic fourfolds containing a plane ### Cubics with a plane and \(K3\) double planes Let \(X\) be a smooth cubic hypersurface in \(\mathbb{P}^{5}(\mathbb{C})\) containing a plane \(P\). Consider the projection from the plane \(P\) onto a plane in \(\mathbb{P}^{5}\) disjoint from \(P\). Blowing up \(X\) along \(P\), one obtains a quadric surface bundle \(\pi\ :\ Y\longrightarrow\mathbb{P}^{2}\). The rulings of the quadrics define a double cover \(S=S_{X}\) of \(\mathbb{P}^{2}\) branched over a degree six curve \(C_{6}\), the discriminant sextic. If \(X\) does not contain a second plane intersecting \(P\), the curve \(C_{6}\) is smooth and \(S\) is a \(K3\) surface (see [11, SS1 Lemme 2]). The rulings of the quadrics of the bundle also define a \(\mathbb{P}^{1}\)-bundle \(F\) over \(S\) which gives a Brauer class \(\alpha_{X}\in\operatorname{Br}(S)_{2}\), also known as the Clifford class. This class \(\alpha_{X}\) corresponds to an odd theta characteristic \(L\) on \(C_{6}\) with \(h^{0}(L)=1\) (see [10, SS2]). Conversely, a smooth plane sextic with such an odd theta characteristic defines a cubic fourfold with a plane which is obtained as in loc. cit. and also from the minimal resolution of the push-forward of \(L\) to \(\mathbb{P}^{2}\) as in [1]. ### Lattices The cohomology group \(H^{4}(X,\mathbb{Z})\) with the intersection form is a rank \(23\) odd, unimodular, lattice of signature \((21+,2-)\). It is also a Hodge structure with Hodge numbers \(h^{3,1}=1\), \(h^{2,2}=21\). Let \(h_{3}\in H^{2}(X,\mathbb{Z})\) be the class of a hyperplane section and let \(h_{3}^{2}\in H^{4}(X,\mathbb{Z})\) be its square. Denote by \(N^{2}(X)\subset H^{4}(X,\mathbb{Z})\) the odd, positive definite, lattice of classes of codimension two algebraic cycles. The transcendental lattice of \(X\) is the even lattice defined as \[T(X)\,:=\,N^{2}(X)^{\perp}\,\subset H^{4}(X,\mathbb{Z})\.\] The following proposition follows from [10].
**Proposition 3.2.1**.: _Let \(X\) be a smooth cubic fourfold with a plane and let \(S\) be the \(K3\) double plane defined by \(X\). Then there is a Hodge isometry:_ \[T(X)\,\cong\,T_{\alpha_{X}}(S)(-1)\] _with \(T_{\alpha_{X}}(S):=\ker\alpha_{X}:T(S)\to\mathbb{Z}/2\mathbb{Z}\) (it is a sublattice of index two in \(T(S)\) if \(\alpha_{X}\) is non-trivial). It follows that_ \[rank(N^{2}(X))\,=\,rank(Pic(S))\,+\,1.\] The general cubic fourfold \(X\) with a plane \(P\) has \[N^{2}(X)\,=\,\left(\mathbb{Z}h_{3}^{2}\oplus\mathbb{Z}P,\,K_{8}:=\,\begin{pmatrix} 3&1\\ 1&3\end{pmatrix}\right)\.\] ## 4. Noether-Lefschetz divisors in \(\mathcal{C}_{8}\) and double planes ### The divisors \(\mathcal{C}_{d}\) in \(\mathcal{C}\) Hassett determined all Noether-Lefschetz divisors in the moduli space \(\mathcal{C}\) of cubic fourfolds. These (irreducible) divisors are denoted by \(\mathcal{C}_{d}\) and \(d>6\), \(d\equiv 0,2\mod 6\). The divisor \(\mathcal{C}_{d}\) parametrizes fourfolds \(X\) with a certain rank two sublattice, containing \(h_{3}^{2}\), denoted by \(K_{d}\subset N^{2}(X)\) where \(d=\operatorname{disc}(K_{d})\). Thus \(\mathcal{C}_{8}\) parametrizes the cubic fourfolds with a plane since then \(K_{8}\subset N^{2}(X)\). ### The divisors \(\mathcal{C}_{M}\) in \(\mathcal{C}_{8}\) Yang and Yu give a classification of all Noether-Lefschetz divisors in \(\mathcal{C}_{8}\), that is the divisors that parametrize cubics \(X\) with a plane and \(\operatorname{rank}\nolimits N^{2}(X)>2\). They correspond to positive definite saturated sublattices of rank three \(M\subset H^{4}(X,\mathbb{Z})\) with \(K_{8}\subset M\), up to isometry. Denote by \(\mathcal{C}_{M}\subset\mathcal{C}_{8}\) the divisor of smooth cubic fourfolds \(X\) with such an isometry class of embeddings \(M\hookrightarrow H^{4}(X,\mathbb{Z})\). **Proposition 4.2.1**.: _([14, Corollary 8.14]) Consider the pairs of integers \((\tau,n)\) such that \(\tau=0,...,4,\,n\geq 2\) and \((\tau,n)\neq(3,2),(4,2),(4,3)\). Let \(M_{\tau,n}\) be the rank three positive definite lattice with intersection matrix given by_ \[A_{\tau,n}=\begin{pmatrix}3&1&0\\ 1&3&\tau\\ 0&\tau&2n\end{pmatrix}\,\qquad M_{\tau,n}\,:=\,(\,\mathbb{Z}^{3},\,A_{\tau,n}\,)\.\] 1. _If_ \(M\) _is a positive definite rank_ \(3\) _lattice such that_ \(K_{8}\subset M\) _and such that_ \(M\) _has a saturated embedding in_ \(H^{4}(X,\mathbb{Z})\) _then_ \(M\cong M_{\tau,n}\) _with_ \((\tau,n)\) _as above._ 2. _Up to isometry, there is a unique embedding_ \(M_{\tau,n}\hookrightarrow L\cong H^{4}(X,\mathbb{Z})\) _such that the first basis vector maps to_ \(h_{3}^{2}\)_._ 3. _The divisor_ \(\mathcal{C}_{M_{\tau,n}}\) _in_ \(\mathcal{C}_{8}\) _is non-empty and irreducible. Moreover_ \(\mathcal{C}_{M_{\tau,n}}=\mathcal{C}_{M_{\tau^{\prime},n^{\prime}}}\) _if and only if_ \((\tau,n)=(\tau^{\prime},n^{\prime})\)_._ ### The group \((\operatorname{Pic}\nolimits(C)/\langle K_{C}\rangle)_{2}\) Let \(C\) be a smooth curve of genus \(g\). Recall that \(x\in\operatorname{Pic}\nolimits(C)\) is a two-torsion point if \(2x=0\) and it is a theta characteristic if \(2x=K_{C}\), the canonical class of \(C\). A theta characteristic \(L\) is called even/odd if \(h^{0}(L)\) is even/odd, there are \(2^{g-1}(2^{g}+1)\) even and \(2^{g-1}(2^{g}-1)\) odd theta characteristics. The parity of a theta characteristic does not change under a deformation of \(C\). 
The two-torsion points are a group, denoted by \(J(C)_{2}\,(=\operatorname{Pic}\nolimits(C)_{2})\); the sum \(p+L\) of a point of order two \(p\) with a theta characteristic \(L\) is again a theta characteristic, but the parity may change; the sum \(L+M\) of two theta characteristics can be written as \(K_{C}+p\) for a unique two-torsion point \(p\). The union of the sets of two torsion points and theta characteristics thus has a group structure, this group can be identified with the two-torsion group \((\operatorname{Pic}\nolimits(C)/\langle K_{C}\rangle)_{2}\), which has order \(2^{2g+1}\). For any double plane \(S\) with smooth branch curve \(C\), there is a surjective map \(\mathcal{A}:(\operatorname{Pic}\nolimits(C)/\langle K_{C}\rangle)_{2}\to \operatorname{Br}\nolimits(S)_{2}\) which is thus an isomorphism if \(\operatorname{rank}\nolimits(\operatorname{Pic}\nolimits(S))=1\) ([11, Thm 1.1]). The kernel of this map is given by restrictions of certain line bundles on \(S\) to \(C\). Let \(\mathcal{U}\subset\mathbb{P}(H^{0}(\mathbb{P}^{2},\mathcal{O}(6))\cong\mathbb{P }^{27}\) be the open subset whose points define smooth sextic curves, such a curve has genus \(10\). There is a finite unramified covering \(\tilde{\mathcal{U}}\to\mathcal{U}\) of degree \(2^{21}\) whose fiber over \(C\) is the group \((\operatorname{Pic}\nolimits(C)/\langle K_{C}\rangle)_{2}\). It follows from [1] that this covering has four connected components defined by the subsets: \(\{0\}\), \(J(C)_{2}\) and the sets of even theta and odd theta characteristics respectively. In particular, the monodromy group of this covering, which is \(Sp(20,\mathbb{F}_{2})\), acts transitively on the odd theta characteristics. **Lemma 4.3.1**.: _The stabilizer of an odd theta characteristic \(\alpha\) is an orthogonal subgroup \(O^{-}(20,\mathbb{F}_{2})\subset Sp(20,\mathbb{F}_{2})\). The orbits of this stabilizer on a fiber are well known:_ 1. \(\{0\}\)_,_ 2. \(\{p\in J(C)_{2}-\{0\}:\,p+\alpha\,\text{is odd}\}\)_,_ 3. \(\{p\in J(C)_{2}-\{0\}:\,p+\alpha\,\text{is even}\}\)_,_ 4. \(\{\alpha\}\)_,_ 5. \(\{\)_odd theta characteristics distinct from_ \(\alpha\}\)_,_ 6. \(\{\)_the even theta characteristics_\(\}\)_._ Proof.: This can be deduced for example from the results of Igusa [10, V.6]. The group \(J(C)_{2}\), with the Weil pairing, can be identified with his \(P\) and the bilinear alternating map \(e:P\times P\to\{\pm 1\}\). For a theta characteristic \(L\) on \(C\) define the quadratic form \(q_{L}:J(C_{2})\to\mathbb{F}_{2}\) by \(q_{L}(p):=h^{0}(L+p)-h^{0}(L)\) mod 2 ([11, Theorem 1.13]). The theta characteristics are then identified with the set \(T\) of maps \(c:P\to\mu\) such that \(c(r+s)=c(r)c(s)e(r,s)\) where \(c(r)=(-1)^{q_{L}(r)}\). Then [10, Corollary p. 213] states that the symplectic group defined by \(e\) on \(P\) is doubly transitive on both the even and on the odd theta characteristics. Thus the stabilizer of an odd theta characteristic is transitive on the set of the remaining odd theta characteristics. From [10, Proposition 2] one deduces that the stabilizer of an odd theta characteristic is transitive on the even theta characteristics. Since the second and third orbits (in \(J(C)_{2}\)) are in bijection with the fifth and sixth orbits (in \(T\)) respectively, the Lemma follows. ### The specializations of \(X\) and \(S\) Let \(X\) be a cubic fourfold with a plane with \(N^{2}(X)=K_{8}\), which has rank two. 
Let \(S\) be the \(K3\) double plane defined by \((X,P)\) with branch locus \(C\) and Brauer class \(\alpha_{X}\) which we identify with an odd theta characteristic on \(C\). We specialize \(X\) to \(X_{\tau,n}\) when \(N^{2}(X_{\tau,n})=M_{\tau,n}\), which has rank three. Let \(S_{2}=S_{\tau,n}\) be the \(K3\) double plane defined by \(X_{\tau,n}\), it has Picard rank two. The specialization of cubic fourfolds defines a specialization of \(K3\) surface \(S\) to \(S_{\tau,n}\). Hence it defines a vanishing Brauer class \(\alpha_{van}\in\operatorname{Br}(S)\). This vanishing Brauer class is non-trivial and thus lies in exactly one of the orbits \((2)\dots(6)\) of the stabilizer of \(\alpha_{X}\). In the following theorem we determine the Picard lattice of the specialization \(S_{\tau,n}=S_{2}\) of \(S\) and we also determine the orbit of \(\alpha_{van}\). **Theorem 4.4.1**.: _Let \(X\) be a general cubic fourfold with a plane, so with \(\text{rank}\,N^{2}(X)=2\). Let \(S\) be the \(K3\) double plane defined by \(X\), let \(C\) be the branch curve and let \(\alpha_{X}\in\operatorname{Br}(S)_{2}\) be the Clifford class. Let \(X_{\tau,n}\) be a specialization of \(X\) such that_ \[N^{2}(X)\,\cong\,M_{\tau,n}\.\] _Let \(S_{\tau,n}\) be the \(K3\) double plane defined by \(X_{\tau,n}\). Then the specialization of \(K3\) double planes from \(S\) to \(S_{2}=S_{\tau,n}\) has the following properties._ 1. \(\tau=0\) _: The Picard lattice of_ \(S_{0,n}\) _is_ \[\operatorname{Pic}(S_{0,n})\,\cong\,\begin{pmatrix}2&0\\ 0&-2n\end{pmatrix}\.\] _The Brauer class_ \(\alpha_{van}\) _corresponds to a point of order two_ \(p\in Jac(C_{6})\)_. Moreover, the theta characteristic_ \(p+\alpha_{X}\) _is even/odd exactly when_ \(n\) _is odd/even,_ 2. \(\tau=1\) _: The Picard lattice of_ \(S_{1,n}\) _is_ \[\operatorname{Pic}(S_{1,n})\,\cong\,\begin{pmatrix}2&1\\ 1&2-8n\end{pmatrix}\.\] _Moreover_ \(\alpha_{X}=\alpha_{van}\)_, so these two classes coincide._ 3. \(\tau=2\) _: The Picard lattice of_ \(S_{2,n}\) _is_ \[\operatorname{Pic}(S_{2,n})\,\cong\,\begin{pmatrix}2&1\\ 1&2-2n\end{pmatrix}\.\] _Moreover,_ \(\alpha_{X}\neq\alpha_{van}\) _and_ \(\alpha_{van}\) _corresponds to a theta characteristic which is even/odd when_ \(n\) _is odd/even._ 4. \(\tau=3\)_: The Picard lattice of_ \(S_{3,n}\) _is_ \[\operatorname{Pic}(S_{3,n})\,\cong\,\begin{pmatrix}2&1\\ 1&14-8n\end{pmatrix}\.\] _Moreover,_ \(\alpha_{X}=\alpha_{van}\)_, so the two classes coincide._ 5. \(\tau=4\) _:The Picard lattice of_ \(S_{4,n}\) _is_ \[\operatorname{Pic}(S_{4,n})\,\cong\,\begin{pmatrix}2&0\\ 0&6-2n\end{pmatrix}\.\] _The Brauer class_ \(\alpha_{van}\) _corresponds to a point of order two_ \(p\in Jac(C_{6})\)_. The theta characteristic_ \(p+\alpha_{X}\) _is even/odd when_ \(n\) _is odd/even._ ### Remark Consider a \(K3\) double plane with a Picard lattice \(diag(2,2c)\) for some \(c<0\). Choose an odd theta characteristic with \(h^{0}=1\) on the branch curve and let \(N^{2}\) be the rank three lattice of algebraic codimension two cycles on the associated cubic fourfold. Then the theorem above shows that \(N^{2}\) is isometric to either \(M_{0,c}\) or \(M_{4,3-c}\). One needs information on the orbit of the vanishing Brauer class of the specialization of a general double plane with Picard rank one to the \(K3\) under consideration to determine which of the two is the correct one. ### The proof of the main result The remainder of this section is devoted to the proof of Theorem 4.4.1. 
First of all, for an \(\alpha\in\operatorname{Br}(S)_{2}\) corresponding to an odd theta characteristic, we work out an explicit inclusion \(T_{\alpha}(S)\subset\Lambda\) where \(\Lambda\) is a lattice isometric to \(H^{2}(S,\mathbb{Z})\) in 4.7. Let \(X\) be the cubic fourfold with a plane such that \(\alpha_{X}=\alpha\). There is an isometry \(T(X)(-1)\cong T_{\alpha}(S)\). Therefore \(K_{8}\oplus T_{\alpha}(S)(-1)\) has an overlattice \(L\cong H^{4}(X,\mathbb{Z})\), which is in fact unique. We determine \(L\) explicitly in 4.8. Next, for \(\tau,n\) as in Proposition 4.2.1, we choose an explicit primitive embedding \(M_{\tau,n}\subset L\), compatible with \(K_{8}\hookrightarrow L\). Then the perpendicular \(M_{\tau,n}^{\perp}\) in \(L\) is isometric to \(T(X_{\tau,n})=T_{\alpha}(S_{\tau,n})(-1)\). Since this lattice is contained in \(T(X)=T_{\alpha}(S)(-1)\subset\Lambda(-1)\), we have found the sublattice \(T_{\alpha}(S_{\tau,n})\subset T_{\alpha}(S)\subset\Lambda\). In the diagram below, the lattices in the first row are in \(\Lambda\cong H^{2}(S,\mathbb{Z})\), those in the second row are in \(L\cong H^{4}(X,\mathbb{Z})\); the vertical \(\cong\) signs are the isometries just mentioned. \[\begin{array}{cccc}T(S)\,\supset\,T_{\alpha}(S)&\supset&T_{\alpha}(S_{\tau,n})\,\subseteq\,T(S_{\tau,n})&\subset\,\Lambda\\ \cong&&\cong&\\ T(X)(-1)&\supset&M_{\tau,n}(-1)^{\perp}\,=\,T(X_{\tau,n})(-1)&\subset\,L\end{array}\] The perpendicular of \(T_{\alpha}(S_{\tau,n})\), and also of \(T(S_{\tau,n})\), in \(\Lambda\) is then \(\operatorname{Pic}(S_{\tau,n})\). Finally we determine the vanishing Brauer class and the orbit it lies in. ### The lattice \(T_{\alpha}(S)\subset\Lambda\) Let \((S,h)\) be a \(K3\) surface of degree \(2\) with \(\operatorname{Pic}(S)=\mathbb{Z}h\) and let \(\alpha\in\operatorname{Br}(S)_{2}\) be a Brauer class defined by an odd theta characteristic on \(C_{6}\), the branch curve of \(\phi_{h}:S\to\mathbb{P}^{2}\). There is an isomorphism \[H^{2}(S,\mathbb{Z})\ \stackrel{{\cong}}{{\longrightarrow}}\, \Lambda\,:=\,U^{3}\oplus E_{8}(-1)^{2}\,=\,U\oplus U\oplus\Lambda^{\prime}\] where \(U=\bigl{(}\mathbb{Z}^{2},\left(\begin{smallmatrix}0&1\\ 1&0\end{smallmatrix}\right)\bigr{)}\). Under the isomorphism, we may assume that \[h\,=\,\bigl{(}\,\left(\begin{smallmatrix}1\\ 1\end{smallmatrix}\right),\left(\begin{smallmatrix}0\\ 0\end{smallmatrix}\right),0\bigr{)},\qquad B\,=\,B_{\alpha}\,=\,(1/2)\bigl{(} \,\left(\begin{smallmatrix}0\\ 1\end{smallmatrix}\right),\left(\begin{smallmatrix}1\\ 1\end{smallmatrix}\right),0\bigr{)}\,\] here we use that any two odd theta characteristics are in the same orbit of the monodromy group and that for the B-field representative \(B\) of an odd theta characteristic one has \(Bh=B^{2}=1/2\).
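These normalizations are easy to check numerically; the following minimal sketch (an illustration, not part of the paper) computes the relevant intersection numbers in the two hyperbolic summands \(U\oplus U\), which is where both classes are supported.

```python
# Sanity check (not from the paper): with the stated coordinates in U + U
# (the Lambda'-component is zero here), one recovers h^2 = 2, B.h = 1/2 and B^2 = 1/2.
from fractions import Fraction

def pair_U(x, y):
    # intersection form of the hyperbolic plane U on coordinate pairs
    return x[0] * y[1] + x[1] * y[0]

def pair(v, w):
    # v, w are pairs of U-coordinates: ((a1, a2), (b1, b2))
    return pair_U(v[0], w[0]) + pair_U(v[1], w[1])

h = ((Fraction(1), Fraction(1)), (Fraction(0), Fraction(0)))
B = ((Fraction(0), Fraction(1, 2)), (Fraction(1, 2), Fraction(1, 2)))  # (1/2)((0,1),(1,1))

print(pair(h, h))   # 2
print(pair(B, h))   # 1/2
print(pair(B, B))   # 1/2
```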
The transcendental lattice of \(S\) is then \[T(S)\,=\,h^{\perp}\,=\,\bigl{\langle}\,\bigl{(}\begin{smallmatrix}1\\ -1\end{smallmatrix}\bigr{)}\,\bigr{\rangle}\,\oplus\,U\,\oplus\,\Lambda^{ \prime}\.\] The Brauer class \(\alpha\) corresponds to the homomorphism, again denoted by \(\alpha\): \[\alpha:\,T(S)\,\longrightarrow\,\mathbb{Z}/2\mathbb{Z},\qquad t\,\longmapsto (t,2B)\mod 2.\] Since the image of \(((p,-p),(q,r),v)\in T(S)\) is \(p+q+r\mod 2\), the index two (non-primitive) sublattice \(T_{\alpha}(S)=\ker(\alpha)\) of \(T(S)\) is \[T_{\alpha}\,:=\,T_{\alpha}(S)\,:=\,\ker(\alpha)\,=\,\langle\gamma_{1},\; \gamma_{2},\;\gamma_{3}\,\rangle\oplus\Lambda^{\prime}\,\] where \[\gamma_{1}\,:=\,\bigl{(}\begin{smallmatrix}1\\ -1\end{smallmatrix}\bigr{)},\left(\begin{smallmatrix}0\\ 1\end{smallmatrix}\right),0\bigr{)},\qquad\gamma_{2}\,:=\,\bigl{(}\,\left( \begin{smallmatrix}0\\ 0\end{smallmatrix}\right),\left(\begin{smallmatrix}1\\ 1\end{smallmatrix}\right),0\bigr{)},\qquad\gamma_{3}\,:=\,\bigl{(}\,\left( \begin{smallmatrix}0\\ 0\end{smallmatrix}\right),\left(\begin{smallmatrix}0\\ 2\end{smallmatrix}\right),0\bigr{)}\.\] Hence \(T_{\alpha}\) is the lattice \[T_{\alpha}(S)\,=\,\left(\oplus_{i=1}^{3}\mathbb{Z}\gamma_{i},\,\begin{pmatrix}-2&1 &0\\ 1&2&2\\ 0&2&0\end{pmatrix}\right)\,\oplus\Lambda^{\prime}\.\] To glue \(T_{\alpha}(S)\) to \(K_{8}\) we need to know its discriminant group. Let \[\gamma_{\alpha}^{*}\,:=\,\tfrac{1}{8}(2\gamma_{1}+4\gamma_{2}-5\gamma_{3})\,= \,\tfrac{1}{8}\big{(}\left(\begin{smallmatrix}2\\ -2\end{smallmatrix}\right),\left(\begin{smallmatrix}4\\ -4\end{smallmatrix}\right),0\big{)}\.\] Since \(\det(T_{\alpha})=8\), the discriminant group of \(T_{\alpha}\) has order eight. Notice that \((\gamma_{\alpha}^{*},\sum a_{i}\gamma_{i})=a_{3}\) and thus \(\gamma_{\alpha}^{*}\in T_{\alpha}^{*}\), the dual lattice. Since \(\gamma_{\alpha}^{*}\) has order eight in the discriminant group \(T_{\alpha}^{*}/T_{\alpha}\), we conclude that it is a generator. So the discriminant group is cyclic of order \(8\) and \((\gamma_{\alpha}^{*},\gamma_{\alpha}^{*})=-5/8\). ### The overlattice \(L\) of \(K_{8}\oplus T_{\alpha}(S)(-1)\) A general cubic fourfold \(X\) with a plane has \(N^{2}(X)\cong K_{8}\) where \[K_{8}\,:=\,\left(\mathbb{Z}h_{3}^{2}\oplus\mathbb{Z}P,\,\begin{pmatrix}3&1\\ 1&3\end{pmatrix}\right),\qquad\text{let}\quad\gamma_{8}^{*}\,=\,(1/8)(3h_{3} ^{2}-P)\.\] Notice that the intersection form on \(K_{8}\) has determinant \(8\) and, since \((\gamma_{8}^{*},ah_{3}+bP)=b\), the discriminant group is generated by \(\gamma_{8}^{*}\). As \((\gamma_{8}^{*},\gamma_{8}^{*})=3/8\) and \((\gamma_{\alpha}^{*},\gamma_{\alpha}^{*})=5/8\) in \(T_{\alpha}(-1)_{\mathbb{Q}}\) we can glue the lattices \(K_{8}\) and \(T_{\alpha}(-1)\) by adding \(\gamma_{8}^{*}+\gamma_{\alpha}^{*}\) to their direct sum. Let \[L\,:=\,\mathbb{Z}(\gamma_{8}^{*}+\gamma_{\alpha}^{*})\,+\,\left(K_{8}\oplus T _{\alpha}(S)(-1)\right)\,,\] it is a unimodular overlattice of \(K_{8}\oplus T_{\alpha}(S)(-1)\) and \((\gamma_{8}^{*}+\gamma_{\alpha}^{*})^{2}=3/8+5/8=1\). The lattice \(L\) is well-known to be unique, with unique sublattice \(K_{8}\), all up to isometry. The lattice \(L\) is odd (because \(K_{8}\) is an odd sublattice) and has signature \((2+19,2)\). As it is unimodular, it must be isometric to \(<1>^{21}\oplus<-1>^{2}\cong H^{4}(X,\mathbb{Z})\). 
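The discriminant data used in this gluing can be double-checked with a short computer-algebra sketch (SymPy here; the snippet and its variable names are only an illustration, not part of the paper).

```python
# Checks: the Gram matrix of T_alpha on (gamma_1, gamma_2, gamma_3) has determinant 8,
# gamma_alpha^* pairs integrally with the gamma_i and has self-intersection -5/8, and
# gamma_8^* has self-intersection 3/8, so (gamma_8^* + gamma_alpha^*)^2 = 1 in L.
from sympy import Matrix

G = Matrix([[-2, 1, 0], [1, 2, 2], [0, 2, 0]])   # Gram matrix of <gamma_1, gamma_2, gamma_3>
print(G.det())                                   # 8

g_star = Matrix([2, 4, -5]) / 8                  # gamma_alpha^* in the gamma-basis
print((G * g_star).T)                            # [0, 0, 1]: pairing with gamma_1, gamma_2, gamma_3
print((g_star.T * G * g_star)[0])                # -5/8

K8 = Matrix([[3, 1], [1, 3]])                    # Gram matrix of K_8
g8_star = Matrix([3, -1]) / 8                    # gamma_8^* = (1/8)(3 h_3^2 - P)
print((g8_star.T * K8 * g8_star)[0])             # 3/8
```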
For our computations it is convenient to write \(L\) as \[L\,=\,\mathbb{Z}h_{3}\oplus\,\mathbb{Z}P\oplus\,\mathbb{Z}(\gamma_{8}^{*}+ \gamma_{\alpha}^{*})\oplus\,\mathbb{Z}\gamma_{1}\oplus\,\mathbb{Z}(\gamma_{2 }-\gamma_{3})\oplus\,\Lambda^{\prime}(-1)\.\] Since we work in \(L\), the sublattice \(T(X)=T_{\alpha}(S)(-1)\) which has opposite intersection form. The Gram matrix of the first five summands is \[M\,:=\,\begin{pmatrix}3&1&1&0&0\\ 1&3&0&0&0\\ 1&0&1&0&1\\ 0&0&0&2&-1\\ 0&0&1&-1&2\end{pmatrix},\qquad\text{let}\quad S\,:=\,\begin{pmatrix}1&0&0&0&2 \\ -1&0&0&0&-1\\ 3&-1&-1&1&-6\\ -1&1&0&0&2\\ -2&1&1&0&4\end{pmatrix}\.\] Then we have \[{}^{t}SMS=\text{diag}(1,1,1,1,-1),\quad\text{hence}\quad L\,\cong\,<1>^{\oplus 4 }\,\oplus\,<-1>\,\oplus\Lambda^{\prime}(-1)\.\] Finally we observe that \(h_{3}^{2}=3\) and \((h_{3}^{2})^{\perp}\) is an even lattice, which is all that is required of the class that corresponds to the square of the hyperplane section. The embedding \(K_{8}\hookrightarrow H^{2}(X,\mathbb{Z})\) is unique up to isometry and thus we may assume that the second generator corresponds to the class of a plane \(P\) in \(X\). ### An embedding \(M_{\tau,n}\hookrightarrow L\) For each \(\tau=0,1,2,3,4\) we explicitly find an \(m_{\tau,n}\in L\) such that \[h_{3}^{2}m_{\tau,n}\,=\,0,\quad Pm_{\tau,n}\,=\tau,\quad m_{\tau,n}^{2}\,=\,2n,\] and such that, with the lattice \(M_{\tau,n}\) defined in Proposition 4.2.1, \[M_{\tau,n}\,\stackrel{{\cong}}{{\longrightarrow}}\,\langle\,h_ {3},P,m_{\tau,n}\,\rangle\subset\,(\mathbb{Z}^{5},M)\,\oplus\,\Lambda^{\prime} (-1)\,=\,L\] is a saturated embedding. We use the basis of \(\mathbb{Z}^{5}\) as above so that \[(a_{1},\ldots,a_{5})\,=\,a_{1}h_{3}+a_{2}P+a_{3}(\gamma_{8}^{*}+\gamma_{ \alpha}^{*})+a_{4}\gamma_{1}+a_{5}(\gamma_{2}-\gamma_{3}).\] In the list below \(v_{\tau,n}\in\Lambda^{\prime}(-1)\) is such that \(m_{\tau,n}^{2}=2n\), such a \(v_{\tau,n}\) exists for any \(n,\tau\) as one can choose a suitable \(v_{\tau,n}\) in the summand \(U\subset\Lambda^{\prime}\). \[m_{0,n} = (0,0,0,1,0)+v_{0,n}\qquad\qquad 2n\,\,\,=\,\,\,\,m_{0,n}^{2}\,\,\, =\,\,\,\,2+v_{0,n}^{2}\,\,,\] \[m_{1,n} = (1,0,-3,1,1)+v_{1,n}\qquad\qquad 2n\,\,\,=\,\,\,\,m_{1,n}^{2}\,\,\, =\,\,\,\,2+v_{1,n}^{2}\,\,,\] \[m_{2,n} = (2,0,-6,1,3)+v_{2,n}\qquad\qquad 2n\,\,\,=\,\,\,\,m_{2,n}^{2}\,\,\, =\,\,\,\,2+v_{2,n}^{2}\,\,,\] \[m_{3,n} = (0,1,-1,1,1)+v_{3,n}\qquad\qquad 2n\,\,\,=\,\,\,\,m_{3,n}^{2}\,\,\, =\,\,\,\,4+v_{3,n}^{2}\,\,,\] \[m_{4,n} = (1,1,-4,1,2)+v_{4,n}\qquad\qquad 2n\,\,\,=\,\,\,\,m_{4,n}^{2}\,\,\, =\,\,\,\,6+v_{4,n}^{2}\,\,.\] Since \(h_{3}=(1,0,0,0,0)\), \(P=(0,1,0,0,0)\) and the coefficient of \(\gamma_{1}\) in each of these \(m_{\tau,n}\) is equal to one, the sublattice generated by \(h_{3},P,m_{\tau,n}\) is primitive. 
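The stated properties of the vectors \(m_{\tau,n}\) are straightforward to verify against the Gram matrix \(M\); the sketch below (again an illustration assuming SymPy, not part of the paper) checks that each displayed coordinate vector is orthogonal to \(h_{3}\), pairs with \(P\) to \(\tau\), and has square \(2,2,2,4,6\) for \(\tau=0,\dots,4\), and that \(\det M=-1\) as required for \(L\) to be unimodular.

```python
# Verification sketch for the embeddings m_{tau,n} in the basis
# (h_3, P, gamma_8^* + gamma_alpha^*, gamma_1, gamma_2 - gamma_3) with Gram matrix M.
from sympy import Matrix

M = Matrix([[3, 1, 1, 0, 0],
            [1, 3, 0, 0, 0],
            [1, 0, 1, 0, 1],
            [0, 0, 0, 2, -1],
            [0, 0, 1, -1, 2]])
print(M.det())                                   # -1

h3 = Matrix([1, 0, 0, 0, 0])
P = Matrix([0, 1, 0, 0, 0])
m = {0: [0, 0, 0, 1, 0],
     1: [1, 0, -3, 1, 1],
     2: [2, 0, -6, 1, 3],
     3: [0, 1, -1, 1, 1],
     4: [1, 1, -4, 1, 2]}

for tau, coords in m.items():
    v = Matrix(coords)
    # expected output per line: tau, 0, tau, and 2, 2, 2, 4, 6 respectively;
    # the Lambda'-part v_{tau,n} then accounts for the remaining 2n minus that square.
    print(tau, (h3.T * M * v)[0], (P.T * M * v)[0], (v.T * M * v)[0])
```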
Next we determine a primitive vector \(t_{\tau,n}\in L\) (unique up to sign) such that \[A_{\tau,n}\cap K_{8}^{\perp}\,=\,A_{\tau,n}\cap\langle h_{3},P\rangle^{\perp} \,=\,\mathbb{Z}t_{\tau,n}\,\,.\] We found: \[\begin{array}{rcl}t_{0,n}&=&0\cdot h_{3}+0\cdot P+1\cdot m_{0,n}\,,\\ t_{1,n}&=&h_{3}-3P+8\,m_{1,n}\,,\\ t_{2,n}&=&h_{3}-3P+4\,m_{2,n}\,,\\ t_{3,n}&=&3h_{3}-9P+8\,m_{3,n}\,,\\ t_{4,n}&=&h_{3}-3P+2\,m_{4,n}\,.\end{array}\] Substituting the coordinates of the \(m_{\tau,n}\) expresses \(t_{\tau,n}\) in the basis of \(L\) used above, \(t_{\tau,n}=a_{1}h_{3}+a_{2}P+a_{3}(\gamma_{8}^{*}+\gamma_{\alpha}^{*})+a_{4}\gamma_{1}+a_{5}(\gamma_{2}-\gamma_{3})\) plus a multiple of \(v_{\tau,n}\); since \(t_{\tau,n}\in K_{8}^{\perp}\), the \(K_{8}\)-component \(a_{1}h_{3}+a_{2}P+a_{3}\gamma_{8}^{*}\) vanishes. Next we use the definition of \(\gamma_{\alpha}^{*}\), to avoid unnecessary fractions we let \(a_{3}=8a_{3}^{\prime}\): \[t_{\tau,n}\,=\,(2a_{3}^{\prime}+a_{4})\gamma_{1}\,+\,(4a_{3}^{\prime}+a_{5}) \gamma_{2}\,+\,(-5a_{3}^{\prime}-a_{5})\gamma_{3},\qquad(a_{3}=8a_{3}^{\prime})\.\] Finally we use the definition of the \(\gamma_{i}\in U^{2}\oplus\Lambda^{\prime}\) and notice that the sign of the bilinear form changes again since we are back in \(T_{\alpha}\subset\Lambda\). \[\begin{array}{rclrcl}t_{0,n}&=&\left(\left(\begin{smallmatrix}1\\ -1\end{smallmatrix}\right),\left(\begin{smallmatrix}0\\ 1\end{smallmatrix}\right),v_{0,n}\right),&v_{0,n}^{2}&=&-2n+2,\\ t_{1,n}&=&\left(\left(\begin{smallmatrix}2\\ -2\end{smallmatrix}\right),\left(\begin{smallmatrix}-4\\ 12\end{smallmatrix}\right),8v_{1,n}\right),&v_{1,n}^{2}&=&-2n+2,\\ t_{2,n}&=&\left(\left(\begin{smallmatrix}-2\\ 2\end{smallmatrix}\right),\left(\begin{smallmatrix}0\\ 4\end{smallmatrix}\right),4v_{2,n}\right),&v_{2,n}^{2}&=&-2n+2,\\ t_{3,n}&=&\left(\left(\begin{smallmatrix}6\\ -6\end{smallmatrix}\right),\left(\begin{smallmatrix}4\\ 4\end{smallmatrix}\right),8v_{3,n}\right),&v_{3,n}^{2}&=&-2n+4,\\ t_{4,n}&=&\left(\left(\begin{smallmatrix}0\\ 0\end{smallmatrix}\right),\left(\begin{smallmatrix}0\\ 2\end{smallmatrix}\right),2v_{4,n}\right),&v_{4,n}^{2}&=&-2n+6.\end{array}\] The class \(t_{\tau,n}\) is transcendental in \(H^{2}(S,\mathbb{Z})\), but in the specialization under consideration it becomes algebraic, so \(t_{\tau,n}\in\operatorname{Pic}(S_{\tau,n})\). ### The Picard lattice \(\operatorname{Pic}(S_{\tau,n})\) and the Brauer class \(\alpha_{van}\) We compute the Picard group of \(S_{\tau,n}\), the \(K3\) double plane defined by a cubic fourfold \(X_{\tau,n}\) with \(N^{2}(X_{\tau,n})=M_{\tau,n}\), as: \[\operatorname{Pic}(S_{\tau,n})\,=\,\langle h,\,t_{\tau,n}\rangle_{sat}\,.\] We will do so for each of the five cases for \(\tau\) in the next sections. We determine the vanishing Brauer class \(\alpha_{van}\in\operatorname{Br}(S)\) for the specialization of \(S\) to \(S_{\tau,n}\) induced by the one of a general cubic fourfold with a plane to one with \(N^{2}(X)=M_{\tau,n}\). The finer classification of \(\alpha_{van}\) in terms of the orbits of the stabilizer of the Brauer class (the Clifford invariant) \(\alpha=\alpha_{X}\in\operatorname{Br}(S)\) defined by the cubic fourfold \(X\) is also given.
Recall from SS4.7 that \(\alpha\) has B-field representative \[B_{\alpha}\,=\,\tfrac{1}{2}\big{(}\left(\begin{smallmatrix}0\\ 1\end{smallmatrix}\right),\left(\begin{smallmatrix}1\\ 1\end{smallmatrix}\right),0\big{)}\,\] and that \(\operatorname{Pic}(S)=\mathbb{Z}h\), with \[h\,=\,\tfrac{1}{2}\big{(}\left(\begin{smallmatrix}1\\ 1\end{smallmatrix}\right),\left(\begin{smallmatrix}0\\ 0\end{smallmatrix}\right),0\big{)}\.\] ### The case \(\tau=0\) In this case the sublattice generated by \(h,t_{0,n}\) is primitive, hence \[\operatorname{Pic}(S_{0,n})\,=\,\langle h,\,t_{0,n}\,=\,\left(\left(\begin{smallmatrix} 1\\ -1\end{smallmatrix}\right),\left(\begin{smallmatrix}0\\ 1\end{smallmatrix}\right),v_{0,n}\right)\,\rangle_{sat}\,=\,\langle h,t_{0,n} \rangle\,=\,\begin{pmatrix}2&0\\ 0&-2n\end{pmatrix}\.\] Notice that \(\det(\operatorname{Pic}(S_{0,n}))=-4n\) whereas \(\det(A_{0,n})=16\cdot n-3\cdot 0^{2}=16n\). The invariants of the vanishing Brauer class are determined from Corollary 2.1.3. The Gram matrix of \(\operatorname{Pic}(S_{0,n})\) has \(b=0\), \(2c=-2n\). Hence the vanishing Brauer class has invariants \(B_{van}h=0\) and \(B_{van}^{2}=0\). By Proposition 2.2.1 it corresponds to a point of order two (as \(b\) is even). A B-field representing the vanishing Brauer class is obtained from the second basis vector of \(\mathrm{Pic}(S_{0,n})\): \[B_{van}\,:=\,\tfrac{1}{2}t_{0,n}\,=\,\tfrac{1}{2}\left(\left(\begin{smallmatrix} 1\\ -1\end{smallmatrix}\right),\left(\begin{smallmatrix}0\\ 1\end{smallmatrix}\right),v_{0,n}\right)\.\] Now we use the addition in the Brauer group. The sum of the Clifford class and the vanishing Brauer class has a B-field representative given by \[B_{\alpha}+B_{van}\,=\,\tfrac{1}{2}\big{(}\left(\begin{smallmatrix}1\\ 0\end{smallmatrix}\right),\left(\begin{smallmatrix}1\\ 2\end{smallmatrix}\right),v_{0,n}\big{)}\.\] This \(B\)-field has invariant \(h\cdot(B_{\alpha}+B_{van})=\tfrac{1}{2}\), so the sum of the Brauer classes corresponds to a theta characteristic. Using \(v_{0,n}^{2}=-2n+2\) one finds \[(B_{\alpha}+B_{van})^{2}\,=\,\tfrac{1}{4}(0+4+v_{0,n}^{2})=\tfrac{1}{2}(-n+3)\.\] Using Proposition 2.2.1 again, we find that the theta characteristic is even when \(n\equiv 1\mod 2\) and it is odd otherwise. ### The case \(\tau=1\) The Picard lattice is (notice that \(2h+t_{1,n}\) is divisible by \(4\)): \[\mathrm{Pic}(S_{1,n})\,=\,\langle h,\,t_{1,n}\,=\,\big{(}\left(\begin{smallmatrix} 2\\ -2\end{smallmatrix}\right),\left(\begin{smallmatrix}-4\\ 12\end{smallmatrix}\right),8v_{1,\tau}\big{)}\,\rangle_{sat}\,=\,\langle h,\, \left(\left(\begin{smallmatrix}1\\ 0\end{smallmatrix}\right),\left(\begin{smallmatrix}-1\\ 3\end{smallmatrix}\right),2v_{1,\tau}\right)\,\rangle\,.\] The last generator has norm \(-6+4v_{1,\tau}^{2}=2-8n\) and the Gram matrix of the Picard lattice w.r.t. this basis is \[\mathrm{Pic}(S_{1,n})\,=\,\begin{pmatrix}2&1\\ 1&2-8n\end{pmatrix}\.\] Notice that \(\det(\mathrm{Pic}(S_{1,n}))=3-16n\) which is the opposite of \(\det(A_{1,n})\). The Gram matrix has \(b=1\), \(2c=2-8n\). Hence the vanishing Brauer class is defined by a theta characteristic (as \(b\) is odd) which is odd since \(c=1-4n\) is odd. 
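As a cross-check of the \(\tau=0\) case (an illustration, not part of the paper), the same intersection numbers can be computed symbolically in \(n\), keeping track of \(v_{0,n}^{2}=-2n+2\).

```python
# Symbolic check of the tau = 0 case: B_van.h = 0 (a point of order two),
# (B_alpha + B_van).h = 1/2 and (B_alpha + B_van)^2 = 3/2 - n/2 (integral iff n is odd,
# i.e. an even theta characteristic exactly when n is odd).
from sympy import Rational, Symbol, expand

n = Symbol('n')
v_sq = -2 * n + 2          # (v_{0,n})^2 back in Lambda

def pair(x, y):
    # x, y = ((a1, a2), (b1, b2), c): two U-components and the coefficient of v_{0,n}
    return (x[0][0] * y[0][1] + x[0][1] * y[0][0]
            + x[1][0] * y[1][1] + x[1][1] * y[1][0]
            + x[2] * y[2] * v_sq)

h     = ((1, 1), (0, 0), 0)
B_van = ((Rational(1, 2), Rational(-1, 2)), (0, Rational(1, 2)), Rational(1, 2))  # (1/2) t_{0,n}
B_sum = ((Rational(1, 2), 0), (Rational(1, 2), 1), Rational(1, 2))                # B_alpha + B_van

print(pair(B_van, h))              # 0
print(pair(B_sum, h))              # 1/2
print(expand(pair(B_sum, B_sum)))  # 3/2 - n/2
```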
A B-field representing the vanishing Brauer class is obtained from the second basis vector of \(\mathrm{Pic}(S_{1,n})\): \[B_{van}\,:=\,\tfrac{1}{2}\big{(}\left(\begin{smallmatrix}1\\ 0\end{smallmatrix}\right),\left(\begin{smallmatrix}-1\\ 3\end{smallmatrix}\right),2v\big{)}\,\equiv\,\tfrac{1}{2}\big{(}\left( \begin{smallmatrix}0\\ 1\end{smallmatrix}\right),\left(\begin{smallmatrix}1\\ 1\end{smallmatrix}\right),0\big{)}\,\] where the congruence is modulo \(\tfrac{1}{2}\mathrm{Pic}(S_{1,n})\,+\,H^{2}(S_{1,n},\mathbb{Z})\), hence the vanishing Brauer class coincides with the one defined by \(B_{\alpha}\), the Clifford class \(\alpha_{X}\). ### The case \(\tau=2\) The Picard lattice is \((2h+t_{2,n}\) is divisible by \(4\)), hence: \[\mathrm{Pic}(S_{2,n})\,=\,\big{\langle}h,\,t_{2,n}\,=\,\big{(}\left(\begin{smallmatrix -2}\\ 2\end{smallmatrix}\right),\left(\begin{smallmatrix}0\\ 4\end{smallmatrix}\right),4v\,\big{)}\big{\rangle}_{sat}\,=\,\langle h,\, \big{(}\left(\begin{smallmatrix}0\\ 1\end{smallmatrix}\right),\left(\begin{smallmatrix}0\\ 1\end{smallmatrix}\right),v\big{)}\,\rangle\] so the Gram matrix is: \[\mathrm{Pic}(S_{2,n})\,=\,\begin{pmatrix}2&1\\ 1&2-2n\end{pmatrix}\.\] Notice that \(\det(\mathrm{Pic}(S_{2,n}))=3-4n\) whereas \(\det(A_{2,n})=16\cdot n-3\cdot 2^{2}=4(4n-3)\). The Gram matrix of \(\mathrm{Pic}(S_{2,n})\) has \(b=1\), \(2c=2-2n\). Hence the vanishing Brauer class is defined by a theta characteristic (as \(b\) is odd) which is even/odd iff \(c=1-n\) is even/odd iff \(n\) is odd/even. A B-field representing the vanishing Brauer class is obtained from the second basis vector of \(\operatorname{Pic}(S_{2,n})\): \[B_{van}\,:=\,\tfrac{1}{2}\big{(}\left(\begin{smallmatrix}0\\ 1\end{smallmatrix}\right),\left(\begin{smallmatrix}0\\ 1\end{smallmatrix}\right),v\big{)}\.\] The transcendental lattice contains the vector \[t\,:=\,\big{(}\left(\begin{smallmatrix}-1\\ 1\end{smallmatrix}\right),\left(\begin{smallmatrix}0\\ 1\end{smallmatrix}\right),0\big{)}\in T(S_{2,n})\,=\,\operatorname{Pic}(S_{2,n })^{\perp}\.\] The restriction of the Brauer class \(\alpha_{X}\) to \(T(S_{2,n})\) is not trivial since \(B_{\alpha}\cdot t=\tfrac{1}{2}(-1+2)\not\equiv 0\mod\mathbb{Z}\). Hence \(\alpha_{X}\neq\alpha_{van}\). ### The case \(\tau=3\) The Picard lattice is (notice that \(2h+t_{3,n}\) is divisible by \(4\)): \[\operatorname{Pic}(S_{3,n})\,=\,\langle h,\,t_{3,n}\,=\,\big{(}\left( \begin{smallmatrix}6\\ -6\end{smallmatrix}\right),\left(\begin{smallmatrix}4\\ 4\end{smallmatrix}\right),8v\big{)}\,\rangle_{sat}\,=\,\langle\big{(}h,\, \left(\begin{smallmatrix}2\\ -1\end{smallmatrix}\right),\left(\begin{smallmatrix}1\\ 1\end{smallmatrix}\right),2v\big{)}\,\rangle\,\] as the last vector has length \(-4+2-4v_{3,n}^{2}=14-8n\) we find the Gram matrix: \[\operatorname{Pic}(S_{3,n})\,=\,\begin{pmatrix}2&1\\ 1&14-8n\end{pmatrix}\.\] Notice that \(\det(\operatorname{Pic}(S_{3,n}))=27-16n\) which is the opposite of \(\det(A_{3,n})\). The Gram matrix of \(\operatorname{Pic}(S_{3,n})\) has \(b=1,2c=14-8n\). Hence the vanishing Brauer class is defined by a theta characteristic (as \(b\) is odd) which is odd since \(c=7-4n\) is odd. 
A B-field representing the vanishing Brauer class is obtained from the second basis vector of \(\operatorname{Pic}(S_{3,n})\): \[B_{van}\,:=\,\tfrac{1}{2}\,\big{(}\left(\begin{smallmatrix}2\\ -1\end{smallmatrix}\right),\left(\begin{smallmatrix}1\\ 1\end{smallmatrix}\right),2v\big{)}\,\equiv\,\tfrac{1}{2}\,(\left( \begin{smallmatrix}0\\ 1\end{smallmatrix}\right),\left(\begin{smallmatrix}1\\ 1\end{smallmatrix}\right),0)\,\equiv\,B_{\alpha}\mod H^{2}(S_{3,n},\mathbb{Z})\,\] hence the vanishing Brauer class coincides, as in the case \(\tau=1\), with \(\alpha_{X}\). ### The case \(\tau=4\) The Picard lattice is (notice that \(t_{4,n}\) is divisible by \(2\) in \(H^{2}(S_{4,n},\mathbb{Z})\)): \[\operatorname{Pic}(S_{2})\,=\,\langle h,\,t_{4,n}\,=\,\big{(}\left(\begin{smallmatrix} 0\\ 0\end{smallmatrix}\right),\left(\begin{smallmatrix}0\\ 2\end{smallmatrix}\right),2v_{4,n}\big{)}\,\rangle_{sat}\,=\,\langle h,\, \big{(}\left(\begin{smallmatrix}0\\ 0\end{smallmatrix}\right),\left(\begin{smallmatrix}0\\ 1\end{smallmatrix}\right),v_{4,n}\big{)}\,\rangle\.\] Hence \[\operatorname{Pic}(S_{4,n})\,=\,\begin{pmatrix}2&0\\ 0&6-2n\end{pmatrix}\.\] Notice that \(\det(\operatorname{Pic}(S_{4,n}))=12-4n\) whereas \(\det(A_{4,n})=16\cdot n-3\cdot 4^{2}=4\cdot(4n-12)\). Therefore the vanishing Brauer class is defined by a point of order two (as \(b\) is even). A B-field representing the vanishing Brauer class is obtained from the second basis vector of \(\operatorname{Pic}(S_{4,n})\): \[B_{van}\,:=\,\tfrac{1}{2}\big{(}\left(\begin{smallmatrix}0\\ 0\end{smallmatrix}\right),\left(\begin{smallmatrix}0\\ 1\end{smallmatrix}\right),v_{4,n}\big{)}\.\] Notice that \[B_{\alpha}+B_{van}\,=\,\tfrac{1}{2}\big{(}\left(\begin{smallmatrix}0\\ 1\end{smallmatrix}\right),\left(\begin{smallmatrix}1\\ 1\end{smallmatrix}\right),0\big{)}\,+\,\tfrac{1}{2}\big{(}\left(\begin{smallmatrix} 0\\ 0\end{smallmatrix}\right),\left(\begin{smallmatrix}0\\ 1\end{smallmatrix}\right),v_{4,n}\big{)}\,=\,\tfrac{1}{2}\big{(}\left( \begin{smallmatrix}0\\ 1\end{smallmatrix}\right),\left(\begin{smallmatrix}1\\ 2\end{smallmatrix}\right),v_{4,n}\big{)}\] is a B-field with invariants \(h\cdot(B_{\alpha}+B_{van})=\frac{1}{2}\), so it corresponds to a theta characteristic, and \[(B_{\alpha}+B_{van})^{2}\,=\,\tfrac{1}{4}(0+4+v^{2})\equiv\tfrac{1}{4}(10-2n) \equiv\tfrac{1}{2}(1-n)\mod\,\mathbb{Z}\;.\] The theta characteristic corresponding to \(\alpha_{X}+\alpha_{van}\) is thus even iff \(n\) is odd. ### Remark In 4.11,...,4.15 of the proof of Theorem 4.4.1 we observed that \(-4\det(Pic(S_{\tau,n}))=\det(M_{\tau,n})\) for \(\tau=0,2,4\) whereas \(-\det(Pic(S_{\tau,n}))=\det(M_{\tau,n})\) if \(\tau=1,3\). This relation was already observed in [1, Proposition 1]. Notice also that \(\det(Pic(S_{\tau,n}))\) is even iff \(\tau=0,4\) and that in these cases \(\alpha_{X}\neq\alpha_{van}\), so \(\alpha_{X}\) restricts to a non-trivial Brauer class on \(S_{\tau,n}\). This was already shown in [1, Proposition 2]. See also [1, Theorem 4.8 and Proposition 4.10]. ## 5. Associated \(K3\) surfaces and the divisors \(\mathcal{C}_{M}\) ### Classification of admissible Lattices in \(\mathcal{C}_{8}\) In [10] there is also a lattice-theoretic characterization of the cubic fourfolds in \(\mathcal{C}_{8}\) with an associated \(K3\) surface, that is, of those that are conjecturally rational. **Definition 5.1.1**.: ([10, Definition 8.1]) A lattice \(M_{\tau,n}\) is admissible if \(T(X_{\tau,n})(-1)\) is Hodge isometric to the transcendental lattice of a \(K3\) surface. 
_Remark 5.1.2_.: The definition in [10] is different, but it is equivalent. **Proposition 5.1.3**.: _([10, Corollary 8.14]) The lattice \(M_{\tau,n}\) is admissible if and only if one of the following conditions is true_ 1. _(a)_ \(\tau=1,3\) _;_ 2. _(b)_ \(\tau=0,2,4\) _and_ \(n\) _is odd._ We already discussed the cases \(\tau=1,3\) in the introduction. In these cases \(\alpha_{van}=\alpha_{X}\), where \(X\) is a cubic fourfold with \(N^{2}(X)\cong K_{8}\) and \(S_{\tau,n}\), the \(K3\) double plane defined by \(X_{\tau,n}\), is a \(K3\) surface associated to \(X_{\tau,n}\). Moreover, Hassett proved the rationality of these cubic fourfolds in [11]. The case \(\tau=2\), \(n\) odd, is still under investigation. In the remaining cases we have identified an associated \(K3\) surface, see the proposition below. We are investigating its geometry in relation to the cubic fourfolds. **Proposition 5.1.4**.: _Let \(X_{\tau,n}\) be a cubic fourfold with \(N^{2}(X_{\tau,n})\cong M_{\tau,n}\). Let \(S_{\tau,n}\) be the \(K3\) double plane defined by \(X_{\tau,n}\) and let \(C_{\tau,n}\) be the branch curve of the double cover \(S_{\tau,n}\to\mathbb{P}^{2}\). Let \(\tau=0,4\) and \(n\) odd, and let \(\beta\) be the even theta characteristic \(\beta\) on \(C_{\tau,n}\) which is the specialization of the even theta characteristic \(\beta_{X}:=\alpha_{van}+\alpha_{X}\) on the branch curve of \(S_{X}\). Then the \(K3\) surface \(S_{\beta}\), a degree \(8\) surface in \(\mathbb{P}^{5}\), defined by the even theta characteristic \(\beta\) is a \(K3\) surface associated to \(X_{\tau,n}\)._ Proof.: Let \(X\) be a general cubic fourfold with a plane and let \(C_{6}\) be the branch curve of \(S_{X}\to\mathbb{P}^{2}\). We proved that for \(\tau=0,4\) the vanishing Brauer class \(\alpha_{van}\) corresponds to a point of order two in \(J(C_{6})\) and that \(\beta_{X}:=\alpha_{van}+\alpha_{X}\) corresponds to an even theta characteristic on \(C_{6}\). Specializing \(C_{6}\) to \(C_{\tau,n}\), we obtain an even theta characteristic \(\beta\) on \(C_{\tau,n}\). Then (the push forward to \(\mathbb{P}^{2}\) of) \(\beta\) admits a minimal resolution \[0\longrightarrow\mathcal{O}_{\mathbb{P}^{2}}(-2)^{6}\,\stackrel{{ M}}{{\longrightarrow}}\mathcal{O}_{\mathbb{P}^{2}}(-1)^{6}\longrightarrow \beta\longrightarrow 0\] where \(M\) is a \(6\times 6\) matrix of linear forms on \(\mathbb{P}^{2}\) and \(\det M=F\) where \(F=0\) is an equation defining \(C_{\tau,n}\) ([1, Proposition 4.2]). The base locus of these quadrics is a \(K3\) surface \(S_{\beta}\) of degree \(8\) in \(\mathbb{P}^{5}\). The transcendental lattice of \(S_{\beta}\) is \(T_{\beta}(S_{\tau,n})\), see [13], [14] for the case of a \(K3\) with Picard rank one, and by specialization it also holds for \(S_{\tau,n}\). Since \(\alpha_{van}\) is trivial on \(T(S_{\tau,n})\), the homomorphisms \(\beta\) and \(\alpha_{X}\) have the same restriction to \(T(S_{\tau,n})\), hence \[T(S_{\beta})\,\cong\,\ker(\beta:T(S_{\tau,n})\to\mathbb{Z}/2\mathbb{Z})\,=\, \ker(\alpha_{X}:T(S_{\tau,n})\to\mathbb{Z}/2\mathbb{Z})\,=\,T(X_{\tau,n})(-1)\,.\] Therefore the \(K3\) surface \(S_{\beta}\) is associated to \(X\). ### Pfaffian cubic fourfolds with a plane An interesting example of a pfaffian, hence rational, cubic fourfold \(X\) with a plane and \(\operatorname{rank}(N^{2}(X))=3\), but with \(\alpha_{X}\neq 0\), is given in [1, SS4]. They determine \(\tau,n\) explicitly (but notice that they use a different convention for writing the lattices \(M_{\tau,n}\)). 
We verify this here, using Theorem 4.4.1. The double plane \(S=S_{X}\) is branched along a smooth sextic \(C=C_{6}\subset\mathbb{P}^{2}\) with a tangent conic. Since the inverse image of this conic consists of two smooth rational curves \(n,n^{\prime}\) in \(S\), the Picard lattice of \(S\) is \[\operatorname{Pic}(S)\,=\,\left(\mathbb{Z}h\oplus\mathbb{Z}n,\,\begin{pmatrix}2&2\\ 2&-2\end{pmatrix}\right)\,\cong\,\left(\mathbb{Z}h\oplus\mathbb{Z}(h-n),\begin{pmatrix}2&0\\ 0&-4\end{pmatrix}\right)\.\] So we are in the case \(\tau=0,\,n=2\) or in the case \(\tau=4,\,n=5\). The vanishing Brauer class is thus a point of order two in \(J(C)_{2}\) and \(\alpha_{X}+\alpha_{van}\) is an odd/even theta characteristic in the first/second case respectively. The tangent conic cuts out a divisor \(2D\) on \(C\subset\mathbb{P}^{2}\) and the rational curves \(n,n^{\prime}\) each cut out \(D\) on \(C\subset S\). The intersection of \(h\) with \(C\subset S\) is a divisor class \(D_{l}\) such that \(2D_{l}\equiv 2D\in\operatorname{Pic}(C)\). The image of \(h-n\in\operatorname{Pic}(S)\) is then \(p:=D_{l}-D\in\operatorname{Pic}(C)\), which is a point of order two, and \(p\) must correspond to the vanishing Brauer class by [14, Theorem 1.1]. The Clifford class \(\alpha_{X}\) corresponds to a theta characteristic \(L\) on \(C\) with \(h^{0}(L)=1\), and \(2L\in|K_{C}|=|3D_{l}|\) is cut out by a cubic curve \(C_{3}\) tangent to \(C\). Following [13] and using the description of \(X\) from [1, §4], one finds that \[C_{3}:\quad 14x^{3}+15x^{2}y+4x^{2}z+9xyz+14xz^{2}+16y^{3}+11y^{2}z+8yz^{2}+z^{3}\,=\,0\] is the determinant of the \(3\times 3\) submatrix of linear forms in the \(4\times 4\) matrix (obtained from the minimal resolution of \(L\)) defining the quadric bundle on the cubic fourfold ([1, Proposition 4.2b]). So if (with \(x=x_{0},\dots,w=u_{2}\)) \[\begin{array}{l}F(x_{0},x_{1},x_{2},u_{0},u_{1},u_{2})\,=\\ (x_{0}-4x_{1}-x_{2})u_{0}^{2}\,+\,\dots\,-\,x_{2}^{3}\,=\,\sum_{0\leq i,j\leq 2}F_{ij}(x_{0},x_{1},x_{2})u_{i}u_{j}\,+\,\dots\end{array}\] is the defining (pfaffian) cubic polynomial for \(X\), where the remaining terms are of degree at most one in the \(u_{i}\), then \(C_{3}\) is defined by \(\det(F_{ij})_{i,j=0,1,2}=0\). The theta characteristic \(L+p=L+D_{l}-D=L+D-D_{l}\) has, using Serre duality on \(C\): \[h^{0}(L+p)\,=\,h^{0}(K_{C}-(L+D-D_{l}))\,=\,h^{0}(4D_{l}-(L+D))\.\] Since \(|4D_{l}|\) is cut out by degree four curves on \(C\) and an explicit (Magma) computation shows that there are no such curves passing through the support of \(L+D\), we conclude that \(h^{0}(L+p)=0\), so \(L+p\) is an even theta characteristic. Comparing with Theorem 4.4.1 we see that we are indeed in the case \((\tau,n)=(4,5)\), that is, \(N^{2}(X)\cong M_{4,5}\). In particular, there is a \(K3\) surface associated to \(X=X_{4,5}\), which is identified in [1] with the pfaffian \(K3\) surface associated by Beauville and Donagi in [1] to the pfaffian fourfold \(X\). This \(K3\) surface has a natural embedding of degree \(14\) in \(G(2,6)\). Instead, we associated the \(K3\) surface \(S_{\beta}\) to \(X\) in Proposition 5.1.4. In a sequel to this paper we intend to investigate \(S_{\beta}\) in this particular case as well as for \(\tau=4\) and any odd \(n\).
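The lattice bookkeeping in this example can be checked mechanically. The following sympy sketch (an illustration added here, not part of the original text) verifies the change of basis for \(\operatorname{Pic}(S)\) and confirms that the resulting Gram matrix agrees with the \(\tau=0,\,n=2\) and \(\tau=4,\,n=5\) lattices of Section 4, with \(-4\det(\operatorname{Pic})=16n-3\tau^{2}=32\) in both cases, consistent with the Remark closing Section 4; in the code the symbol n is the integer parameter, not the rational curve of the same name.

```python
from sympy import Matrix, symbols

n = symbols('n')  # the integer parameter n (not the rational curve of the same name)

# Gram matrix of Pic(S) in the basis (h, curve) and the base change (h, curve) -> (h, h - curve).
G = Matrix([[2, 2], [2, -2]])
P = Matrix([[1, 1], [0, -1]])              # columns express h and h - curve in the old basis
print(P.T * G * P)                         # Matrix([[2, 0], [0, -4]])

# Gram matrices of Pic(S_{tau,n}) from Section 4 for tau = 0 and tau = 4:
Pic0 = Matrix([[2, 0], [0, -2 * n]])
Pic4 = Matrix([[2, 0], [0, 6 - 2 * n]])
print(Pic0.subs(n, 2) == P.T * G * P, Pic4.subs(n, 5) == P.T * G * P)   # True True

# Both candidates give -4*det(Pic) = 32 = 16*n - 3*tau^2, cf. the Remark at the end of Section 4:
print(-4 * (P.T * G * P).det(), 16 * 2 - 3 * 0**2, 16 * 5 - 3 * 4**2)   # 32 32 32
```

As the text explains, the Gram matrix alone cannot distinguish the two candidates; it is the parity of the theta characteristic \(L+p\) that singles out \((\tau,n)=(4,5)\).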
2308.02012
Real-time two-axis control of a spin qubit
Optimal control of qubits requires the ability to adapt continuously to their ever-changing environment. We demonstrate a real-time control protocol for a two-electron singlet-triplet qubit with two fluctuating Hamiltonian parameters. Our approach leverages single-shot readout classification and dynamic waveform generation, allowing full Hamiltonian estimation to dynamically stabilize and optimize the qubit performance. Powered by a field-programmable gate array (FPGA), the quantum control electronics estimates the Overhauser field gradient between the two electrons in real time, enabling controlled Overhauser-driven spin rotations and thus bypassing the need for micromagnets or nuclear polarization protocols. It also estimates the exchange interaction between the two electrons and adjusts their detuning, resulting in extended coherence of Hadamard rotations when correcting for fluctuations of both qubit axes. Our study emphasizes the critical role of feedback in enhancing the performance and stability of quantum devices affected by quasistatic noise. Feedback will play an essential role in improving performance in various qubit implementations that go beyond spin qubits, helping realize the full potential of quantum devices for quantum technology applications.
Fabrizio Berritta, Torbjørn Rasmussen, Jan A. Krzywda, Joost van der Heijden, Federico Fedele, Saeed Fallahi, Geoffrey C. Gardner, Michael J. Manfra, Evert van Nieuwenburg, Jeroen Danon, Anasua Chatterjee, Ferdinand Kuemmeth
2023-08-03T20:07:00Z
http://arxiv.org/abs/2308.02012v2
# Real-time two-axis control of a spin qubit ###### Abstract Optimal control of qubits requires the ability to adapt continuously to their ever-changing environment. We demonstrate a real-time control protocol for a two-electron singlet-triplet qubit with two fluctuating Hamiltonian parameters. Our approach leverages single-shot readout classification and dynamic waveform generation, allowing full Hamiltonian estimation to dynamically stabilize and optimize the qubit performance. Powered by a field-programmable gate array (FPGA), the quantum control electronics estimates the Overhauser field gradient between the two electrons in real time, enabling controlled Overhauser-driven spin rotations and thus bypassing the need for micromagnets or nuclear polarization protocols. It also estimates the exchange interaction between the two electrons and adjusts their detuning, resulting in extended coherence of Hadamard rotations when correcting for fluctuations of both qubit axes. Our study emphasizes the critical role of feedback in enhancing the performance and stability of quantum devices affected by quasistatic noise. Feedback will play an essential role in improving performance in various qubit implementations that go beyond spin qubits, helping realize the full potential of quantum devices for quantum technology applications. ## I Introduction Feedback is essential for stabilizing quantum devices and improving their performance. Real-time monitoring and control of quantum systems allows for precise manipulation of their quantum states [1; 2]. In this way, it can help mitigate the effects of quantum decoherence and extend the lifetime of quantum systems for quantum computing and quantum sensing applications [3], for example in superconducting qubits [4; 5; 6; 7; 8], spins in diamond [9; 10; 11; 12; 13; 14], trapped atoms [15; 16], and other platforms [17; 18; 19; 20; 21; 22]. Among the various quantum-information processing platforms, semiconductor spin qubits [23; 24] are promising for quantum computing because of their long coherence times [25] and foundry compatibility [26]. Focusing on spin qubits hosted in gate-controlled quantum dots (QDs), two-qubit gate fidelities of 99.5% and single-qubit gate fidelities of 99.8% have recently been achieved in silicon [27]. In germanium, a four-qubit quantum processor based on hole spins enabled all-electric qubit logic and the generation of a four-qubit Greenberger-Horne-Zeilinger state [28]. In gallium arsenide, simultaneous coherent exchange rotations and four-qubit measurements in a 2\(\times\)2 array of singlet-triplets were demonstrated without feedback, revealing site-specific fluctuations of nuclear spin polarizations [29]. In silicon, a six-qubit processor was operated with high fidelities enabling universal operation, reliable state preparation and measurement [30]. Achieving precise control of gated qubits can be challenging due to their sensitivity to environmental fluctuations, making feedback necessary to stabilize and optimize their performance. Because feedback-based corrections must be performed within the correlation time of the relevant fluctuations, real-time control is essential. Continuous feedback then allows to calibrate the qubit environment and to tune the qubit in real time to maintain high-fidelity gates and improved coherence, for instance by suppressing low-frequency noise and improving \(\pi\)-flip gate fidelity [31]. 
An active reset of a silicon spin qubit using feedback control was demonstrated based on quantum non-demolition readout [32]. Real-time operation of a charge sensor in a feedback loop [33] maintained the sensor sensitivity for fast charge sensing in a Si/SiGe double quantum dot, compensating for disturbances due to gate-voltage variation and \(1/f\) charge fluctuations. A quantum state with higher confidence than what is achievable through traditional thermal methods was initialized by real-time monitoring and negative-result measurements [34]. This study implements real-time two-axis control of a qubit with two fluctuating Hamiltonian parameters that couple to the qubit along different directions on its Bloch sphere. The protocol involves two key steps: first, rapid estimation of the instantaneous magnitude of one of the fluctuating fields (nuclear field gradient) effectively creates one qubit control axis. This control axis is then exploited to probe in real time the qubit frequency (Heisenberg exchange coupling) across different operating points (detuning voltages). Our procedure allows for counteracting fluctuations along both axes, resulting in an improved quality factor of coherent qubit rotations. Our protocol integrates a singlet-triplet (ST\({}_{0}\)) spin qubit implemented in a gallium arsenide double quantum dot (DQD) [29] with Bayesian Hamiltonian estimation [35; 36; 37; 38; 39]. Specifically, an FPGA-powered quantum orchestration platform (OPX [40]) repeatedly separates singlet-correlated electron pairs using voltages pulses and performs single-shot readout classifications to estimate on-the-fly the fluctuating nuclear field gradient within the double dot [41]. Knowledge of the field gradient in turn enables the OPX to coherently rotate the qubit between S and T\({}_{0}\) by arbitrary, user-defined target angles. Differently from previous works, we let the gradient freely fluctuate, without pumping the nuclear field [42], and instead program the OPX to adjust the baseband control pulses accordingly. Finally, in conjunction with exchange interaction estimation by exchange-based free induction decays (FIDs), fluctuations of both qubit axes are corrected and observed to extend also the coherence of Hadamard rotations. Our approach can be applied to other materials and qubit platforms with sufficiently slow fluctuating axes compared to the qubit manipulation time, enhancing the control and stability of qubits in quantum computing applications. ## Results ### Device and Bayesian estimation We use a top-gated GaAs DQD array from [29] and tune up one of its ST\({}_{0}\) qubits using the gate electrodes shown in Figure 1(c), at 200 mT in-plane magnetic field in a dilution refrigerator with a mixing-chamber plate below 30 mK. Radio-frequency reflectometry off the sensor dot's ohmic contact distinguishes the relevant charge configurations of the DQD [43]. Low-frequency tuning voltages (high-frequency baseband waveforms) are applied by a QDAC [44] (OPX) via a QBoard high-bandwidth sample holder [45]. The qubit operates in the (1,1) and (0,2) charge configuration of the DQD. (Integers indicate the number of electrons in the left and right dot.) The electrical detuning \(\varepsilon\) quantifies the difference in the electrochemical potentials of the two dots, which in turn sets the qubit's spectrum as shown in Fig. 1(a). We do not plot the fully spin-polarized triplet states, which are independent of \(\varepsilon\) and detuned in energy by the applied magnetic field. 
We define \(\varepsilon=0\) at the measurement point close to the interdot (1,1)-(0,2) transition, with negative \(\varepsilon\) in the (1,1) region. In the ST\({}_{0}\) basis, we model the time-dependent Hamiltonian by \[\mathcal{H}(t)=J(\varepsilon(t))\,\frac{\sigma_{z}}{2}+g^{*}\mu_{\text{B}} \Delta B_{z}(t)\,\frac{\sigma_{x}}{2}, \tag{1}\] which depends on the detuning \(\varepsilon\) that controls the exchange interaction between the two electrons, \(J(\varepsilon(t))\), and the component of the Overhauser gradient parallel to the applied magnetic field between the two dots, \(\Delta B_{z}(t)\). \(\sigma_{i}\) are the Pauli operators, \(g^{*}\) is the effective g-factor, and \(\mu_{\text{B}}\) is the Bohr magneton. In the following, we drop the time dependence of the Hamiltonian parameters for ease of notation. On the Bloch sphere of the qubit (Fig. 1(d)), Figure 1: **A singlet-triplet (ST\({}_{0}\)) qubit with two fluctuating control axes.****(a)** The dots’ electrical detuning \(\varepsilon\) tunes from a regime of low qubit frequency, \(\Omega_{\text{L}}\), to a regime of high frequency, \(\Omega_{\text{H}}\). States outside the computational space are not plotted. **(b)** In the first (second) regime, the Overhauser gradient \(\left|\Delta B_{z}\right|\) (the exchange coupling \(J\)) dominates the qubit frequency \(\Omega_{\text{L}}\) (\(\Omega_{\text{H}}\)) and the polar angle \(\phi\) of the qubit rotation axis. **(c)** SEM image of the GaAs device [29], implementing a two-electron double quantum dot (black circles) next to its sensor dot (SD) for qubit readout. **(d)**\(J\) and \(\Delta B_{z}\) drive rotations of the qubit around two orthogonal axes, providing universal qubit control, as depicted in the Bloch sphere. **(e)** Uncontrolled fluctuations of the Larmor frequencies \(\Omega_{\text{L}}\) and \(\Omega_{\text{H}}\), measured in real time on the OPX and plotted with a 30 ms moving average. eigenstates of the exchange interaction, \(|\mathrm{S}\rangle\) and \(|\mathrm{T}_{0}\rangle\), are oriented along \(Z\), while \(\Delta B_{z}\) enables rotations along \(X\). The qubit is manipulated by voltage pulses applied to the plunger gates of the DQD, and measured near the interdot (1,1)-(0,2) transition by projecting the unknown spin state of (1,1) onto either the (1,1) charge state (\(|\mathrm{T}_{0}\rangle\)) or the (0,2) charge state (\(|\mathrm{S}\rangle\)). Each single-shot readout of the DQD charge configuration involves generation, demodulation, and thresholding of a few-microsecond-long radio-frequency burst on the OPX (see Supplemental Figure S1[46]). The OPX allows for real-time calculation of the qubit Larmor frequency \(\Omega(\varepsilon)=\sqrt{\Delta B_{z}^{2}+J(\varepsilon)^{2}}\) at different detunings, based on real-time estimates of \(\Delta B_{z}\) and \(J(\varepsilon)\). Inspecting the exchange coupling in a simplified Fermi-Hubbard hopping model [23] and inserting \(J(\varepsilon)\) into Eq. 1 suggests two physically distinct regimes [Fig. 1(b)]: At low detuning, in the (1,1) charge state configuration, the Overhauser gradient dominates the qubit dynamics. In this regime, the qubit frequency reads \(\Omega_{\mathrm{L}}\equiv\sqrt{\Delta B_{z}^{2}+J_{\mathrm{res}}^{2}}\), where we have added a small phenomenological term \(J_{\mathrm{res}}\) to account for a constant residual exchange between the two electrons at low detuning. 
Such a term may become relevant when precise knowledge of \(\Delta B_{z}\) is required, for example for the Hadamard protocol in Figure 5(a). At high detuning, close to the (1,1)-(0,2) interdot charge transition, exchange interaction between the two electrons dominates, and the qubit frequency becomes \(\Omega_{\mathrm{H}}(\varepsilon)\equiv\sqrt{\Delta B_{z}^{2}+J(\varepsilon)^ {2}}\). As shown in Figure 1(b), the detuning affects both the Larmor frequency \(\Omega\) and the polar angle \(\phi\) of the qubit rotation axis \(\hat{\mathbf{\omega}}\), with \(\phi\) approaching 0 in the limit \(J(\varepsilon)\gg\Delta B_{z}\) and \(\pi/2\) if \(J(\varepsilon)\ll\Delta B_{z}\). Without the possibility of turning off either \(J\) or \(\Delta B_{z}\), the rotation axes of the singlet-triplet qubit are tilted, meaning that pure \(X\)- and \(Z\)-rotations are unavailable. In their absence, the estimation of the qubit frequency at different operating points is crucial for navigating the whole Bloch sphere of the qubit. Figure 1(e) tracks Larmor frequencies \(\Omega_{\mathrm{H}}\) and \(\Omega_{\mathrm{L}}\), both fluctuating over tens of MHz over a period of several seconds, using a real-time protocol explained later for Fig. 3. The presence of low-frequency variations in time traces of \(\Omega_{\mathrm{H}}\) and \(\Omega_{\mathrm{L}}\) suggests that qubit coherence can be extended by monitoring these uncontrolled fluctuations in real time and appropriately compensating qubit manipulation pulses on-the-fly. To estimate the frequency of the fluctuating Hamiltonian parameters on the OPX, we employ a Bayesian estimation approach based on a series of free-induction-decay experiments [35]. Using \(m_{i}\) to represent the outcome (\(|\mathrm{S}\rangle\) or \(|\mathrm{T}_{0}\rangle\)) of the \(i\)-th measurement after an evolution time \(t_{i}\), the conditional probability \(P(m_{i}|\Omega)\) is defined as the probability of obtaining \(m_{i}\) given a value of \(\Omega\): \[P\left(m_{i}|\Omega\right)=\frac{1}{2}\left[1+r_{i}\left(\alpha+\beta\cos \left(2\pi\Omega t_{i}\right)\right)\right], \tag{2}\] where \(r_{i}\) takes a value of 1 (\(-1\)) if \(m_{i}=|\mathrm{S}\rangle\) (\(|\mathrm{T}_{0}\rangle\)), and \(\alpha\) and \(\beta\) are determined based on the measurement error and axis of rotation on the Bloch sphere. Applying Bayes' rule to estimate \(\Omega\) based on the observed measurements \(m_{N},\dots m_{1}\), which are assumed to be independent of each other, yields the posterior probability distribution \(P\left(\Omega\ |m_{N},\dots m_{1}\right)\) in terms of a prior uniform distribution \(P_{0}\left(\Omega\right)\) and a normalization constant \(\mathcal{N}\): \[\begin{split} P\left(\Omega\ |m_{N},\dots m_{1}\right)=& P_{0}\left(\Omega \right)\mathcal{N}\\ &\times\prod_{i=1}^{N}\left[1+r_{i}\left(\alpha+\beta\cos\left(2 \pi\Omega t_{i}\right)\right)\right].\end{split} \tag{3}\] Based on previous works [35; 38; 39], we fix \(\alpha=0.25\) and \(\beta=\pm 0.5\), with the latter value positive when estimating \(\Omega_{\mathrm{L}}\) and negative when estimating \(\Omega_{\mathrm{H}}\). The expectation value \(\langle\Omega\rangle\), calculated over the posterior distribution after all \(N\) measurements, is then taken as the final estimate of \(\Omega\). 
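The Bayesian update of Eqs. (2) and (3) is simple enough to prototype offline. The sketch below simulates a single estimation round on a fixed frequency grid, using the values \(\alpha=0.25\) and \(\beta=+0.5\) quoted in the text for the \(\Omega_{\mathrm{L}}\) case; the grid, the "true" frequency, and the sampling of single-shot outcomes are illustrative assumptions, not the authors' FPGA implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Offline sketch of the Bayesian frequency estimate of Eqs. (2)-(3).
alpha, beta = 0.25, 0.5                        # values from the text (beta > 0 for Omega_L)
f_grid = np.linspace(10e6, 150e6, 2001)        # candidate Omega values (Hz), assumed grid
t_probe = np.linspace(0.0, 100e-9, 101)        # N = 101 evolution times up to 100 ns

def p_singlet(omega, t):
    """P(m_i = S | Omega) for one probe time t, i.e. Eq. (2) with r_i = +1."""
    return 0.5 * (1.0 + alpha + beta * np.cos(2 * np.pi * omega * t))

def estimate_omega(omega_true):
    posterior = np.ones_like(f_grid)           # uniform prior P0(Omega)
    for t in t_probe:
        r = +1 if rng.random() < p_singlet(omega_true, t) else -1   # single-shot outcome m_i
        posterior *= 1.0 + r * (alpha + beta * np.cos(2 * np.pi * f_grid * t))
        posterior /= posterior.sum()           # normalization constant N
    return float(np.sum(f_grid * posterior))   # <Omega>, taken as the final estimate

print(f"estimated Omega_L = {estimate_omega(62e6) / 1e6:.1f} MHz")
```

Once \(\langle\Omega_{\mathrm{L}}\rangle\) is in hand, a target rotation angle translates into a free-evolution time through the accumulated phase \(2\pi\langle\Omega_{\mathrm{L}}\rangle t\) implied by Eq. (2), which is the ingredient used for the controlled rotations discussed next.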
### Controlled Overhauser gradient driven rotations We first implement qubit control using one randomly fluctuating Hamiltonian parameter, through rapid Bayesian estimation of \(\Omega_{\mathrm{L}}\) and demonstration of controlled rotations of a ST\({}_{0}\) qubit driven by the prevailing Overhauser gradient. Notably, this allows coherent control without a micromagnet [47; 48] or nuclear spin pumping [42]. \(\Omega_{\mathrm{L}}\) is estimated from the pulse sequence shown in Figure 2(a): for each repetition a singlet pair is initialized in (0,2) and subsequently detuned deep in the \((1,1)\) region (\(\varepsilon_{\mathrm{L}}\approx$-40\,\mathrm{mV}$\)) for \(N\)=101 linearly spaced separation times \(t_{i}\) up to \(100\,\mathrm{ns}\). After each separation, the qubit state, \(|\mathrm{S}\rangle\) or \(|\mathrm{T}_{0}\rangle\), is assigned by thresholding the demodulated reflectometry signal \(V_{\mathrm{rf}}\) near the (1,1)-(0,2) interdot transition and updating the Bayesian probability distribution of \(\Omega_{\mathrm{L}}\) according to the outcome of the measurement. After measurement \(m_{N}\), the initially uniform distribution has narrowed [inset of Fig. 2(b), with white and black indicating low and high probability], allowing the extraction of \(\langle\Omega_{\mathrm{L}}\rangle\) as the estimate for \(\Omega_{\mathrm{L}}\). For illustrative purposes, we plot in Figure 2(a) the \(N\) single-shot measurements \(m_{i}\) for 10,000 repetitions of this protocol, which span a period of about \(20\,\mathrm{s}\), and in Fig. 2(b) the associated probability distribution \(P(\Omega_{\mathrm{L}})\) of each repetition. The quality of the estimation seems to be lower around a laboratory time of 6 seconds, coinciding with a reduced visibility of the oscillations in panel 2(a). We attribute this to an enhanced relaxation of the triplet state during readout due to the relatively high \(|\Delta B_{z}|\) gradient during those repetitions [49]. The visibility could be improved by a latched or shelved read-out [50; 51] or energy-selective tunneling-based readout [38]. Even though the rotation speed around \(\hat{\mathbf{\omega}}_{\rm L}\) at low detuning is randomly fluctuating in time, knowledge of \(\left\langle\Omega_{\rm L}\right\rangle\) allows controlled rotations by user-defined target angles. To show this, we task the OPX in Fig. 2(d) to adjust the separation times \(\tilde{t}_{j}\) in the pulse sequence to rotate the qubit by \(M\)=80 different angles \(\theta_{j}=\tilde{t}_{j}\left\langle\Omega_{\rm L}\right\rangle\) between 0 and \(8\pi\). In our notation, the tilde in a symbol \(\tilde{x}\) indicates that the waveform parameter \(x\) is computed dynamically on the OPX. To reduce the FPGA memory required for preparing waveforms with nanosecond resolution [52], we perform controlled rotations only if the expected \(\Omega_{\rm L}\) is larger than an arbitrarily chosen minimum of 50 MHz. (The associated IF statement and waveform compilation then takes about 40 us on the FPGA.) Accordingly, the number of rows in Fig. 2(d) (1450) is smaller than in panel (a), and we only label a few selected rows with their repetition number. To show the increased rotation-angle coherence of controlled \(\left|\Delta B_{z}\right|\)-driven rotations, we plot the average of all 1450 repetitions of Fig. 
2(d) and compare the associated Figure 2: **Controlled Overhauser gradient driven rotations of a ST\({}_{0}\) qubit by real-time Bayesian estimation.** One loop (solid arrows) represents one repetition of the protocol. **(a)** For each repetition, the OPX estimates \(\Omega_{\rm L}\) by separating a singlet pair for \(N\) linearly spaced probe times \(t_{i}\) and updating the Bayesian estimate (BE) distribution after each measurement, as shown in the inset of (b) for one representative repetition. For illustrative purposes, each single-shot measurements \(m_{i}\) is plotted as a white/black pixel, here for \(N\)=101 \(\Omega_{\rm L}\) probe cycles, and the fraction of singlet outcomes in each column is shown as a red dot. **(b)** Probability distribution \(P(\Omega_{\rm L})\) after completion of each repetition in (a). Extraction of the expected value \(\left\langle\Omega_{\rm L}\right\rangle\) from each row completes \(\Omega_{\rm L}\) estimation. **(c)** For each repetition, unless \(\left\langle\Omega_{\rm L}\right\rangle\) falls below a user-defined minimum (here 50 MHz), the OPX adjusts the separation times \(t_{j}\), using its real-time knowledge of \(\left\langle\Omega_{\rm L}\right\rangle\), to rotate the qubit by user-defined target angles \(\theta_{j}=\tilde{t}_{j}\left\langle\Omega_{\rm L}\right\rangle\). **(d)** To illustrate the increased coherence of Overhauser gradient driven rotations, we task the OPX to perform \(M\)=80 evenly spaced \(\theta_{j}\) rotations. Single-shot measurements \(m_{j}\) are plotted as white/black pixels, and the fraction of singlet outcomes in each column is shown as a red dot. quality factor, \(Q\gtrsim 7\), with that of uncontrolled oscillations, \(Q\sim 1\)[53]. The average of the uncontrolled S-T\({}_{0}\) oscillations in Fig. 2(a) can be fit by a decay with Gaussian envelope (solid line), yielding an inhomogeneous dephasing time \(T_{2}^{*}\approx 30\,\)ns typical for ST\({}_{0}\) qubits in GaAs [54]. We associate the relatively smaller amplitude of stabilized qubit oscillations with the low-visibility region around 6 seconds in Fig. 2(d), discussed earlier. Excluding such regions by post selection increases the visibility and quality factor of oscillations (see Supplemental Figure S4 [46]). Overall, the results presented in this section provide a significant example of how adaptive baseband control pulses can operate a qubit reliably, out of slowly fluctuating environments. ### Real-time two-axis estimation In addition to nuclear spin noise, ST\({}_{0}\) qubits are exposed to electrical noise in their environment, which affects the qubit splitting in particular at higher detunings. It is therefore important to examine and mitigate low-frequency noise at different operating points of the qubit. In the previous section, the qubit frequency \(\Omega_{\mathrm{L}}\) was estimated entirely at low detuning where the Overhauser field gradient dominates over the exchange interaction. In order to probe and stabilize also the second control axis, namely \(J\)-driven rotations corresponding to small \(\phi\) in Figure 1(d), we probe the qubit frequency \(\Omega_{\mathrm{H}}\) at higher detunings, using a similar protocol with a modified qubit initialization. Free evolution of the initial state \(\ket{\mathrm{S}}\) around \(\hat{\mathbf{\omega}}_{\mathrm{L}}\) would result in low-visibility exchange-driven oscillations because of the low value of \(\phi\). 
To circumvent this problem, we precede the \(\Omega_{\mathrm{H}}\) estimation by one repetition of \(\Omega_{\mathrm{L}}\) estimation, as shown in Fig. 3(a). This way, real-time knowledge of \(\langle\Omega_{\mathrm{L}}\rangle\) allows the initial state \(\ket{\mathrm{S}}\) to be rotated to a state near the equator of the Bloch sphere, before it evolves freely for probing \(\Omega_{\mathrm{H}}\). This rotation is implemented by a diabatic detuning pulse from (0,2) to \(\varepsilon_{\mathrm{L}}\) (diabatic compared to the interdot tunnel coupling) for time \(\tilde{t}_{\pi/2}\), corresponding to a rotation of the qubit around \(\hat{\mathbf{\omega}}_{\mathrm{L}}\) by an angle \(\Omega_{\mathrm{L}}\tilde{t}_{\pi/2}=\pi/2\). After evolution for time \(t_{j}\) under finite exchange, another \(\pi/2\) rotation around \(\hat{\mathbf{\omega}}_{\mathrm{L}}\) rotates the qubit to achieve a high readout contrast in the ST\({}_{0}\) basis, as illustrated on the Bloch sphere in Fig. 3(b). As a side note, we mention that in the absence of knowledge of the Overhauser field gradient, the qubit would traditionally be initialized near the equator by adiabatically reducing detuning from (0,2) to the (1,1) charge configuration, and a reverse ramp for readout. Such adiabatic ramps usually last several microseconds each, while our \(\tilde{t}_{\pi/2}\) pulses typically take less than \(10\,\)ns, thereby significantly shortening each probe cycle. For the estimate of \(\Omega_{\mathrm{H}}\), the Bayesian probability distribution of \(\Omega_{\mathrm{H}}\) is updated after each of the \(M\)=100 single-shot measurement \(m_{j}\), each corresponding to a separation time \(t_{j}\) that is evenly stepped from 0 to \(100\,\)ns. The Bayesian probability distributions of both \(\Omega_{\mathrm{L}}\) and \(\Omega_{\mathrm{H}}\) are shown in Figure 3(c) and (d), respectively, with the latter being conditioned on \(20\,\)MHz \(<\langle\Omega_{\mathrm{L}}\rangle<40\,\)MHz to reduce the required FPGA memory [52]. Overall, this section demonstrated for the first time a real-time baseband control protocol that enables manipulation of a spin qubit on the entire Bloch sphere. ### Controlled exchange-driven rotations Using Bayesian inference to estimate control axes in real-time offers new possibilities for studying and mitigating qubit noise at all detunings. Figure 4(a) describes the real-time controlled exchange-driven rotations protocol aimed at stabilizing frequency fluctuations of the qubit at higher detunings. Following the approach of Figure 3, we first estimate \(\Omega_{\mathrm{L}}\) and \(\Omega_{\mathrm{H}}\) using real-time Bayesian estimation. We then use our knowledge of \(\Omega_{\mathrm{H}}\) to increase the rotation angle coherence of the qubit where the exchange coupling is comparable with the Overhauser field gradient. As illustrated in Fig. 4(a), the qubit control pulses now respond in real time to both qubit frequencies \(\Omega_{\mathrm{L}}\) and \(\Omega_{\mathrm{H}}\). Similar to the previous section, after determining \(\langle\Omega_{\mathrm{L}}\rangle\) and confirming that \(30\,\)MHz \(<\langle\Omega_{\mathrm{L}}\rangle<50\,\)MHz is fulfilled [52], the qubit is initialized near the equator of the Bloch sphere by fast diabatic \(\Omega_{\mathrm{L}}(\pi/2)\) pulses, followed by an exchange-based FID that probes \(\Omega_{\mathrm{H}}\). 
Based on the resulting \(\langle\Omega_{\mathrm{H}}\rangle\), the OPX adjusts the separation times \(\tilde{t}_{l}\) to rotate the qubit by user-defined target angles \(\theta_{l}=\tilde{t}_{l}\,\langle\Omega_{\mathrm{H}}\rangle\). To show the resulting improvement of coherent exchange oscillations, we plot in Fig. 4(c) the interleaved \(K\)=101 measurements \(m_{l}\) and compare them in Fig. 4(b) to the \(M\)=101 measurements \(m_{j}\). Fitting the average of the uncontrolled rotations by an oscillatory fit with Gaussian envelope decay yields \(T_{\mathrm{el}}^{*}\approx 60\,\)ns and \(Q\approx 3\), presumably limited by electrical noise [54], while the quality factor of the controlled rotations is enhanced by a factor of two, \(Q\approx 6\). The online control of exchange-driven rotations using Bayesian inference stabilizes fluctuations of the qubit frequency at higher detunings, where fluctuations are more sensitive to detuning noise. Indeed, we attribute the slightly smaller quality factor, relative to Overhauser-driven rotations in Fig. 2(d), to an increased sensitivity to charge noise at larger detuning, which, owing to its high-frequency component, is more likely to fluctuate on the estimation timescales [36]. This section established for the first time stabilization of two rotation axes of a spin qubit. This advancement should allow for stabilized control over the entire Bloch sphere, which we demonstrate in the next section. ### Hadamard rotations In this experiment, we demonstrate universal \(\mathrm{ST}_{0}\) control that corrects for fluctuations in all Hamiltonian parameters. We execute controlled Hadamard rotations around \(\hat{\mathbf{\omega}}_{\mathrm{Had}}\), as depicted by the trajectory on the Bloch sphere of Figure 5(d), by selecting the detuning \(\varepsilon_{\mathrm{Had}}\) in real time such that \(J(\varepsilon_{\mathrm{Had}})=|\Delta B_{z}|\). To achieve this, we do not assume that \(\Delta B_{z}=\Omega_{\mathrm{L}}\) (i.e. we allow contributions of \(J_{\mathrm{res}}\) to \(\Omega_{\mathrm{L}}\)) or that \(J=\Omega_{\mathrm{L}}\) (i.e. we allow contributions of \(\Delta B_{z}\) to \(\Omega_{\mathrm{H}}\)). The full protocol is detailed in Supplemental Material [46]. In previous sections, we have shown how to probe the qubit Larmor frequencies \(\Omega_{\mathrm{H}}\) and \(\Omega_{\mathrm{L}}\) at different detunings in real time and correct for their fluctuations. Now, we simultaneously counteract fluctuations in \(J\) and \(|\Delta B_{z}|\) on the OPX in order to perform the Hadamard gate. As we do not measure the sign of \(\Delta B_{z}\), we identify the polar angle of \(\hat{\mathbf{\omega}}_{\mathrm{Had}}\) as either \(\phi=\pi/4\) or \(-\pi/4\). (The sign of Overhauser gradients may become relevant for multi-qubit experiments and can be determined as in Ref. [55].) In other words, starting from the singlet state, the qubit rotates towards \(+X\) on the Bloch sphere for one sign of \(\Delta B_{z}\), and towards \(-X\) for the other sign. The sign of the gradient may change over long time scales due to nuclear spin diffusion, but the measurement outcomes of our protocol are expected to be independent of the sign. In preparation for our protocol, we first extract the time-averaged exchange profile by performing exchange oscillations as a function of evolution time [Figure 5(b)]. Removing contributions of \(\Delta B_{z}\) to \(\Omega_{\mathrm{H}}\) then yields \(J(\varepsilon)\) in Fig. 5(c). 
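This offline extraction, together with the linear approximation of \(J(\varepsilon)\) used in the next step, can be prototyped in a few lines. In the sketch below all numbers and the exchange profile are made-up placeholders rather than measured data; it only illustrates the arithmetic \(J=\sqrt{\Omega_{\mathrm{H}}^{2}-\Delta B_{z}^{2}}\) and the construction of a linear \(\varepsilon(J)\) model for initial detuning guesses.

```python
import numpy as np

# Toy version of the offline step: recover J(eps) from a measured Omega_H(eps),
# then fit a line over the 40-60 MHz target window so that eps(J) can be inverted
# on the controller for an initial detuning guess.  All values are assumptions.
eps = np.linspace(-12.0, -2.0, 50)                    # detuning axis (mV), assumed
dBz = 45e6                                            # |Delta B_z| estimate (Hz), assumed
J_profile = 10e6 * np.exp((eps + 12.0) / 3.5)         # made-up exchange profile J(eps)
omega_H = np.sqrt(J_profile**2 + dBz**2)              # what the exchange FID measures

J = np.sqrt(np.clip(omega_H**2 - dBz**2, 0.0, None))  # remove the Overhauser contribution

window = (J > 40e6) & (J < 60e6)                      # target range used by the protocol
slope, intercept = np.polyfit(J[window], eps[window], 1)

def eps_guess(J_target):
    """Linearized eps(J): initial detuning guess for tuning up J(eps) = |Delta B_z|."""
    return slope * J_target + intercept

print(f"detuning guess for J = 50 MHz: {eps_guess(50e6):.2f} mV")
```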
A linear approximation in the target range \(40\,\mathrm{MHz}<\langle J\rangle<60\,\mathrm{MHz}\) (dashed blue line) is needed later on the OPX to allow initial detuning guesses when tuning up \(J(\varepsilon)=|\Delta B_{z}|\). We also provide the OPX with a value for the residual exchange at low detuning, \(J_{\mathrm{res}}\approx 20\,\mathrm{MHz}\), determined offline as described in Supplemental Figure S5 [46]. As illustrated in Figure 5(a), the Hadamard rotation protocol starts by estimating \(|\Delta B_{z}|\) from \(\Omega_{\mathrm{L}}\), taking into account a constant residual exchange by solving \(\Delta B_{z}^{2}=\Omega_{\mathrm{L}}^{2}-J_{\mathrm{res}}^{2}\). Next, an initial value of \(\varepsilon_{\mathrm{Had}}\) is chosen based on the linear offline model to fulfill \(J(\varepsilon_{\mathrm{Had}})=|\Delta B_{z}|\) [feedback 1 in panel (a,c)]. To detect any deviations of the Figure 3: **Real-time Bayesian estimation of two control axes.****(a)** One repetition of the two-axis estimation protocol. After estimating \(\Omega_{\mathrm{L}}\) from \(N\)=101 \(t_{i}\) probe cycles (Fig. 2a), the OPX computes on-the-fly the pulse duration \(\tilde{t}_{\pi/2}\) required to initialize the qubit near the equator of the Bloch sphere by a diabatic \(\Omega_{\mathrm{L}}(\pi/2)\) pulse. After the \(\Omega_{\mathrm{L}}(\pi/2)\) pulse, the qubit evolves for time \(t_{j}\) under exchange interaction before another \(\Omega_{\mathrm{L}}(\pi/2)\) pulse initiates readout. After each single-shot measurement \(m_{j}\), the OPX updates the BE distribution of \(\Omega_{\mathrm{H}}\). Similar to \(t_{i}\) in the \(\Omega_{\mathrm{L}}\) estimation, \(t_{j}\) is spaced evenly between 0 and 100 ns across \(M=101\) exchange probe cycles. **(b)** Qubit evolution on the Bloch sphere during one exchange probe cycle. **(c)** Each column plots \(P(\Omega_{\mathrm{L}})\) after completion of the \(\Omega_{\mathrm{L}}\) estimation in each protocol repetition. **(d)** Each column plots \(P(\Omega_{\mathrm{H}})\) after completion of the \(\Omega_{\mathrm{H}}\) estimation in each protocol repetition. prevailing \(J\) from the offline model, an exchange-driven FID is performed at \(\varepsilon_{\mathrm{Had}}\) to estimate \(J\) from \(\Omega_{\mathrm{H}}\), using \(J^{2}=\Omega_{\mathrm{H}}^{2}-\Delta B_{z}^{2}\). Any deviation of \(\left\langle J\right\rangle\) from the target value \(\left|\Delta B_{z}\right|\) is subsequently corrected for by updating \(\varepsilon_{\mathrm{Had}}\) based on the linearized \(J(\varepsilon)\) model [feedback 2 in panels (a,c)]. Matching \(J\) to \(\left|\Delta B_{z}\right|\) in the two detuning feedback steps each takes about \(400\,\mathrm{ns}\) on the OPX. Finally, real-time knowledge of \(\Omega_{\mathrm{Had}}\equiv\sqrt{2}|\Delta B_{z}|\) is used to generate the free evolution times \(\tilde{t}_{i}\), spent at the updated value \(\tilde{\varepsilon}_{\mathrm{Had}}\), in order to perform Hadamard rotations by \(K\) user defined target angles. The resulting Hadamard oscillations are shown in Figure 5(e) and fitted with an exponentially decaying sinusoid, indicating a quality factor \(Q\)\(>\)5. (According to this naive fit, the amplitude drops to 1/e over approximately 40 rotations, although we have not experimentally explored rotation angles beyond \(9\pi\).) To illustrate the crucial role of real-time estimation for this experiment, we also performed rotation experiments that do not involve any real-time estimation and feedback cycles (gray data), as follows. 
Within minutes after performing the controlled Hadamard rotations (purple data), we executed Hadamard rotations assuming a fixed value of \(\left|\Delta B_{z}\right|=\overline{\left|\Delta B_{z}\right|}\), i.e. by pulsing to a fixed detuning value corresponding to \(J(\varepsilon_{\mathrm{H}})=\overline{\left|\Delta B_{z}\right|}\) according to the offline model. Here, \(\overline{\left|\Delta B_{z}\right|}\approx 40\) MHz is the average Overhauser gradient that we observed just before executing the Hadamard protocol. Not surprisingly, the quality factor of the resulting Hadamard-like oscillations is low and the rotation angle deviates from the intended target angle, likely due to the Overhauser gradient having drifted in time. As a side note, we mention that the purple data in Fig. 5(e) constitutes an average over 5000 repetitions, corresponding to a total acquisition time of 2 minutes including Overhauser and exchange estimation cycles. In contrast, the gray data also constitutes an average over 5000 repetitions, but only required 15 seconds because of the omission of all estimation and feedback cycles. Further evidence for the fluctuating nature of non-stabilized Hadamard rotations is discussed in Supplemental Figure S5 [46]. The stabilized Hadamard rotations demonstrate real-time feedback control based on Bayesian estimation of \(J\) and \(\left|\Delta B_{z}\right|\), and suggest a significant improvement in coherence for \(\mathrm{ST}_{0}\) qubit rotations around a tilted control axis. Despite the presence of fluctuations in all Hamiltonian parameters, we report effectively constant amplitude of Hadamard oscillations, with a reduced visibility that we tentatively attribute to estimation and readout errors. ## IV Discussion Our experiments demonstrate the effectiveness of feedback control in stabilizing and improving the performance of a singlet-triplet spin qubit. The protocols presented showcase two-axis control of a qubit with two fluctuating Hamiltonian parameters, made possible by implementing online Bayesian estimation and feedback on a low-latency FPGA-powered qubit control system. Real-time estimation allows control pulses to counteract fluctuations in the Overhauser gradient, enabling controlled Figure 4: **Real-time-controlled exchange-driven qubit rotations.****(a)** One repetition of the exchange rotation protocol. After estimation of \(\Omega_{\mathrm{L}}\) and \(\Omega_{\mathrm{H}}\) as in Figure 3, the OPX adjusts exchange duration times \(\tilde{t}_{i}\), using real-time knowledge of \(\left\langle\Omega_{\mathrm{H}}\right\rangle\), to rotate the qubit by user-defined target angles \(\theta_{i}=\tilde{t}_{i}\left\langle\Omega_{\mathrm{H}}\right\rangle\). Pulse durations \(\tilde{t}_{x/2}\) for qubit initialization and readout use real-time knowledge of \(\left\langle\Omega_{\mathrm{L}}\right\rangle\). **(b)** Each row plots measurements \(m_{j}\) from one protocol repetition, here \(M\)=101 exchange probe outcomes. **(c)** Each row plots measurement \(m_{l}\) from one protocol repetition, here \(K\)=101 controlled-exchange-rotation outcomes. To illustrate the increased coherence of controlled exchange rotations, we also plot in (b) and (c) the fraction of singlet outcomes of each column. Overhauser-driven rotations without the need for micromagnets or nuclear polarization protocols. 
The approach is extended to the real-time estimation of the second rotation axis, dominated by exchange interaction, which we then combine with an adaptive feedback loop to generate and stabilize Hadamard rotations. Our protocols assume that \(\Delta B_{z}\) does not depend on the precise dot detuning in the (1,1) configuration and remains constant on the time scale of one estimation. Similarly, stabilization of exchange rotations is only effective for electrical fluctuations that are slow compared to one estimation. Therefore, we expect potential for further improvements by more efficient estimation methods, for example through adaptive schemes [56] for Bayesian estimation from fewer samples, or by taking into account the statistical properties of a time-varying signal described by a Wiener process [57] or a nuclear spin bath [58]. Machine learning could be used to predict the qubit dynamics [59; 60; 61], possibly via long short-term memory artificial neural networks as reported for superconducting qubits [62]. While our current qubit cycle time (approximately \(30\,\mathrm{\SIUnitSymbolMicro s}\)) is dominated by readout and qubit initialization, it can potentially be reduced to a few microseconds through faster qubit state classification, such as enhanced latched readout [51], and faster reset, such as fast exchange of one electron with the reservoir [63]. Our protocol could be modified for real-time non-local noise correlations [64] or in-situ qubit tomography using fast Bayesian tomography [65] to study the underlying physics of the noisy environment, thereby providing qualitatively new insights into processes affecting qubit coherence and multi-qubit error correction. Figure 5: **Real-time universal \(\mathbf{ST}_{0}\) control demonstrated by Hadamard rotations.****(a)** Hadamard rotation protocol. After estimating \(\Omega_{\mathrm{L}}\), \(\varepsilon\) is chosen in real-time such that \(J(\varepsilon)=|\Delta B_{z}|\), based on a linearized offline model from panel (c). If \(40\,\mathrm{MHz}<\langle|\Delta B_{z}|\rangle<60\,\mathrm{MHz}\), the detuning is adjusted to account for deviations of the prevailing \(J\) from the offline model. Real-time knowledge of \(\Omega_{\mathrm{Had}}=\sqrt{2}\,|\Delta B_{z}|\) then dictates \(\bar{t}_{i}\) to achieve a user-defined Hadamard rotation angle. **(b)** Averaged exchange driven FID as a function of detuning and evolution time. Here, a diabatic \(\Omega_{\mathrm{L}}(\pi/2)\) pulse initializes the qubit near the equator of the Bloch sphere, prior to free exchange evolution, and subsequently prepares it for readout. **(c)**\(J\) as a function of \(\varepsilon\) extracted offline from (b), as well as a linearized model (dashed line) used in the two feedback steps of panel (a). **(d)** Hadamard rotation depicted on the Bloch sphere. **(e)** Measurement of Hadamard rotations with (purple) and without (black) the feedback shown in (a). The curve without estimation is offset for clarity by \(-0.12\). This work represents a significant advancement in quantum control by implementing an FPGA-powered technique to stabilize in real time the qubit frequency at different manipulation points. ## Author contributions FB lead the measurements and data analysis, and wrote the manuscript with input from all authors. FB, TR, JvdH, AC and FK performed the experiment with theoretical contributions from JAK, JD and EvN. FF fabricated the device. SF, GCG and MJM supplied the heterostructures. AC and FK supervised the project. 
## Acknowledgments This work received funding from the European Union's Horizon 2020 research and innovation programme under grant agreements 101017733 (QuantERA II) and 951852 (QLSI), from the Novo Nordisk Foundation under Challenge Programme NNF20OC0060019 (SolidQ), from the Inge Lehmann Programme of the Independent Research Fund Denmark, from the Research Council of Norway (RCN) under INTFELLES-Project No 333990, as well as from the Dutch National Growth Fund (NGF) as part of the Quantum Delta NL programme.
2310.08013
The Newlander-Nirenberg theorem for complex $b$-manifolds
Melrose defined the $b$-tangent bundle of a smooth manifold $M$ with boundary as the vector bundle whose sections are vector fields on $M$ tangent to the boundary. Mendoza defined a complex $b$-manifold as a manifold with boundary together with an involutive splitting of the complexified $b$-tangent bundle into complex conjugate factors. In this article, we prove complex $b$-manifolds have a single local model depending only on dimension. This can be thought of as the Newlander-Nirenberg theorem for complex $b$-manifolds: there are no ``local invariants''. Our proof uses Mendoza's existing result that complex $b$-manifolds do not have ``formal local invariants'' and a singular coordinate change trick to leverage the classical Newlander-Nirenberg theorem.
Tatyana Barron, Michael Francis
2023-10-12T03:29:00Z
http://arxiv.org/abs/2310.08013v1
# The Newlander-Nirenberg theorem for complex \(b\)-manifolds ###### Abstract Melrose defined the \(b\)-tangent bundle of a smooth manifold \(M\) with boundary as the vector bundle whose sections are vector fields on \(M\) tangent to the boundary. Mendoza defined a complex \(b\)-manifold as a manifold with boundary together with an involutive splitting of the complexified \(b\)-tangent bundle into complex conjugate factors. In this article, we prove complex \(b\)-manifolds have a single local model depending only on dimension. This can be thought of as the Newlander-Nirenberg theorem for complex \(b\)-manifolds: there are no "local invariants". Our proof uses Mendoza's existing result that complex \(b\)-manifolds do not have "formal local invariants" and a singular coordinate change trick to leverage the classical Newlander-Nirenberg theorem. ## 1 Introduction Throughout, we conflate \(\mathbb{R}^{2}=\mathbb{C}\) and use coordinates \(z=(x,y)=x+iy\) interchangeably. The word "smooth" means "infinitely real-differentiable", not "holomorphic". Recall that a complex manifold may equivalently be defined either by a holomorphic atlas or by an integrable almost-complex structure. The equivalence of these two definitions is given by the Newlander-Nirenberg theorem [8]. In more detail, a complex structure on a smooth manifold \(M\) is an involutive subbundle \(T^{0,1}M\) of the complexified tangent bundle such that \(\mathbb{C}TM=T^{1,0}M\oplus T^{0,1}M\) where \(T^{1,0}M\coloneqq\overline{T^{0,1}M}\). The Newlander-Nirenberg theorem implies that every point belongs to a coordinate chart \((x_{1},y_{1},\ldots,x_{n},y_{n})=(z_{1},\ldots,z_{n})\) in which \(T^{0,1}M\) is spanned by \[\partial_{\overline{z}_{j}}\coloneqq\tfrac{1}{2}(\partial_{x_{j}}+i\partial_{y_{j}})\qquad j=1,\ldots,n.\] Accordingly, \(T^{1,0}M\) is spanned by \[\partial_{z_{j}}\coloneqq\tfrac{1}{2}(\partial_{x_{j}}-i\partial_{y_{j}})\qquad j=1,\ldots,n.\] In this article, a \(b\)-manifold is a smooth manifold \(M\) equipped with a closed hypersurface \(Z\subseteq M\) referred to as the singular locus. The \(b\)-tangent bundle is the vector bundle \({}^{b}TM\) over \(M\) whose smooth sections are the vector fields on \(M\) that are tangent along \(Z\). These terminologies stem from the \(b\)-geometry of Melrose [6], also called log-geometry. Mendoza [7] defined a complex \(b\)-structure on a \(b\)-manifold \(M\) to be an involutive subbundle \({}^{b}T^{0,1}M\) of the complexified \(b\)-tangent bundle such that \(\mathbb{C}^{b}TM={}^{b}T^{1,0}M\oplus{}^{b}T^{0,1}M\), where \({}^{b}T^{1,0}M\coloneqq{}^{b}\overline{T^{0,1}M}\). Thus, complex \(b\)-structures are defined exactly analogously to complex structures, replacing the tangent bundle by the \(b\)-tangent bundle. To complete the analogy, one would like there to be a Newlander-Nirenberg type result to the effect that all complex \(b\)-structures are locally isomorphic to a standard model depending only on dimension. Such a result would empower one to give a second (equivalent) definition of a complex \(b\)-manifold in terms of an appropriately defined \(b\)-holomorphic atlas.
A logical candidate model for a complex \(b\)-manifolds is \(M=\mathbb{R}^{2n+2}=\mathbb{C}^{n+1}\), \(n\geq 0\) equipped with coordinates \((x_{0},y_{0},\dots,x_{n},y_{n})=(z_{0},\dots,z_{n})\) with \({}^{b}T^{0,1}M\) spanned by \[{}^{b}\partial_{\overline{z}_{0}} \coloneqq\tfrac{1}{2}(x_{0}\partial_{x_{0}}+i\partial_{y_{0}})\] \[\partial_{\overline{z}_{j}} \coloneqq\tfrac{1}{2}(\partial_{x_{j}}+i\partial_{y_{j}}) j=1,\dots,n.\] Accordingly, \({}^{b}T^{1,0}M\) is spanned by \[{}^{b}\partial_{z_{0}} \coloneqq\tfrac{1}{2}(x_{0}\partial_{x_{0}}-i\partial_{y_{0}})\] \[\partial_{z_{j}} \coloneqq\tfrac{1}{2}(\partial_{x_{j}}-i\partial_{y_{j}}) j=1,\dots,n.\] Our goal in this article is to show that, locally near points on the singular locus, every complex \(b\)-manifold of dimension \(2n+2\) is indeed isomorphic to the model example above. Mendoza [7] already took up this problem and partially settled it, up to deformation by terms whose Taylor expansions vanish along the singular locus. Indeed, Mendoza's result, which we state below, will play a crucial role. **Theorem 1.1** ([7], Proposition 5.1).: _With \(n\geq 0\), let \(M\) be a \((2n+2)\)-dimensional complex \(b\)-manifold with singular locus \(Z\). Then, every \(p\in Z\) has an open neighbourhood \(U\) on which there are smooth local coordinates \((x_{0},y_{0},\dots,x_{n},y_{n})=(z_{0},\dots,z_{n})\) centered at \(p\) with \(x_{0}\) vanishing on \(Z\) and smooth, complex vector fields \(\Gamma_{0},\dots,\Gamma_{n}\) on \(U\) vanishing to infinite order on \(Z\) such that_ \[L_{0} \coloneqq{}^{b}\partial_{\overline{z}_{0}}+\Gamma_{0}\] \[L_{j} \coloneqq\partial_{\overline{z}_{j}}+\Gamma_{j} j=1,\dots,n\] _is a frame for \({}^{b}T^{0,1}M\) over \(U\). Moreover, each of \(\Gamma_{0},\dots,\Gamma_{n}\) is a linear combination of \({}^{b}\partial_{z_{0}},\partial_{z_{1}},\dots,\partial_{z_{n}}\) whose coefficients are smooth, complex-valued functions on \(U\) vanishing to infinite order on \(Z\)._ The contribution of the present article will be to improve the above result by showing that the deformation terms \(\Gamma_{0},\dots,\Gamma_{n}\) can be got rid of. Thus, our main result is the following. **Theorem 1.2**.: _With \(n\geq 0\), let \(M\) be a \((2n+2)\)-dimensional complex \(b\)-manifold with singular locus \(Z\). Then, every \(p\in Z\) has an open neighbourhood \(U\) on which there are smooth local coordinates \((x_{0},y_{0},\ldots,x_{n},y_{n})=(z_{0},\ldots,z_{n})\) centered at \(p\) with \(x_{0}\) vanishing on \(Z\) such that_ \[{}^{b}\partial_{\overline{z}_{0}} \coloneqq\tfrac{1}{2}(x_{0}\partial_{x_{0}}+i\partial_{y_{0}})\] \[\partial_{\overline{z}_{j}} \coloneqq\tfrac{1}{2}(\partial_{x_{j}}+i\partial_{y_{j}}) j=1,\ldots,n\] _is a frame for \({}^{b}T^{0,1}M\) over \(U\)._ Readers familiar with the techniques needed to prove the classical Newlander-Nirenberg theorem will be unsurprised that nonellipticity of the operator \({}^{b}\partial_{\overline{z}}=\tfrac{1}{2}(x\partial_{x}+i\partial_{y})\) is at the heart of the matter. The difficulty is that, without ellipticity, we do not automatically have good existence theory for solutions to deformations of this operator. The main insight of this article is that it is possible to get around this difficulty by leveraging the polar coordinate change \(g:\mathbb{R}^{2}\to\mathbb{R}^{2}\), \(g(x,y)=xe^{iy}\) (negative radii permitted). Notice that \({}^{b}\partial_{\overline{z}}g=0\), i.e. \(g\) is "\(b\)-holomorphic". Let us summarize the contents of this paper. 
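The identity \({}^{b}\partial_{\overline{z}}g=0\) is a one-line computation, and it can also be confirmed symbolically; the following sympy sketch (an illustration added here, not part of the original) checks it and, for contrast, applies the conjugate operator \({}^{b}\partial_{z}\), which sends \(g\) back to itself.

```python
from sympy import symbols, I, exp, simplify, Rational

x, y = symbols('x y', real=True)
g = x * exp(I * y)      # the singular coordinate change g(x, y) = x e^{iy}

# The two-dimensional model b-operators: (1/2)(x d/dx +/- i d/dy).
b_dzbar = lambda f: Rational(1, 2) * (x * f.diff(x) + I * f.diff(y))
b_dz    = lambda f: Rational(1, 2) * (x * f.diff(x) - I * f.diff(y))

print(simplify(b_dzbar(g)))   # 0, so g is "b-holomorphic"
print(simplify(b_dz(g)))      # x*exp(I*y), i.e. applying b_dz to g returns g
```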
In Section 2, we collect definitions and basic results on complex \(b\)-manifolds. In Section 3, we prove in the two-dimensional case that nontrivial \(b\)-holomorphic functions exist locally. In Section 4, we use the latter existence result to prove the main result Theorem 1.2 in the two-dimensional case. Sections 5 and 6 extend the results of Sections 3 and 4 to the general case. Finally, in Section 7, we situate the singular coordinate change \(g(x,y)=xe^{iy}\) in a more general context which will be relevant for planned future work on complex \(b^{k}\)-manifolds (see [1] for the definition of a complex \(b^{k}\)-manifold). ## 2 Background The language of \(b\)-geometry, also known as log-geometry, was introduced by Melrose [6]. One encounters two superficially different formulations of \(b\)-geometry in the literature. In the original approach, one works on a manifold with boundary. Other authors instead use a manifold without boundary that is equipped with a given hypersurface [2]. We follow the latter approach. **Definition 2.1**.: A **b-manifold** is a smooth manifold \(M\) together with a closed hypersurface \(Z\subseteq M\) which we refer to as the **singular locus**. An **isomorphism** of \(b\)-manifolds is a diffeomorphism that preserves the singular loci. Note ordinary smooth manifolds may be considered as \(b\)-manifolds, taking \(Z=\varnothing\). **Definition 2.2**.: A **b-vector field** on a \(b\)-manifold \(M=(M,Z)\) is a smooth vector field on \(M\) that is tangent to \(Z\). The collection of \(b\)-vector fields on \(M\) is denoted \({}^{b}\mathfrak{X}(M)\subseteq\mathfrak{X}(M)\). **Example 2.3**.: Consider the \(b\)-manifold \(M=\mathbb{R}^{n+1}\) with coordinates \((x_{0},x_{1},\ldots,x_{n})\) and singular locus \(Z=\{0\}\times\mathbb{R}^{n}\). Then, \({}^{b}\mathfrak{X}(M)=\langle x_{0}\partial_{x_{0}},\partial_{x_{1}},\ldots, \partial_{x_{n}}\rangle\), the free \(C^{\infty}(M)\)-module generated by \(x_{0}\partial_{x_{0}},\partial_{x_{1}},\ldots,\partial_{x_{n}}\). The example above completely captures the local structure of \(b\)-manifolds. Accordingly, for any \(b\)-manifold \(M\), \({}^{b}\mathfrak{X}(M)\) is a projective \(C^{\infty}(M)\)-module closed under Lie bracket. Applying Serre-Swan duality to the inclusion \({}^{b}\mathfrak{X}(M)\to\mathfrak{X}(M)\), one has a corresponding Lie algebroid \({}^{b}TM\) whose anchor map, denoted \(\rho:{}^{b}TM\to TM\), induces (abusing notation) an identification \(C^{\infty}(M;{}^{b}TM)={}^{b}\mathfrak{X}(M)\). **Definition 2.4**.: For a \(b\)-manifold \(M\), the Lie algebroid \({}^{b}TM=({}^{b}TM,\rho,[\cdot,\cdot])\) satisfying \(C^{\infty}(M;{}^{b}TM)={}^{b}\mathfrak{X}(M)\) described above is called the **b-tangent bundle** of \(M\). In Example 2.3, \(x_{0}\partial_{x_{0}},\partial_{x_{1}},\dots,\partial_{x_{n}}\) is a global frame for \({}^{b}TM\). The anchor map \(\rho\) descends from evaluation of vector fields. Thus, over \(M\setminus Z\), \(\rho\) is an isomorphism and, over \(Z\), the kernel of \(\rho\) is the line bundle spanned by \(x_{0}\partial_{x_{0}}\). In general, for any \(b\)-manifold \(M\), the bundles \({}^{b}TM\) and \(TM\) are canonically isomorphic over \(M\setminus Z\) via \(\rho\). If \(\theta:M_{1}\to M_{2}\) is an isomorphism of \(b\)-manifolds, it is clear that \(\theta\) preserves the associated modules of \(b\)-vector fields. By Serre-Swan duality, \(\theta\) induces a Lie algebroid isomorphism \[{}^{b}\theta_{*}:{}^{b}TM_{1}\to{}^{b}TM_{2}. 
\tag{2.1}\] Over \(M_{i}\setminus Z_{i}\), if we apply the aforementioned natural identifications of the \(b\)-tangent bundle and usual tangent bundle, \({}^{b}\theta_{*}\) coincides with \(\theta_{*}:TM_{1}\to TM_{2}\), the usual induced isomorphism given by pushforward of tangent vectors. **Definition 2.5**.: The **b-cotangent bundle**\({}^{b}TM^{*}\) of a \(b\)-manifold \(M\) is the dual of the \(b\)-tangent bundle \({}^{b}TM\). The dual \(\rho^{*}:TM^{*}\to{}^{b}TM^{*}\) of the anchor map \(\rho:{}^{b}TM\to TM\) is likewise an isomorphism when restricted over \(M\setminus Z\). In particular, \(\rho^{*}\) is injective on a dense open set, and so (what is equivalent) induces an injective mapping \(C^{\infty}(M;TM^{*})\to C^{\infty}(M;{}^{b}TM^{*})\). Put in simpler terms, a 1-form \(\omega\in C^{\infty}(M;TM^{*})\) is fully determined by its pairing with \(b\)-vector fields. The **b-exterior derivative** of a smooth function \(f\) on \(M\) is defined as: \[{}^{b}df\coloneqq\rho^{*}(df). \tag{2.2}\] Actually, one may quite legitimately regard \(df\) and \({}^{b}df\) as identical, with the latter notation merely hinting that one only intends to pair the form with \(b\)-vector fields. A general philosophy of \(b\)-calculus is that many classical geometries admit \(b\)-analogues in which the role of tangent bundle is played by the \(b\)-tangent bundle. A notable example is \(b\)-symplectic geometry ([4], [5]). Complex \(b\)-geometry was introduced by Mendoza [7] who furthermore made a systematic study of the \(b\)-Dolbeault complex. We remark that the basic idea of a complex \(b\)-structure is mentioned in passing on pp. 218 of [6]. **Definition 2.6**.: A **complex b-structure** on an (even-dimensional) \(b\)-manifold \(M\) is a complex subbundle \({}^{b}T^{0,1}M\) of the complexified \(b\)-tangent bundle satisfying: 1. \(\mathbb{C}^{b}TM={}^{b}T^{1,0}M\oplus{}^{b}T^{0,1}M\), where \({}^{b}T^{1,0}M\coloneqq\overline{{}^{b}T^{0,1}M}\), 2. \({}^{b}T^{0,1}M\) is involutive. A **complex b-manifold** is a \(b\)-manifold equipped with a complex \(b\)-structure. An **isomorphism**\(\theta:M_{1}\to M_{2}\) of complex \(b\)-manifolds is an isomorphism of their underlying \(b\)-manifolds satisfying \({}^{b}\theta_{*}({}^{b}T^{0,1}M_{1})={}^{b}T^{0,1}M_{2}\). Here, by abuse of notation, \({}^{b}\theta_{*}\) denotes the complexification of the isomorphism \({}^{b}\theta_{*}:{}^{b}TM_{1}\to{}^{b}TM_{2}\) defined above (2.1). Because \({}^{b}TM\) is naturally isomorphic to \(TM\) away from \(Z\), a \(b\)-complex structure on \(M\) in particular gives a complex structure in the usual sense on \(M\setminus Z\). In particular, one may consider ordinary complex manifolds as special cases of complex \(b\)-manifolds with \(Z=\varnothing\). Actually, for any complex \(b\)-manifold \(M\), the restricted complex structure on \(M\setminus Z\) determines the \(b\)-complex structure, as the following proposition shows. Of course, not all complex structures on \(M\setminus Z\) are the restriction of a (unique) \(b\)-complex structure on \(M\), just those that degenerate in a particular way at \(Z\). **Proposition 2.7**.: _Let \(M_{i}\) be a complex \(b\)-manifold with singular locus \(Z_{i}\) for \(i=1,2\). Suppose that \(\theta:M_{1}\to M_{2}\) is a diffeomorphism such that:_ 1. \(\theta(Z_{1})=Z_{2}\) _(i.e._ \(\theta\) _is an isomorphism of_ \(b\)_-manifolds),_ 2. 
\(\theta\) _restricts to an isomorphism of (ordinary) complex manifolds_ \(M_{1}\setminus Z_{1}\to M_{2}\setminus Z_{2}\)_._ _Then, \(\theta\) is an isomorphism of complex \(b\)-manifolds._ Proof.: From (i), we have an induced isomorphism on the complexified \(b\)-tangent bundles \({}^{b}\theta_{*}:{\mathbb{C}}^{b}TM_{1}\to{\mathbb{C}}^{b}TM_{2}\). From (ii), and applying the natural isomorphisms of \(b\)-tangent bundle and ordinary tangent bundle away from the singular loci, the subbundles \({}^{b}\theta_{*}({}^{b}T^{0,1}M_{1})\) and \({}^{b}T^{0,1}M_{2}\) of \({\mathbb{C}}^{b}TM_{2}\) agree away from \(Z_{2}\). The conclusion follows by a continuity argument. Taking duals, the splitting \({\mathbb{C}}^{b}TM={}^{b}T^{1,0}M\oplus{}^{b}T^{0,1}M\) given by a complex \(b\)-structure induces an associated splitting of the complexified \(b\)-cotangent bundle \({\mathbb{C}}^{b}TM^{*}\) and of the complexified \(b\)-exterior derivative \({}^{b}d:C^{\infty}(M,{\mathbb{C}})\to C^{\infty}(M;{\mathbb{C}}^{b}TM^{*})\), as tabulated below: \[{\mathbb{C}}^{b}TM^{*}={}^{b}T^{1,0}M^{*}\oplus{}^{b}T^{0,1}M^{*}\] \[{}^{b}d={}^{b}\partial+{}^{b}\overline{\partial}\] \[{}^{b}\partial:C^{\infty}(M,{\mathbb{C}})\to C^{\infty}(M;{}^{b}T^ {1,0}M^{*})\] \[{}^{b}\overline{\partial}:C^{\infty}(M,{\mathbb{C}})\to C^{ \infty}(M;{}^{b}T^{0,1}M^{*})\] In effect, \({}^{b}\partial f\), respectively \({}^{b}\overline{\partial}f\), is \(df\) restricted to the \(b\)-\((1,0)\)-vector fields, respectively the \(b\)-\((0,1)\)-vector fields (that is to say, sections of \({}^{b}T^{1,0}M\), respectively sections of \({}^{b}T^{0,1}M\)). **Definition 2.8**.: A \(b\)**-holomorphic** function on a complex \(b\)-manifold \(M\) (or an open subset thereof) is a smooth function \(f:M\to{\mathbb{C}}\) satisfying \({}^{b}\overline{\partial}f=0\). We write \({}^{b}{\mathcal{O}}(M)\) for the collection of \(b\)-holomorphic functions on \(M\). In more concrete terms, \(f\) is \(b\)-holomorphic if \(Xf=0\) for every \(b\)-\((0,1)\)-vector field \(X\) or (what is sufficient), all \(X\) in a given frame for \({}^{b}T^{0,1}M\). Because \({}^{b}T^{0,1}M\) is involutive, \({}^{b}\mathcal{O}(M)\) is a ring with respect to pointwise-multiplication. **Example 2.9**.: Consider the \(b\)-manifold \(M=\mathbb{R}^{2}=\mathbb{C}\) with singular locus \(Z=\{0\}\times\mathbb{R}\). Then, \({}^{b}\mathfrak{X}(M)=\langle x\partial_{x},\partial_{y}\rangle\), the free \(C^{\infty}(\mathbb{R}^{2})\)-module generated by \(x\partial_{x}\) and \(\partial_{y}\). An example of a complex \(b\)-structure for \(M\) has \({}^{b}T^{0,1}M\) spanned by \[{}^{b}\partial_{\overline{z}}\coloneqq\tfrac{1}{2}(x\partial_{x}+i\partial_{ y}).\] and, correspondingly, \({}^{b}T^{1,0}M\) spanned by \[{}^{b}\partial_{z}\coloneqq\tfrac{1}{2}(x\partial_{x}-i\partial_{y}).\] A function \(f\) is \(b\)-holomorphic precisely when \({}^{b}\partial_{\overline{z}}f=0\). An important globally-defined \(b\)-holomorphic function on \(M\) was mentioned in the introduction \[g:\mathbb{R}^{2} \to\mathbb{C} g(x,y)=xe^{iy}.\] More generally, if \(h\) is a usual holomorphic function defined near \(0\in\mathbb{C}\), then \(f=h\circ g\) is a \(b\)-holomorphic function defined near \(0\in M\). There also exist \(b\)-holomorphic functions defined near \(0\) not of the latter form and, indeed, not real-analytic near \(0\). For example, on the domain \(-\tfrac{\pi}{4}<y<\tfrac{\pi}{4}\), one has a \(b\)-holomorphic \(f\) defined by \(f(x,y)=\exp(\tfrac{-1}{g(x,y)})\) for \(x>0\) and \(f(x,y)=0\) for \(x\leq 0\). 
These examples are also noted in [1], Example 8.1. **Remark 2.10**.: Observe that the \(b\)-holomorphic function \(g(x,y)=xe^{iy}\) in the above example may be conceptualized as \(g(x,y)=\phi_{iy}(x)\), where \(\phi_{t}(x)=e^{t}x\) is the flow of \(x\partial_{x}\). This somewhat cryptic remark will be expanded on in Section 7. **Example 2.11**.: Consider the \(b\)-manifold \(M=\mathbb{R}^{2n+2}=\mathbb{C}^{n+1}\) with \(Z=\{0\}\times\mathbb{R}^{2n+1}\) as its singular locus. Introduce coordinates \((x_{0},y_{0},\ldots,x_{n},y_{n})=(z_{0},\ldots,z_{n})\). Thus, \[{}^{b}\mathfrak{X}(M) =\langle x_{0}\partial_{x_{0}},\partial_{y_{0}},\partial_{x_{1}},\partial_{y_{1}},\ldots,\partial_{x_{n}},\partial_{y_{n}}\rangle.\] An example of a complex \(b\)-structure \({}^{b}T^{0,1}M\) for \(M\) is the one spanned by: \[{}^{b}\partial_{\overline{z}_{0}} \coloneqq\tfrac{1}{2}(x_{0}\partial_{x_{0}}+i\partial_{y_{0}})\] \[\partial_{\overline{z}_{j}} \coloneqq\tfrac{1}{2}(\partial_{x_{j}}+i\partial_{y_{j}}) j=1,\ldots,n.\] Correspondingly, a frame for \({}^{b}T^{1,0}M\) is: \[{}^{b}\partial_{z_{0}} \coloneqq\tfrac{1}{2}(x_{0}\partial_{x_{0}}-i\partial_{y_{0}})\] \[\partial_{z_{j}} \coloneqq\tfrac{1}{2}(\partial_{x_{j}}-i\partial_{y_{j}}) j=1,\ldots,n.\] The function \(g(x_{0},y_{0},\ldots,x_{n},y_{n})=x_{0}e^{iy_{0}}\) from Example 2.9 is still \(b\)-holomorphic, as are the complex coordinate functions \(z_{j}=x_{j}+iy_{j}\), \(j=1,\ldots,n\). Additional \(b\)-holomorphic functions may be obtained by applying an \((n+1)\)-variable holomorphic function (in the usual sense) to \(g,z_{1},\ldots,z_{n}\). **Example 2.12**.: Consider the complex \(b\)-manifold \(N=\mathbb{R}^{2}\) with singular locus \(Z\coloneqq\{0\}\times\mathbb{R}\) and with \({}^{b}T^{0,1}N\) spanned by \[L\coloneqq\tfrac{1}{2}\Big{(}x\partial_{x}+i(-xy\partial_{x}+(1+y^{2})\partial_ {y})\Big{)}.\] This complex \(b\)-manifold \(N\) is isomorphic to a neighbourhood of the origin of the complex \(b\)-manifold \(M=\mathbb{R}^{2}\) with \({}^{b}T^{0,1}M\) spanned by \({}^{b}\partial_{\overline{z}}\coloneqq\tfrac{1}{2}(x\partial_{x}+i\partial_{ y})\) from Example 2.9. Indeed, the diffeomorphism \(G:\mathbb{R}\times(-\tfrac{\pi}{2},\tfrac{\pi}{2})\to\mathbb{R}^{2}\) defined by \(G(x,y)=(x\cos y,\tan y)\) satisfies \(G_{*}({}^{b}\partial_{\overline{z}})=L\). Thus, \(b\)-holomorphic functions on \(N\) may be obtained by pushforward through \(G\). For example, the pushforward of \(g(x,y)=xe^{iy}\) by \(G\) is \(p(x,y)=x+ixy\) which indeed satisfies \(Lp=0\). For a general complex \(b\)-manifold \(M\), it is not immediately clear whether, locally near points on the singular locus \(Z\), any nonconstant \(b\)-holomorphic functions must exist at all. This article will confirm that they do (Sections 3 and 5). Note that a \(b\)-holomorphic function on a complex \(b\)-manifold \(M\) restricts to a holomorphic function in the usual sense on the complex manifold \(M\setminus Z\). Indeed, in a similar spirit to Proposition 2.7, if \(f\) is smooth and restricts to a holomorphic function on \(M\setminus Z\), then it is \(b\)-holomorphic on \(M\) by a continuity argument. Recall that, in the case of ordinary complex manifolds (defined using integrable almost-complex structures), the existence of local holomorphic functions is closely-tied to the existence of charts. The proposition below, which will be used in Sections 4 and 6, illustrates this principle. We include its (standard) proof for the sake of completeness. 
**Proposition 2.13**.: _Let \(M\) be a complex manifold. Let \(\Omega\subseteq M\) and \(\Omega^{\prime}\subseteq\mathbb{C}^{n}\) be open. Suppose \(f_{1},\dots,f_{n}:\Omega\to\mathbb{C}\) are holomorphic functions such that \(\theta:\Omega\to\Omega^{\prime}\), \(\theta(p)=(f_{1}(p),\dots,f_{n}(p))\) defines a diffeomorphism. Then, \(\theta_{*}\) maps \(T^{0,1}M\) onto \(T^{0,1}\mathbb{C}^{n}\) over \(\Omega\) where, by definition, the latter is spanned by \(\partial_{\overline{z}_{j}}\coloneqq\tfrac{1}{2}(\partial_{x_{j}}+i\partial_{y_{j}})\), \(j=1,\dots,n\)._ Proof.: Let \(L\in C^{\infty}(\Omega;T^{0,1}M)\) be a \((0,1)\)-vector field defined on \(\Omega\). Using \(Lf_{j}=0\) for \(j=1,\dots,n\) and \(\theta_{*}(f_{j})=z_{j}\) (the \(j\)th coordinate function) for \(j=1,\dots,n\), we obtain \[\theta_{*}(L)z_{j}=0.\] If we write \(\theta_{*}(L)=\sum_{j=1}^{n}\alpha_{j}\partial_{x_{j}}+\beta_{j}\partial_{y_{j}}\), where \(\alpha_{j},\beta_{j}\) are smooth, complex-valued functions on \(\Omega^{\prime}\), then the above gives \(\alpha_{j}+i\beta_{j}=0\) for \(j=1,\dots,n\). Thus, we obtain \[\theta_{*}(L)=\sum_{j=1}^{n}-2i\beta_{j}\partial_{\overline{z}_{j}},\] so that \(\theta_{*}(L)\) is a \((0,1)\)-vector field on \(\Omega^{\prime}\). In the case of complex \(b\)-manifolds, \(b\)-holomorphic functions cannot directly play the role of coordinate functions of charts. To see this, note that any solution \(f:\mathbb{R}^{2}\to\mathbb{C}\) of \({}^{b}\partial_{\overline{z}}f=0\) (where \({}^{b}\partial_{\overline{z}}\coloneqq\tfrac{1}{2}(x\partial_{x}+i\partial_{y})\)) is necessarily constant along \(Z=\{0\}\times\mathbb{R}\). ## 3 Existence of \(b\)-holomorphic functions: dimension \(2\) In this section, we show that, near any point on the singular locus of a two-dimensional complex \(b\)-manifold, there is a nontrivial \(b\)-holomorphic function. Here, "nontrivial" means the derivative normal to the singular locus is nonzero. From Theorem 1.1, this amounts to the following (as usual \({}^{b}\partial_{z}\coloneqq\frac{1}{2}(x\partial_{x}-i\partial_{y})\), \({}^{b}\partial_{\overline{z}}\coloneqq\frac{1}{2}(x\partial_{x}+i\partial_{y})\)). **Theorem 3.1**.: _Consider a complex \(b\)-manifold \(M=\mathbb{R}^{2}\) with singular locus \(Z=\{0\}\times\mathbb{R}\) and with \({}^{b}T^{0,1}M\) spanned by_ \[L\coloneqq{}^{b}\partial_{\overline{z}}+\gamma\cdot{}^{b}\partial_{z}\] _where \(\gamma:\mathbb{R}^{2}\to\mathbb{C}\) is a smooth function vanishing to infinite order on \(Z\). Then, there is a neighbourhood \(U\subseteq M\) of \((0,0)\) and a \(b\)-holomorphic function \(f:U\to\mathbb{C}\) that vanishes on \(Z\) and satisfies \(\partial_{x}f(0,0)=1\)._ The proof of Theorem 3.1 will rely on the following three elementary lemmas whose proofs we omit. Firstly, we record several vector field pushforward formulae for the singular coordinate change \(g(x,y)=xe^{iy}\). Note \(g\) is simply the polar coordinate transformation (with negative radii permitted). It is a local diffeomorphism away from \(Z=\{0\}\times\mathbb{R}\). We treat a more general type of coordinate change in Section 7, providing additional context for the methods of this section and rendering them somewhat less ad hoc.
**Lemma 3.2**.: _The smooth surjection \(g:\mathbb{R}^{2}\to\mathbb{C}\) defined by \(g(x,y)=xe^{iy}\) satisfies:_ \[g_{*}(x\partial_{x}) =x\partial_{x}+y\partial_{y}\] \[g_{*}(\partial_{y}) =-y\partial_{x}+x\partial_{y}\] \[g_{*}({}^{b}\partial_{\overline{z}}) =\overline{z}\partial_{\overline{z}}\] \[g_{*}({}^{b}\partial_{z}) =z\partial_{z}.\] _Here, \({}^{b}\partial_{\overline{z}}\coloneqq\frac{1}{2}(x\partial_{x}+i\partial_{y})\), \({}^{b}\partial_{z}\coloneqq\frac{1}{2}(x\partial_{x}-i\partial_{y})\), \(\partial_{\overline{z}}\coloneqq\frac{1}{2}(\partial_{x}+i\partial_{y})\), \(\partial_{z}\coloneqq\frac{1}{2}(\partial_{x}-i\partial_{y})\). _ Secondly, we need the following fact concerning the expression in polar coordinates of plane functions vanishing to infinite order at the origin. **Lemma 3.3**.: _Define \(g:\mathbb{R}^{2}\to\mathbb{C}\) by \(g(x,y)=xe^{iy}\) and \(\sigma:\mathbb{R}^{2}\to\mathbb{R}^{2}\) by \(\sigma(x,y)=(-x,y+\pi)\). Let \(\gamma:\mathbb{R}^{2}\to\mathbb{C}\) be a \(\sigma\)-invariant (i.e. \(\gamma\circ\sigma=\gamma\)) smooth function that vanishes to infinite order on \(Z\coloneqq\{0\}\times\mathbb{R}\). Then, the pushforward \(g_{*}(\gamma)\) is a well-defined smooth function on \(\mathbb{C}\) that vanishes to infinite order at \(0\). _ Thirdly, we need the following divisibility property of plane functions vanishing to infinite order at the origin. **Lemma 3.4**.: _Let \(\gamma\) be a smooth, complex-valued function on \(\mathbb{C}\) vanishing to infinite order at \(0\). Then, \((1/z)\gamma\) and \((1/\overline{z})\gamma\) extend to smooth, complex-valued functions on \(\mathbb{C}\) vanishing to infinite order at \(0\). _ We now prove the main result of this section. Proof of Theorem 3.1.: Since we are only looking for a local solution, there is no harm in assuming \(\gamma\) satisfies the periodicity assumption \(\gamma\circ\sigma=\gamma\) of Lemma 3.3. Indeed, we may take \(\gamma\) to be compactly supported in \(\mathbb{R}\times(-\frac{\pi}{2},\frac{\pi}{2})\), which is a fundamental domain for \(\sigma\), and extend it periodically. This assumption makes the pushforward of \(L\) by \(g(x,y)=xe^{iy}\) well-defined, as the following computation shows. \[g_{*}(L) =g_{*}({}^{b}\partial_{\overline{z}}+\gamma\cdot{}^{b}\partial_{z})\] \[=\overline{z}\partial_{\overline{z}}+g_{*}(\gamma)z\partial_{z}\qquad\text{(Lemmas 3.2 and 3.3)}.\] Because \(g\) is a local diffeomorphism \(\mathbb{R}^{2}\setminus Z\to\mathbb{C}\setminus\{0\}\), we have that \(g_{*}(L)\) defines a complex structure on \(\mathbb{C}\setminus\{0\}\). Furthermore, using Lemma 3.4, we can write \(g_{*}(\gamma)=\overline{z}\gamma^{\prime}\) where \(\gamma^{\prime}\) is another smooth, complex-valued function on \(\mathbb{C}\) vanishing to infinite order at \(0\). Thus, \[g_{*}(L)=\overline{z}(\partial_{\overline{z}}+\gamma^{\prime}z\partial_{z}).\] But now, note that \(\partial_{\overline{z}}+\gamma^{\prime}z\partial_{z}\) defines an (ordinary) complex structure on all of \(\mathbb{C}\) (because the \(\gamma^{\prime}z\partial_{z}\) term vanishes at \(0\)). So, by the ordinary Newlander-Nirenberg theorem for dimension two, there exists a \(C^{\infty}\), complex-valued function \(h\) defined near \(0\in\mathbb{C}\) such that \[(\partial_{\overline{z}}+\gamma^{\prime}z\partial_{z})h=0\] and also satisfying \(h(0)=0\) and \(\partial_{x}h(0)=1\). Putting \(f=h\circ g\) completes the proof.
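As a quick supplementary check of the pushforward formulae in Lemma 3.2 and of the \(b\)-holomorphicity of \(g\), the identities can be verified symbolically by testing them on the target coordinate functions \(z\) and \(\overline{z}\). The snippet below is ours (the argument above relies on no software) and assumes SymPy.

```python
# Supplementary symbolic check (ours): g(x, y) = x e^{iy} is b-holomorphic, and the
# pushforward formulae of Lemma 3.2 hold when tested on the target coordinate
# functions z = x e^{iy} and zbar = x e^{-iy}.
import sympy as sp

x, y = sp.symbols('x y', real=True)
g = x * sp.exp(sp.I * y)          # the polar coordinate change (negative radii permitted)
gc = sp.conjugate(g)              # equals x e^{-iy} since x, y are real

b_dzbar = lambda F: (x * sp.diff(F, x) + sp.I * sp.diff(F, y)) / 2   # b-del_zbar
b_dz    = lambda F: (x * sp.diff(F, x) - sp.I * sp.diff(F, y)) / 2   # b-del_z

assert sp.simplify(b_dzbar(g)) == 0        # g is b-holomorphic
assert sp.simplify(b_dz(g) - g) == 0       # matches (z del_z) z = z at z = g, i.e. g_*(b-del_z) = z del_z
assert sp.simplify(b_dz(gc)) == 0          # matches (z del_z) zbar = 0
assert sp.simplify(b_dzbar(gc) - gc) == 0  # matches (zbar del_zbar) zbar = zbar, i.e. g_*(b-del_zbar) = zbar del_zbar
```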
## 4 Proof of main result for dimension \(2\) This section is devoted to the proof of Theorem 1.2 in the two-dimensional case (\(n=0\), in the notation of Theorem 1.2). Thanks to Mendoza's Theorem 1.1, we may begin on a complex \(b\)-manifold \(M=\mathbb{R}^{2}\) with singular locus \(Z=\{0\}\times\mathbb{R}\) and \({}^{b}T^{0,1}M\) spanned by \[L\coloneqq{}^{b}\partial_{\overline{z}}+\gamma\cdot{}^{b}\partial_{z}\] where \(\gamma:\mathbb{R}^{2}\to\mathbb{C}\) is a smooth function vanishing to infinite order on \(Z\). Our task is to find new coordinates near \((0,0)\) in which \({}^{b}T^{0,1}M\) is spanned by \({}^{b}\partial_{\overline{z}}\). From Theorem 3.1, there is an open set \(U\subseteq\mathbb{R}^{2}\) containing \((0,0)\) and a smooth function \(f:U\to\mathbb{C}\) such that: (i) \(Lf=0\) (\(b\)-holomorphicity), (ii) \(f\) vanishes on \(Z\), (iii) \(f_{x}(0,0)=1\) (denoting partial derivatives by subscripts for brevity). We split \(f\) into its real and imaginary parts \[f=u+iv=(u,v)\] (recall \(\mathbb{C}=\mathbb{R}^{2}\) in this article). The key will be the local coordinate change \(F\) defined, roughly speaking, by \((u,\frac{v}{u})\). In more precise terms, from (ii), we may write \[u(x,y)=x\cdot a(x,y) v(x,y)=x\cdot b(x,y)\] where \(a\) and \(b\) are smooth, real-valued functions on \(U\). From (iii), we have \[a(0,0)=u_{x}(0,0)=1 b(0,0)=v_{x}(0,0)=0.\] Shrinking \(U\), we may assume \(a\) is nowhere-vanishing on \(U\) and thus define \[F:U\to\mathbb{R}^{2} F(x,y)=\left(u(x,y),\frac{b(x,y)}{a(x,y)}\right).\] **Claim 4.1**.: After possibly further shrinking \(U\) around \((0,0)\), one has: (a) \(F\) is a diffeomorphism from \(U\) onto an open set \(F(U)\subseteq\mathbb{R}^{2}\), (b) \(F(0,0)=(0,0)\) and \(F(U\cap Z)=F(U)\cap Z\), (c) \(f=\kappa\circ F\), where \(\kappa:\mathbb{R}^{2}\to\mathbb{R}^{2}\) is given by \(\kappa(x,y)=(x,xy)\). Proof.: Statement (c) is a direct consequence of the definition of \(F\). For statement (b), since \(u\) vanishes on \(Z\) and \(u_{x}(0,0)=1\), we may, after shrinking \(U\), assume \(u^{-1}(0)\cap U\subseteq Z\). For statement (a), it suffices to show the Jacobian of \(F\) at \((0,0)\) is invertible. Then, by the inverse function theorem, after possibly shrinking \(U\) again, \(F\) gives a diffeomorphism \(U\to F(U)\). We claim that the Jacobian of \(F\) at \((0,0)\) has the form \(\begin{bmatrix}1&0\\ *&1\end{bmatrix}\). Obviously the top row is correct, so we need only confirm that \(\partial_{y}\Big{(}\frac{b}{a}\Big{)}(0,0)=1\). Away from \(Z\), we have \[\partial_{y}\Big{(}\frac{b}{a}\Big{)}=\partial_{y}\Big{(}\frac{v}{u}\Big{)}=\frac{v_{y}u-vu_{y}}{u^{2}}.\] Collecting real and imaginary parts in \(Lf=0\), we have \[xu_{x}-v_{y}\sim 0 xv_{x}+u_{y}\sim 0,\] where \(\sim\) denotes agreement of Taylor series at \((0,0)\). Thus, \[v_{y}u-vu_{y}\sim(xu_{x})u+v(xv_{x})=x^{2}(u_{x}a+v_{x}b)\] from which it follows that \[\partial_{y}\Big{(}\frac{b}{a}\Big{)}\sim\frac{u_{x}a+v_{x}b}{a^{2}}.\] In particular, \(\partial_{y}\Big{(}\frac{b}{a}\Big{)}(0,0)=\frac{(1)(1)+(0)(0)}{(1)^{2}}=1\), as was claimed. It will be relevant to see what the above constructions amount to in the model case when \(\gamma=0\) and \(L\) is simply \({}^{b}\partial_{\overline{z}}=\frac{1}{2}(x\partial_{x}+i\partial_{y})\).
Then, in place of \(f\), we may use our function \[g(x,y)=xe^{iy}=(x\cos y,x\sin y).\] With this replacement, the diffeomorphism \(F\) above becomes the diffeomorphism \(G\) appearing in Example 2.12: \[G:\mathbb{R}\times(-\tfrac{\pi}{2},\tfrac{\pi}{2}) \to\mathbb{R}^{2} G(x,y)=(x\cos y,\tan y).\] Define \(W\coloneqq F(U)\) and \(V\coloneqq G^{-1}(W)\), so that we have \(b\)-manifold isomorphisms \(F:U\to W\) and \(G:V\to W\). Applying the induced maps \({}^{b}F_{*},{}^{b}G_{*}\) on \(b\)-tangent bundles (see (2.1) in Section 2), we arrive at two complex \(b\)-structures on \((W,W\cap Z)\) which can be compared. **Claim 4.2**.: The complex \(b\)-structures on \(W\) spanned by \({}^{b}F_{*}(L)\) and \({}^{b}G_{*}({}^{b}\partial_{\overline{z}})\) are equal. Proof.: Referring to Proposition 2.7, it suffices to check this away from the singular loci. Thus, we need only check that the usual pushforwards \(F_{*}(L)\) and \(G_{*}({}^{b}\partial_{\overline{z}})\) coincide up to a smooth rescaling over \(W\setminus Z\). Note \(\kappa(x,y)=(x,xy)\) restricts to a diffeomorphism of \(\mathbb{R}^{2}\setminus Z\) and recall \(f=\kappa\circ F\) on \(U\), \(g=\kappa\circ G\) on \(V\). Thus, we have diffeomorphisms \(f|_{U\setminus Z}:U\setminus Z\to\kappa(W\setminus Z)\) and \(g|_{V\setminus Z}:V\setminus Z\to\kappa(W\setminus Z)\), and it suffices to check that \((f|_{U\setminus Z})_{*}(L)\) and \((g|_{V\setminus Z})_{*}({}^{b}\partial_{\overline{z}})\) agree up to a smooth rescaling. Indeed, since \(f\) and \(g\) are holomorphic in the usual sense away from \(Z\), Proposition 2.13 shows that \((f|_{U\setminus Z})_{*}(L)\) and \((g|_{V\setminus Z})_{*}({}^{b}\partial_{\overline{z}})\) are rescalings of \(\partial_{\overline{z}}=\frac{1}{2}(\partial_{x}+i\partial_{y})\) (actually, from Lemma 3.2, we even know \(g_{*}({}^{b}\partial_{\overline{z}})=\overline{z}\partial_{\overline{z}}\)). The preceding claim completes this section's proof of Theorem 1.2 in the two-dimensional case. The desired local coordinates are provided by \(G^{-1}\circ F:U\to V\). ## 5 Existence of \(b\)-holomorphic functions: general case In this section, we extend the results of Section 3 to the general case. That is, we show that nontrivial \(b\)-holomorphic functions exist near points on the singular locus of any complex \(b\)-manifold. By Theorem 1.1, this amounts to the following. **Theorem 5.1**.: _Consider a complex \(b\)-manifold \(M=\mathbb{C}^{n+1}=\mathbb{R}^{2n+2}\) with singular locus \(Z=\{0\}\times\mathbb{R}^{2n+1}\) equipped with coordinates \((x_{0},y_{0},\dots,x_{n},y_{n})=(z_{0},\dots,z_{n})\) with complex \(b\)-structure \({}^{b}T^{0,1}M\) spanned by_ \[L_{0} \coloneqq{}^{b}\partial_{\overline{z}_{0}}+\Gamma_{0}\] \[L_{j} \coloneqq\partial_{\overline{z}_{j}}+\Gamma_{j} j=1,\dots,n\] _where \(\Gamma_{0},\ldots,\Gamma_{n}\) are linear combinations of \({}^{b}\partial_{z_{0}},\partial_{z_{1}},\ldots,\partial_{z_{n}}\) whose coefficients are smooth, complex-valued functions on \(M\) vanishing to infinite order on \(Z\). Then, there exists an open set \(U\subseteq M\) containing \(0\) and \(b\)-holomorphic functions \(f_{0},f_{1},\ldots,f_{n}:U\to\mathbb{C}\) such that:_ * \(f_{0}\) _vanishes on_ \(Z\) _and_ \(f_{j}(0)=0\) _for_ \(j=1,\ldots,n\)_,_ * \(\partial_{x_{j}}f_{k}(0)=\delta_{j,k}\) _for_ \(j,k\in\{0,1,\ldots,n\}\)_, using Kronecker delta notation._ The proof is largely the same as that of Theorem 3.1, so we will be somewhat brief. First, we state straightforward higher-dimensional generalizations of Lemmas 3.3 and 3.4. Again, we omit the proofs of these statements.
**Lemma 5.2**.: _Define_ \[g :\mathbb{R}^{2}\to\mathbb{C} g(x,y) =xe^{iy} \widetilde{g} :\mathbb{R}^{2n+2}\to\mathbb{C}^{n+1} \widetilde{g} =g\times\mathrm{id}\] \[\sigma :\mathbb{R}^{2}\to\mathbb{R}^{2} \sigma(x,y) =(-x,y+\pi) \widetilde{\sigma} :\mathbb{R}^{2n+2}\to\mathbb{R}^{2n+2} \widetilde{\sigma} =\sigma\times\mathrm{id}\] _Let \(\gamma:\mathbb{R}^{2n+2}\to\mathbb{C}\) be a \(\widetilde{\sigma}\)-invariant smooth function that vanishes to infinite order on \(Z\coloneqq\{0\}\times\mathbb{R}^{2n+1}\). Then, the pushforward \(\widetilde{g}_{*}(\gamma)\) is a well-defined smooth function on \(\mathbb{C}^{n+1}\) that vanishes to infinite order on \(Z^{\prime}\coloneqq\{0\}\times\mathbb{C}^{n}\). _ **Corollary 5.3**.: _Suppose \(\gamma_{0},\ldots,\gamma_{n}:\mathbb{R}^{2n+2}\to\mathbb{C}\) are \(\widetilde{\sigma}\)-invariant smooth functions vanishing to infinite order on \(Z=\{0\}\times\mathbb{R}^{2n+1}\). Define \(\Gamma=\gamma_{0}\cdot{}^{b}\partial_{\overline{z}_{0}}+\sum_{j=1}^{n}\gamma_{j}\cdot\partial_{\overline{z}_{j}}\) (note \(\Gamma\) is \(\widetilde{\sigma}\)-invariant in the sense that \(\widetilde{\sigma}_{*}(\Gamma)=\Gamma\)). Then, the pushforward \(\widetilde{g}_{*}(\Gamma)\) is a well-defined smooth vector field on \(\mathbb{C}^{n+1}\) vanishing to infinite order on \(Z^{\prime}=\{0\}\times\mathbb{C}^{n}\)._ Proof.: From Lemma 3.2 and Lemma 5.2, we have \(\widetilde{g}_{*}(\Gamma)=\widetilde{g}_{*}(\gamma_{0})\overline{z}_{0}\partial_{\overline{z}_{0}}+\sum_{j=1}^{n}\widetilde{g}_{*}(\gamma_{j})\partial_{\overline{z}_{j}}\). **Lemma 5.4**.: _Let \(\mathbb{C}^{n+1}\) have coordinates \((z_{0},\ldots,z_{n})\). Suppose \(\gamma\) is a smooth, complex-valued function on \(\mathbb{C}^{n+1}\) vanishing to infinite order on \(Z^{\prime}\coloneqq\{0\}\times\mathbb{C}^{n}\). Then, \((1/z_{0})\gamma\) and \((1/\overline{z}_{0})\gamma\) extend to smooth, complex-valued functions on \(\mathbb{C}^{n+1}\) vanishing to infinite order on \(Z^{\prime}\). _ Having made the above preparations, we now proceed to the proof of this section's main result. Proof of Theorem 5.1.: As in the proof of Theorem 3.1, since we only desire local \(b\)-holomorphic functions, there is no harm in assuming the deformation terms \(\Gamma_{j}\) to be \(\widetilde{\sigma}\)-invariant. With this assumption, Lemma 3.2 and Corollary 5.3 together imply that the vector fields \(L_{j}\) and \(\Gamma_{j}\) have well-defined pushforwards by \(\widetilde{g}\). Indeed, \[\widetilde{g}_{*}(L_{0}) =\overline{z}_{0}\partial_{\overline{z}_{0}}+\widetilde{g}_{*}(\Gamma_{0})\] \[\widetilde{g}_{*}(L_{j}) =\partial_{\overline{z}_{j}}+\widetilde{g}_{*}(\Gamma_{j}) j =1,\ldots,n,\] where \(\widetilde{g}_{*}(\Gamma_{j})\) is a smooth vector field on \(\mathbb{C}^{n+1}\) vanishing to infinite order on \(Z^{\prime}\coloneqq\{0\}\times\mathbb{C}^{n}\) for \(j=0,\ldots,n\). Because \(\widetilde{g}\) is a local diffeomorphism \(\mathbb{R}^{2n+2}\setminus Z\to\mathbb{C}^{n+1}\setminus Z^{\prime}\) and bracket commutes with pushforward, \(\widetilde{g}_{*}(L_{j})\), \(j=0,\ldots,n\) at least define a complex structure on \(\mathbb{C}^{n+1}\setminus Z^{\prime}\). Going further, we can use Lemma 5.4 to write \(\widetilde{g}_{*}(\Gamma_{0})=\overline{z}_{0}\Gamma_{0}^{\prime}\), where \(\Gamma_{0}^{\prime}\) is another smooth vector field on \(\mathbb{C}^{n+1}\) vanishing to infinite order on \(Z^{\prime}\).
We claim that \[\tfrac{1}{\overline{z}_{0}}\widetilde{g}_{*}(L_{0}) =\partial_{\overline{z}_{0}}+\Gamma_{0}^{\prime} \tag{5.1}\] \[\widetilde{g}_{*}(L_{j}) =\partial_{\overline{z}_{j}}+\widetilde{g}_{*}(\Gamma_{j}) j=1,\dots,n \tag{5.2}\] define an ordinary complex structure on the whole of \(\mathbb{C}^{n+1}\) (extending the aforementioned complex structure on \(\mathbb{C}^{n+1}\setminus Z^{\prime}\)). Indeed, the deformation terms vanish on \(Z^{\prime}\), so the vector fields (5.1) and (5.2), together with their complex conjugates, form a global frame for the complexified tangent bundle. By a simple computation using the derivation property of the Lie bracket, involutivity holds away from \(Z^{\prime}\), hence everywhere by a continuity argument. Now, by an application of the classical Newlander-Nirenberg theorem, there is an open neighbourhood \(U\subseteq\mathbb{C}^{n+1}\) of \(0\) and smooth functions \(h_{0},\dots,h_{n}:U\to\mathbb{C}\) that are holomorphic for the complex structure defined by (5.1) and (5.2) and furthermore satisfy \(h_{j}(0)=0\), \(\partial_{x_{j}}h_{k}(0)=\delta_{j,k}\) for \(j,k\in\{0,1,\dots,n\}\). In fact, again using the vanishing of the deformation terms along \(Z^{\prime}\), the complex structure on \(\mathbb{C}^{n+1}\) defined by (5.1) and (5.2) is such that \(Z^{\prime}=\{0\}\times\mathbb{C}^{n}\) sits as a complex submanifold. Indeed, the inherited complex structure on \(Z^{\prime}\) is its standard one spanned by \(\partial_{\overline{z}_{1}},\dots,\partial_{\overline{z}_{n}}\). Using the implicit function theorem for several complex variables, complex submanifolds of complex codimension-1 can locally be written as preimages of regular values of holomorphic functions. Thus, possibly shrinking \(U\), we may even choose \(h_{0}\) to vanish on \(Z^{\prime}\). Putting \(f_{j}=h_{j}\circ\widetilde{g}\) for \(j=0,1,\dots,n\) completes the proof of the result. ## 6 The \(b\)-Newlander-Nirenberg theorem: general case This section is devoted to the proof of our main result, Theorem 1.2. In light of Mendoza's Theorem 1.1, we may begin on a complex \(b\)-manifold \(M=\mathbb{C}^{n+1}=\mathbb{R}^{2n+2}\) with singular locus \(Z=\{0\}\times\mathbb{R}^{2n+1}\) equipped with coordinates \((x_{0},y_{0},\dots,x_{n},y_{n})=(z_{0},\dots,z_{n})\) with complex \(b\)-structure \({}^{b}T^{0,1}M\) spanned by \[L_{0} \coloneqq{}^{b}\partial_{\overline{z}_{0}}+\Gamma_{0}\] \[L_{j} \coloneqq\partial_{\overline{z}_{j}}+\Gamma_{j} j=1,\dots,n\] where \(\Gamma_{0},\dots,\Gamma_{n}\) are smooth, complex vector fields on \(M\) vanishing to infinite order on \(Z\). By Theorem 5.1, there is an open set \(U\subseteq\mathbb{R}^{2n+2}\) containing \(0\) and smooth functions \(f_{0},\dots,f_{n}:U\to\mathbb{C}\) satisfying: 1. \(L_{j}f_{k}=0\) for \(j,k\in\{0,1,\dots,n\}\) (\(b\)-holomorphicity), 2. \(f_{0}\) vanishes on \(Z\) and \(f_{j}(0)=0\) for \(j=1,\dots,n\), 3. \(\partial_{x_{j}}f_{k}(0)=\delta_{j,k}\) for \(j,k\in\{0,1,\dots,n\}\). As in Section 4, we decompose \(f_{0}\) into real and imaginary parts \(f_{0}=u+iv\). Since \(u\) and \(v\) vanish on \(Z\), we may write \(u=x_{0}\cdot a\) and \(v=x_{0}\cdot b\), where \(a\) and \(b\) are real-valued smooth functions. As in Section 4, by shrinking \(U\), we may enforce that \(a\) is nowhere vanishing on \(U\) and that: 1. \(F=(u,\frac{b}{a},f_{1},\ldots,f_{n})\) is a diffeomorphism from \(U\) onto an open set \(F(U)\subseteq\mathbb{R}^{2n+2}\), 2. \(F(0)=0\) and \(F(U\cap Z)=F(U)\cap Z\), 3.
\(f=\widetilde{\kappa}\circ F\) where \(f=(f_{0},f_{1},\ldots,f_{n})\) and \(\widetilde{\kappa}=\kappa\times\operatorname{id}:\mathbb{R}^{2n+2}\to\mathbb{R}^{2n+2}\) where \(\kappa:\mathbb{R}^{2}\to\mathbb{R}^{2}\) is \(\kappa(x,y)=(x,xy)\). As in Section 4, we argue that the pushforward by \(F\) of the complex \(b\)-structure on \(U\) spanned by \(L_{0},\ldots,L_{n}\) is independent of the deformation terms \(\Gamma_{0},\ldots,\Gamma_{n}\). Again, by Proposition 2.7, it suffices to check this away from the singular locus \(Z\). Using the fact that \(\widetilde{\kappa}\) restricts to a self-diffeomorphism of \(\mathbb{R}^{2n+2}\setminus Z\), we have that \(f\) defines a diffeomorphism of \(U\setminus Z\) onto \(\widetilde{\kappa}(F(U\setminus Z))\). Because the components of \(f\) are holomorphic for the induced complex structure on \(U\setminus Z\), Proposition 2.13 implies that \(f|_{U\setminus Z}\) pushes forward the subbundle spanned by \(L_{0},\ldots,L_{n}\) to the subbundle spanned by \(\partial_{\overline{z}_{0}},\ldots,\partial_{\overline{z}_{n}}\). Thus, the pushforward by \(F|_{U\setminus Z}\) of the subbundle spanned by \(L_{0},\ldots,L_{n}\) is completely determined to be the pullback of the standard complex structure on \(\widetilde{\kappa}(F(U\setminus Z))\) by \(\widetilde{\kappa}\). Running the same argument in the case where the deformation terms are all zero, we obtain the desired coordinate change. ## 7 Remarks on singular pushforwards The polar coordinate change \(g:\mathbb{R}^{2}\to\mathbb{C}\), \(g(x,y)=xe^{iy}\) played a crucial role in this article by enabling us to relate the nonelliptic operator \({}^{b}\partial_{z}=\frac{1}{2}(x\partial_{x}-i\partial_{y})\) to the standard complex partial derivative operator \(\partial_{z}=\frac{1}{2}(\partial_{x}-i\partial_{y})\) by way of the elementary, but perhaps somewhat mysterious, pushforward formula \(g_{*}({}^{b}\partial_{z})=z\partial_{z}\). In this section, we shed additional light on this pushforward formula by placing it in a more general context. The material in this section will also play a role in planned future work on the \(b\)-Newlander-Nirenberg theorem for complex \(b^{k}\)-manifolds [1]. Suppose that, more generally, we wish to relate the vector field \[\tfrac{1}{2}(x^{k}\partial_{x}-i\partial_{y})\] to \(\partial_{z}\), for \(k\) a positive integer. More generally still, suppose we wish to relate \[\tfrac{1}{2}(f(x)\partial_{x}-i\partial_{y})\] to \(\partial_{z}\), where \(f(z)\) is an entire function on \(\mathbb{C}\) whose restriction \(f(x)\) to \(\mathbb{R}\) is real-valued. We first recall a few more-or-less notational points concerning holomorphic vector fields and imaginary-time flows. One may refer to [3], p. 39 for additional details. Let \(f\) be a holomorphic function on \(\mathbb{C}\) so that \(V\coloneqq f(z)\partial_{z}\) is a holomorphic vector field on \(\mathbb{C}\). We may decompose \(V\) as \[V=\tfrac{1}{2}(X-iJX),\] where the real vector fields \(X=\operatorname{Re}(f)\partial_{x}+\operatorname{Im}(f)\partial_{y}\) and \(JX=-\operatorname{Im}(f)\partial_{x}+\operatorname{Re}(f)\partial_{y}\) commute with one another. We may then define the complex-time flow of \(V\) by \[\phi^{V}_{w}(z)\coloneqq\phi^{X}_{s}\circ\phi^{JX}_{t}(z) \text{where }w=s+it,\] with the usual caveat that the flow need only be defined for \(w\) sufficiently close to \(0\). This flow is jointly holomorphic in \(w\) and \(z\).
One practical consequence of this joint holomorphicity is that, if \(f(z)\) is real on the \(x\)-axis, so that \(X\) coincides with \(f(x)\partial_{x}\) on the \(x\)-axis, then the complex-time flow of the holomorphic vector field \(f(z)\partial_{z}\) may be obtained by analytic continuation in both variables of the real-time flow of the one-dimensional vector field \(f(x)\partial_{x}\). **Example 7.1**.: The real-time flow of the one-dimensional vector field \(X_{1}\coloneqq x\partial_{x}\) is given by \(\phi_{t}^{X_{1}}(x)=e^{t}x\). Accordingly, the complex-time flow of the holomorphic vector field \(V_{1}\coloneqq z\partial_{z}\) is given by \(\phi_{w}^{V_{1}}(z)=e^{w}z\). **Example 7.2**.: The real-time flow of the one-dimensional vector field \(X_{2}\coloneqq x^{2}\partial_{x}\) is given by \(\phi_{t}^{X_{2}}(x)=\frac{x}{1-tx}\). Accordingly, the complex-time flow of the holomorphic vector field \(V_{2}\coloneqq z^{2}\partial_{z}\) is given by \(\phi_{w}^{V_{2}}(z)=\frac{z}{1-wz}\). **Example 7.3**.: For any integer \(k\geq 2\), the real-time flow of the one-dimensional vector field \(X_{k}\coloneqq x^{k}\partial_{x}\) is given by \(\phi_{t}^{X_{k}}(x)=\frac{x}{\sqrt[k-1]{1-(k-1)tx^{k-1}}}\). Accordingly, the complex-time flow of the holomorphic vector field \(V_{k}\coloneqq z^{k}\partial_{z}\) is given by \(\phi_{w}^{V_{k}}(z)=\frac{z}{\sqrt[k-1]{1-(k-1)wz^{k-1}}}\). Bearing in mind the above notations, we now give the main result of this section. **Proposition 7.4**.: _Let \(f\) be a holomorphic function on \(\mathbb{C}\) that is real-valued on \(\mathbb{R}\). Let \(U\subseteq\mathbb{R}^{2}\) be a sufficiently small open neighbourhood of \(\mathbb{R}\times\{0\}\) that \(h:U\to\mathbb{C}\), \(h(x,y)=\phi_{iy}^{f(z)\partial_{z}}(x)\) is defined. Then, the vector field \(\frac{1}{2}(f(x)\partial_{x}-i\partial_{y})\) on \(U\) is \(h\)-related to the holomorphic vector field \(f(z)\partial_{z}\) on \(\mathbb{C}\)._ Proof.: Decompose \(V=f(z)\partial_{z}\) in the form \(V=\frac{1}{2}(X-iJX)\), as above. Thus, by definition, \(h(x,y)=\phi_{y}^{JX}(x)\). For fixed \((x,y)\in U\), taking \(t\) sufficiently small, we have \[h(x,y+t)=\phi_{y+t}^{JX}(x)=\phi_{t}^{JX}(h(x,y)).\] Thus, the vector field \(\partial_{y}\) is \(h\)-related to \(JX\). Similarly, \[\phi_{t}^{X}h(x,y)=\phi_{t}^{X}\phi_{y}^{JX}(x)=\phi_{y}^{JX}\phi_{t}^{X}(x)=\phi_{y}^{JX}\phi_{t}^{f(x)\partial_{x}}(x)=h(\phi_{t}^{f(x)\partial_{x}}(x),y).\] Thus, the vector field \(f(x)\partial_{x}\) is \(h\)-related to \(X\). The result follows by linearity. **Corollary 7.5**.: _Let \(g_{2}:\mathbb{R}^{2}\to\mathbb{C}\), \(g_{2}(x,y)=\frac{x}{1-ixy}\). Then, \(\frac{1}{2}(x^{2}\partial_{x}-i\partial_{y})\) is \(g_{2}\)-related to \(z^{2}\partial_{z}\)._ **Corollary 7.6**.: _Fix an integer \(k\geq 2\). Let \(g_{k}:\mathbb{R}^{2}\to\mathbb{C}\), \(g_{k}(x,y)=\frac{x}{\sqrt[k-1]{1-i(k-1)x^{k-1}y}}\). Then \(\frac{1}{2}(x^{k}\partial_{x}-i\partial_{y})\) is \(g_{k}\)-related to \(z^{k}\partial_{z}\)._ We remark that \(g_{k}(x,y)=\frac{x}{\sqrt[k-1]{1-i(k-1)x^{k-1}y}}\) collapses \(Z=\{0\}\times\mathbb{R}\) to \(\{0\}\) and defines a diffeomorphism \(\mathbb{R}^{2}\setminus Z\to\{re^{i\theta}\in\mathbb{C}:r\neq 0,-\frac{\pi}{2(k-1)}<\theta<\frac{\pi}{2(k-1)}\}\). Note that the line \(\mathrm{Re}(z)=1\) is completely contained in the principal branch of \(\sqrt[k-1]{z}\).
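As a supplementary check of Corollary 7.5 (equivalently, Corollary 7.6 with \(k=2\)), the \(g_{2}\)-relatedness can be tested symbolically on the target coordinate functions \(z\) and \(\overline{z}\). The snippet below is ours and assumes SymPy, which the paper does not use.

```python
# Supplementary symbolic check (ours) of the case k = 2: the field (1/2)(x^2 d/dx - i d/dy)
# is g_2-related to z^2 d/dz, tested on the target coordinate functions z and zbar,
# where g_2(x, y) = x / (1 - i x y).
import sympy as sp

x, y = sp.symbols('x y', real=True)
g2 = x / (1 - sp.I * x * y)
g2c = sp.conjugate(g2)                   # equals x / (1 + i x y)

V = lambda F: (x**2 * sp.diff(F, x) - sp.I * sp.diff(F, y)) / 2

assert sp.simplify(V(g2) - g2**2) == 0   # matches (z^2 d/dz) z = z^2 at z = g_2
assert sp.simplify(V(g2c)) == 0          # matches (z^2 d/dz) zbar = 0
```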
For purposes of clarification, sample plots of the real vector field \(JX\) in the decomposition \(z^{k}\partial_{z}=\frac{1}{2}(X-iJX)\) are given below. Note all integral curves passing through the \(x\)-axis are complete.
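A minimal sketch, assuming NumPy and Matplotlib (neither of which the paper relies on), of how such streamline plots of \(JX\) can be produced for \(f(z)=z^{k}\), using \(JX=-\operatorname{Im}(z^{k})\partial_{x}+\operatorname{Re}(z^{k})\partial_{y}\) from the decomposition \(z^{k}\partial_{z}=\tfrac{1}{2}(X-iJX)\):

```python
# Sketch (ours): streamlines of the real vector field JX for f(z) = z^k, k = 1, 2, 3.
import numpy as np
import matplotlib.pyplot as plt

xs, ys = np.meshgrid(np.linspace(-2, 2, 400), np.linspace(-2, 2, 400))
z = xs + 1j * ys
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, k in zip(axes, (1, 2, 3)):
    w = z**k                                            # f(z) = z^k on the grid
    ax.streamplot(xs, ys, -w.imag, w.real, density=1.2, linewidth=0.6)
    ax.set_title(f"$JX$ for $z^{k}\\partial_z$")
    ax.set_aspect("equal")
plt.tight_layout()
plt.show()
```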
2302.07131
A Semiempirical Transparency Model for Dual Energy Cargo Radiography Applications
Cargo containers passing through ports are scanned by non-intrusive inspection systems to search for concealed illicit materials. By using two photon beams with different energy spectra, dual energy inspection systems are sensitive to both the area density and the atomic number of cargo contents. Most literature on the subject assumes a simple exponential attenuation model for photon intensity in which only free streaming photons are detected. However, this approximation neglects second order effects such as scattering, leading to a biased model and thus incorrect material predictions. This work studies the accuracy of the free streaming model by comparing it to simulation outputs, finding that the model shows poor atomic number reconstruction accuracy at high-$Z$ and suffers significantly if the source energy spectra and detector response function are not known exactly. To address these challenges, this work introduces a semiempirical transparency model which modifies the free streaming model by rescaling different components of the mass attenuation coefficient, allowing the model to capture secondary effects ignored by the free streaming model. The semiempirical model displays improved agreement with simulated results at high-$Z$ and shows excellent extrapolation to materials and thicknesses which were not included during the calibration step. Furthermore, this work demonstrates that the semiempirical model yields accurate atomic number predictions even when the source spectra and detector response are not known exactly. Using the semiempirical model, manufacturers can perform a simple calibration to enable more precise $Z$ reconstruction capabilities, which has the potential to significantly improve the performance of existing radiographic systems.
Peter Lalor, Areg Danagoulian
2023-02-14T15:44:46Z
http://arxiv.org/abs/2302.07131v1
# A Semiempirical Transparency Model for Dual Energy Cargo Radiography Applications ###### Abstract Cargo containers passing through ports are scanned by non-intrusive inspection systems to search for concealed illicit materials. By using two photon beams with different energy spectra, dual energy inspection systems are sensitive to both the area density and the atomic number of cargo contents. Most literature on the subject assumes a simple exponential attenuation model for photon intensity in which only free streaming photons are detected. However, this approximation neglects second order effects such as scattering, leading to a biased model and thus incorrect material predictions. This work studies the accuracy of the free streaming model by comparing it to simulation outputs, finding that the model shows poor atomic number reconstruction accuracy at high-\(Z\) and suffers significantly if the source energy spectra and detector response function are not known exactly. To address these challenges, this work introduces a semiempirical transparency model which modifies the free streaming model by rescaling different components of the mass attenuation coefficient, allowing the model to capture secondary effects ignored by the free streaming model. The semiempirical model displays improved agreement with simulated results at high-\(Z\) and shows excellent extrapolation to materials and thicknesses which were not included during the calibration step. Furthermore, this work demonstrates that the semiempirical model yields accurate atomic number predictions even when the source spectra and detector response are not known exactly. Using the semiempirical model, manufacturers can perform a simple calibration to enable more precise \(Z\) reconstruction capabilities, which has the potential to significantly improve the performance of existing radiographic systems. keywords: Dual energy radiography, non-intrusive inspection, atomic number discrimination, nuclear security ## 1 Introduction In 2021, the Customs and Border Protection (CBP) processed more than 32.7 million imported cargo containers through U.S. ports of entry [1]. A principal concern is that a terrorist might attempt to smuggle a nuclear or radiological dispersal device through a port and subsequently detonate a weapon on U.S. soil. Such an attack could have devastating economic consequences in excess of $1 trillion due to infrastructure damage, trade disruption, and port shutdown costs [2; 3]. The International Atomic Energy Agency (IAEA) tracks incidents of nuclear and other radioactive material out of regulatory control, identifying 320 incidents of trafficking or malicious use since 1993 [4]. For these reasons, U.S. Congress passed the SAFE Port Act in 2006, mandating 100 percent screening of U.S.-bound cargo and 100 percent scanning of high-risk containers [5]. When a cargo container passes through a U.S. port, it is first passively screened by a radiation portal monitor (RPM). These systems detect neutron and gamma radiation passively emitted from radiological sources that may be hidden inside a cargo container [6; 7]. However, a smuggler could evade passive detection through sufficient shielding of the radiological device [8; 9]. As a result, radiography systems are also utilized to complement passive detection [10].
These X-ray and gamma ray imaging systems produce a sensitive density image of the scanned cargo, enabling the identification of nuclear threats which have been shielded to avoid passive detection. Furthermore, through the use of dual energy radiography, these systems can produce a rough elemental analysis of cargo contents, since the attenuation of photons is dependent on the atomic number of the material. This enhances the capabilities of radiography systems to detect nuclear threats and high-\(Z\) shielding materials [9]. ## 2 Background ### 2.1 Dual Energy Radiography Overview When a radiography system scans a material of area density \(\lambda\) and atomic number \(Z\), it measures a transparency \(T(\lambda,Z)\), defined as the detected charge in the presence of the material normalized by the open beam measurement: \[T(\lambda,Z)=\frac{Q(\lambda,Z)}{Q_{\mathrm{air}}} \tag{1}\] The exact functional form of Eq. 1 can be arbitrarily complicated, since it depends on the precise geometric details of any particular scanning system. Importantly, Eq. 1 depends on the atomic number of the imaged object through the mass attenuation coefficient \(\mu(E,Z)\), which describes the attenuation of photons through matter by the Beer-Lambert law: \[\frac{I}{I_{0}}=e^{-\mu(E,Z)\lambda} \tag{2}\] where \(E\) is the photon energy and \(I/I_{0}\) is the ratio of transmitted to initial photon intensity. The dual energy principle observes that the mass attenuation coefficient depends on both energy and atomic number, and thus multiple transparency measurements of the same material, taken with different (high and low energy) beam spectra, may allow for a determination of the atomic number \(Z\) of the imaged object. We use the subscript \(H\) to refer to the high energy beam, and the subscript \(L\) to refer to the low energy beam. As such, for a pair of measured transparencies \(\{T_{H},T_{L}\}\), the corresponding area density and atomic number estimates \(\hat{\lambda}\) and \(\hat{Z}\) are determined by solving the following \(2\times 2\) system: \[\hat{\lambda},\hat{Z}=\underset{\lambda,Z}{\text{Solve}}\begin{cases}T_{H}(\lambda,Z)=T_{H}\\ T_{L}(\lambda,Z)=T_{L}\end{cases} \tag{3}\] Older works attempted to solve Eq. 3 directly [11; 12]. However, this approach is generally too slow given the millions of pixels in a radiograph image and often only yields approximate solutions. Instead, Eq. 3 is generally solved through the use of reverse lookup tables [13]. In other words, for a \((\lambda,Z)\) range of interest, \(T_{H}(\lambda,Z)\) and \(T_{L}(\lambda,Z)\) are evaluated and tabulated. Then, for a pair of transparency measurements \(\{T_{H},T_{L}\}\) of an unknown material, \(\{\lambda,Z\}\) are reconstructed by reference to the lookup table. This approach is fast, accurate, and conveniently simple. The key remaining ingredient needed to solve Eq. 3 is an accurate choice of transparency models \(T_{H}(\lambda,Z)\) and \(T_{L}(\lambda,Z)\). To this end, the existing dual energy literature can be broadly classified into two categories: analytic methods and empirical methods. Analytic models are derived from the exponential dependence of photon intensity (Eq. 2), which neglects scattered photons and thus introduces a significant source of error. Empirical models typically require a burdensome calibration step and show limited atomic number selectivity for materials not included in the calibration. This work introduces a hybrid method, which addresses both of these challenges by defining a semiempirical transparency model.
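A minimal sketch (ours, not from the paper) of the reverse lookup procedure just described: the model transparencies are tabulated once on a \((\lambda,Z)\) grid, and each measured pair \(\{T_{H},T_{L}\}\) is then reconstructed from the table. The nearest-neighbor search in \((\log T_{H},\log T_{L})\) used below is one simple choice, and `T_H_model`/`T_L_model` stand for whichever transparency model is adopted.

```python
# Sketch (ours) of the reverse lookup table approach to Eq. 3.
import numpy as np

def build_lookup_table(T_H_model, T_L_model, lambdas, Zs):
    """Tabulate model transparencies on a (lambda, Z) grid; columns: lambda, Z, T_H, T_L."""
    rows = [(lam, Z, T_H_model(lam, Z), T_L_model(lam, Z)) for Z in Zs for lam in lambdas]
    return np.array(rows)

def reconstruct(T_H, T_L, table):
    """Return the (lambda, Z) grid point whose model transparencies best match a measurement."""
    d2 = (np.log(table[:, 2]) - np.log(T_H)) ** 2 + (np.log(table[:, 3]) - np.log(T_L)) ** 2
    i = int(np.argmin(d2))
    return table[i, 0], table[i, 1]
```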
### 2.2 Analytic Methods Most authors define a transparency model by constructing a simple, closed-form expression for \(T_{\{H,L\}}(\lambda,Z)\)[12; 13; 14; 15]. Eq. 4 shows the typical choice, which we call the free streaming transparency model: \[\begin{split} T_{H}(\lambda,Z)_{\text{free streaming}}&=\frac{\int_{0}^{\infty}D(E)\phi_{H}(E)e^{-\mu(E,Z)\lambda}dE}{\int_{0}^{\infty}D(E)\phi_{H}(E)dE}\\ T_{L}(\lambda,Z)_{\text{free streaming}}&=\frac{\int_{0}^{\infty}D(E)\phi_{L}(E)e^{-\mu(E,Z)\lambda}dE}{\int_{0}^{\infty}D(E)\phi_{L}(E)dE}\end{split} \tag{4}\] where \(\phi_{\{H,L\}}(E)\) are the \(\{\)high, low\(\}\) energy differential photon beam spectra, and \(D(E)\) is the detector response function, calculated as \[D(E)=C\int_{0}^{E}R(E,E_{\text{dep}})E_{\text{dep}}dE_{\text{dep}} \tag{5}\] where \(R(E,E_{\text{dep}})\) is the differential detector response matrix, representing the probability that a photon with incident energy \(E\) deposits energy \(E_{\text{dep}}\), and \(C\) is a proportionality constant. Through direct evaluations of Eq. 4, a large, high-resolution lookup table can be constructed, enabling atomic number discrimination [13]. Other works perform a variable transformation instead of using \(T_{\{H,L\}}(\lambda,Z)\) directly, and construct lookup tables on these new features [12; 14; 15]. These methods are all expected to give similar results. A fundamental assumption made by the free streaming model is that only noninteracting photons will be detected; the model thus ignores the effects of scattered radiation. Past works have claimed that these approximations were validated experimentally and through the use of Monte Carlo simulations [13; 14]. Detector systems are typically heavily collimated to reduce the significance of scattered radiation, although it can sometimes be as much as one percent of the primary detector beam [16]. Section 3 uses Monte Carlo simulations to analyze the accuracy of the free streaming model, finding that the model yields inaccurate atomic number reconstruction at high-\(Z\). Furthermore, the free streaming model shows significant bias if the beam spectra and detector response are not known exactly. These results highlight the primary disadvantage of analytic methods, as a biased transparency model will yield inaccurate atomic number predictions.
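A minimal numerical sketch (ours) of evaluating Eq. 4 for a single beam on an energy grid; `phi`, `D`, and `mu` are assumed callables for the beam spectrum, the detector response of Eq. 5, and the mass attenuation coefficient \(\mu(E,Z)\) (e.g. interpolated from the NIST tables), respectively.

```python
# Sketch (ours): the free streaming transparency is a detector-weighted average of the
# Beer-Lambert factor over the beam spectrum (Eq. 4).
import numpy as np

def transparency_free_streaming(lam, Z, energies, phi, D, mu):
    weights = D(energies) * phi(energies)                              # D(E) * phi(E)
    numerator = np.trapz(weights * np.exp(-mu(energies, Z) * lam), energies)
    return numerator / np.trapz(weights, energies)
```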
Typically, only a few calibration materials are used, and the task is simplified to discrimination between organics, organics-inorganics, inorganics, and heavy substances [14]. However, such a coarse label discretization reduces accuracy and can yield high variance results if a target material is near the midpoint of two bins. Lee et al. employ a different empirical approach by expressing a material-selective projection image as a linear combination of polynomial basis functions [18; 19]. Then, for a choice of basis materials, a set of material-specific weighting factors is calculated through a least squares minimization against a calibration phantom. After the system has been calibrated, material classification is performed by evaluating the material-selective projections, which indicate whether the corresponding basis material is present in the image. The authors show that this approach can yield accurate material discrimination capabilities without knowledge of the photon energy spectra or the detector response function. However, this method is incapable of identifying materials with atomic numbers that differ significantly from the basis materials, resulting in limited atomic number selectivity. ### 2.4 A Semiempirical Transparency Model To address the challenges described in sections 2.2 and 2.3, this work introduces a semiempirical transparency model. First, we define a semiempirical mass attenuation coefficient, \(\tilde{\mu}(E,Z;a,b,c)\), as follows: \[\tilde{\mu}(E,Z;a,b,c)=a\mu_{\text{PE}}(E,Z)+b\mu_{\text{CS}}(E,Z)+c\mu_{\text{PP}}(E,Z) \tag{6}\] where \(\mu_{\text{PE}}(E,Z)\), \(\mu_{\text{CS}}(E,Z)\), and \(\mu_{\text{PP}}(E,Z)\) are the mass attenuation coefficients from the photoelectric effect (PE), Compton scattering (CS), and pair production (PP), respectively. Values for the mass attenuation coefficients are calculated from NIST cross section tables [20]. In Eq. 6, \(a\), \(b\), and \(c\) are calibration parameters which scale the relative importance of photoelectric absorption, Compton scattering, and pair production on the net photon attenuation. When \(a=b=c=1\), the nominal mass attenuation coefficient is recovered. The motivation behind this approach is that by rescaling different components of the mass attenuation coefficient, it may be possible to capture some of the secondary effects ignored by the free streaming model. The semiempirical transparency model is then adapted from the free streaming model (Eq. 4) as follows: \[\begin{split}&\tilde{T}_{H}(\lambda,Z;a_{H},b_{H},c_{H})=\frac{\int_{0}^{\infty}D(E)\phi_{H}(E)e^{-\tilde{\mu}(E,Z;a_{H},b_{H},c_{H})\lambda}dE}{\int_{0}^{\infty}D(E)\phi_{H}(E)dE}\\ &\tilde{T}_{L}(\lambda,Z;a_{L},b_{L},c_{L})=\frac{\int_{0}^{\infty}D(E)\phi_{L}(E)e^{-\tilde{\mu}(E,Z;a_{L},b_{L},c_{L})\lambda}dE}{\int_{0}^{\infty}D(E)\phi_{L}(E)dE}\end{split} \tag{7}\] As will be described in section 3, the semiempirical transparency model shows improved agreement with simulated data compared to the free streaming model, particularly when the source energy spectra and detector response function are not known exactly. Additionally, the semiempirical transparency model requires far less calibration data than fully empirical methods and shows excellent extrapolation to elements which were not included in the calibration step. ## 3 Analysis ### 3.1 Accuracy of the Free Streaming Transparency Model In order to quantify the accuracy of the free streaming model (Eq. 4), transparency measurements were simulated in Geant4 [21; 22].
In each simulation, dual energy bremsstrahlung beams with endpoint energies of 10MeV and 6MeV were directed through targets composed of different materials and thicknesses and measured by a stack of collimated cadmium tungstate (CdWO\({}_{4}\)) scintillators. This outputs a simulated set of transparency data points to compare to the model predictions. The simulation geometry is described in more detail in section A.1. Fig. 0(a) compares the simulated transparency measurements to the free streaming model predictions, revealing an obvious model bias. To analyze the atomic number discrimination capabilities of the model, Fig. 2a shows the simulated measurements and the model predictions on an \(\alpha\)-curve. An \(\alpha\)-curve is a plot of \(\alpha_{H}-\alpha_{L}\) versus \(\alpha_{H}\) for different elements and area densities, where a log transform \(\alpha\rightarrow-\log T\) has been performed. Every element on an \(\alpha\)-curve forms a characteristic \(\alpha\)-line, describing the relative effect on \(T_{H}\) and \(T_{L}\) as area density is varied. An \(\alpha\)-curve is a useful visualization tool because slight transparency differences between the high and low energy measurements become more apparent, and the separation between different \(\alpha\)-lines offers insights into atomic number discrimination capabilities. Fig. 2a reveals that the free streaming model accurately reconstructs the \(\alpha\)-lines of low- to med-\(Z\) materials, but becomes less accurate at high-\(Z\). ### Accuracy of the Semiempirical Transparency Model First, \(a\), \(b\), and \(c\) are calibrated by minimizing the squared logarithmic error between the semiempirical model (Eq. 7) and simulated calibration measurements: \[\begin{split}& a_{H},b_{H},c_{H}=\operatorname*{argmin}_{a,b,c} \sum_{i}\left(\log\tilde{T}_{H}(\lambda_{i},Z_{i};a,b,c)-\log T_{H,i}\right)^{ 2}\\ & a_{L},b_{L},c_{L}=\operatorname*{argmin}_{a,b,c}\sum_{i}\left( \log\tilde{T}_{L}(\lambda_{i},Z_{i};a,b,c)-\log T_{L,i}\right)^{2}\end{split} \tag{8}\] In Eq. 8, the summation index \(i\) labels the simulation of calibration material \(\{\lambda_{i},Z_{i}\}\) with corresponding output transparencies \(\{T_{H,i},T_{L,i}\}\). In this work, carbon (\(Z=6\)), iron (\(Z=26\)), and lead (\(Z=82\)) were chosen as calibration materials due to their availability in real applications and because they span a wide range of \(Z\). High resolution simulations were performed at an area density of \(\lambda=150\)g/cm\({}^{2}\) to generate the calibration dataset. At least three calibration materials are required, and this work finds that thicker materials yield a more accurate calibration. The calibration parameters are determined separately for the high and low energy beams. The best fit values of \(a\), \(b\), and \(c\) are shown in the first two rows of table 1 for each incident beam spectrum. The simulated transparency measurements are compared to the semiempirical model predictions in Fig. 1b, finding that the semiempirical formulation reduces the model bias by a factor of \(\approx\)10 over the free streaming model. Fig. 2b plots the model predictions on an \(\alpha\)-curve, revealing that the semiempircial model shows improved agreement with the simulations at high-\(Z\). 
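The calibration step in Eq. 8 is a small nonlinear least-squares problem in the three parameters \((a,b,c)\), with one fit per beam. The sketch below illustrates one possible setup with SciPy; the partial attenuation coefficients, beam spectra, detector weighting, and the calibration transparencies for carbon, iron, and lead are synthetic placeholders standing in for the tabulated cross sections and Geant4 outputs used in the text.

```python
import numpy as np
from scipy.optimize import least_squares

E = np.linspace(0.05, 10.0, 400)                      # MeV grid (uniform)

# Placeholder partial mass attenuation coefficients (photoelectric, Compton,
# pair production); real values come from the tabulated cross sections.
def mu_PE(E, Z): return 5e-7 * Z**3 / (E + 0.5)**3
def mu_CS(E, Z): return 0.05 / (1.0 + 0.5 * E)
def mu_PP(E, Z): return np.where(E > 1.022, 1e-4 * Z * np.log(E / 1.022), 0.0)

def phi(E, E_end):                                    # toy beam spectrum
    return np.where((E > 0) & (E < E_end), (E_end - E) / E, 0.0)

def transparency(lam, Z, E_end, a=1.0, b=1.0, c=1.0):
    """Semiempirical transparency, Eq. 7 (free streaming model when a=b=c=1)."""
    mu_tilde = a * mu_PE(E, Z) + b * mu_CS(E, Z) + c * mu_PP(E, Z)   # Eq. 6
    w = E * phi(E, E_end)                             # toy detector weighting
    return np.sum(w * np.exp(-mu_tilde * lam)) / np.sum(w)

# Calibration data: (lambda_i, Z_i, "measured" T_i) for C, Fe and Pb at 150 g/cm^2.
# These transparencies are synthetic; in the text they come from simulation.
calib = [(150.0, 6, 0.30), (150.0, 26, 0.12), (150.0, 82, 0.05)]

def residuals(p, E_end):
    a, b, c = p
    return [np.log(transparency(lam, Z, E_end, a, b, c)) - np.log(T_meas)
            for lam, Z, T_meas in calib]

# One fit per beam (Eq. 8); bounds keep the parameters near unity as expected.
fit_H = least_squares(residuals, x0=[1.0, 1.0, 1.0], bounds=(0.5, 1.5), args=(10.0,))
fit_L = least_squares(residuals, x0=[1.0, 1.0, 1.0], bounds=(0.5, 1.5), args=(6.0,))
print("a_H, b_H, c_H =", fit_H.x)
print("a_L, b_L, c_L =", fit_L.x)
```

Once \((a,b,c)\) are fixed, atomic number reconstruction proceeds as with the free streaming model, by minimizing the squared logarithmic error over \(Z\).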
In order to calculate the atomic number reconstruction accuracy of the free streaming and the semiempirical transparency models, the \(Z\) of best fit is calculated for each of the simulated materials by finding the atomic number which minimizes the squared logarithmic error between the simulated measurements and the model predictions. The results of this analysis are shown in the first three rows of table 2, along with the true material \(Z\). Both models yield accurate atomic number predictions for low- to med-\(Z\) materials. At high-\(Z\), the free streaming model becomes unstable and is unable to identify any material which matches the simulation outputs. On the contrary, the semiempirical model maintains a relatively high degree of accuracy at high-\(Z\), although the solution is not always unique for reasons described in our previous works [23]. This high-\(Z\) degeneracy is a fundamental property of dual energy X-ray scanners whereby different high-\(Z\) elements can yield identical transparency measurements. ### Uncertainty in the Source Energy Spectra and Detector Response Function The source energy spectra \(\phi_{\{H,L\}}\) and detector response function \(D(E)\) serve as implicit parameters in both the free streaming and the semiempirical transparency models (Eqs. 4 and 7). These parameters are unique to each experimental system and it has thus far been assumed that they are known exactly. However, due to the angular dependence of linac based X-ray sources, the beam spectra change at different beam angles. Furthermore, variability within the linac may result in photon beam spectra which differ from an idealized model. Similarly, the true detector response of a real system may differ from simulation output. As a result, it is not reliable to assume the source energy spectra and detector response function are all known exactly. To probe this effect, we intentionally introduce a discrepancy between the implicit parameters used during transparency calculations and those used during transparency simulations. This procedure will enable us to benchmark the effectiveness of the semiempirical transparency model in a realistic setting in which the model parameters slightly differ from the experiment. During transparency simulations, we will continue to use \(\{10,6\}\) MeV bremsstrahlung beam spectra and record the true detector response. However, during transparency calculations, we use bremmstrahlung beam spectra with endpoint energies of 9.7MeV and 6.3MeV and the following approximate formulation of the detector response function: \[D(E)\approx E\left(1-e^{-\mu_{\mathrm{det}}(E)\lambda_{\mathrm{det}}}\right) \frac{\mu_{\mathrm{det}}^{\mathrm{en}}(E)}{\mu_{\mathrm{det}}(E)} \tag{9}\] where \(\lambda_{\mathrm{det}}\) is the area density of the detector, \(\mu_{\mathrm{det}}\) is the mass attenuation coefficient of the detector, and \(\mu_{\mathrm{det}}^{\mathrm{en}}\) is the mass energy absorption coefficient of the detector [20]. We use the label "mismatched" to distinguish instances with an intentional shift between the model parameters and the simulation parameters. It is expected that a mismatched beam spectra and detector response will worsen the atomic number reconstruction capabilities of the free streaming model. It would be useful if the semiempirical model were robust to this discrepancy by capturing these effects in the calibration procedure. 
In other words, perhaps through a particular selection of \(a\), \(b\), and \(c\), it may be possible to rescale different components of the mass attenuation coefficient to counteract the mismatched model parameters. For instance, by choosing \(c_{L}<1\), we can artificially decrease the number of pair production interactions in our calculations, which may allow for the 6.3 MeV semiempirical transparency model to match the results of the 6 MeV simulations. The analysis performed earlier in this section was then repeated under these new considerations. The new values of the calibration parameters \(a\), \(b\), and \(c\) are shown in the last two rows of table 1. The subsequent transparency calculations using both the free streaming model and the semiempirical model were compared to the simulated dataset. Fig. 1(c) reveals that using mismatched model parameters significantly worsens the accuracy of the free streaming model. On the contrary, Fig. 1(d) demonstrates that the semiempirical transparency model continues to show excellent agreement with the simulated transparency measurements. These results are quantified in the last two rows of table 2, where the semiempirical model shows significantly improved atomic number reconstruction capabilities compared to the free streaming model. As before, the solution becomes non-unique at high-\(Z\) due to a fundamental and unavoidable ambiguity between heavy metals [23]. This result shows promise for the semiempirical transparency model to enable accurate material discrimination capabilities in realistic cargo scanning applications.

\begin{table}
\begin{tabular}{c c c c}
\hline \hline
Bremsstrahlung endpoint energy & \(a\) & \(b\) & \(c\) \\
\hline
10 MeV & 1.1158 & 0.9616 & 0.9973 \\
6 MeV & 0.9427 & 0.9687 & 0.9984 \\
10 MeV (simulation), 9.7 MeV (model) & 1.0659 & 0.9788 & 1.0031 \\
6 MeV (simulation), 6.3 MeV (model) & 1.1224 & 1.0343 & 0.9089 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Calibration parameters \(a\), \(b\), and \(c\) for each of the analysis cases considered in this study. For the first two rows, the beam spectra and detector response used in the model exactly match the simulations. For the last two rows, there is an intentional mismatch between the beam spectra and detector response used in the model compared to the simulations.

Figure 1: Photon beam transparency as a function of area density for different materials using the high energy 10 MeV bremsstrahlung beam. The semiempirical model shows significantly reduced bias compared to the free streaming model.

Figure 2: \(\alpha\)-curves for the analysis cases presented in section 3. Different materials correspond to different \(\alpha\)-lines, and the separation between these \(\alpha\)-lines enables atomic number discrimination. Compared to the free streaming model, the semiempirical model shows significantly improved agreement with the simulated data, especially when using mismatched model parameters. Materials shown are hydrogen (\(Z=1\)), aluminum (\(Z=13\)), iron (\(Z=26\)), silver (\(Z=47\)), gadolinium (\(Z=64\)), lead (\(Z=82\)), and uranium (\(Z=92\)).

## 4 Conclusion

This work presents a semiempirical transparency model for cargo radiography measurements. The model introduces three calibration parameters which scale different components of the mass attenuation coefficient. The semiempirical model is highly flexible in its ability to be applied to different system geometries by recalculating the calibration parameters.
Compared to analytic models, the semiempirical model shows significantly less bias and improved atomic number reconstruction accuracy for high-\(Z\) elements. Furthermore, the semiempirical model maintains strong agreement with simulated data even when the source energy spectra and detector response function are not known exactly. Compared to empirical methods, the semiempirical model requires significantly less calibration data and shows accurate extrapolation to elements and thicknesses which were not included in the calibration step. For these reasons, the semiempirical transparency model could enable improved atomic number reconstruction capabilities for dual energy cargo radiography applications. Future research should apply the semiempirical transparency model to real transparency data taken by dual energy systems to verify its accuracy in a practical setting.

\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c}
\hline \hline
 & H & C & Al & Ca & Fe & Ge & Zr & Ag & Cs & Gd & W & Pb & U \\
\hline
True \(Z\) & 1 & 6 & 13 & 20 & 26 & 32 & 40 & 47 & 55 & 64 & 74 & 82 & 92 \\
Free streaming \(Z\) & 1 & 6 & 13 & 20 & 26 & 32 & 40 & 48 & 57 & 69, 91 & N/A & N/A & N/A \\
Semiempirical \(Z\) & 1 & 6 & 13 & 20 & 26 & 32 & 41 & 48 & 56 & 64 & 76, 99 & 81, 95 & 90 \\
Free streaming \(Z\) (mismatched) & N/A & 1 & 8 & 15 & 20 & 26 & 34 & 40 & 47 & 55 & 64, 98 & 67, 96 & 69, 94 \\
Semiempirical \(Z\) (mismatched) & 1 & 6 & 13 & 20 & 26 & 32 & 41 & 48 & 56 & 64 & 76, 100 & 80, 97 & 85, 94 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Comparing the true material atomic number to the reconstructed atomic number using the free streaming transparency model (Eq. 4) and the semiempirical transparency model (Eq. 7). In the second and third rows, the models and the simulations use identical photon beam spectra and detector response functions. In the last two rows, the models use mismatched beam spectra and detector responses. Cells containing two entries indicate both materials match the simulations, whereas cells containing “N/A” indicate no material was found to reproduce the simulated measurements.

## 5 Acknowledgements

This work was supported by the Department of Energy Computational Science Graduate Fellowship (DOE CSGF) under grant DE-SC0020347. The authors acknowledge advice from Brian Henderson for his expertise on the technical nuances of dual energy radiography systems. The authors declare no conflict of interest.
2303.10764
Complex surfaces with many algebraic structures
We find new examples of complex surfaces with countably many non-isomorphic algebraic structures. Here is one such example: take an elliptic curve $E$ in $\mathbb P^2$ and blow up nine general points on $E$. Then the complement $M$ of the strict transform of $E$ in the blow-up has countably many algebraic structures. Moreover, each algebraic structure comes from an embedding of $M$ into a blow-up of $\mathbb P^2$ in nine points lying on an elliptic curve $F\not\simeq E$. We classify algebraic structures on $M$ using a Hopf transform: a way of constructing a new surface by cutting out an elliptic curve and pasting a different one. Next, we introduce the notion of an analytic K-theory of varieties. Manipulations with the example above lead us to prove that the classes of all elliptic curves in this K-theory coincide. To put it another way, all motivic measures on complex algebraic varieties that take equal values on biholomorphic varieties do not distinguish elliptic curves.
Anna Abasheva, Rodion Déev
2023-03-19T20:46:11Z
http://arxiv.org/abs/2303.10764v1
# Complex surfaces with many algebraic structures

###### Abstract

We find new examples of complex surfaces with countably many non-isomorphic algebraic structures. Here is one such example: take an elliptic curve \(E\) in \(\mathbb{P}^{2}\) and blow up nine general points on \(E\). Then the complement \(M\) of the strict transform of \(E\) in the blow-up has countably many algebraic structures. Moreover, each algebraic structure comes from an embedding of \(M\) into a blow-up of \(\mathbb{P}^{2}\) in nine points lying on an elliptic curve \(F\not\simeq E\). We classify algebraic structures on \(M\) using a **Hopf transform**: a way of constructing a new surface by cutting out an elliptic curve and pasting a different one. Next, we introduce the notion of an **analytic K-theory of varieties**. Manipulations with the example above lead us to prove that the classes of all elliptic curves in this K-theory coincide. To put it another way, all motivic measures on complex algebraic varieties that take equal values on biholomorphic varieties do not distinguish elliptic curves.

###### Contents

* 1 Introduction
* 2 Hopf transforms
  * 2.1 Analytic compactifications
  * 2.2 Hopf surfaces
  * 2.3 Secondary Hopf surfaces
  * 2.4 Hopf duality and analytic cobordance
  * 2.5 Hopf transforms
* 3 Surfaces with square-zero elliptic curves
  * 3.1 Kodaira dimension
  * 3.2 Rational surfaces
  * 3.3 Surfaces of class VII
* 4 Analytic Grothendieck group
2307.03589
Nitsche method for Navier-Stokes equations with slip boundary conditions: Convergence analysis and VMS-LES stabilization
In this paper, we analyze the Nitsche's method for the stationary Navier-Stokes equations on Lipschitz domains under minimal regularity assumptions. Our analysis provides a robust formulation for implementing slip (i.e. Navier) boundary conditions in arbitrarily complex boundaries. The well-posedness of the discrete problem is established using the Banach Ne\v{c}as Babu\v{s}ka and the Banach fixed point theorems under standard small data assumptions, and we also provide optimal convergence rates for the approximation error. Furthermore, we propose a VMS-LES stabilized formulation, which allows the simulation of incompressible fluids at high Reynolds numbers. We validate our theory through numerous numerical tests in well established benchmark problems.
Aparna Bansal, Nicolás Alejandro Barnafi, Dwijendra Narain Pandey
2023-07-07T13:34:25Z
http://arxiv.org/abs/2307.03589v2
# Nitsche method for Navier-Stokes equations with slip boundary conditions: Convergence analysis and VMS-LES stabilization

###### Abstract

In this paper, we analyze the Nitsche's method for the stationary Navier-Stokes equations on Lipschitz domains under minimal regularity assumptions. Our analysis provides a robust formulation for implementing slip (i.e. Navier) boundary conditions in arbitrarily complex boundaries. The well-posedness of the discrete problem is established using the Banach Necas Babuska and the Banach fixed point theorems under standard small data assumptions, and we also provide optimal convergence rates for the approximation error. Furthermore, we propose a VMS-LES stabilized formulation, which allows the simulation of incompressible fluids at high Reynolds numbers. We validate our theory through several numerical tests in well established benchmark problems.

**Keywords:** Navier-Stokes equation, Navier boundary condition, Nitsche's Method, Banach fixed point theorem, Banach-Necas-Babuska theorem, A-Priori analysis, Variational Multiscale modeling, Large Eddy simulation.

**Mathematics Subject Classification:** 65N30 \(\cdot\) 65N12 \(\cdot\) 65N15 \(\cdot\) 65J15 \(\cdot\) 76D07

## 1 Introduction

The Navier-Stokes equations describe the motion of incompressible fluids, and they pose significant challenges across different disciplines. The numerical analysis community has devoted considerable effort to developing robust and efficient techniques for their numerical approximation. It is typically assumed that the fluid adheres to the walls of its container, which is known as the no-slip boundary condition. The accuracy of this assumption has been a subject of intense debate [14]. There are many fluid flow phenomena, such as inkjet printing [13], pipe flow [1], complex turbulent flows [15], and slide coating [16], that are better addressed using boundary conditions that allow the fluid to slip along the walls, also known as Navier boundary conditions.

Navier boundary conditions impose a constraint only in the normal direction, which makes their implementation non-trivial. To this end, the existing approaches can be separated into (1) Lagrange multiplier methods [17, 18, 19, 20, 21] and (2) penalty methods with a regularization term [18, 19, 20]. Both approaches enforce the slip condition weakly in the variational formulation which, although useful, can present erratic behavior known as the Babuska-type paradox and can result in a loss of convergence [10]. We highlight the stabilized formulation from [11] and the non-conforming penalty formulation analyzed in [15, 1], which adequately characterize the impact of the variational crimes. In general, penalization schemes avoid the Babuska paradox but require additional parameters. One particular method for imposing boundary conditions weakly is Nitsche's method [10], which can be regarded as an Augmented Lagrangian formulation for imposing boundary conditions with a Lagrange multiplier. A drawback of this method, as in other penalty methods, is that it requires a stabilization constant \(\gamma\) that must be sufficiently large. By adding a stabilization term to the weak formulation of the problem, Nitsche's method addresses the issues that arise from the strong imposition of boundary conditions on approximate geometries, as well as allowing for a natural formulation of non-trivial boundary conditions.
More recently, a specific treatment of the Navier boundary condition has been studied in [21] for the Oseen problem. We also highlight the work by Gjerde and Scott for curved boundaries in [10] and on kinetic instabilities [10]. The convergence analysis for a stabilized finite element formulation was also very recently developed [1].

The turbulent behavior of the Navier-Stokes equations at high Reynolds numbers gives rise to numerical instabilities that make their numerical approximation very challenging and severely impact the accuracy of finite element approximations. These issues can be alleviated using stabilized schemes such as the Streamline Upwind Petrov-Galerkin (SUPG) method, the Galerkin Least Squares (GLS) method, and the Variational Multiscale (VMS) method (see [11] for a review). In addition to the numerical instabilities, it is fundamental to approximate the unresolved scales, which is done with turbulence models such as Reynolds-Averaged Navier-Stokes (RANS) and Large Eddy Simulation (LES). However, RANS models are constrained and might not be workable in some real-world situations due to the simplifications and assumptions made in representing the complex nature of turbulent flows. RANS relies heavily on turbulence modeling, which results in over-simplification and restricts the ability to accurately anticipate complex flow phenomena, as reported in [10]. In this paper, we will focus on VMS-LES due to its suitability for high Reynolds number simulations [14, 1], which allows for more computationally efficient LES approaches to the Navier-Stokes equations.

Our main contribution is twofold: on one hand, we provide a complete convergence analysis of the slip boundary conditions for the stationary Navier-Stokes equations under minimal regularity assumptions. On the other hand, we propose a robust computational framework for the simulation of the Navier boundary conditions in complex geometries at high Reynolds numbers. Several numerical tests validate our claims.

### Outline of the paper

In Section 2, we present the Navier-Stokes equations with the slip boundary condition, derive the weak formulation of the problem, and discuss the solvability analysis of the continuous case. In Section 3, we introduce the Nitsche scheme, derive the variational formulation, and establish the well-posedness of the discrete Oseen problem using the Banach-Necas-Babuska theorem. The well-posedness result is extended to the Navier-Stokes equations using Banach's fixed point theorem in a standard way. In Section 4, we derive a priori estimates and prove the optimal convergence of the method. In Section 5, we propose a stabilized scheme using the VMS-LES formulation to simulate fluids at high Reynolds numbers while taking into account the slip boundary condition. We consider the unsteady case in order to overcome the challenges associated with the stationary Navier-Stokes equations when studying numerical simulations at high Reynolds numbers. In Section 6, we perform three numerical tests to support the theory developed in the previous sections. The first one validates the theoretical results of the Nitsche scheme. The second one is a benchmark problem that demonstrates the consistency of our scheme for both steady and unsteady formulations at arbitrary Reynolds numbers. The third test shows the behavior of the fluid passing through a cylinder at high Reynolds numbers, further validating the VMS-LES formulation proposed in Section 5.
### Notations and preliminaries Let \(\Omega\subseteq\mathbb{R}^{n=2,3}\) be an open and bounded domain with a Lipschitz polygonal boundary \(\Gamma\). The Sobolev spaces are denoted by \(W^{k,p}(\Omega)\) with \(k\geq 0\). They contain all \(L^{p}(\Omega)\) with \(p\geq 1\) functions whose distributional derivative up to order \(k\) are in \(L^{p}(\Omega)\). The norm and seminorm in \(W^{k,p}(\Omega)\) are denoted by \(\|\cdot\|_{k,p,\Omega}\) and \(|\cdot|_{k,p,\Omega}\). For \(p=2\), \(W^{k,p}(\Omega)\) is denoted by \(H^{k}(\Omega)\) and its corresponding norm and seminorm is denoted by \(\|\cdot\|_{s,\Omega}:=\|\cdot\|_{k,2,\Omega}\) and \(|\cdot|_{k,\Omega}:=\mid\cdot|_{k,2,\Omega}\), respectively. The space \(L^{2}_{0}(\Omega)\) represents all \(L^{2}\) functions with average zero condition over \(\Omega\). The vector valued Sobolev spaces will be represented using boldface letter as \(\boldsymbol{H}^{k}(\Omega)\). Additionally, we will denote with \(H^{1}_{\Gamma_{a}}(\Omega)\) the subspace of \(H^{1}(\Omega)\) with homogeneous boundary conditions on \(\Gamma_{a}\) (or \(H^{1}_{0}(\Omega)\) when \(\Gamma_{a}=\Gamma\) ), for which the Friedrichs-Poincare inequality holds [13] i.e. there exists \(C_{\mathrm{FP}}>0\) which depends on \(\Omega\) and \(\Gamma_{a}\) such that \[\|f\|_{1,\Omega}\leq C_{\mathrm{FP}}|f|_{1,\Omega}\quad\forall f\in H^{1}_{ \Gamma_{a}}(\Omega). \tag{1}\] The Holder inequality is given by \[\int_{\Omega}|fg|\leq\|f\|_{L^{p}(\Omega)}\|g\|_{L^{q}(\Omega)},\quad\forall f \in L^{p}(\Omega),\forall g\in L^{q}(\Omega),\quad\text{with}\quad\frac{1}{p} +\frac{1}{q}=1. \tag{2}\] We have that the following Sobolev embedding \(H^{1}(\Omega)\hookrightarrow L^{q}(\Omega)\) holds for \(1\leq q<\infty\) when \(n=2\) and \(1\leq q\leq 6\) when \(n=3\). In particular, there exists a constant \(C_{\mathrm{Sob}}\left(q,n\right)>0\), that depends only on the domain, such that \[\|f\|_{0,q,\Omega}\leq C_{\mathrm{Sob}}(q,n)\|g\|_{1,\Omega}\,\text{for}\left\{ \begin{array}{ll}q\geq 1&\text{if}\,n=2,\\ q\in[1,6]&\text{if}\,n=3.\end{array}\right. \tag{3}\] Finally, we will consider the norm of a product space \(\mathrm{V}\times\Pi\) to be \(\|(\cdot,\cdot)\|=\|\cdot\|_{\mathrm{V}}+\|\cdot\|_{\Pi}\). Also, whenever an inequality holds for positive constants that do not depend on the mesh size, we will simply write \(\lesssim\) or \(\gtrsim\) and omit the constants. Continuous Problem In this section we introduce the model problem, define the weak formulation, discuss the stability properties, and finally prove the existence and uniqueness of the solution. ### The Model Problem The stationary incompressible Navier-Stokes equations is stated as follows \[-\nu\Delta\mathbf{u}+(\mathbf{u}\cdot\nabla)\mathbf{u}+\nabla p =\mathbf{f} \quad\mathrm{in}\,\Omega, \tag{4}\] \[\mathrm{div}\,\mathbf{u} =0 \quad\mathrm{in}\,\Omega,\] \[\mathbf{u} =0 \quad\mathrm{on}\,\Gamma_{D},\] \[\int_{\Omega}p\,dx =0,\] posed on a spatial domain \(\Omega\subseteq\mathbb{R}^{n},n\in\{2,3\}\) with Lipschitz boundary \(\Gamma\), where \(\mathbf{u}\) is the fluid velocity, \(\nu>0\) is the viscosity, \(p\) is the fluid pressure and \(\mathbf{f}\) represents the external body force on \(\Omega\). 
The Navier boundary condition on \(\Gamma_{\mathrm{Nav}}\) is defined as \[\mathbf{u}\cdot\mathbf{n}=0 \quad\mathrm{on}\,\Gamma_{\mathrm{Nav}}, \tag{5}\] \[\nu\mathbf{n}^{t}D(\mathbf{u})\boldsymbol{\tau}^{k}+\beta\mathbf{ u}\cdot\boldsymbol{\tau}^{k}=0 \quad\mathrm{on}\,\Gamma_{\mathrm{Nav}},\quad k=1,2,\] where the boundary \(\Gamma=\overline{\Gamma}_{D}\cup\overline{\Gamma}_{\mathrm{Nav}}\) and \(\Gamma_{D}\cap\Gamma_{\mathrm{Nav}}=\emptyset\). The Navier boundary condition allows the fluid to slip along the boundary and requires that the tangential component of the stress vector at the boundary be proportional to the tangential velocity. The strain tensor is denoted by \(D(\mathbf{u})=\nabla\mathbf{u}+(\nabla\mathbf{u})^{T}\), and \(\mathbf{n}\) and \(\boldsymbol{\tau}^{k}\) are unit normal and tangent vectors to the boundary \(\Gamma\). The friction coefficient \(\beta\) will require some controlled behaviour for well-posedness as shown in Lemma C. The assumption of homogeneity in the boundary conditions is made for the sake of simplifying the subsequent analysis, as the existence of lifting operators has already been established [13]. Non-homogeneous boundary conditions are utilized in the numerical tests in Section 6. ### Weak Formulation of the Continuous Problem Define the spaces: \[\mathrm{V} \coloneqq\left\{\mathbf{v}\in\mathbf{H}^{1}(\Omega):\,\mathbf{v} \cdot\mathbf{n}=0\,\mathrm{on}\,\Gamma_{\mathrm{Nav}},\mathbf{v}=0\,\mathrm{ on}\,\Gamma_{D}\right\},\] \[\Pi \coloneqq L_{0}^{2}(\Omega).\] The standard weak formulation of (4) and (5) is given by: Find \((\mathbf{u},p)\in\mathrm{V}\times\Pi\), such that \[\mathcal{A}\left[\left(\mathbf{u},p\right);\left(\mathbf{v},q\right)\right]= \mathcal{F}(\mathbf{v})\quad\forall\left(\mathbf{v},q\right)\in\mathrm{V} \times\Pi, \tag{6}\] where \[\mathcal{A}\left[\left(\mathbf{u},p\right);\left(\mathbf{v},q\right)\right] \coloneqq\frac{\nu}{2}(D(\mathbf{u}),D(\mathbf{v}))+(\mathbf{u}\cdot\nabla \mathbf{u},\mathbf{v})-(p,\nabla\cdot\mathbf{v})-(q,\nabla\cdot\mathbf{u})+ \int_{\Gamma_{\mathrm{Nav}}}\beta\sum_{i}\left(\boldsymbol{\tau}^{i}\cdot \mathbf{v}\right)\left(\boldsymbol{\tau}^{i}\cdot\mathbf{u}\right)ds,\] \[\mathcal{F}(\mathbf{v})\coloneqq\left\langle\mathbf{f},\mathbf{v}\right\rangle,\] where \(\left\langle\cdot,\cdot\right\rangle\) represents the duality pairing between V and its dual \(\mathrm{V}^{\prime}\). 
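Before stating the weak formulation in the next subsection, it is worth sketching, purely formally (assuming a smooth, divergence-free \(\mathbf{u}\) and a test function \(\mathbf{v}\) with \(\mathbf{v}\cdot\mathbf{n}=0\) on \(\Gamma_{\mathrm{Nav}}\) and \(\mathbf{v}=0\) on \(\Gamma_{D}\)), how the Navier condition (5) produces the boundary term of the weak form. Since \(\operatorname{div}\mathbf{u}=0\) implies \(\Delta\mathbf{u}=\operatorname{div}D(\mathbf{u})\), integration by parts gives
\[-\nu\int_{\Omega}\Delta\mathbf{u}\cdot\mathbf{v}\,d\mathbf{x}=\frac{\nu}{2}\int_{\Omega}D(\mathbf{u}):D(\mathbf{v})\,d\mathbf{x}-\nu\int_{\Gamma_{\mathrm{Nav}}}\big(D(\mathbf{u})\mathbf{n}\big)\cdot\mathbf{v}\,ds,\]
where only \(\Gamma_{\mathrm{Nav}}\) contributes because \(\mathbf{v}=0\) on \(\Gamma_{D}\). Decomposing \(\mathbf{v}\) on the boundary into normal and tangential parts, the normal contribution vanishes since \(\mathbf{v}\cdot\mathbf{n}=0\), and the friction law \(\nu\mathbf{n}^{t}D(\mathbf{u})\boldsymbol{\tau}^{k}=-\beta\,\mathbf{u}\cdot\boldsymbol{\tau}^{k}\) from (5) turns the tangential contribution into
\[-\nu\int_{\Gamma_{\mathrm{Nav}}}\sum_{k}\big(\mathbf{n}^{t}D(\mathbf{u})\boldsymbol{\tau}^{k}\big)\big(\mathbf{v}\cdot\boldsymbol{\tau}^{k}\big)\,ds=\int_{\Gamma_{\mathrm{Nav}}}\beta\sum_{k}\big(\boldsymbol{\tau}^{k}\cdot\mathbf{u}\big)\big(\boldsymbol{\tau}^{k}\cdot\mathbf{v}\big)\,ds,\]
which is exactly the boundary form \(\mathbf{a}_{\tau}^{\partial}\) appearing below; the pressure term produces no boundary integral for the same reasons. A detailed derivation is given in the reference cited in the next subsection.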
Assuming that the load function \(\mathbf{f}\) belongs to \(\mathrm{V}^{\prime}\), we can express (6) in the following form: Find \((\mathbf{u},p)\in\mathrm{V}\times\Pi\), such that \[\mathbf{A}(\mathbf{u},\mathbf{v})+\mathbf{B}(\mathbf{v},p)+ \mathbf{c}(\mathbf{u};\mathbf{u},\mathbf{v}) =\mathcal{F}(\mathbf{v})\quad\forall\mathbf{v}\in\mathrm{V}, \tag{7}\] \[\mathbf{B}(\mathbf{u},q) =0\quad\quad\quad\forall q\in\Pi,\] where \[\mathbf{A}(\mathbf{u},\mathbf{v}) \coloneqq\mathbf{a}(\mathbf{u},\mathbf{v})+\mathbf{a}_{\tau}^{ \partial}(\mathbf{u},\mathbf{v}), \tag{8}\] \[\mathbf{B}(\mathbf{v},p) \coloneqq\mathbf{b}(\mathbf{v},p),\] with forms defined so that \[\mathbf{a}(\mathbf{u},\mathbf{v}) \coloneqq\frac{\nu}{2}(D(\mathbf{u}),D(\mathbf{v})),\] \[\mathbf{b}(\mathbf{v},p) \coloneqq-(\mathrm{div}\,\mathbf{v},p),\] \[\mathbf{a}_{\tau}^{\partial}(\mathbf{u},\mathbf{v}) \coloneqq\int_{\Gamma_{\mathrm{Nav}}}\beta\sum_{i}\left(\mathbf{\tau}^{i }\cdot\mathbf{u}\right)\left(\mathbf{\tau}^{i}\cdot\mathbf{v}\right)ds,\] \[\mathbf{c}(\mathbf{w};\mathbf{u},\mathbf{v}) \coloneqq(\mathbf{w}\cdot\nabla\mathbf{u},\mathbf{v}),\] \[\mathcal{F}\left(\mathbf{v}\right) \coloneqq\left\langle\mathbf{f},\mathbf{v}\right\rangle.\] The derivation of the weak formulation is detailed in [10]. In what remains of this section, we will establish the continuity, ellipticity and inf-sup properties of some operators that will be instrumental for our analysis ahead. **Lemma A**.: _There exist positive constants \(C_{1}\), \(C_{2}\), and \(C_{3}\) such that_ \[\left|\mathbf{A}\left(\mathbf{u},\mathbf{v}\right)\right| \leq C_{1}\left\|\mathbf{u}\right\|_{1,\Omega}\left\|\mathbf{v} \right\|_{1,\Omega} \forall\mathbf{u},\mathbf{v}\in\mathrm{V},\] \[\left|\mathbf{c}\left(\mathbf{w};\mathbf{u},\mathbf{v}\right)\right| \leq C_{2}\left\|\mathbf{w}\right\|_{1,\Omega}\left\|\mathbf{u} \right\|_{1,\Omega}\left\|\mathbf{v}\right\|_{1,\Omega} \forall\mathbf{u},\mathbf{v},\mathbf{w}\in\mathrm{V},\] \[\left|\mathbf{B}(\mathbf{v},p)\right| \leq C_{3}\|\mathbf{v}\|_{1,\Omega}\|p\|_{0,\Omega} \forall\mathbf{v}\in\mathrm{V},p\in\Pi,\] \[\left|\mathcal{F}(\mathbf{v})\right| \leq\|\mathbf{f}\|_{\nu^{\prime}}\left\|\mathbf{v}\right\|_{ \mathrm{V}} \forall\mathbf{v}\in\mathrm{V}.\] Proof.: These inequalities are a direct consequence of (1), (2), (3), Cauchy Schwarz and Holder's inequalities. The constants \(C_{1}\), \(C_{2}\), and \(C_{3}\) depend on the domain \(\Omega\), \(\nu\) and \(\beta\). Let \(\mathrm{Z}\) be the kernel of \(\mathbf{B}\), that is \[\mathrm{Z}\coloneqq\left\{\mathbf{v}\in\mathrm{V}:\mathbf{B}(\mathbf{v},p)=0, \forall p\in\Pi\right\}=\left\{\mathbf{v}\in\mathrm{V}:(\mathrm{div}\,\mathbf{ v},p)=0,\forall p\in\Pi\right\}.\] It can be rewritten as: \[\mathrm{Z}\coloneqq\left\{\mathbf{v}\in\mathrm{V}:\mathrm{div}\,\mathbf{v}=0 \mathrm{in}\,\Omega\right\}.\] **Lemma B**.: _There exists a minimum positive constant \(C_{b}\) for sufficiently small values of \(\left|b\right|\) (when \(b<0\)). Due to the boundary condition on \(\Gamma_{D}=\Gamma\setminus\Gamma_{\text{Nav}}\), there is a Poincare inequality [11] of the form_ \[\int_{\Omega}|\mathbf{v}|^{2}d\mathbf{x}\leq C_{b}\left(\frac{1}{2}\int_{ \Omega}|D(\mathbf{v})|^{2}d\mathbf{x}+\int_{\Gamma_{\text{Nav}}}b\left|P_{T} \mathbf{v}\right|^{2}ds\right)\qquad\forall\mathbf{v}\in\mathrm{Z}. \tag{9}\] _The constant \(C_{b}\) depends on \(\Omega\), \(\Gamma_{\text{Nav}}\), and \(b\). It is considered to be the smallest positive constant such that (9) holds. 
For more details on this condition, see [11]._ The following lemma establishes the ellipticity of \(\mathbf{A}\) on \(\mathrm{Z}\). **Remark 2.1**.: _We highlight that this result allows for negative values of \(\beta\)._ **Lemma C**.: _There exist a constant \(\xi>0\) depends on \(\beta\), \(\nu\), \(\Omega\) and \(\Gamma_{\text{Nav}}\) such that_ \[\mathbf{A}(\mathbf{v},\mathbf{v})\geq\xi\|\mathbf{v}\|_{1,\Omega}\quad\forall \mathbf{v}\in\mathrm{Z}, \tag{10}\] _where \(\xi=\frac{\nu}{C_{\beta/\nu}}\)._ Proof.: Consider the bilinear form \[\mathbf{A}(\mathbf{v},\mathbf{v})=\frac{\nu}{2}(D(\mathbf{v}),D(\mathbf{v}))+ \int_{\Gamma_{\text{Nav}}}\beta\sum_{i}\left(\mathbf{\tau}^{i}\cdot\mathbf{v} \right)\left(\mathbf{\tau}^{i}\cdot\mathbf{v}\right)ds.\] By introducing the tangent space \(T\) and the projection \(P_{T}\) onto the tangent space, allows the boundary term to be expressed in coordinate free form i.e. \(P_{T}=I-\mathbf{n}\otimes\mathbf{n}\)[23]. Then \[\sum_{i=1}^{d-1}\left(\mathbf{\tau}^{i}\cdot\mathbf{v}\right)\left(\mathbf{\tau}^{i} \cdot\mathbf{u}\right)=\left(P_{T}\mathbf{v}\right)\cdot\left(P_{T}\mathbf{u} \right),\] and also we use the Lemma B as follows: \[\mathbf{A}(\mathbf{v},\mathbf{v}) =\frac{\nu}{2}\int_{\Omega}|D(\mathbf{v})|^{2}d\mathbf{x}+\int_{ \Gamma_{\text{Nav}}}\beta\left|P_{T}\mathbf{v}\right|^{2}ds\] \[=\frac{\nu}{4}\int_{\Omega}|D(\mathbf{v})|^{2}d\mathbf{x}+\frac{ \nu}{4}\left(\int_{\Omega}|D(\mathbf{v})|^{2}d\mathbf{x}+\int_{\Gamma_{\text {Nav}}}\frac{4\beta}{\nu}\left|P_{T}\mathbf{v}\right|^{2}ds\right)\] \[\geq\xi\|\mathbf{v}\|_{1,\Omega}\quad\forall\mathbf{v}\in\mathbb{Z}.\] where \(\xi=\min\{\frac{\nu}{2},\frac{\nu}{4C_{4\beta/\nu}}\}\), and \(C_{4\beta/\nu}\) is the constant denoted as \(C_{b}\) with \(b=4\beta/\nu\). The constant \(C_{b}\) depends on \(\Omega\), \(\Gamma_{\mathrm{Nav}}\) and \(b\), but given that the geometry is fixed, we denote the dependence only on b. Interestingly, it was shown in [14] that the function \(b\mapsto C_{b}\) is monotone decreasing. Next, we present the continuous inf-sup condition for the bilinear form \(\mathbf{B}\). **Lemma D**.: _There exist a positive constant \(\theta>\)0 dependent on the shape of the domain \(\Omega\) such that_ \[\sup_{\mathbf{0}\neq\mathrm{V}\in\mathrm{V}}\frac{|\mathbf{B}( \mathbf{v},q)|}{\|\mathbf{v}\|_{1,\Omega}}\geq\theta\|q\|_{0,\Omega}\quad \forall q\in\Pi. \tag{11}\] Proof.: See [13] for a proof. #### 2.2.1 The fixed point operator In this section, we make the assumption that the data is sufficiently small and utilize the Banach fixed point theorem to establish the existence and uniqueness of a solution for equation (7). Let us introduce the bounded set \[\mathcal{K}:=\left\{\mathbf{v}\in\mathrm{V}:\|\mathbf{v}\|_{1, \Omega}\leq\alpha^{-1}\|\mathbf{f}\|_{\mathrm{V}^{\prime}}\right\}, \tag{12}\] with \(\alpha\) is a positive constant defined in (18). 
Now, we define the fixed point operator as
\[\mathcal{J}:\mathcal{K}\to\mathcal{K},\quad\mathbf{w}\to\mathcal{J}(\mathbf{w})=\mathbf{u}, \tag{13}\]
where given \(\mathbf{w}\in\mathcal{K}\), \(\mathbf{u}\) is the first component of the solution of the linearized version of problem (7): Find \((\mathbf{u},p)\in\mathrm{V}\times\Pi\), such that
\[\mathbf{A}(\mathbf{u},\mathbf{v})+\mathbf{B}(\mathbf{v},p)+\mathbf{c}(\mathbf{w};\mathbf{u},\mathbf{v})=\mathcal{F}(\mathbf{v})\quad\forall\mathbf{v}\in\mathrm{V}, \tag{14}\]
\[\mathbf{B}(\mathbf{u},q)=0\qquad\forall q\in\Pi.\]
Based on the above, we establish the following relation
\[\mathcal{J}(\mathbf{u})=\mathbf{u}\Leftrightarrow(\mathbf{u},p)\in\mathrm{V}\times\Pi\ \text{satisfies (7)}.\]

**Theorem 1**.: _Assume that_
\[\frac{2}{\alpha^{2}}\|\mathbf{f}\|_{\mathrm{V}^{\prime}}\leq 1. \tag{19}\]
_Then, given \(\mathbf{w}\in\mathcal{K}\), there exists a unique \(\mathbf{u}\in\mathcal{K}\) such that \(\mathcal{J}(\mathbf{w})=\mathbf{u}\)._

**Theorem 2**.: _Let \(\mathbf{f}\in\mathrm{V}^{\prime}\) be such that_
\[\frac{2}{\alpha^{2}}\|\mathbf{f}\|_{\mathrm{V}^{\prime}}\leq 1. \tag{20}\]
_Then, there exists a unique \((\mathbf{u},p)\in\mathrm{V}\times\Pi\) solution to (7). In addition, there exists \(C>0\) such that_
\[\|\mathbf{u}\|_{1,\Omega}+\|p\|_{0,\Omega}\leq C\|\mathbf{f}\|_{\mathrm{V}^{\prime}}. \tag{21}\]

## 3 Discrete Problem

This section studies the solvability and convergence analysis of Nitsche's scheme for problem (7). We assume that the polygonal computational domain \(\Omega\) is discretized using a collection of regular partitions, denoted as \(\{\mathcal{T}_{h}\}_{h>0}\), where \(\Omega\subset\mathbb{R}^{n}\) is divided into simplices \(T\) (triangles in 2D or tetrahedra in 3D) with diameter \(h_{T}\). The characteristic length of the finite element mesh \(\mathcal{T}_{h}\) is denoted as \(h:=\max_{T\in\mathcal{T}_{h}}h_{T}\). For a given triangulation \(\mathcal{T}_{h}\), we define \(\mathcal{E}_{h}\) as the set of all faces in \(\mathcal{T}_{h}\), with the following partitioning
\[\mathcal{E}_{h}:=\mathcal{E}_{\Omega}\cup\mathcal{E}_{D}\cup\mathcal{E}_{\mathrm{Nav}},\]
where \(\mathcal{E}_{\Omega}\) represents the faces lying in the interior of \(\Omega\), \(\mathcal{E}_{\mathrm{Nav}}\) represents the faces lying on the boundary \(\Gamma_{\mathrm{Nav}}\), and \(\mathcal{E}_{D}\) represents the faces lying on the boundary \(\Gamma_{D}\). Additionally, \(h_{e}\) denotes the \((n-1)\)-dimensional diameter of a face. Here _faces_ loosely refer to the geometrical entities of co-dimension 1. Now, let us introduce the finite element pair.
\[\mathrm{V}_{h} \coloneqq\left\{\mathbf{v}_{h}\in\mathbf{C}(\overline{\Omega}): \mathbf{v}_{h}=0\mathrm{on}\,E\in\mathcal{E}_{D},\left.\mathbf{v}_{h}\right|_ {T}\in\mathbf{P}_{k}(T)\quad\forall T\in\mathcal{T}_{h}\right\},\] \[\Pi_{h} \coloneqq\left\{q_{h}\in\mathrm{C}(\overline{\Omega}):q_{h}|_{T} \in\mathrm{P}_{k-1}(T)\quad\forall T\in\mathcal{T}_{h}\right\}\cap\Pi,\] where \(\mathrm{P}_{k}(T)\) is the space of polynomials of degree less than or equal to \(k\) defined on \(T\). **Remark 3.1**.: _It is noted that \(\Pi_{h}\) is a subspace of \(\Pi\), but \(\mathrm{V}_{h}\) is not a subspace of \(\mathrm{V}\). In that sense, Nitsche's method can be considered a non-conforming finite element approximation._ ### Nitsche's Method The main objective of Nitsche's method [14, 15] is to impose boundary conditions weakly so that they hold only asymptotically, which for this work will hold for the Navier boundary condition \(\mathbf{u}\cdot\mathbf{n}=0\) only. As a result, the weak formulation with the Nitsche method can be expressed as follows: Find \((\mathbf{u}_{h},p_{h})\in\mathrm{V}_{h}\times\Pi_{h}\), such that \[\mathcal{A}_{h}\left[\left(\mathbf{u}_{h},p_{h}\right);\left( \mathbf{v}_{h},q_{h}\right)\right] =\mathcal{F}(\mathbf{v}_{h})\quad\forall\left(\mathbf{v}_{h},q_{h} \right)\in\mathrm{V}_{h}\times\Pi_{h}, \tag{22}\] with the forms are defined as \[\mathcal{A}_{h}\left[\left(\mathbf{u}_{h},p_{h}\right);\left( \mathbf{v}_{h},q_{h}\right)\right] \coloneqq\sum_{T\in\mathcal{T}_{h}}\bigg{(}\frac{\nu}{2}(D( \mathbf{u}_{h}),D(\mathbf{v}_{h}))+(\mathbf{u}_{h}\cdot\nabla\mathbf{u}_{h}, \mathbf{v}_{h})-(p_{h},\nabla\cdot\mathbf{v}_{h})-(q_{h},\nabla\cdot\mathbf{u }_{h})\bigg{)}\] \[+\sum_{E\in\mathcal{E}_{\mathrm{Nav}}}\bigg{(}-\int_{E}\mathbf{n} ^{t}(\nu D(\mathbf{u}_{h})-p_{h}I)\mathbf{n}(\mathbf{n}\cdot\mathbf{v}_{h}) ds-\int_{E}\mathbf{n}^{t}(\nu D(\mathbf{v}_{h})-q_{h}I)\mathbf{n}(\mathbf{n} \cdot\mathbf{u}_{h})ds\] \[+\int_{E}\beta\sum_{i}\left(\boldsymbol{\tau}^{i}\cdot\mathbf{v} _{h}\right)\left(\boldsymbol{\tau}^{i}\cdot\mathbf{u}_{h}\right)ds+\gamma\int_ {E}{h_{e}}^{-1}(\mathbf{u}_{h}\cdot\mathbf{n})(\mathbf{v}_{h}\cdot\mathbf{n}) ds\bigg{)},\] \[\mathcal{F}(\mathbf{v}_{h})\coloneqq\langle\mathbf{f},\mathbf{v}_{h}\rangle,\] and \(\gamma>0\) is a positive constant that needs to be chosen sufficiently large, as proved in Lemma H later. 
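For readers who wish to prototype the scheme, the following is a minimal sketch of the Nitsche form (22), assuming the legacy FEniCS interface (dolfin) is available. To keep it short, the whole boundary of a toy unit square is treated as \(\Gamma_{\mathrm{Nav}}\), the convective term is omitted (leaving the Stokes core of (22)), and the data and parameters \(\nu\), \(\beta\), \(\gamma\) are arbitrary; it is not the solver used in Section 6.

```python
# Minimal sketch (assumes legacy FEniCS / dolfin is installed); toy data, whole
# boundary treated as Gamma_Nav, convective term omitted for brevity.
from dolfin import (UnitSquareMesh, VectorElement, FiniteElement, MixedElement,
                    FunctionSpace, TrialFunctions, TestFunctions, FacetNormal,
                    CellDiameter, Constant, sym, grad, div, inner, dot, dx, ds,
                    assemble)

mesh = UnitSquareMesh(32, 32)
V_el = VectorElement("Lagrange", mesh.ufl_cell(), 2)   # Taylor-Hood velocity
Q_el = FiniteElement("Lagrange", mesh.ufl_cell(), 1)   # Taylor-Hood pressure
W = FunctionSpace(mesh, MixedElement([V_el, Q_el]))

(u, p) = TrialFunctions(W)
(v, q) = TestFunctions(W)

n = FacetNormal(mesh)
h = CellDiameter(mesh)                  # stands in for h_e in the gamma/h_e term
nu, beta, gamma = Constant(0.1), Constant(0.1), Constant(10.0)
f = Constant((0.0, 0.0))

D = lambda w: 2.0 * sym(grad(w))        # D(w) = grad w + (grad w)^T
t = lambda w: w - dot(w, n) * n         # tangential projection P_T w

# Volume terms of (22): viscous, pressure and incompressibility couplings
a = (0.5 * nu * inner(D(u), D(v)) - p * div(v) - q * div(u)) * dx

# Nitsche terms on Gamma_Nav: consistency, symmetry, friction and penalty
sigma_n = lambda w, r: nu * dot(dot(D(w), n), n) - r    # n^t (nu D(w) - r I) n
a += (- sigma_n(u, p) * dot(v, n)
      - sigma_n(v, q) * dot(u, n)
      + beta * dot(t(u), t(v))
      + gamma / h * dot(u, n) * dot(v, n)) * ds

L = dot(f, v) * dx

A, b = assemble(a), assemble(L)         # assembled only for illustration
print(A.size(0), b.size())
```

In a practical computation one would typically restrict the boundary terms to the facets marked as \(\Gamma_{\mathrm{Nav}}\) (for instance through a facet `MeshFunction` and a corresponding subdomain measure), add the convective term or its Picard/Newton linearization, and impose the Dirichlet condition on \(\Gamma_{D}\) strongly, in line with the analysis above.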
We can rewrite (22) as: Find \((\mathbf{u}_{h},p_{h})\in\mathrm{V}_{h}\times\Pi_{h}\), such that \[\begin{array}{rcl}\mathbf{A}_{h}(\mathbf{u}_{h},\mathbf{v}_{h})+\mathbf{B}_{ h}(\mathbf{v}_{h},p_{h})+\mathbf{c}(\mathbf{u}_{h};\mathbf{u}_{h},\mathbf{v}_{h})&= \mathcal{F}(\mathbf{v}_{h})\quad\forall\mathbf{v}_{h}\in\mathrm{V}_{h},\\ \mathbf{B}_{h}(\mathbf{u}_{h},q_{h})&=0\qquad\forall q_{h}\in\Pi_{h},\end{array} \tag{23}\] where \[\mathbf{A}_{h}(\mathbf{u}_{h},\mathbf{v}_{h}) \coloneqq\mathbf{a}(\mathbf{u}_{h},\mathbf{v}_{h})+\mathbf{a}_{r}^{ \partial}(\mathbf{u}_{h},\mathbf{v}_{h})+\mathbf{a}_{r}^{\partial}(\mathbf{u}_{ h},\mathbf{v}_{h})-\mathbf{a}_{c}^{\partial}(\mathbf{u}_{h},\mathbf{v}_{h})- \mathbf{a}_{c}^{\partial}(\mathbf{v}_{h},\mathbf{u}_{h}), \tag{24}\] \[\mathbf{B}_{h}(\mathbf{u}_{h},q_{h}) \coloneqq\mathbf{b}(\mathbf{u}_{h},q_{h})+\mathbf{b}^{\partial}( \mathbf{u}_{h},q_{h}),\] with forms defined so that \[\mathbf{a}(\mathbf{u}_{h},\mathbf{v}_{h}) \coloneqq\sum_{T\in\mathcal{T}_{h}}\frac{\nu}{2}(D(\mathbf{u}_{h} ),D(\mathbf{v}_{h})),\] \[\mathbf{b}(\mathbf{u}_{h},q_{h}) \coloneqq-\sum_{T\in\mathcal{T}_{h}}(\operatorname{div}\mathbf{u }_{h},q_{h}),\] \[\mathbf{b}^{\partial}(\mathbf{u}_{h},q_{h}) \coloneqq\sum_{E\in\mathcal{E}_{\text{Nav}}}\int_{E}q_{h}( \mathbf{n}\cdot\mathbf{u}_{h})ds,\] \[\mathbf{a}_{c}^{\partial}(\mathbf{u}_{h},\mathbf{v}_{h}) \coloneqq\sum_{E\in\mathcal{E}_{\text{Nav}}}\int_{E}\mathbf{n}^{ t}\nu D(\mathbf{u}_{h})\mathbf{n}(\mathbf{n}\cdot\mathbf{v}_{h})ds,\] \[\mathbf{a}_{r}^{\partial}(\mathbf{u}_{h},\mathbf{v}_{h}) \coloneqq\sum_{E\in\mathcal{E}_{\text{Nav}}}\int_{E}\beta\sum_{i} \left(\mathbf{\tau}^{i}\cdot\mathbf{u}_{h}\right)\left(\mathbf{\tau}^{i}\cdot\mathbf{v }_{h}\right)ds,\] \[\mathbf{a}_{\gamma}^{\partial}(\mathbf{u}_{h},\mathbf{v}_{h}) \coloneqq\sum_{E\in\mathcal{E}_{\text{Nav}}}\int_{E}\frac{\gamma}{ h_{e}}(\mathbf{u}_{h}\cdot\mathbf{n})(\mathbf{v}_{h}\cdot\mathbf{n})ds,\] \[\mathcal{F}(\mathbf{v}_{h}) \coloneqq\langle\mathbf{f},\mathbf{v}_{h}\rangle,\] \[\mathbf{c}(\mathbf{w}_{h};\mathbf{u}_{h},\mathbf{v}_{h}) \coloneqq\sum_{T\in\mathcal{T}_{h}}(\mathbf{w}_{h}\cdot\nabla \mathbf{u}_{h},\mathbf{v}_{h}). \tag{25}\] #### 3.1.1 Discrete stability properties In this section, we leverage well-known inverse and trace inequalities to establish two results: the ellipticity of \(\mathbf{A}_{h}\) and the inf-sup property \(\mathbf{B}_{h}\). **Lemma F**.: _Let \(\mathbf{v}_{h}\in\mathrm{V}_{h}\) then for each \(T\in\mathcal{T}_{h};\,l,m\in\mathbb{N}\), with \(0\leq m\leq l\), there exists a positive constant \(C_{4}\), independent of \(T\), such that_ \[\left|\mathbf{v}_{h}\right|_{,T}\leq C_{4}h_{T}^{m-l}\left|\mathbf{v}_{h} \right|_{m,T}.\] Proof.: [1, Lemma 12.1]. **Lemma G**.: _Let \(\mathbf{v}_{h}\in\mathrm{V}_{h}\) then for each \(T\in\mathcal{T}_{h},E\subset\partial T\), there exists a positive constant \(C_{5}\), independent of \(T\), such that_ \[\left\|\mathbf{v}_{h}\right\|_{0,E}\leq C_{5}h_{T}^{-\frac{1}{2}}\left\| \mathbf{v}_{h}\right\|_{0,T}.\] Proof.: [13]. We need to define the energy norm on \(\mathrm{V}_{h}\) as \[\left\|\mathbf{v}_{h}\right\|_{1,h}^{2}\coloneqq\left\|\nabla\mathbf{v}_{h} \right\|_{0,\Omega}^{2}+\sum_{E\in\mathcal{E}_{\text{Nav}}}\frac{1}{h_{e}} \left\|\mathbf{v}_{h}\cdot\mathbf{n}\right\|_{0,E}^{2}. \tag{26}\] We highlight that this norm, in contrast to the one found in [1], contains only the boundary term on the normal component. 
This happens because we consider only the Navier boundary condition weakly; indeed, our proof would remain mostly unmodified if we were to consider the Dirichlet boundary conditions weakly as well. We would only need to add the corresponding boundary norm for the velocity to this energy norm, and all estimates would remain the same. We now state the continuity of the discrete bilinear forms in terms of this norm.

**Theorem 3**.: _There exist positive constants \(C_{7}\), \(C_{8}\), and \(C_{9}\), independent of \(h\), such that_
\[\left|\mathbf{A}_{h}(\mathbf{u}_{h},\mathbf{v}_{h})\right|\leq C_{7}\|\mathbf{u}_{h}\|_{1,h}\|\mathbf{v}_{h}\|_{1,h}\qquad\forall\mathbf{u}_{h},\mathbf{v}_{h}\in\mathrm{V}_{h},\]
\[\left|\mathbf{c}\left(\mathbf{w}_{h};\mathbf{u}_{h},\mathbf{v}_{h}\right)\right|\leq C_{8}\left\|\mathbf{w}_{h}\right\|_{1,h}\left\|\mathbf{u}_{h}\right\|_{1,h}\|\mathbf{v}_{h}\|_{1,h}\qquad\forall\mathbf{w}_{h},\mathbf{u}_{h},\mathbf{v}_{h}\in\mathrm{V}_{h},\]
\[\left|\mathbf{B}_{h}(\mathbf{v}_{h},q_{h})\right|\leq C_{9}\|\mathbf{v}_{h}\|_{1,h}\|q_{h}\|_{0,\Omega}\qquad\forall\mathbf{v}_{h}\in\mathrm{V}_{h},\,q_{h}\in\Pi_{h}.\]

Proof.: The above inequalities are a direct consequence of Lemmas F and G, the Sobolev embedding (3) with \(q=4\), and the Cauchy-Schwarz and Holder inequalities. Moreover, it can be seen that
\[C_{7}\sim\frac{\nu}{2}+\beta+\gamma+\nu C_{5},\qquad C_{8}\sim C_{\mathrm{Sob}}(4,n),\qquad C_{9}\sim 1+C_{5},\]
i.e., the above constants can be bounded in terms of the trace inequality constant \(C_{5}\) and the parameters \(\nu\), \(\gamma\) and \(\beta\) only.

The discrete kernel of \(\mathbf{B}_{h}\) is defined by
\[\mathrm{Z}_{h}\coloneqq\left\{\mathbf{v}_{h}\in\mathrm{V}_{h}:\mathbf{B}_{h}\left(\mathbf{v}_{h},p_{h}\right)=0,\ \forall p_{h}\in\Pi_{h}\right\}.\]
It can be equivalently written as
\[\mathrm{Z}_{h}\coloneqq\left\{\mathbf{v}_{h}\in\mathrm{V}_{h}:\sum_{T\in\mathcal{T}_{h}}\int_{T}p_{h}\nabla\cdot\mathbf{v}_{h}\,d\mathbf{x}-\sum_{E\in\mathcal{E}_{\mathrm{Nav}}}\int_{E}p_{h}(\mathbf{n}\cdot\mathbf{v}_{h})\,ds=0\quad\forall p_{h}\in\Pi_{h}\right\}. \tag{27}\]
The following lemma establishes the ellipticity of \(\mathbf{A}_{h}\) on \(\mathrm{Z}_{h}\).
**Lemma 1**.: _There exist positive constants \(\gamma_{0},C_{0}\) and \(C_{S}=C_{S}\left(\beta,\nu\right)\), independent on \(h\), such that_ \[\mathbf{A}_{h}(\mathbf{v}_{h},\mathbf{v}_{h})\geq C_{S}\|\mathbf{v}_{h}\|_{1, h}\quad\forall\mathbf{v}_{h}\in\mathrm{Z}_{h}, \tag{28}\] _where \(C_{S}=\min\{\xi-\nu C_{5}^{2}C_{0},\gamma-\frac{\nu}{C_{0}}\}\) with \(\gamma\geq\gamma_{0}>\frac{\nu}{C_{0}}\), \(C_{0}<\frac{\xi}{C_{5}^{2}\nu}\), and \(\xi\) is the coercivity constant defined in the continuous case._ Proof.: For the proof, we use (24) to obtain \[\mathbf{A}_{h}(\mathbf{v}_{h},\mathbf{v}_{h}) =\frac{\nu}{2}(D(\mathbf{v}_{h}),D(\mathbf{v}_{h}))+\sum_{E\in \mathcal{E}_{\mathrm{Nov}}}\int_{E}\beta\sum_{i}\left(\mathbf{\tau}^{i}\cdot \mathbf{v}_{h}\right)\left(\mathbf{\tau}^{i}\cdot\mathbf{v}_{h}\right)ds\] \[-\sum_{E\in\mathcal{E}_{\mathrm{Nov}}}2\int_{E}\mathbf{n}^{t}\nu D (\mathbf{v}_{h})\mathbf{n}(\mathbf{n}\cdot\mathbf{v}_{h})ds+\frac{\gamma}{h_{ e}}\int_{E}(\mathbf{v}_{h}\cdot\mathbf{n})(\mathbf{v}_{h}\cdot\mathbf{n})ds.\] Using Lemma 1, Lemma 2, Cauchy Schwarz and Holder's inequalities, the following estimate can be established \[\mathbf{A}_{h}(\mathbf{v}_{h},\mathbf{v}_{h}) \geq\xi\|\nabla\mathbf{v}_{h}\|_{0,\Omega}^{2}-2\nu\sum_{E\in \mathcal{E}_{\mathrm{Nov}}}\|\mathbf{v}_{h}\cdot\mathbf{n}\|_{0,E}\|D\mathbf{ v}_{h}\|_{0,E}+\frac{\gamma}{h_{e}}\sum_{E\in\mathcal{E}_{\mathrm{Nov}}}\| \mathbf{v}_{h}\cdot\mathbf{n}\|_{0,E}^{2}\] \[\geq\xi\|\nabla\mathbf{v}_{h}\|_{0,\Omega}^{2}-2\nu\sum_{E\in \mathcal{E}_{\mathrm{Nov}}}h_{e}^{-1/2}\|\mathbf{v}_{h}\cdot\mathbf{n}\|_{0, E}h_{e}^{1/2}\|D\mathbf{v}_{h}\mathbf{n}\|_{0,E}+\frac{\gamma}{h_{e}}\sum_{E\in \mathcal{E}_{\mathrm{Nov}}}\|\mathbf{v}_{h}\cdot\mathbf{n}\|_{0,E}^{2}\] \[\geq\xi\|\nabla\mathbf{v}_{h}\|_{0,\Omega}^{2}-2\nu\sum_{E\in \mathcal{E}_{\mathrm{Nov}}}\left(\frac{h_{e}C_{0}}{2}\|D\mathbf{v}_{h}\|_{0,E} ^{2}+\frac{h_{e}^{-1}}{2C_{0}}\|\mathbf{v}_{h}\cdot\mathbf{n}\|_{0,E}^{2} \right)+\frac{\gamma}{h_{e}}\sum_{E\in\mathcal{E}_{\mathrm{Nov}}}\|\mathbf{v }_{h}\cdot\mathbf{n}\|_{0,E}^{2}\] \[\geq\xi\|\nabla\mathbf{v}_{h}\|_{0,\Omega}^{2}-\nu C_{5}^{2}C_{0} \sum_{K\in\mathcal{T}_{h}}\|D\mathbf{v}_{h}\|_{0,K}^{2}-\frac{\nu}{C_{0}\gamma} \sum_{E\in\mathcal{E}_{\mathrm{Nov}}}\frac{\gamma}{h_{e}}\|\mathbf{v}_{h} \cdot\mathbf{n}\|_{0,E}^{2}+\frac{\gamma}{h_{e}}\sum_{E\in\mathcal{E}_{\mathrm{ Nov}}}\|\mathbf{v}_{h}\cdot\mathbf{n}\|_{0,E}^{2}\] \[\geq\xi\|\nabla\mathbf{v}_{h}\|_{0,\Omega}^{2}-\nu C_{5}^{2}C_{0} \|\nabla\mathbf{v}_{h}\|_{0,\Omega}^{2}+\left(1-\frac{\nu}{C_{0}\gamma}\right) \sum_{E\in\mathcal{E}_{\mathrm{Nov}}}\frac{\gamma}{h_{e}}\|\mathbf{v}_{h} \cdot\mathbf{n}\|_{0}^{2}\] \[\geq C_{S}\|\mathbf{v}_{h}\|_{1,h}.\] By selecting the positive parameter \(C_{0}\) that satisfies \(C_{0}<\frac{\xi}{C_{5}^{2}\nu}\), we ensure that \(\left(\xi-\nu C_{5}^{2}C_{0}\right)>0\). Additionally, we define \(C_{S}=\min\{\xi-\nu C_{5}^{2}C_{0},\gamma-\frac{\nu}{C_{0}}\}\) with \(\gamma\geq\gamma_{0}>\frac{\nu}{C_{0}}\). Next, we proceed to derive the discrete version of Lemma 1. **Remark 3.2**.: _The inf-sup condition is associated with the construction of inf-sup stable elements in incompressible flow modeling. However, this condition is not automatically fulfilled and needs to be verified for specific choices of the approximation spaces \(\mathrm{V}_{h}\) and \(\Pi_{h}\). It is worth mentioning that the Taylor-Hood elements are inf-sup stable, meaning they meet the necessary condition for stability. 
In fact, the proof can be demonstrated using any pair of inf-sup stable elements._ **Lemma I**.: _There exists \(\hat{\theta}>0\) independent of \(h\) such that_ \[\sup_{\mathbf{0}\neq\mathbf{v}_{h}\in\mathrm{V}_{h}}\frac{|\mathbf{B}_{h}( \mathbf{v}_{h},q_{h})|}{\|\mathbf{v}_{h}\|_{1,h}}\geq\hat{\theta}\|q_{h}\|_{0, \Omega}\quad\forall q\in\Pi_{h}. \tag{29}\] Proof.: Consider the Taylor-Hood Finite element spaces \(\mathrm{V}_{h}\times\Pi_{h}\) such that the discrete inf-sup holds for the bilinear form \(\mathbf{b}(\mathbf{v}_{h},q_{h})\), see [1, Section 5.5] i.e. there exist a positive constant \(\hat{\theta}>0\), independent of h such that \[\sup_{\mathbf{0}\neq\mathbf{v}_{h}\in\mathrm{V}_{h}}\frac{|\mathbf{b}( \mathbf{v}_{h},q_{h})|}{\|\mathbf{v}_{h}\|_{1,\Omega}}\geq\hat{\theta}\|q_{h }\|_{0,\Omega}\quad\forall q_{h}\in\Pi_{h}.\] Consider the discrete space of strongly imposed Navier conditions \[\mathrm{V}_{h,0}=\{\mathbf{v}_{h}\in\mathrm{V}_{h}:\mathbf{v}_{h}\cdot \mathbf{n}=0\,\mathrm{on}\,\Gamma_{\mathrm{nav}}\}.\] This space naturally yields that \(b^{\beta}(\mathbf{v}_{h},q_{h})=0\) for all \(\mathbf{v}_{h}\) in \(\mathrm{V}_{h,0}\), so we obtain \[\sup_{\mathbf{0}\neq\mathbf{v}_{h}\in\mathrm{V}_{h,0}}\frac{|\mathbf{B}_{h}( \mathbf{v}_{h},q_{h})|}{\|\mathbf{v}_{h}\|_{1,h}}=\sup_{\mathbf{0}\neq\mathbf{ v}_{h}\in\mathrm{V}_{h,0}}\frac{|\mathbf{b}(\mathbf{v}_{h},q_{h})|}{\| \mathbf{v}_{h}\|_{1,h}}=\sup_{\mathbf{0}\neq\mathbf{v}_{h}\in\mathrm{V}_{h,0} }\frac{|\mathbf{b}(\mathbf{v}_{h},q_{h})|}{\|\mathbf{v}_{h}\|_{1,\Omega}}\geq \hat{\theta}\|q_{h}\|_{0,\Omega}\quad\forall q_{h}\in\Pi_{h}\] in virtue of the classical inf-sup condition. Now, use this space for a lower bound i.e. \[\sup_{\mathbf{0}\neq\mathbf{v}_{h}\in\mathrm{V}_{h}}\frac{|\mathbf{B}_{h}( \mathbf{v}_{h},q_{h})|}{\|\mathbf{v}_{h}\|_{1,h}}\geq\sup_{\mathbf{0}\neq \mathbf{v}_{h}\in\mathrm{V}_{h,0}}\frac{|\mathbf{B}_{h}(\mathbf{v}_{h},q_{h})|} {\|\mathbf{v}_{h}\|_{1,h}}\geq\hat{\theta}\|q_{h}\|_{0,\Omega}\quad\forall q_ {h}\in\Pi_{h}.\] This concludes the proof. Next, We aim to establish the well-posedness of problem (23). We will utilize a fixed-point operator linked to a linearized form of the problem and demonstrate that this operator has a unique fixed-point. Equivalently, we can prove the well-posedness of problem (23) using the Banach fixed-point theorem. #### 3.1.2 The discrete fixed-point operator and its well-posedness Let us introduce the set \[\mathbf{K}_{\mathbf{h}}=\left\{\mathbf{v}_{h}\in\mathrm{V}_{h}:\|\mathbf{v}_ {h}\|_{1,h}\leq\hat{\alpha}^{-1}\|\mathbf{f}\|_{\mathrm{V}^{\prime}}\right\}, \tag{30}\] with \(\hat{\alpha}>0\) being the constant defined below in Theorem 4. 
Now, let us define the discrete fixed point operator as
\[\mathcal{J}_{h}:\mathbf{K}_{h}\rightarrow\mathbf{K}_{h},\quad\mathbf{w}_{h}\rightarrow\mathcal{J}_{h}\left(\mathbf{w}_{h}\right)=\mathbf{u}_{h},\]
where, given \(\mathbf{w}_{h}\in\mathbf{K}_{h}\), \(\mathbf{u}_{h}\) represents the first component of the solution of the linearized version of problem (23): Find \(\left(\mathbf{u}_{h},p_{h}\right)\in\mathrm{V}_{h}\times\Pi_{h}\) such that
\[\mathbf{A}_{h}(\mathbf{u}_{h},\mathbf{v}_{h})+\mathbf{B}_{h}(\mathbf{v}_{h},p_{h})+\mathbf{c}(\mathbf{w}_{h};\mathbf{u}_{h},\mathbf{v}_{h})=\mathcal{F}(\mathbf{v}_{h})\quad\forall\mathbf{v}_{h}\in\mathrm{V}_{h}, \tag{31}\]
\[\mathbf{B}_{h}(\mathbf{u}_{h},q_{h})=0\qquad\forall q_{h}\in\Pi_{h}.\]
Based on the above, we can establish the following relation:
\[\mathcal{J}_{h}\left(\mathbf{u}_{h}\right)=\mathbf{u}_{h}\Leftrightarrow(\mathbf{u}_{h},p_{h})\in\mathrm{V}_{h}\times\Pi_{h}\quad\text{satisfies (23)}. \tag{32}\]
In order to guarantee the well-posedness of the discrete problem (23), it is enough to demonstrate the existence of a unique fixed point of \(\mathcal{J}_{h}\) within the set \(\mathbf{K}_{h}\). However, before delving into the analysis of solvability, we first need to establish the well-definedness of the operator \(\mathcal{J}_{h}\). Let us introduce the bilinear form
\[\mathcal{C}_{h}\left[(\mathbf{u}_{h},p_{h});(\mathbf{v}_{h},q_{h})\right]=\mathbf{A}_{h}(\mathbf{u}_{h},\mathbf{v}_{h})+\mathbf{B}_{h}(\mathbf{u}_{h},q_{h})+\mathbf{B}_{h}(\mathbf{v}_{h},p_{h}). \tag{33}\]

**Theorem 4**.: _There exists a positive constant \(\hat{\alpha}\) such that_
\[\sup_{\mathbf{0}\neq\left(\mathbf{v}_{h},q_{h}\right)\in\mathrm{V}_{h}\times\Pi_{h}}\frac{\mathcal{C}_{h}\left[\left(\mathbf{u}_{h},p_{h}\right);\left(\mathbf{v}_{h},q_{h}\right)\right]}{\|\left(\mathbf{v}_{h},q_{h}\right)\|}\geq\hat{\alpha}\left\|\left(\mathbf{u}_{h},p_{h}\right)\right\|\quad\forall\left(\mathbf{u}_{h},p_{h}\right)\in\mathrm{V}_{h}\times\Pi_{h},\]
_with_
\[\hat{\alpha}=\frac{C_{S}\hat{\theta}}{2C_{S}+\hat{\theta}+1},\]
_where \(C_{S}\) and \(\hat{\theta}\) are the coercivity and inf-sup stability constants._

Proof.: Owing to Theorem 3, it is clear that \(\mathcal{C}_{h}\left[\cdot;\cdot\right]\) is bounded. Moreover, from Lemma H, Lemma I, and [1, Proposition 2.36], it is not difficult to see that the above inf-sup condition holds.

Now, we are in a position to establish the well-posedness of \(\mathcal{J}_{h}\).
**Theorem 5**.: _Assume that_ \[\frac{2}{\hat{\alpha}^{2}}\|\mathbf{f}\|_{\mathrm{V}^{\prime}}\leq 1, \tag{34}\] _Then, given \(\mathbf{w}_{h}\in\mathbf{K}_{h}\), there exists a unique \(\mathbf{u}_{h}\in\mathbf{K}_{h}\) such that \(\mathcal{J}_{h}\left(\mathbf{w}_{h}\right)=\mathbf{u}_{h}\)._ Proof.: Given \(\mathbf{w}_{h}\in\mathbf{K}_{h}\), we begin by defining the bilinear form: \[\mathcal{A}_{\mathbf{w}_{h}}\left[\left(\mathbf{u}_{h},p_{h}\right);\left( \mathbf{v}_{h},q_{h}\right)\right]:=\mathcal{C}_{h}\left[\left(\mathbf{u}_{h},p_{h}\right);\left(\mathbf{v}_{h},q_{h}\right)\right]+\mathbf{c}\left( \mathbf{w}_{h};\mathbf{u}_{h},\mathbf{v}_{h}\right), \tag{35}\] where \(\mathcal{C}_{h}\) and \(\mathbf{c}\) are the forms defined in Theorem 4 and (25), respectively, that is \[\mathcal{A}_{\mathbf{w}_{h}}\left[\left(\mathbf{u}_{h},p_{h}\right);\left( \mathbf{v}_{h},q_{h}\right)\right]=\mathbf{A}_{h}(\mathbf{u}_{h},\mathbf{v}_{ h})+\mathbf{B}_{h}(\mathbf{u}_{h},q_{h})+\mathbf{B}_{h}(\mathbf{v}_{h},q_{h})+ \mathbf{c}\left(\mathbf{w}_{h};\mathbf{u}_{h},\mathbf{v}_{h}\right).\] Then, problem (24) can be rewritten equivalently as: Find \(\left(\mathbf{u}_{h},p_{h}\right)\in\mathrm{V}_{h}\times\Pi_{h}\), such that \[\mathcal{A}_{\mathbf{w}_{h}}\left[\left(\mathbf{u}_{h},p_{h}\right);\left( \mathbf{v}_{h},q_{h}\right)\right]=\mathcal{F}(\mathbf{v}_{h})\quad\forall \left(\mathbf{v}_{h},q_{h}\right)\in\mathrm{V}_{h}\times\Pi_{h}. \tag{36}\] First, we establish the well-posedness of \(\mathcal{J}\) in order to demonstrate the well posedness of problem (36) using the Banach Necas Babuska theorem [1, Theorem 2.6]. Consider \(\left(\mathbf{u}_{h},p_{h}\right)\), \(\left(\hat{\mathbf{v}}_{h},\hat{q}_{h}\right)\in\mathrm{V}_{h}\times\Pi_{h}\) with \(\left(\hat{\mathbf{v}}_{h},\hat{q}_{h}\right)\neq 0\), from Theorem 3 we can observe that \[\sup_{\mathbf{0}\neq\left(\mathbf{v}_{h},p_{h}\right)\in\mathrm{V }_{h}\times\Pi_{h}}\frac{\mathcal{A}_{\mathbf{w}_{h}}\left[\left(\mathbf{u}_{h },p_{h}\right);\left(\mathbf{v}_{h},q_{h}\right)\right]}{\|\left(\hat{\mathbf{v }}_{h},\hat{q}_{h}\right)\|} \geq\frac{|\mathcal{C}_{h}\left[\left(\mathbf{u}_{h},p_{h}\right); \left(\hat{\mathbf{v}}_{h},\hat{q}_{h}\right)\right]|}{\left\|\left(\hat{ \mathbf{v}}_{h},\hat{q}_{h}\right)\right\|}-\frac{|\mathbf{c}\left(\mathbf{w}_{ h};\mathbf{u}_{h},\hat{\mathbf{v}}_{h}\right)|}{\|\left(\hat{\mathbf{v}}_{h}, \hat{q}_{h}\right)\|}\] \[\geq\frac{|\mathcal{C}_{h}\left[\left(\mathbf{u}_{h},p_{h}\right); \left(\hat{\mathbf{v}}_{h},\hat{q}_{h}\right)\right]|}{\|\left(\hat{\mathbf{v }}_{h},\hat{q}_{h}\right)\|}-\|\mathbf{w}_{h}\|_{1,h}\|(\mathbf{u}_{h},p_{h})\|,\] which together with Theorem 4 and the fact that \(\left(\hat{\mathbf{v}}_{h},\hat{p}_{h}\right)\) is arbitrary, implies \[\sup_{\mathbf{0}\neq\left(\mathbf{v}_{h},p_{h}\right)\in\mathrm{V }_{h}\times\Pi_{h}}\frac{\mathcal{A}_{\mathbf{w}_{h}}\left[\left(\mathbf{u}_{h },p_{h}\right);\left(\mathbf{v}_{h},q_{h}\right)\right]}{\|\left(\mathbf{v}_{h },p_{h}\right)\|}\geq\left(\hat{\alpha}-\|\mathbf{w}_{h}\|_{1,h}\right)\|( \mathbf{u}_{h},p_{h})\|. 
\tag{37}\] Hence, from the definition of set \(\mathbf{K}_{h}\) see (30), and assumption (34), we easily get \[\|\mathbf{w}_{h}\|_{1,h}\leq\frac{1}{\hat{\alpha}}\|\mathbf{f}\|_{\mathrm{V}^{ \prime}}\leq\frac{\hat{\alpha}}{2}, \tag{38}\] and then, combining (37) and (38), we obtain \[\sup_{\mathbf{0}\neq\left(\mathbf{v}_{h},p_{h}\right)\in\mathrm{V }_{h}\times\Pi_{h}}\frac{\mathcal{A}_{\mathbf{w}_{h}}\left[\left(\mathbf{u}_{h },p_{h}\right);\left(\mathbf{v}_{h},q_{h}\right)\right]}{\|\left(\mathbf{v}_{h },q_{h}\right)\|}\geq\frac{\hat{\alpha}}{2}\|(\mathbf{u}_{h},p_{h})\|\quad \forall(\mathbf{u}_{h},p_{h})\in\mathrm{V}_{h}\times\Pi_{h}. \tag{39}\] On the other hand, for a given \(\left(\mathbf{u}_{h},q_{h}\right)\in\mathrm{V}_{h}\times\Pi_{h}\), we observe that \[\sup_{\mathbf{0}\neq\left(\mathbf{v}_{h},q_{h}\right)\in\mathrm{ V}_{h}\times\Pi_{h}}\mathcal{A}_{\mathbf{w}_{h}}\left[\left(\mathbf{v}_{h},q_{h}\right); \left(\mathbf{u}_{h},p_{h}\right)\right] \geq\sup_{\mathbf{0}\neq\left(\mathbf{v}_{h},q_{h}\right)\in \mathrm{V}_{h}\times\Pi_{h}}\frac{\mathcal{A}_{\mathbf{w}_{h}}\left[\left( \mathbf{v}_{h},q_{h}\right);\left(\mathbf{u}_{h},p_{h}\right)\right]}{\| \left(\mathbf{v}_{h},q_{h}\right)\|}\] \[=\sup_{\mathbf{0}\neq\left(\mathbf{v}_{h},q_{h}\right)\in \mathrm{V}_{h}\times\Pi_{h}}\frac{\mathcal{C}_{h}\left[\left(\mathbf{v}_{h},q_{h} \right);\left(\mathbf{u}_{h},q_{h}\right)\right]+\mathbf{c}\left(\mathbf{w}_{h };\mathbf{v}_{h},\mathbf{u}_{h}\right)}{\|\left(\mathbf{v}_{h},q_{h}\right)\|},\] from which, \[\sup_{\mathbf{0}\neq(\mathbf{v}_{h},q_{h})\in\mathrm{V}_{h}\times \Pi_{h}}\mathcal{A}_{\mathbf{w}_{h}}\left[(\mathbf{v}_{h},q_{h});(\mathbf{u}_{h },q_{h})\right] \geq\sup_{\mathbf{0}\neq(\mathbf{v}_{h},q_{h})\in\mathrm{V}_{h} \times\Pi_{h}}\frac{|\mathcal{C}_{h}\left[(\mathbf{v}_{h},q_{h});(\mathbf{u}_{h },p_{h})\right]+\mathbf{c}(\mathbf{w}_{h};\mathbf{v}_{h},\mathbf{u}_{h})|}{ \|(\mathbf{v}_{h},q_{h})\|}\] \[\geq\sup_{\mathbf{0}\neq(\mathbf{v}_{h},q_{h})\in\mathrm{V}_{h} \times\Pi_{h}}\frac{|\mathcal{C}_{h}\left[(\mathbf{v}_{h},q_{h});(\mathbf{u}_{h },p_{h})\right]|}{\|(\mathbf{v}_{h},q_{h})\|}-\frac{|\mathbf{c}(\mathbf{w}_{h };\mathbf{v}_{h},\mathbf{u}_{h})|}{\|(\mathbf{v}_{h},q_{h})\|},\] for all \(\mathbf{0}\neq(\mathbf{v}_{h},q_{h})\in\mathrm{V}_{h}\times\Pi_{h}\), which together with Theorem 3, implies \[\sup_{\mathbf{0}\neq(\mathbf{v}_{h},q_{h})\in\mathrm{V}_{h} \times\Pi_{h}}\mathcal{A}_{\mathbf{w}_{h}}\left[(\mathbf{v}_{h},q_{h});( \mathbf{u}_{h},p_{h})\right] \geq\sup_{\mathbf{0}\neq(\mathbf{v}_{h},q_{h})\in\mathrm{V}_{h} \times\Pi_{h}}\frac{\mathcal{C}_{h}\left[(\mathbf{v}_{h},q_{h});(\mathbf{u}_{h },p_{h})\right]}{\|(\mathbf{v}_{h},q_{h})\|}-\|\mathbf{w}_{h}\|_{1,h}\|( \mathbf{u}_{h},p_{h})\|. \tag{40}\] Therefore, using the fact that \(\mathcal{C}_{h}\left[\cdot;\,\cdot\right]\) is symmetric, from the inequality in Theorem 4 and (40) we obtain \[\sup_{\mathbf{0}\neq(\mathbf{v}_{h},q_{h})\in\mathrm{V}_{h} \times\Pi_{h}}\mathcal{A}_{\mathbf{w}_{h}}\left[(\mathbf{v}_{h},q_{h});( \mathbf{u}_{h},p_{h})\right] \geq\hat{\alpha}\|(\mathbf{u}_{h},p_{h})\|-\|\mathbf{w}_{h}\|_{1,h}\|(\mathbf{ u}_{h},p_{h})\|,\] which combined with (38), yields \[\sup_{\mathbf{0}\neq(\mathbf{v}_{h},q_{h})\in\mathrm{V}_{h} \times\Pi_{h}}\mathcal{A}_{\mathbf{w}_{h}}\left[(\mathbf{v}_{h},q_{h});( \mathbf{u}_{h},p_{h})\right] \geq\frac{\hat{\alpha}}{2}\|(\mathbf{u}_{h},p_{h})\|>0\quad\forall(\mathbf{u}_ {h},p_{h})\in\mathrm{V}_{h}\times\Pi_{h},(\mathbf{u}_{h},p_{h})\neq 0. 
\tag{41}\] By examining (39) and (41), we can deduce that \(\mathcal{A}_{\mathbf{w}_{h}}\left(\cdot,\cdot\right)\) satisfies the conditions of the Banach Necas Babuska theorem [1, Theorem 2.6]. This guarantees the existence of a unique solution \((\mathbf{u}_{h},p_{h})\in\mathrm{V}_{h}\times\Pi_{h}\) to (24), or equivalently, the existence of a unique \(\mathbf{u}_{h}\in\mathrm{V}_{h}\) such that \(\mathcal{J}_{h}(\mathbf{w}_{h})=\mathbf{u}_{h}\). Furthermore, from (39) and (36) we derive the following inequality: \[\|\mathbf{u}_{h}\|_{1,h}\leq\|(\mathbf{u}_{h},p_{h})\|\leq\frac{1 }{\hat{\alpha}}\|\mathbf{f}\|_{\mathrm{V}^{\prime}}. \tag{42}\] This concludes the proof by showing that \(\mathbf{u}_{h}\in\mathbf{K}_{h}\). #### 3.1.3 Well-posedness of the discrete problem The subsequent theorem establishes the well-posedness of Nitsche's scheme (24). **Theorem 6**.: _Let \(\mathbf{f}\in\mathrm{V}^{\prime}\) such that_ \[\frac{2}{\hat{\alpha}^{2}}\|\mathbf{f}\|_{\mathrm{V}^{\prime}} \leq 1. \tag{43}\] _Then, there exists a unique \((\mathbf{u}_{h},p_{h})\in\mathrm{V}_{h}\times\Pi_{h}\) solution to (23). In addition, there exists \(C>0\), independent of \(h\), such that_ \[\|\mathbf{u}_{h}\|_{1,h}+\|p_{h}\|_{0,\Omega}\leq C\|\mathbf{f} \|_{\mathrm{V}^{\prime}}. \tag{44}\] Proof.: According to the relations given in (32), our aim is to establish well-posedness of (23). This can be accomplished by demostrating that \(\mathcal{J}_{h}\) possesses a unique fixed point in \(\mathbf{K}_{h}\) using Banach's fixed point theorem. The validity of Assumption (42) as shown in Theorem 5, ensures the well-definedness of \(\mathcal{J}_{h}\). Now, let \(\mathbf{w}_{h1}\), \(\mathbf{w}_{h2}\), \(\mathbf{u}_{h1}\), \(\mathbf{u}_{h2}\in\mathbf{K}_{h}\), be such that \(\mathbf{u}_{h1}=\mathcal{J}_{h}\left(\mathbf{w}_{h1}\right)\) and \(\mathbf{u}_{h2}=\mathcal{J}_{h}\left(\mathbf{w}_{h2}\right)\). By employing the definition of \(\mathcal{J}\) and (36), we can conclude the existence of unique \(p_{h1},p_{h2}\in L^{2}(\Omega)\), satisfies the following equations: \[\mathcal{A}_{\mathbf{w}_{h1}}\left[\left(\mathbf{u}_{h1},p_{h1} \right);(\mathbf{v}_{h},q_{h})\right] =\mathcal{F}(\mathbf{v}_{h}),\quad\text{and}\quad\mathcal{A}_{\mathbf{w}_{h2}} \left[\left(\mathbf{u}_{h2},p_{h2}\right);(\mathbf{v}_{h},q_{h})\right]= \mathcal{F}(\mathbf{v}_{h})\quad\forall(\mathbf{v}_{h},q_{h})\in\mathrm{V}_{h }\times\Pi_{h}.\] By adding and subtracting appropriate terms, we can derive the following: \[\mathcal{A}_{\mathbf{w}_{h1}}\left[\left(\mathbf{u}_{h1}-\mathbf{u }_{h2};p_{h1}-p_{h2}\right),(\mathbf{v}_{h},q_{h})\right] =-\mathbf{c}\left(\mathbf{w}_{h1}-\mathbf{w}_{h2};\mathbf{u}_{h2}, \mathbf{v}_{h}\right)\quad\forall(\mathbf{v}_{h},q_{h})\in\mathrm{V}_{h}\times \Pi_{h}. 
\tag{45}\]

Given that \(\mathbf{w}_{h1}\in\mathbf{K}_{h}\) and using (45), (39), and Theorem 3, we can deduce:

\[\frac{\hat{\alpha}}{2}\left\|\mathbf{u}_{h1}-\mathbf{u}_{h2}\right\|_{1,h}\leq\sup_{\mathbf{0}\neq(\mathbf{v}_{h},q_{h})\in\mathrm{V}_{h}\times\Pi_{h}}\frac{\mathcal{A}_{\mathbf{w}_{h1}}\left[\left(\mathbf{u}_{h1}-\mathbf{u}_{h2},p_{h1}-p_{h2}\right);(\mathbf{v}_{h},q_{h})\right]}{\|(\mathbf{v}_{h},q_{h})\|}=\sup_{\mathbf{0}\neq(\mathbf{v}_{h},q_{h})\in\mathrm{V}_{h}\times\Pi_{h}}\frac{-\mathbf{c}\left(\mathbf{w}_{h1}-\mathbf{w}_{h2};\mathbf{u}_{h2},\mathbf{v}_{h}\right)}{\|(\mathbf{v}_{h},q_{h})\|}\leq\left\|\mathbf{w}_{h1}-\mathbf{w}_{h2}\right\|_{1,h}\left\|\mathbf{u}_{h2}\right\|_{1,h},\]

which, together with the fact that \(\mathbf{u}_{h2}\in\mathbf{K}_{h}\), implies

\[\|\mathbf{u}_{h1}-\mathbf{u}_{h2}\|_{1,h}\leq\frac{1}{\hat{\alpha}}\|\mathbf{f}\|_{\mathrm{V}^{\prime}}\left\|\mathbf{w}_{h1}-\mathbf{w}_{h2}\right\|_{1,h},\]

and hence, in view of (38),

\[\left\|\mathbf{u}_{h1}-\mathbf{u}_{h2}\right\|_{1,h}\leq\frac{\hat{\alpha}}{2}\left\|\mathbf{w}_{h1}-\mathbf{w}_{h2}\right\|_{1,h}.\]

Owing to the smallness assumption (43), \(\mathcal{J}_{h}\) is therefore a contraction mapping. Now, to establish the estimate (44), let \(\mathbf{u}_{h}\in\mathbf{K}_{h}\) be the unique fixed point of \(\mathcal{J}_{h}\), so that \((\mathbf{u}_{h},p_{h})\in\mathrm{V}_{h}\times\Pi_{h}\) is the unique solution of (23). By the definition of \(\mathbf{K}_{h}\), \(\mathbf{u}_{h}\) satisfies

\[\|\mathbf{u}_{h}\|_{1,h}\leq\hat{\alpha}^{-1}\|\mathbf{f}\|_{\mathrm{V}^{\prime}}.\]

Consequently, applying (39) to \(\mathcal{A}_{\mathbf{u}_{h}}\), recalling the definition of \(\mathcal{A}_{\mathbf{u}_{h}}\) given in (35), and using the fact that \((\mathbf{u}_{h},p_{h})\) satisfies (23), we obtain

\[\|p_{h}\|_{0,\Omega}\leq\|(\mathbf{u}_{h},p_{h})\|\leq\frac{2}{\hat{\alpha}}\sup_{\mathbf{0}\neq(\mathbf{v}_{h},q_{h})\in\mathrm{V}_{h}\times\Pi_{h}}\frac{\mathcal{A}_{\mathbf{u}_{h}}\left[(\mathbf{u}_{h},p_{h});(\mathbf{v}_{h},q_{h})\right]}{\|(\mathbf{v}_{h},q_{h})\|}=\frac{2}{\hat{\alpha}}\sup_{\mathbf{0}\neq(\mathbf{v}_{h},q_{h})\in\mathrm{V}_{h}\times\Pi_{h}}\frac{\mathcal{F}(\mathbf{v}_{h})}{\|(\mathbf{v}_{h},q_{h})\|},\]

and thus

\[\|p_{h}\|_{0,\Omega}\leq\frac{2}{\hat{\alpha}}\|\mathbf{f}\|_{\mathrm{V}^{\prime}}.\]

Combining the last bound with the velocity estimate above yields (44), which concludes the proof.

Let \(\mathrm{I}_{h}\) denote the interpolation operator onto the discrete spaces. Under the usual assumptions on the mesh, the following approximation properties hold.

**Lemma J**.: _There exist \(C_{1}>0\) and \(C_{2}>0\), independent of \(h\), such that for each \(\mathbf{u}\in\boldsymbol{H}^{l+1}(\Omega)\cap\mathrm{V}\) and each \(p\in H^{l}(\Omega)\cap\Pi\) with \(l\geq 1\),_

\[\|\mathbf{u}-\mathrm{I}_{h}\mathbf{u}\|_{1,h}\leq C_{1}\,h^{l}\,\|\mathbf{u}\|_{l+1,\Omega}\quad\text{and}\quad\|p-\mathrm{I}_{h}p\|_{0,\Omega}\leq C_{2}\,h^{l}\,\|p\|_{l,\Omega}.\]

**Theorem 7**.: _Let \(\mathbf{f}\in\mathrm{V}^{\prime}\) satisfy (43), and let \((\mathbf{u},p)\in\mathrm{V}\times\Pi\) and \((\mathbf{u}_{h},p_{h})\in\mathrm{V}_{h}\times\Pi_{h}\) denote the unique solutions of the continuous problem (7) and of the discrete problem (23), respectively. Then_

\[\|\mathbf{u}-\mathbf{u}_{h}\|_{1,h}+\|p-p_{h}\|_{0,\Omega}\lesssim\inf_{(\mathbf{z}_{h},\zeta_{h})\in\mathrm{V}_{h}\times\Pi_{h}}\left(\|\mathbf{u}-\mathbf{z}_{h}\|_{1,h}+\|p-\zeta_{h}\|_{0,\Omega}\right).\]

Proof.: Let \((\mathbf{z}_{h},\zeta_{h})\in\mathrm{V}_{h}\times\Pi_{h}\) be arbitrary and split the errors as

\[\mathbf{e}_{\mathbf{u}}=\mathbf{u}-\mathbf{u}_{h}=\xi_{\mathbf{u}}+\chi_{\mathbf{u}},\qquad\mathbf{e}_{p}=p-p_{h}=\xi_{p}+\chi_{p}, \tag{47}\]

with \(\xi_{\mathbf{u}}=\mathbf{u}-\mathbf{z}_{h}\), \(\chi_{\mathbf{u}}=\mathbf{z}_{h}-\mathbf{u}_{h}\), \(\xi_{p}=p-\zeta_{h}\) and \(\chi_{p}=\zeta_{h}-p_{h}\). Owing to the consistency of the Nitsche formulation, the exact solution \((\mathbf{u},p)\) of (7) satisfies

\[\mathcal{C}_{h}\left[\left(\mathbf{u},p\right);\left(\mathbf{v}_{h},q_{h}\right)\right]+\mathbf{c}\left(\mathbf{u};\mathbf{u},\mathbf{v}_{h}\right)=\mathcal{F}\left(\mathbf{v}_{h}\right)\quad\forall\left(\mathbf{v}_{h},q_{h}\right)\in\mathrm{V}_{h}\times\Pi_{h},\]

while the discrete solution \((\mathbf{u}_{h},p_{h})\) of (23) satisfies
\[\mathcal{C}_{h}\left[\left(\mathbf{u}_{h},p_{h}\right);\left(\mathbf{v}_{h},q_{h} \right)\right]+\mathbf{c}\left(\mathbf{u}_{h};\mathbf{u}_{h},\mathbf{v}_{h} \right)=\mathcal{F}\left(\mathbf{v}_{h}\right)\quad\forall\left(\mathbf{v}_{h}, q_{h}\right)\in\mathrm{V}_{h}\times\Pi_{h}.\] Based on these observations, we can deduce the Galerkin orthogonality property \[\mathcal{C}_{h}\left[\left(\mathbf{e}_{\mathbf{u}},\mathbf{e}_{p}\right); \left(\mathbf{v}_{h},q_{h}\right)\right]+\left[\mathbf{c}\left(\mathbf{u}; \mathbf{u},\mathbf{v}_{h}\right)-\mathbf{c}\left(\mathbf{u}_{h};\mathbf{u}_{h },\mathbf{v}_{h}\right)\right]=0\quad\forall\left(\mathbf{v}_{h},q_{h}\right) \in\mathrm{V}_{h}\times\Pi_{h}. \tag{48}\] Subsequently, by utilizing the decompositions given in (47), the definition of \(\mathcal{A}_{\mathrm{w}}\) in (35) for discrete \(\mathcal{A}_{\mathrm{w}_{h}}\), and the identity \[\mathbf{c}\left(\mathbf{u};\mathbf{u},\mathbf{v}_{h}\right)=\mathbf{c}\left( \mathbf{u}-\mathbf{u}_{h};\mathbf{u},\mathbf{v}_{h}\right)+\mathbf{c}\left( \mathbf{u}_{h};\mathbf{u},\mathbf{v}_{h}\right). \tag{49}\] Now, using (35), (49), and (48), we deduce that for all \(\left(\mathbf{v}_{h},q_{h}\right)\in\mathrm{V}_{h}\times\Pi_{h}\), the following relationship holds \[\mathcal{A}_{\mathbf{u}_{h}}\left[\left(\chi_{\mathbf{u}},\chi_{ p}\right);\left(\mathbf{v}_{h},q_{h}\right)\right] =\mathcal{C}_{h}\left[\left(\chi_{\mathbf{u}};\chi_{p}\right), \left(\mathbf{v}_{h},q_{h}\right)\right]+\mathbf{c}\left(\mathbf{u}_{h};\chi_ {\mathbf{u}},\mathbf{v}_{h}\right),\] \[=-\mathcal{C}_{h}\left[\left(\xi_{\mathbf{u}},\xi_{p}\right); \left(\mathbf{v}_{h},q_{h}\right)\right]+\mathbf{c}\left(\mathbf{u}_{h};\chi_ {\mathbf{u}},\mathbf{v}_{h}\right),\] \[=-\mathcal{C}_{h}\left[\left(\xi_{\mathbf{u}},\xi_{p}\right); \left(\mathbf{v}_{h},q_{h}\right)\right]-\mathbf{c}\left(\mathbf{u}-\mathbf{u }_{h};\mathbf{u},\mathbf{v}_{h}\right)-\mathbf{c}\left(\mathbf{u}_{h};\mathbf{ u},\mathbf{v}_{h}\right)+\mathbf{c}\left(\mathbf{u}_{h};\mathbf{u}_{h},\mathbf{v}_{h} \right)+\mathbf{c}\left(\mathbf{u}_{h};\chi_{\mathbf{u}},\mathbf{v}_{h}\right),\] \[=-\mathcal{C}_{h}\left[\left(\xi_{\mathbf{u}},\xi_{p}\right); \left(\mathbf{v}_{h},q_{h}\right)\right]-\mathbf{c}\left(\xi_{\mathbf{u}}+ \chi_{\mathbf{u}};\mathbf{u},\mathbf{v}_{h}\right)-\mathbf{c}\left(\mathbf{u}_ {h};\xi_{\mathbf{u}}+\chi_{\mathbf{u}},\mathbf{v}_{h}\right)+\mathbf{c}\left( \mathbf{u}_{h};\chi_{\mathbf{u}},\mathbf{v}_{h}\right),\] \[=-\mathcal{C}_{h}\left[\left(\xi_{\mathbf{u}},\xi_{p}\right); \left(\mathbf{v}_{h},q_{h}\right)\right]-\mathbf{c}\left(\xi_{\mathbf{u}}; \mathbf{u},\mathbf{v}_{h}\right)-\mathbf{c}\left(\chi_{\mathbf{u}};\mathbf{ u},\mathbf{v}_{h}\right)-\mathbf{c}\left(\mathbf{u}_{h};\xi_{\mathbf{u}}, \mathbf{v}_{h}\right),\] which together with the definition of \(\mathcal{C}_{h}\) given in equation (33), implies \[\mathcal{A}_{\mathbf{u}_{h}}\left[\left(\chi_{\mathbf{u}},\chi_{ p}\right);\left(\mathbf{v}_{h},q_{h}\right)\right] =-\mathbf{A}_{h}\left(\xi_{\mathbf{u}},\mathbf{v}_{h}\right)- \mathbf{B}_{h}\left(\xi_{\mathbf{u}},q_{h}\right)-\mathbf{B}_{h}\left(\mathbf{ v}_{h},\xi_{p}\right)-\mathbf{c}\left(\xi_{\mathbf{u}};\mathbf{u},\mathbf{v}_{h} \right)-\mathbf{c}\left(\chi_{\mathbf{u}};\mathbf{u},\mathbf{v}_{h}\right)\] \[\quad-\mathbf{c}\left(\mathbf{u}_{h};\xi_{\mathbf{u}},\mathbf{v}_ {h}\right), \tag{50}\] for all \(\left(\mathbf{v}_{h},q_{h}\right)\in\mathrm{V}_{h}\times\Pi_{h}\). 
Next, utilizing the discrete inf-sup condition (41) on the left-hand side of (50), and applying the continuity properties of \(\mathbf{A},\mathbf{B}\), and \(\mathbf{c}\) stated in Theorem 3 to the right-hand side of (50), we can derive the following:

\[\left\|\chi_{\mathbf{u}}\right\|_{1,h}+\left\|\chi_{p}\right\|_{0}\lesssim\frac{2}{\widehat{\alpha}\left\|\left(\mathbf{v}_{h},q_{h}\right)\right\|}\bigg(\|\xi_{\mathbf{u}}\|_{1,h}\|\mathbf{v}_{h}\|_{1,h}+\|\xi_{\mathbf{u}}\|_{1,h}\|q_{h}\|_{0}+\|\xi_{p}\|_{0}\|\mathbf{v}_{h}\|_{1,h}+\|\xi_{\mathbf{u}}\|_{1,h}\|\mathbf{v}_{h}\|_{1,h}\|\mathbf{u}\|_{1}+\|\chi_{\mathbf{u}}\|_{1,h}\|\mathbf{v}_{h}\|_{1,h}\|\mathbf{u}\|_{1}+\|\xi_{\mathbf{u}}\|_{1,h}\|\mathbf{v}_{h}\|_{1,h}\|\mathbf{u}_{h}\|_{1,h}\bigg)\]
\[\lesssim\frac{2}{\widehat{\alpha}}\left(\|\xi_{p}\|_{0}+\left(2+\left\|\mathbf{u}_{h}\right\|_{1,h}+\|\mathbf{u}\|_{1}\right)\|\xi_{\mathbf{u}}\|_{1,h}+\left\|\chi_{\mathbf{u}}\right\|_{1,h}\|\mathbf{u}\|_{1}\right),\]

and hence

\[\left\|\chi_{p}\right\|_{0}+\left(1-\frac{2}{\widehat{\alpha}}\|\mathbf{u}\|_{1}\right)\|\chi_{\mathbf{u}}\|_{1,h}\lesssim\frac{2}{\widehat{\alpha}}\left(\|\xi_{p}\|_{0}+\left(2+\|\mathbf{u}_{h}\|_{1,h}+\|\mathbf{u}\|_{1}\right)\|\xi_{\mathbf{u}}\|_{1,h}\right). \tag{51}\]

Therefore, taking into account the fact that \(\mathbf{u}\in\mathbf{K}\) and \(\mathbf{u}_{h}\in\mathbf{K}_{h}\), based respectively on assumptions (45) and (51), we can conclude that

\[\left\|\chi_{p}\right\|_{0}+\left\|\chi_{\mathbf{u}}\right\|_{1,h}\lesssim\left(\left\|\xi_{p}\right\|_{0}+\left\|\xi_{\mathbf{u}}\right\|_{1,h}\right). \tag{52}\]

In this way, from (47), (52) and the triangle inequality we obtain

\[\left\|\left(\mathbf{e}_{p},\mathbf{e}_{\mathbf{u}}\right)\right\|\leq\left\|\left(\chi_{p},\chi_{\mathbf{u}}\right)\right\|+\left\|\left(\xi_{p},\xi_{\mathbf{u}}\right)\right\|\lesssim\left\|\left(\xi_{p},\xi_{\mathbf{u}}\right)\right\|.\]

This, together with the fact that \(\left(\mathbf{z}_{h},\zeta_{h}\right)\in\mathrm{V}_{h}\times\Pi_{h}\) is arbitrary, leads to the conclusion of the proof.

**Theorem 8**.: _Let \(\left(\mathbf{u},p\right)\in\mathrm{V}\times\Pi\) and \(\left(\mathbf{u}_{h},p_{h}\right)\in\mathrm{V}_{h}\times\Pi_{h}\) denote the unique solutions of the continuous problem (7) and of the discrete problem (23), respectively, with \(\mathbf{f}\) satisfying (45). Suppose that \(\left(\mathbf{u},p\right)\in\left(\boldsymbol{H}^{l+1}(\Omega)\cap\mathrm{V}\right)\times\left(H^{l}(\Omega)\cap\Pi\right)\) with \(l\geq 1\). Then there exists \(C_{\mathrm{rate}}>0\), independent of \(h\), such that_

\[\|\mathbf{u}-\mathbf{u}_{h}\|_{1,h}+\|p-p_{h}\|_{0,\Omega}\leq C_{\mathrm{rate}}\,h^{l}\left(\|\mathbf{u}\|_{l+1,\Omega}+\|p\|_{l,\Omega}\right).\]

## 5 Stabilized formulation for high Reynolds numbers

The aim of this section is to provide a VMS-LES formulation of the Navier-Stokes equations with slip boundary conditions. We validate with numerical tests the use of the Variational Multiscale (VMS) method with Nitsche in solving the Navier-Stokes equations in their standard weak form.
In a time interval \((0,T]\) with \(T>0\), the model problem reads: \[\begin{split}\frac{\partial\mathbf{u}}{\partial t}-\nu\Delta \mathbf{u}+(\mathbf{u}\cdot\nabla)\mathbf{u}+\nabla p&=\mathbf{f} \quad\text{in}\,\Omega\times(0,T),\\ \operatorname{div}\mathbf{u}&=0\quad\text{in}\, \Omega\times(0,T),\\ \mathbf{u}&=0\quad\text{on}\,\Gamma_{D}\times(0,T), \\ \mathbf{u}\cdot\mathbf{n}&=0\quad\text{on}\,\Gamma_{ \text{Nav}}\times(0,T),\\ \nu\mathbf{n}^{t}D(\mathbf{u})\boldsymbol{\tau}^{k}+\beta \mathbf{u}\cdot\boldsymbol{\tau}^{k}&=0\quad\text{on}\,\Gamma_{ \text{Nav}}\times(0,T),\quad k=1,2,\\ \mathbf{u}(0)&=0\quad\text{in}\,\Omega\times\{0\}, \\ (p,1)_{\Omega}&=0.\end{split} \tag{53}\] The weak formulation of problem (53) can be written as for all \(t\in(0,T]\), Find \((\mathbf{u},p)\in\mathrm{V}\times\Pi\) with \(\mathbf{u}(0)=0\) such that \[\mathcal{A}\left[\left(\mathbf{u},p\right);\left(\mathbf{v},q\right)\right]= \mathcal{F}(\mathbf{v})\quad\forall\in(\mathbf{v},q)\in\mathrm{V}\times\Pi, \tag{54}\] where \[\mathcal{A}\left[\left(\mathbf{u},p\right);\left(\mathbf{v},q\right)\right] \coloneqq\left(\frac{\partial\mathbf{u}}{\partial t},\mathbf{v}\right)+\frac{ \nu}{2}\left(D(\mathbf{u}),D(\mathbf{v})\right)+(\mathbf{u}\cdot\nabla \mathbf{u},\mathbf{v})-(p,\nabla\cdot\mathbf{v})-(q,\nabla\cdot\mathbf{u})+ \int_{\Gamma_{\text{Nav}}}\beta\sum_{i}\left(\boldsymbol{\tau}^{i}\cdot \mathbf{v}\right)\left(\boldsymbol{\tau}^{i}\cdot\mathbf{u}\right)ds,\] \[\mathcal{F}(\mathbf{v})\coloneqq\left\langle\mathbf{f},\mathbf{v}\right\rangle.\] ### The VMS-LES formulation The VMS technique involves decomposing the solution into coarser and finer scales. As a result, we decompose the weak formulation of the Navier-Stokes equations (54) into two subproblems, considering the coarse scale and the fine scale. The finite element method is used to approximate the coarse scale solution, while the fine scale solution is formulated analytically. Now, we decompose the space into the direct sum of two subspaces: \[\mathcal{Y}_{0}=\mathcal{V}_{0}^{h}\oplus\mathcal{V}_{0}^{\prime} \tag{55}\] where \(\mathcal{V}_{0}^{h}\) known as coarse scale spaces, are the finite element spaces used for the numerical discretization, i.e. \[\mathcal{V}_{0}^{h}=\mathrm{V}_{h}\times\Pi_{h},\] where \(\mathcal{V}_{0}^{\prime}\) are infinite dimensional, known as the fine scale spaces, and orthogonal to \(\mathcal{V}_{0}^{h}\) respectively. Then, we have the following decompositions from (55): \[\begin{split}\mathbf{u}&=\mathbf{u}_{h}+\mathbf{u}^{ \prime},\\ p&=p_{h}+p^{\prime},\end{split}\] where \((\mathbf{u},p)\in\mathcal{Y}_{0}\) and this is the starting point of VMS-LES method i.e. the separation of the flow field into resolved scales \((\mathbf{u}_{h},p_{h})\) and unresolved scales \((\mathbf{u}^{\prime},p^{\prime})\). Following the approach proposed in [1], Performing the two-scale decomposition on the problem (54), we obtain two subproblems, one for the coarse and one for the fine scales: \[\mathcal{A}\left[\mathbf{V}_{h};\mathbf{U}_{h}+\mathbf{U}^{\prime}\right]= \mathcal{F}(\mathbf{V}_{h}) \tag{56}\] \[\mathcal{A}\left[\mathbf{V}^{\prime};\mathbf{U}_{h}+\mathbf{U}^{\prime}\right]= \mathcal{F}(\mathbf{V}^{\prime}) \tag{57}\] where the abbreviations \(\mathbf{U}=(\mathbf{u},p)\) and \(\mathbf{V}=(\mathbf{v},q)\) are used for simplicity. 
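Before turning to the modelling of the unresolved scales, the following sketch indicates how the resolved-scale part of the weak form (54), including the Navier friction term on \(\Gamma_{\text{Nav}}\), might be assembled. It uses legacy FEniCS (dolfin) syntax; the geometry, the choice of \(\Gamma_{\text{Nav}}\) (here the bottom side of a unit square), the data values and the backward-Euler treatment of the time derivative are illustrative assumptions only.

```python
# Sketch (legacy FEniCS / dolfin) of the resolved-scale weak form (54) with the
# Navier friction term on Gamma_Nav. Geometry, data, the location of Gamma_Nav
# and the backward-Euler time discretization are illustrative assumptions.
from dolfin import *

mesh = UnitSquareMesh(32, 32)
V_el = VectorElement("Lagrange", mesh.ufl_cell(), 2)
Q_el = FiniteElement("Lagrange", mesh.ufl_cell(), 1)
W = FunctionSpace(mesh, MixedElement([V_el, Q_el]))

# Mark Gamma_Nav (assumed here: the side x2 = 0) and build a boundary measure
markers = MeshFunction("size_t", mesh, mesh.topology().dim() - 1, 0)
CompiledSubDomain("on_boundary && near(x[1], 0.0)").mark(markers, 1)
ds_nav = Measure("ds", domain=mesh, subdomain_data=markers)

n = FacetNormal(mesh)
nu, beta = Constant(1.0e-3), Constant(1.0)      # viscosity, friction (assumed)
dt, f = Constant(0.035), Constant((0.0, 0.0))   # time step, body force (assumed)

w, w_old = Function(W), Function(W)
(u, p) = split(w)
(u0, _) = split(w_old)
(v, q) = TestFunctions(W)

D = lambda z: sym(grad(z))                # strain-rate tensor
tang = lambda z: z - dot(z, n) * n        # tangential trace on the boundary

# Residual of (54): Galerkin terms plus the Navier friction term on Gamma_Nav
Res = (dot((u - u0) / dt, v) * dx
       + 0.5 * nu * inner(D(u), D(v)) * dx
       + dot(dot(grad(u), u), v) * dx
       - p * div(v) * dx
       - q * div(u) * dx
       + beta * dot(tang(u), tang(v)) * ds_nav(1)
       - dot(f, v) * dx)
```

The sum over the tangential directions in the friction term is written through the tangential projection \(\mathbf{u}-(\mathbf{u}\cdot\mathbf{n})\mathbf{n}\), which is equivalent to the sum \(\sum_{i}(\boldsymbol{\tau}^{i}\cdot\mathbf{u})(\boldsymbol{\tau}^{i}\cdot\mathbf{v})\) on the boundary.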
It should be noted that the fine scale solution is typically modeled analytically, expressed in terms of both the problem's data and the coarse scale solution, and then substituted into the coarse scale subproblem. By projecting the fine scale solution into the coarse scale solution, a finite dimensional system for the coarse scale solution is obtained. The solution of (57) is represented as \[\mathbf{U}^{\prime}=F_{\mathbf{U}}(\mathbf{R}(\mathbf{U}_{h})), \tag{58}\] which can be interpreted as the unresolved scales that are derived as a function of the residual of the resolved scales. By substituting (58) into the resolved scales equation (56), a unified set of equations for the resolved scales is obtained. The objective is to approximate \(F_{\mathbf{U}}\) using models that are not dependent on the underlying physics of turbulent flows but are derived solely based on mathematical reasoning. Finally, we adopt a similar approach to [10] in modeling the fine-scale velocity and pressure variables as: \[\mathbf{u}^{\prime} \simeq-\mathcal{S}_{M}\left(\mathbf{u}_{h}\right)\mathbf{r}_{M} \left(\mathbf{u}_{h},p_{h}\right)\] \[p^{\prime} \simeq-\mathcal{S}_{C}\left(\mathbf{u}_{h}\right)r_{C}\left( \mathbf{u}_{h}\right)\] where \(\mathbf{r}_{M}\left(\mathbf{u}_{h},p_{h}\right)\) and \(r_{C}\left(\mathbf{u}_{h}\right)\) indicate the strong residuals of the momentum and continuity equations: \[\mathbf{r}_{M}\left(\mathbf{u}_{h},p_{h}\right)=\frac{\partial \mathbf{u}_{h}}{\partial t}+\mathbf{u}_{h}\cdot\nabla\mathbf{u}_{h}+\nabla p _{h}-\nu\Delta\mathbf{u}_{h}-\mathbf{f},\] \[r_{C}\left(\mathbf{u}_{h}\right)=\nabla\cdot\mathbf{u}_{h},\] respectively. Moreover, \(\mathcal{S}_{M}\) and \(\mathcal{S}_{C}\) are the stabilization parameters designed by a specific Fourier analysis applied in the framework of stabilized methods, which we choose similarly to [1] as: \[\mathcal{S}_{M}\left(\mathbf{u}_{h}\right) =\left(\frac{\sigma^{2}}{\Delta t^{2}}+\mathbf{u}_{h}\cdot \boldsymbol{G}\mathbf{u}_{h}+C_{r}\nu^{2}\boldsymbol{G}:\boldsymbol{G}\right)^ {-1/2}\] \[\mathcal{S}_{C}\left(\mathbf{u}_{h}\right) =\left(\mathcal{S}_{M}\boldsymbol{g}\cdot\boldsymbol{g}\right)^{-1} \tag{59}\] where \(\Delta t\) denotes the time step, while \(\sigma\) denotes the order of the BDF (Backward Differentiation Formulas) time scheme [14]. Furthermore, the constant \(C_{r}=60\cdot 2^{r-2}\) is calculated using an inverse inequality which depends on the polynomial degree \(r\) associated with the velocity finite element space [13]. Moreover, \(\boldsymbol{G}\) and \(\boldsymbol{g}\) corresponds to the metric tensor and vector, respectively, and their definitions are defined as \[G_{ij}=\sum_{k=1}^{d}\frac{\partial\xi_{k}}{\partial x_{i}}\frac {\partial\xi_{k}}{\partial x_{j}},\] \[g_{i}=\sum_{k=1}^{d}\frac{\partial\xi_{k}}{\partial x_{i}},\] with \(\boldsymbol{x}=\left\{x_{i}\right\}_{i=1}^{d}\) represents the coordinates of element \(K\) in physical space, \(\boldsymbol{\xi}=\left\{\xi_{i}\right\}_{i=1}^{d}\) represents the coordinates of element \(\hat{K}\) in parametric space, and \(\frac{\partial\boldsymbol{\xi}}{\partial\boldsymbol{x}}\) represents the inverse Jacobian of the element mapping between the reference and physical domains. We make the same assumptions as [1]: \[\left\{\begin{array}{l}\frac{\partial\boldsymbol{v}_{h}}{\partial t}=0,\\ \mathbf{u}^{\prime}=0\,\mathrm{on}\,\Gamma,\\ (D(\mathbf{v}_{h}),D(\mathbf{u}^{\prime}))=0.\end{array}\right. 
\tag{60}\] By explicitly expressing the left-hand side of (56) and adding the Nitsche terms to the action of the Laplace distribution, we arrive at the following result: \[\mathcal{A}\left[\mathbf{V}_{h};\mathbf{U}_{h}+\mathbf{U}^{\prime}\right]= (\mathbf{u}_{t},\mathbf{v}_{h})+\frac{\nu}{2}(D(\mathbf{u}_{h}),D( \mathbf{v}_{h}))+((\mathbf{u}_{h}+\mathbf{u}^{\prime})\cdot\nabla(\mathbf{u}_{ h}+\mathbf{u}^{\prime}),\mathbf{v}_{h})-(p_{h}+p^{\prime},\nabla\cdot\mathbf{v}_{h})\] \[-(q_{h},\nabla\cdot(\mathbf{u}_{h}+\mathbf{u}^{\prime}))+\int_{ \Gamma_{\mathrm{Nav}}}p^{\prime}(\mathbf{n}\cdot\mathbf{v}_{h})ds+\int_{ \Gamma_{\mathrm{Nav}}}q(\mathbf{n}\cdot\mathbf{u}^{\prime})ds\] \[+\sum_{E\in\mathcal{E}_{\mathrm{Nav}}}\bigg{(}-\int_{E}\mathbf{n }^{t}(\nu D(\mathbf{u}_{h})-p_{h}I)\mathbf{n}(\mathbf{n}\cdot\mathbf{v}_{h}) ds-\int_{E}\mathbf{n}^{t}(\nu D(\mathbf{v}_{h})-q_{h}I)\mathbf{n}(\mathbf{n} \cdot\mathbf{u}_{h})ds\] \[+\int_{E}\beta\sum_{i}\left(\mathbf{\tau}^{i}\cdot\mathbf{v}_{h} \right)\left(\mathbf{\tau}^{i}\cdot\mathbf{u}_{h}\right)ds+\gamma\int_{E}{h_{e}}^ {-1}(\mathbf{u}_{h}\cdot\mathbf{n})(\mathbf{v}_{h}\cdot\mathbf{n})ds\bigg{)},\] \[= (\mathbf{u}_{t},\mathbf{v}_{h})+\frac{\nu}{2}(D(\mathbf{u}_{h}),D (\mathbf{v}_{h}))+(\mathbf{u}_{h}\cdot\nabla\mathbf{u}_{h},\mathbf{v}_{h})-(p _{h},\nabla\cdot\mathbf{v}_{h})-(q_{h},\nabla\cdot\mathbf{u}_{h})\] \[+\sum_{E\in\mathcal{E}_{\mathrm{Nav}}}\bigg{(}-\int_{E}\mathbf{n }^{t}(\nu D(\mathbf{u}_{h})-p_{h}I)\mathbf{n}(\mathbf{n}\cdot\mathbf{v}_{h}) ds-\int_{E}\mathbf{n}^{t}(\nu D(\mathbf{v}_{h})-q_{h}I)\mathbf{n}(\mathbf{n} \cdot\mathbf{u}_{h})ds\] \[+\int_{E}\beta\sum_{i}\left(\mathbf{\tau}^{i}\cdot\mathbf{v}_{h} \right)\left(\mathbf{\tau}^{i}\cdot\mathbf{u}_{h}\right)ds+\gamma\int_{E}{h_{e}}^ {-1}(\mathbf{u}_{h}\cdot\mathbf{n})(\mathbf{v}_{h}\cdot\mathbf{n})ds\bigg{)}-( p^{\prime},\nabla\cdot\mathbf{v}_{h})\] \[-(q_{h},\nabla\cdot\mathbf{u}^{\prime})+\int_{\Gamma_{\mathrm{Nav }}}p^{\prime}(\mathbf{n}\cdot\mathbf{v}_{h})ds+\int_{\Gamma_{\mathrm{Nav}}}q( \mathbf{n}\cdot\mathbf{u}^{\prime})ds+(\mathbf{u}_{h}\cdot\nabla\mathbf{u}^{ \prime},\mathbf{v}_{h})\] \[+(\mathbf{u}^{\prime}\cdot\nabla\mathbf{u}_{h},\mathbf{v}_{h})+( \mathbf{u}^{\prime}\cdot\nabla\mathbf{u}^{\prime},\mathbf{v}_{h}).\] Thereafter, we apply integration by parts to the fine-scale terms that appear in the coarse-scale equations, by considering the aforementioned assumption (60). 
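Before writing out the resulting semi-discrete formulation, we indicate how the strong residuals \(\mathbf{r}_{M}\), \(r_{C}\) and the stabilization parameters \(\mathcal{S}_{M}\), \(\mathcal{S}_{C}\) of (59) might be expressed in practice. The sketch below reuses the names of the previous snippet; it replaces the metric tensor \(\boldsymbol{G}\) by the isotropic surrogate \((4/h^{2})\,\mathbf{I}\) (an assumption that is exact only up to a constant on uniform simplicial meshes) and takes \(\sigma=1\) (BDF1) and \(r=2\) as illustrative choices.

```python
# Sketch (continuing the names of the previous snippet): strong residuals
# r_M, r_C and stabilization parameters S_M, S_C of (59). The metric tensor G
# is replaced by the isotropic surrogate (4/h^2) I (an assumption); sigma = 1
# (BDF1) and r = 2 are illustrative choices.
h = CellDiameter(mesh)
G = (4.0 / h**2) * Identity(2)           # surrogate metric tensor (2D)
g = as_vector((2.0 / h, 2.0 / h))        # surrogate metric vector (2D)

sigma = Constant(1.0)                    # order of the BDF scheme (BDF1 here)
C_r = Constant(60.0)                     # C_r = 60 * 2^(r-2) with r = 2

# Strong residuals of momentum and continuity (backward Euler in time)
r_M = (u - u0) / dt + dot(grad(u), u) + grad(p) - nu * div(grad(u)) - f
r_C = div(u)

# Stabilization parameters of (59)
S_M = 1.0 / sqrt(sigma**2 / dt**2 + dot(u, G * u) + C_r * nu**2 * inner(G, G))
S_C = 1.0 / (S_M * dot(g, g))
```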
This results into the semi-discrete VMS-LES formulation of the Navier-Stokes equation with Nitsche which is expressed in terms of the weak residual as follows: for all \(t\in(0,T]\), Find \(\mathbf{U}_{h}=\{\mathbf{u}_{h},p_{h}\}\in\mathcal{V}_{0}^{h}\) with \(\mathbf{u}_{h}(0)=0\) such that \[\mathbf{H}\left[\mathbf{V}_{h};\mathbf{U}_{h}\right]=\mathbf{L}\left(\mathbf{ V}_{h}\right) \tag{61}\] for all \(\mathbf{V}_{h}=\{\mathbf{v}_{h},q_{h}\}\in\mathcal{V}_{0}^{h}\), where we considered the following definitions: \[\mathbf{H}\left[\mathbf{V}_{h};\mathbf{U}_{h}\right]\coloneqq \mathcal{G}^{NS}\left(\mathbf{V}_{h},\mathbf{U}_{h}\right)+\mathcal{G}^{ \mathrm{SUPG}}\,\left(\mathbf{V}_{h},\mathbf{U}_{h}\right)+\mathcal{G}^{ \mathrm{VMS}}\,\left(\mathbf{V}_{h},\mathbf{U}_{h}\right)+\mathcal{G}^{ \mathrm{LES}}\,\left(\mathbf{V}_{h},\mathbf{U}_{h}\right),\] \[\mathbf{L}\left(\mathbf{V}_{h}\right)\coloneqq\left\langle\mathbf{ v}_{h},\mathbf{f}\right\rangle,\] with \[\mathcal{G}^{NS}\left(\mathbf{V}_{h},\mathbf{U}_{h}\right)= \sum_{T\in\mathcal{T}_{h}}\bigg{(}\left(\mathbf{u}_{t},\mathbf{v}_{h} \right)+\frac{\nu}{2}\left(D(\mathbf{v}_{h}),D(\mathbf{u}_{h})\right)+\left( \mathbf{v}_{h},\mathbf{u}_{h}\cdot\nabla\mathbf{u}_{h}\right)-\left(\nabla \cdot\mathbf{v}_{h},p_{h}\right)-(q_{h},\nabla\cdot\mathbf{u}_{h})\,\bigg{)}+\] \[\sum_{E\in\mathcal{E}_{\mathrm{Nav}}}\bigg{(}-\int_{E}\mathbf{n}^{ t}\left(\nu D(\mathbf{u}_{h})-p_{h}I\right)\mathbf{n}\left(\mathbf{n}\cdot \mathbf{v}_{h}\right)ds-\int_{E}\mathbf{n}^{t}\left(\nu D(\mathbf{v}_{h})-q _{h}I\right)\mathbf{n}\left(\mathbf{n}\cdot\mathbf{u}_{h}\right)ds\] \[+\int_{E}\beta\sum_{i}\left(\mathbf{\tau}^{i}\cdot\mathbf{v}_{h} \right)\left(\mathbf{\tau}^{i}\cdot\mathbf{u}_{h}\right)ds+\gamma\int_{E}{h_{e}}^{ -1}\left(\mathbf{u}_{h}\cdot\mathbf{n}\right)\left(\mathbf{v}_{h}\cdot \mathbf{n}\right)ds\bigg{)} \tag{62}\] \[\mathcal{G}^{\mathrm{SUPG}}\left(\mathbf{V}_{h},\mathbf{U}_{h}\right)= \sum_{T\in\mathcal{T}_{h}}\bigg{(}\left(\mathbf{u}_{h}\cdot\nabla \mathbf{v}_{h}-\bar{C}\nabla q_{h},\mathcal{S}_{M}\left(\mathbf{u}_{h}\right) \mathbf{r}_{M}\left(\mathbf{u}_{h},p_{h}\right)\right)-\left(\nabla\cdot \mathbf{v}_{h},\mathcal{S}_{C}\left(\mathbf{u}_{h}\right)\mathbf{r}_{C}\left( \mathbf{u}_{h}\right)\right)\bigg{)}\] (63) \[\mathcal{G}^{\mathrm{VMS}}\left(\mathbf{V}_{h},\mathbf{U}_{h}\right)= \sum_{T\in\mathcal{T}_{h}}\bigg{(}\left(\mathbf{u}_{h}\cdot(\nabla \mathbf{v}_{h})^{T},\mathcal{S}_{M}\left(\mathbf{u}_{h}\right)\mathbf{r}_{M} \left(\mathbf{u}_{h},p_{h}\right)\right)-\sum_{E\in\mathcal{E}_{\mathrm{Nav}}} \bigg{(}\int_{E}\mathcal{S}_{C}\left(\mathbf{u}_{h}\right)\mathbf{r}_{C}\left( \mathbf{u}_{h}\right)\left(\mathbf{n}\cdot\mathbf{v}_{h}\right)ds\] \[+\int_{E}q_{h}\left(\mathbf{n}\cdot\mathcal{S}_{M}\left(\mathbf{u }_{h}\right)\mathbf{r}_{M}\left(\mathbf{u}_{h},p_{h}\right)\right)ds\bigg{)}\] (64) \[\mathcal{G}^{\mathrm{LES}}\left(\mathbf{V}_{h},\mathbf{U}_{h}\right)= -\sum_{T\in\mathcal{T}_{h}}\bigg{(}\left(\nabla\mathbf{v}_{h}, \mathcal{S}_{M}\left(\mathbf{u}_{h}\right)\mathbf{r}_{M}\left(\mathbf{u}_{h},p_{h} \right)\otimes\mathcal{S}_{M}\left(\mathbf{u}_{h}\right)\mathbf{r}_{M}\left( \mathbf{u}_{h},p_{h}\right)\right)\bigg{)}. \tag{65}\] In our formulation, we introduce an additional constant \(\tilde{C}\) in (63) to discuss two choices for finite elements. Specifically, we set \(\tilde{C}=0\) when we make use of \(\mathbb{P}_{2}-\mathbb{P}_{1}\) inf-sup stable finite elements. 
Conversely, when we make use of stabilized equal order finite elements for both the velocity and pressure variables, we set \(\tilde{C}=1\), as stated in [1]. We would like to highlight that (62) represents the weak formulation of the Navier-Stokes equation with Nitsche. We conclude that (63) represents the classical Streamline Upwind Petrov Galerkin (SUPG) stabilization terms and (64) represents additional terms introduced by VMS (Variational Multiscale) method. Finally, (65) represents the LES (Large Eddy Simulation) modeling of turbulence. **Remark 5.1**.: _This formulation differs from the standard one because of the contribution of pressure terms on the boundary \(\Gamma_{\text{Nav}}\)._ **Remark 5.2**.: _It is observed that as the time step \(\Delta t\) approaches zero, the stabilization parameters in (59) behave as follows:_ \[\mathcal{S}_{M}\sim\Delta t\to 0\quad\mathcal{S}_{C}\sim\frac{1}{\Delta t}\to\infty.\] _A similar behavior is also demonstrated in [22] i.e., the VMS-LES modelling may lose its effectiveness for small time steps. It is observed that as \(\Delta t\to 0\), the term associated with the LES modeling of turbulence \(\left(\nabla\mathbf{v}_{h},\mathcal{S}_{M}\left(\mathbf{u}_{h}\right)\mathbf{ r}_{M}\left(\mathbf{u}_{h},p_{h}\right)\otimes\mathcal{S}_{M}\left(\mathbf{u}_{h} \right)\mathbf{r}_{M}\left(\mathbf{u}_{h},p_{h}\right)\right)\) becomes negligible, while the dominant term \(\left(\nabla\cdot\mathbf{v}_{h},\mathcal{S}_{C}\left(\mathbf{u}_{h}\right) \mathbf{r}_{C}\left(\mathbf{u}_{h}\right)\right)\) fails to effectively act as a turbulence model in the semi-discrete VMS-LES weak formulation of the Navier-Stokes equations (61)._ ### Fully discrete VMS-LES formulation We obtain a fully discrete VMS-LES weak formulation of the Navier-Stokes equations with Nitsche by discretizing time with the BDF scheme of order \(\sigma\), and the nonlinear terms in the above formulation are handled using Newton-Gregory backward polynomials [10]. A detailed explanation of the fully discrete VMS-LES method is provided in [22]. \[\text{Find}\ \mathbf{U}_{h}=\left\{\mathbf{u}_{h}^{n+1},p_{h}^{n+1}\right\} \in\mathcal{V}_{0}^{h}\ :\] \[\tilde{\mathcal{G}}^{\text{NS}}\left(\mathbf{V}_{h},\mathbf{U}_{h}\right)+ \tilde{\mathcal{G}}^{\text{SUPG}}\ \left(\mathbf{V}_{h},\mathbf{U}_{h}\right)+\tilde{ \mathcal{G}}^{\text{VMS}}\ \left(\mathbf{V}_{h},\mathbf{U}_{h}\right)+\tilde{ \mathcal{G}}^{\text{LES}}\ \left(\mathbf{V}_{h},\mathbf{U}_{h}\right)=\left\langle \mathbf{v}_{h},\mathbf{f}^{n+1}\right\rangle, \tag{66}\] for all \(\mathbf{V}_{h}=\left\{\mathbf{v}_{h},q_{h}\right\}\in\mathcal{V}_{0}^{h}\) with \(\mathbf{f}^{n+1}=\mathbf{f}(t^{n+1})\). The bilinear forms \(\tilde{\mathcal{G}}^{\text{NS}}\left(\mathbf{V}_{h},\mathbf{U}_{h}\right), \tilde{\mathcal{G}}^{\text{SUPG}}\ \left(\mathbf{V}_{h},\mathbf{U}_{h}\right),\tilde{ \mathcal{G}}^{\text{VMS}}\ \left(\mathbf{V}_{h},\mathbf{U}_{h}\right),\) and \(\tilde{\mathcal{G}}^{\text{LES}}\ \left(\mathbf{V}_{h},\mathbf{U}_{h}\right)\) are defined in Appendix A. These forms are modifications of the bilinear forms defined in (61). ## 6 Numerical Experiments Now, computational examples are presented to demonstrate the consistency of the numerical scheme. The open-source finite element library FEniCS [1] is utilized to simulate all numerical computations. The theoretical results of Theorem 8 is numerically validated in the first example. 
The Nitsche method is validated in the second example by comparison with a benchmark problem from the literature [10]. The VMS-LES approach with Nitsche at high Reynolds numbers is validated in the last two examples. A Lagrange multiplier is used to implement the zero-average condition for the pressure approximation.

### Test 1: Convergence rates

In this numerical test, we compute the convergence rate of the Nitsche method (8), considering the square domain \(\Omega=(-1,1)^{2}\) and a sequence of uniformly refined meshes. We present a numerical test based on the following exact solution:

\[\mathbf{u}\left(x_{1},x_{2}\right)=\left(2x_{2}(1-x_{1}^{2}),-2x_{1}(1-x_{2}^{2})\right)^{T}\]
\[p\left(x_{1},x_{2}\right)=\left(2x_{1}-1\right)\left(2x_{2}-1\right).\]

The slip boundary condition is imposed on \(x_{2}=-1\) and the essential boundary condition is enforced on the rest of the boundary. Table 1 presents the approximation errors for pressure and velocity as well as the convergence rates, which are in good agreement with the theory. Table 2 presents the \(L_{2}\)-norm error of the slip condition on \(\Gamma_{\text{Nav}}\) and shows that the larger the Nitsche parameter \(\gamma\), the smaller the error on the slip condition. At the same time, the error of the discrete equations themselves increases with \(\gamma\), so there is a compromise between the two effects; this behaviour can be observed in Table 1. Additionally, we see that the number of Newton iterations required to reach the prescribed tolerance of \(10^{-7}\) is at most three. Figure 1 shows the computed velocity field. The results in Table 1 and Figure 1 were computed with \(\beta=10\), \(\nu=1\), and \(\gamma=10\).

\begin{table} \begin{tabular}{|r|r|r|r|r|r|r|r|r|} \hline \(\gamma\) & Mesh & D.O.F.
& Newton Its & \(\|p-p_{h}\|_{0}\) & rate & \(\|\nabla(\mathbf{u}-\mathbf{u}_{h})\|\) & rate & \(\|\mathbf{u}-\mathbf{u}_{h}\|_{0}\) & rate \\ \hline & \(8\times 8\) & 659 & 3 & \(5.20\times 10^{-2}\) & \(-\) & \(1.12\times 10^{-1}\) & \(-\) & \(4.90\times 10^{-3}\) & - \\ & \(16\times 16\) & 2467 & 2 & \(1.27\times 10^{-2}\) & 2.03 & \(2.23\times 10^{-2}\) & 2.32 & \(4.90\times 10^{-4}\) & 3.34 \\ 1.0 & \(32\times 32\) & 9539 & 2 & \(3.13\times 10^{-3}\) & 2.02 & \(4.55\times 10^{-3}\) & 2.29 & \(5.00\times 10^{-5}\) & 3.30 \\ & \(64\times 64\) & 37507 & 2 & \(7.78\times 10^{-4}\) & 2.01 & \(1.23\times 10^{-3}\) & 1.88 & \(7.00\times 10^{-6}\) & 2.92 \\ & \(128\times 128\) & 148739 & 2 & \(1.94\times 10^{-4}\) & 2.00 & \(2.59\times 10^{-4}\) & 2.25 & \(1.00\times 10^{-6}\) & 3.23 \\ \hline & \(8\times 8\) & 659 & 3 & \(5.18\times 10^{-2}\) & \(-\) & \(8.33\times 10^{-2}\) & \(-\) & \(3.47\times 10^{-3}\) & - \\ & \(16\times 16\) & 2467 & 2 & \(1.27\times 10^{-2}\) & 2.02 & \(1.81\times 10^{-2}\) & 2.19 & \(3.82\times 10^{-4}\) & 3.18 \\ 10 & \(32\times 32\) & 9539 & 2 & \(3.14\times 10^{-3}\) & 2.02 & \(4.24\times 10^{-3}\) & 2.10 & \(4.50\times 10^{-5}\) & 3.09 \\ & \(64\times 64\) & 37507 & 2 & \(7.78\times 10^{-4}\) & 2.01 & \(1.03\times 10^{-3}\) & 2.04 & \(5.00\times 10^{-6}\) & 3.04 \\ & \(128\times 128\) & 148739 & 2 & \(1.94\times 10^{-4}\) & 2.00 & \(2.53\times 10^{-4}\) & 2.02 & \(1.00\times 10^{-6}\) & 3.01 \\ \hline & \(8\times 8\) & 659 & 3 & \(5.15\times 10^{-2}\) & \(-\) & \(6.32\times 10^{-2}\) & \(-\) & \(2.74\times 10^{-3}\) & - \\ & \(16\times 16\) & 2467 & 2 & \(1.27\times 10^{-2}\) & 2.02 & \(1.59\times 10^{-2}\) & 1.98 & \(3.42\times 10^{-4}\) & 3.00 \\ 100 & \(32\times 32\) & 9539 & 2 & \(3.13\times 10^{-3}\) & 2.02 & \(3.90\times 10^{-3}\) & 1.99 & \(4.30\times 10^{-5}\) & 3.00 \\ & \(64\times 64\) & 37507 & 2 & \(7.78\times 10^{-4}\) & 2.01 & \(1.00\times 10^{-3}\) & 1.99 & \(5.00\times 10^{-6}\) & 2.99 \\ & \(128\times 128\) & 148739 & 2 & \(1.94\times 10^{-4}\) & 2.00 & \(2.50\times 10^{-4}\) & 1.99 & \(1.00\times 10^{-6}\) & 2.99 \\ \hline \end{tabular} \end{table} Table 1: Test 1: Experimental errors, iteration count, Number of degree of freedom (D.O.F.), and convergence rates for the approximate solutions \(\mathbf{u}_{h}\) and \(p_{h}\). Values are displayed for the Taylor-Hood space \(\mathbb{P}_{2}-\mathbb{P}_{1}\) with \(\beta=10\) and \(\nu=1\). Figure 1: Test 1: velocity field ### Test 2: Lid-driven cavity test This is a lid-driven cavity test and we perform it for steady and unsteady formulations. Consider the stationary Navier-Stokes equations with slip boundary conditions. The evaluation includes modelling a planar flow of an isothermal fluid inside a cavity driven. The cavity is represented as a square domain \(\Omega=(0,1)\times(0,1)\), with negligible body force, and one moving wall. The velocity imposed on the top boundary \(\{x_{2}=1\}\) is defined as \[\mathbf{u}=\begin{cases}(10x_{1},0)^{\mathrm{T}}&\text{for}\,0.0\leqslant x_{1 }\leqslant 0.1\\ (1,0)^{\mathrm{T}}&\text{for}\,0.1\leqslant x_{1}\leqslant 0.9\\ (10-10x_{1},0)^{\mathrm{T}}&\text{for}\,0.9\leqslant x_{1}\leqslant 1\end{cases}\] and the homogeneous slip boundary with \(\beta=1\) and \(\gamma=10\) is enforced on the other three sides. In Figure 2, we observe the velocity streamlines for \(Re=1\) and \(Re=500\), confirming the expected behavior and aligning with the results reported in [10]. 
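For completeness, the piecewise lid velocity above may be imposed, for instance, as follows (legacy FEniCS sketch; the mixed space \(W\) is the Taylor-Hood space of the earlier sketches, and the interpolation degree is an illustrative assumption):

```python
# Sketch (legacy FEniCS): the regularized lid velocity of Test 2 imposed on the
# top boundary x2 = 1. W is the Taylor-Hood mixed space from the earlier
# sketches; the interpolation degree is an illustrative assumption.
lid_velocity = Expression(
    ("x[0] < 0.1 ? 10.0*x[0] : (x[0] > 0.9 ? 10.0 - 10.0*x[0] : 1.0)", "0.0"),
    degree=1)

top = CompiledSubDomain("on_boundary && near(x[1], 1.0)")
bc_lid = DirichletBC(W.sub(0), lid_velocity, top)   # velocity BC on the lid
```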
Secondly, we consider the unsteady Navier-Stokes equations with the slip boundary condition and apply the VMS-LES approach with Nitsche in order to validate the scheme at high Reynolds numbers. Figure 3 shows the velocity streamline plots at Reynolds numbers \(Re=1000\) and \(Re=5000\) at final time \(T=35\) with time step \(\Delta t=0.035s\). As the Reynolds number increases, the primary vortex migrates towards the center of the cavity and becomes more concentrated, and the flow becomes unpredictable in this region.

Figure 3: Test 2: Velocity streamlines for \(Re=1000\) and \(Re=5000\) with mesh = 32 \(\times\) 32 at \(T=35\) with \(\Delta t=0.035s\)

Figure 2: Test 2: Velocity streamlines for \(Re=1\) and \(Re=500\) with mesh = 32 \(\times\) 32

### Test 3: Flow past a circular cylinder

This example is based on a standard three-dimensional CFD benchmark problem: flow past a circular cylinder. The geometrical setting of the domain is taken from [1], and the computational mesh is depicted in Figure 4. In this problem, no-slip boundary conditions are imposed on all the lateral walls of the box, while a do-nothing boundary condition is imposed at the outflow plane. On the surface of the cylinder we impose the homogeneous slip condition with \(\beta=1\) and \(\gamma=10\). Finally, the inflow condition is given by \[\mathbf{u}_{D}:=\left(\frac{16U_{m}\sin(\pi t/8)x_{2}x_{3}(H-x_{2})(H-x_{3})}{H^{4}},0,0\right)^{T}\] with \(U_{m}:=2.25\) m/s and \(H=0.41\) m. The Reynolds number is given by the formula \(Re=\frac{UD}{\nu}\), where \(U\) represents the average velocity of the fluid imposed on the inflow boundary and \(D\) corresponds to the diameter of the cylinder. Our objective is to represent the behavior of the fluid velocity at high Reynolds numbers. The numerical solutions of the streamlines of the fluid at different Reynolds numbers, namely \(1000\), \(10\,000\), and \(50\,000\), are observed at time \(T=1\) with time step \(\Delta t=0.1s\) in Figures 5, 6 and 7. The isovalues of the pressure are presented at Reynolds number \(50\,000\) in Figure 8. The solution exhibits oscillations at larger times; specifically, the tail of the flow behind the obstacle develops the typical oscillations.

## 7 Conclusion

In this paper, we present two main contributions. Firstly, we analyze Nitsche's method for the stationary Navier-Stokes equations on Lipschitz domains under minimal regularity assumptions. Our analysis provides a robust formulation for implementing slip (i.e., Navier) boundary conditions on arbitrarily complex boundaries. We establish the well-posedness of the discrete problem using the Banach-Nečas-Babuška and the Banach fixed-point theorems under standard small-data assumptions. Additionally, we provide optimal convergence rates for the approximation error. Secondly, we propose a Variational Multiscale Large Eddy Simulation (VMS-LES) stabilized formulation, which enables the simulation of incompressible fluids at high Reynolds numbers. Finally, we perform three numerical tests: the first one validates the theoretical results of the Nitsche scheme, the second one is a benchmark problem that demonstrates the consistency of our scheme for both steady and unsteady formulations at arbitrary Reynolds numbers, and the third test shows the behavior of the fluid flowing around a cylinder at high Reynolds numbers.

## Acknowledgments

AB was supported by the Ministry of Education, Government of India (MHRD).
NAB was supported by the ANID Grant _FONDECYT de Postdoctorado N\({}^{\circ}\) 3230326_.

## Data Availability

Enquiries about data availability should be directed to the authors.

## Declarations

**Conflict of interest** The authors have not disclosed any competing interests.

Figure 8: Test 3: Isovalues of the pressure at Re = 50 000 at \(T=1\) with \(\Delta t=0.1\)

Figure 7: Test 3: Fluid velocity streamline tubes at Re = 50 000 at \(T=1\) with \(\Delta t=0.1\)
2303.09235
Is the Earth's magnetic field a constant? a legacy of Poisson
In the report he submitted to the Académie des Sciences, Poisson imagined a set of concentric spheres at the origin of the Earth's magnetic field. It may come as a surprise to many that Poisson as well as Gauss both considered the magnetic field to be constant. We propose in this study to test this surprising assertion, first evoked by Poisson (1826). First, we will present a development of Maxwell's equations in the framework of a static electric field and a static magnetic field in order to draw the necessary consequences for the Poisson hypothesis. In a second step, we will see if the observations can be in agreement with Poisson (1826). To do so, we have chosen to compare 1) the polar motion drift and the secular variation of the Earth's magnetic field, 2) the seasonal pseudo-cycles of day length together with those of the sea level recorded by different tide gauges around the globe and those of the Earth's magnetic field recorded in different magnetic observatories. We then propose a mechanism, in the spirit of Poisson, to explain the presence of the 11-year cycle in the magnetic field. We test this mechanism with observations and finally we study closely the evolution of the g10 coefficient of the IGRF over time.
J-L. Le Mouël, F. Lopes, V. Courtillot, D. Gibert, J-B. Boulé
2023-03-16T11:19:32Z
http://arxiv.org/abs/2303.09235v2
# Is the Earth's magnetic field a constant? a legacy of Poisson ###### Abstract In the report he submitted to the Academie des Sciences, Poisson imagined a set of concentric spheres at the origin of the Earth's magnetic field. It may come as a surprise to many that Poisson as well as Gauss both considered the magnetic field to be constant. We propose in this study to test this surprising assertion for the first time evoked by Poisson (1826). First, we will present a development of Maxwell's equations in the framework of a static electric field and a static magnetic field in order to draw the necessary consequences for the Poisson hypothesis. In a second step, we will see if the observations can be in agreement with Poisson (1826). To do so, we have chosen to compare 1) the polar motion drift and the secular variation of the Earth's magnetic field, 2) the seasonal pseudo-cycles of day length together with those of the sea level recorded by different tide gauges around the globe and those of the Earth's magnetic field recorded in different magnetic observatories. We then propose a mechanism, in the spirit of Poisson, to explain the presence of the 11-year in the magnetic field. We test this mechanism with observations and finally we study closely the evolution of the g10 coefficient Introduction The birth of geomagnetism as a science can be dated as August 8, 1269: on that day Petrus Peregrinus wrote three letters ([1]; see _f.i._ Courtillot and Le Mouel [2]) during the siege of the city of Lucera in the Italian region of Puglie. The letters can be considered as the first scientific article on geomagnetism, 331 years before the famous De Magnete by Gilbert ([3]). Expressed in modern terms, Peregrinus wrote that the Earth had a magnetic field with a dipolar structure and that some rocks or minerals were magnetized. One could carve a sphere out of magnetite, pierce a hole through its center and it would oscillate around the northward direction, a thought experiment that announced the compass. In the three centuries that unfurled since then, a lot has been discovered: the variation of inclination with latitude, the daily, annual and other periodic variations ([4, 5, 6]), irregular variations such as events linked to solar activity, the secular variation and its sudden jerks ([7, 8, 9]). In order to handle an ever increasing data base, magnetic indices (_eg_ aa, Dst, Kp, _etc_...) were introduced (_eg_[10, 11, 12, 13, 14, 4, 15, 16, 17, 6]). It was found that quasi periodical variations of the magnetic field ([4, 5, 6]), and also sun spots ([18]) followed a [19] power law with exponent -5/3. In a series of papers that started with Le Mouel in 1984 ([20]) and continued with Jault et al. ([21]), and Jault and Le Mouel ([22, 23, 24]), these authors found that the trends of (1) magnetic secular variation, (2) polar motion and (3) length of day were strongly correlated. They proposed to explain these observations with a coupling mechanism in which flow in a cylinder tangent to the core and the rotation axis exchange torques at the core-mantle boundary. The solution for flow on the cylinder (see [24], system 6) is the same as that generated by an internal gravitational wave in a rotating fluid (_eg_[25]) also known as Proudman ([26]) flow. But this mechanism encountered serious difficulties with the orders of magnitude of physical parameters such as CMB topography. 
Also, the torque exerted by the fluid pressure and the electromagnetic torque were too weak to validate the model, as concluded by Jault and Le Mouel ([24]). Actually, the model of a cylinder tangent to the core is very close in spirit to that envisioned by Poisson ([27]). In the report he submitted to the Academie des Sciences, prior to an oral communication, Poisson imagined a set of concentric spheres in place of a cylinder. Poisson ([27]) was the first scientist to describe the magnetic field as a series of spherical harmonics, a decade before Gauss did ([28]), Part 5, chapter 1, 'Allgemeine Theorie des Erdmagnetismus'). He also invented a technique to measure the absolute value of the horizontal component of the magnetic field ([29, 30]), seven years before Gauss ([31]) did. It may come as a surprise to many that Poisson ([27]) as well as Gauss ([28] both considered the magnetic field to be constant. This was an axiomatic basis for the development into spherical harmonics. There are very clear statements to this effect in the writings of both scientists. In Poisson ([27]) page 49, one reads (our translation from the French): "_We will assume that the hollow sphere be magnetized under the influence of a force that be the same in magnitude and direction for all its points, such as the magnetic action of Earth, for instance_". And in page 54:"_Since time does not enter these formulae, a consequence is that, after the first instants of rotation, that we did not mention, the action of the rotating sphere on a given point will be constant in magnitude and in direction_". Gauss ([28]), part 5, chapter 1, paragraph 2, page 6, Gauss writes (our translation from the German): "_...magnetism consists only in galvanic currents (that is constant currents) that persist in the smallest parts of the bodies..._". Gauss develops his theory very quickly (pages 18 to 23, paragraphs 14 to 27), without any physical proof. And his mathematical proof is exactly that found in Legendre ([32]) or Laplace([33]) for the gravitational field. In contrast, the 130 pages of Poisson ([27]) memoir are devoted both to the full physical and mathematical proofs of the magnetic field description. Given that Poisson ([27]) work precedes and is more complete than that of Gauss ([28]), it is only fair to recognize that the former was the first to develop the magnetic field in spherical harmonics and to state that this magnetic field was constant. We do not propose to follow, as Poisson ([27]) did, the description of a magnetic field based on Maupertuis ([34]) principle of least action, but rather to pursue our previous presentation of the laws of gravitation Lopes et al. ([35]) following Lagrange ([36]) and extend it to the case (and consequences) of the constancy of the electric and magnetic fields, giving their full physical meaning to the moments. This is the aim of section 2. In section 3 we try to explain the variations that one can measure within the frame of Poisson ([27]) paradigm. We confront the previous theoretical developments with modern observations in section 4 and conclude in section 5. ## 2 On the constancy of the magnetic field ### Some consequences of Maxwell's equations One can learn a lot from a Lagrangian approach to the derivation of Maxwell's equations. 
Following Maupertuis ([34]), one only needs to know the action of a moving charged particle in an electromagnetic (**EM**) field, associated with the action of its interaction with that field, to derive the first pair of equations : \[\mathrm{rot}\mathbf{E}=-\frac{1}{c}\frac{\partial\mathbf{H}}{ \partial t} \tag{1a}\] \[\mathrm{div}\mathbf{H}=0 \tag{1b}\] Adding the action of field \(\mathbf{EM}\), one obtains the second pair: \[\mathrm{rot}\mathbf{H}=\frac{1}{c}\frac{\partial\mathbf{E}}{ \partial t}+\frac{4\pi}{c}\mathbf{j} \tag{1c}\] \[\mathrm{div}\mathbf{E}=4\pi\rho \tag{1d}\] Equations (1a) to (1d) link in a symmetrical way the magnetic field (\(\mathbf{H}\)) to the electric field (\(\mathbf{E}\)). \(c\) is the velocity of the charged particle \(\rho\) associated to current density \(\mathbf{j}\). It is important to note that, without knowing the action of \(\mathbf{EM}\), one has access to important properties : the space component, which is actually the field H, is conserved (1b). By analogy to Euler's (continuity1) equation, "magnetic charges" do not exist. Equation (1a) implies that as soon as \(\mathbf{H}\) varies with time, a field \(\mathbf{E}\) that is perpendicular (rot) and in quadrature with \(\mathbf{H}\) is created (instantly hence the term \(1/c\)). The second pair of equations is fully symmetrical. Equation (1d) implies that there exists an "electric charge" (\(\rho\)) that locally deforms the field \(\mathbf{E}\). In a vacuum (1d) has exactly the same physical meaning as (1b). But one must add a current density to the time variation of \(\mathbf{E}\) to propagate the field \(\mathbf{H}\) (1c). A side note on (1c) also known as Maxwell-Ampere equation. The classical understanding is that magnetic fields can be generated in two different ways, either by electrical currents (Ampere's theorem), or by time changes of field \(\mathbf{E}\), or the sum of both. The Lagrangian approach clarifies the picture. An \(\mathbf{EM}\) field is defined by its 4-vector potential \(A_{i}(=\varphi,\mathbf{A})\), where \(\varphi\) is the time component (called the scalar potential, linked to \(\mathbf{E}\)) and \(\mathbf{A}\) the space component (called the vector potential and linked to \(\mathbf{H}\)). Charges that move in the field must obey the same decomposition ; one therefore introduces a 4-vector current density \(\vec{j}(=c\rho,\mathbf{j})\), with a scalar charge density (\(\rho\)) found in (1d) and a vector current density (\(\mathbf{j}\)). Footnote 1: The continuity equation from fluid mechanics The \(\mathbf{EM}\) field described by the Maxwell equations must be in one of the three following forms: electrostatic, magne tostatic or a propagating field (wave propagation) that will not be discussed further in this paper. ### The electrostatic field Equations (1a) and (1d) reduce to: \[\text{div}\textbf{E}=4\pi\rho \tag{2a}\] \[\text{rot}\textbf{E}=0 \tag{2b}\] **E** derives from scalar potential (\(\varphi\)): \[\textbf{E}=-\text{grad}\ \varphi\] leading to the Poisson equation: \[\text{div}(\text{grad})\ \varphi=\Delta\varphi=-4\pi\rho \tag{2c}\] In a vacuum (\(\rho=0\)), the scalar potential verifies the Laplace equation: \[\Delta\ \varphi=0\] The field produced by a point charge (\(e\)) will be directed along the vector having the charge as one of its extremities. **E** is a radial field. The absolute value of **E** depends only on the distance \(R\) to \(e\). 
Applying the divergence theorem to (2a): \[\text{div}\textbf{E}\longmapsto\ \iiint_{V}\text{div}\textbf{E}\ dV=\iiint_{S} \textbf{E}.d\textbf{S}\] The flux of **E** across a spherical surface with radius \(R\) centered on \(e\) is \(4\pi R^{2}\textbf{E}\) and also equals \(4\pi e\) from Gauss's theorem, \[\oint\textbf{E}d\textbf{f}=4\pi\int edV.\] Finally, in vector form: \[\textbf{E}=\frac{e\textbf{R}}{R^{3}} \tag{2d}\] Thus, the field produced by a point charge is inversely proportional to the square of the distance to \(e\) (Coulomb's law). The potential associated to this field is: \[\varphi=\frac{e}{\textbf{R}} \tag{2e}\] and for a system of charges: \[\varphi=\sum_{i}\frac{e_{i}}{\textbf{R}_{i}} \tag{2f}\] Let us observe this field (2f) at a distance that is large compared to the charge system's dimension, that is far enough so that their relative motions can be considered as constant (in the sense of Lagrangian mechanics). This allows one to use the concept of "moment". Let us choose a coordinate system whose origin lies within the charge system and let \(r_{i}\) be their respective vector radii. The total observed potential at point \(R_{o}\) is: \[\varphi=\sum\frac{e_{a}}{|\textbf{R}_{o}-\textbf{r}_{i}|} \tag{3a}\] For \(R_{o}\gg r_{i}\) and thanks to the generalized Maclaurin expansion given by [32], we can develop (3a) in a series of powers of \(\frac{r_{i}}{R_{o}}\), using the first order formula: \[f(\textbf{R}_{o}-\textbf{r})\approx f(\textbf{R}_{o})-\textbf{r}\ \text{grad}f( \textbf{R}_{o})\] thus (3a) becomes: \[\varphi=\frac{\sum e_{i}}{\textbf{R}_{o}}-\sum e_{i}\textbf{r}_{i}\text{grad }\frac{1}{\textbf{R}_{o}} \tag{3b}\] The sum \(\textbf{d}=\sum e_{i}\textbf{r}_{i}\) is the dipolar moment of the charge system. By analogy with a system of masses, this dipolar moment is the mathematical equivalent to the tensor of moments of inertia of order 2 in geodesy. If the sum of charges is zero, the dipolar moment does not depend on the choice of the origin (\(\textbf{r}^{{}^{\prime}}=\textbf{r}+\textbf{k}\)), \(k\) constant, \(\sum e_{i}=0\), then \(\textbf{d}^{{}^{\prime}}=\sum e_{i}\textbf{r}^{{}^{\prime}}_{i}=\sum e_{i} \textbf{r}_{i}+\textbf{k}\sum e_{i}=\textbf{d}\)). The potential at large distances can be written: \[\varphi=-\mathbf{d}\mathbf{v}\frac{1}{\mathbf{R}_{o}}=\frac{\mathbf{d}\mathbf{R}_{o }}{R_{o}^{3}} \tag{3c}\] and: \[\mathbf{E} =-\text{grad}\varphi\ \ \ (=(\mathbf{d}\mathbf{v})\nabla\frac{1}{ \mathbf{R}_{o}})\] \[=-\text{grad}\frac{\mathbf{d}\mathbf{R}_{o}}{R_{o}^{3}}=-\frac{1} {R_{o}^{3}}\text{grad}(\mathbf{d}\mathbf{R}_{o})-(\mathbf{d}\mathbf{R}_{o}) \text{grad}\frac{1}{R_{o}^{3}}\] \[=-\frac{\mathbf{d}}{R_{o}^{3}}\nabla\mathbf{R}_{o}-\frac{\mathbf{ R}_{o}}{R_{o}^{3}}\nabla\mathbf{d}-\mathbf{d}\mathbf{R}_{o}\nabla\frac{1}{R_{o}^{3}}\] \[=-\frac{\mathbf{d}}{R_{o}^{3}}-0+\frac{3\mathbf{d}\mathbf{R}_{o }}{R_{o}^{3}}\] \[=\frac{3(\mathbf{d}\mathbf{d})\mathbf{n}-\mathbf{d}}{R_{o}^{3}}\] where \(\mathbf{n}\) is the unit vector oriented towards \(R_{o}\). At large distances, the potential is inversely proportional to the square of distance and \(\mathbf{E}\) to its cube. \(\mathbf{E}\) is axially symmetrical about \(\mathbf{d}\). 
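This far-field behaviour is easy to illustrate numerically. The short Python script below (the charge values, positions and observation points are arbitrary choices) compares the exact potential (3a) of a neutral pair of charges with the dipole approximation (3c):

```python
# Quick numerical check (Python / NumPy) that, for a neutral pair of charges,
# the exact potential (3a) approaches the dipole potential (3c) at large
# distance. Charge values, positions and observation points are arbitrary.
import numpy as np

charges = [(+1.0, np.array([0.0, 0.0, +0.1])),
           (-1.0, np.array([0.0, 0.0, -0.1]))]        # total charge is zero
d = sum(e * r for e, r in charges)                    # dipole moment d

theta = 0.3                                           # angle between d and R_o
for R in (1.0, 10.0, 100.0):
    R0 = R * np.array([0.0, np.sin(theta), np.cos(theta)])
    phi_exact = sum(e / np.linalg.norm(R0 - r) for e, r in charges)   # (3a)
    phi_dip = np.dot(d, R0) / np.linalg.norm(R0) ** 3                 # (3c)
    print(R, abs(phi_exact - phi_dip) / abs(phi_exact))
# The relative difference decreases rapidly with R, as expected from the
# multipole expansion.
```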
In a plane where the direction of \(\mathbf{d}\) is that of the \(z\) axis, the Cartesian components of \(\mathbf{E}\) are: \[E_{z}=d.\ \frac{3\text{cos}^{2}\theta-1}{R_{o}^{3}},\quad E_{z}=d.\ \frac{3\text{sin} \theta\text{cos}\theta}{R_{o}^{3}} \tag{3d}\] and the radial and tangential components are: \[E_{r}=d.\ \frac{2\text{cos}\theta}{R_{o}^{3}},\quad E_{\theta}=-d.\ \frac{\text{sin} \theta}{R_{o}^{3}} \tag{3e}\] \(\theta\) being the angle between \(z\) and \(\mathbf{R}_{o}\). We note that equations (3d) and (3e) are the same as those of the components of a dipolar magnetic field with d being replaced by \(\frac{\mu_{0}}{4\pi}(\text{eg}\) Le Mouil in [37], chapter 26, page 40, system 24). As done by [33], one can always develop the scalar potential \(\varphi\) as a sum of contributions in ascending powers of \(\frac{1}{\mathbf{R}_{0}}\), the term \(\varphi^{(n)}\) being proportional to \(\frac{1}{\mathbf{R}_{o}^{n+1}}\), \[\varphi=\varphi^{(0)}+\varphi^{(1)}+\varphi^{(2)}+\ldots \tag{4a}\] The first term \(\varphi^{(0)}\) is determined by the sum of all charges or masses for Laplace ([33]), for whom this term can never be zero. As we have just seen, when the sum of electric charges is zero, one is led to the electrostatic components (3d) and (3e). The second term, \(\varphi^{(1)}\) is the dipolar one, determined by its dipolar moment d. One can continue the development in a Legendre ([32]) series. The next term would be: \[\varphi^{(2)}=\frac{1}{2}\sum e_{x}x_{j}\frac{\partial^{2}}{\partial X_{j} \partial X_{j}}\frac{1}{R_{o}} \tag{4b}\] where the \(x\) coordinates are the components of \(\mathbf{r}\) and \(X\) those of \(\mathbf{R}_{o}\). We note that: \[\Delta\frac{1}{R_{o}}\equiv\delta_{ij}\frac{\partial^{2}}{\partial X_{i} \partial X_{j}}\frac{1}{R_{o}}=0\] One can then write (4b) as: \[\varphi^{(2)}=\frac{1}{2}\sum e(x_{i}x_{j}-\frac{1}{3}r^{2}\delta_{ij})\frac{ \partial^{2}}{\partial X_{i}\partial X_{j}}\frac{1}{R_{o}}\] The tensor \(D_{ij}=\sum e(3x_{i}x_{j}-r^{2}\delta_{ij})\) is the quadrupolar moment of the charge system. Thus: \[\varphi^{(2)}=\frac{D_{ij}}{6}\frac{\partial^{2}}{\partial X_{i}\partial X_{j }}\frac{1}{R_{o}} \tag{4c}\] or, as done for the dipolar term in (3c): \[\frac{\partial^{2}}{\partial X_{i}\partial X_{j}}\frac{1}{R_{o}}=\frac{X_{i} X_{j}}{R_{o}^{3}}-\frac{\delta_{ij}}{R_{o}^{3}}\] and since \(\delta_{ij}D_{ij}=D_{ia}=0\), we have: \[\phi^{(2)}=\frac{D_{ij}n_{i}n_{j}}{2R_{o}^{3}} \tag{4d}\] \(n_{i}\) and \(n_{j}\) are the unit vectors starting from \(\mathbf{R}_{o}\) and oriented along the two axes of the quadrupolar moment. The eigenvalues of the tensor are such that \(D_{ii}=0\) and are therefore linked by: \[D_{xx}=D_{gg}=-\frac{1}{2}D_{zz}\] If we write \(D\) for component \(D_{zz}\), the quadrupolar poten tial becomes, \[\varphi^{(2)}=\frac{D}{4\pi R_{o}^{3}}(3\cos^{2}\theta-1)=\frac{D}{2R_{o}^{3}} \mathcal{P}_{2}(\cos\,\theta) \tag{4e}\] in which one introduces the \(\mathcal{P}_{2}\) Legendre ([32]) polynomial. One can generalize this construction; the \(l\)-order term is determined by a tensor of order \(l\), the \(2^{l}\)-polar moment. The mathematical development above is independent of the physical problem (gravity, magnetism, electricity) one is interested in. 
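The quadrupolar moment and the potential (4d) lend themselves to the same kind of numerical illustration (a sketch with made-up charges chosen so that the monopolar and dipolar terms vanish; the values are illustrative, not from this work):

```python
import numpy as np

# Made-up charge system with zero total charge and zero dipolar moment,
# so that the quadrupolar term is the leading contribution; arbitrary units.
charges = np.array([1.0, 1.0, -1.0, -1.0])
positions = np.array([[0.2, 0.0, 0.0],
                      [-0.2, 0.0, 0.0],
                      [0.0, 0.2, 0.0],
                      [0.0, -0.2, 0.0]])

# Quadrupolar moment D_ij = sum e (3 x_i x_j - r^2 delta_ij); it is traceless.
D = np.zeros((3, 3))
for e, r in zip(charges, positions):
    D += e * (3.0 * np.outer(r, r) - np.dot(r, r) * np.eye(3))
assert abs(np.trace(D)) < 1e-12

# Quadrupolar potential phi^(2) = D_ij n_i n_j / (2 R_o^3) at a distant point (4d).
R_o = np.array([10.0, 5.0, 7.0])
R = np.linalg.norm(R_o)
n = R_o / R
phi_quad = n @ D @ n / (2.0 * R**3)

# Exact potential for comparison (monopolar and dipolar terms vanish by construction).
phi_exact = sum(e / np.linalg.norm(R_o - r) for e, r in zip(charges, positions))
print(phi_exact, phi_quad)
```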
It serves to illustrate the fundamental result from Legendre ([32]) regarding the theory of gravitational attraction of masses as generalized by Laplace ([33]): as far as one observes from far enough the sources, the inverse of distance (\(\frac{1}{\mathbf{R}_{o}-\mathbf{r}}\)) is given by: \[\frac{1}{|\mathbf{R}_{o}-\mathbf{r}|}=\frac{1}{\sqrt{\mathbf{R}_{o}^{2}+ \mathbf{r}^{2}-2\mathbf{r}\mathbf{R}_{o}\mathrm{cos}\,\chi}}=\sum_{l=0}^{ \infty}\frac{\mathbf{r}^{l}}{\mathbf{R}_{o}^{l+1}}\mathcal{P}_{l}(\cos\chi) \tag{5a}\] Let us introduce the pairs of spherical angles \(\Theta\), \(\Phi\) and \(\theta\), \(\varphi\) formed respectively by vectors and with the given coordinate axes, and apply the addition theorem of spherical functions: \[\mathcal{P}_{l}(\cos\chi)=\sum_{m=-l}^{l}\frac{(l-|m|)!}{(l+|m|)!}\mathcal{P} _{l}^{\mathrm{int}}(\cos\Theta)\mathcal{P}_{l}^{\mathrm{int}}(\cos\theta)e^{- i\omega(\Phi-\varphi)} \tag{5b}\] were \(\mathcal{P}_{l}^{\mathrm{int}}\) are associated Legendre polynomials. Let us also introduce the spherical functions: \[\mathcal{Y}_{lm}(\theta,\varphi)=(-1)^{m}i\sqrt{\frac{(2l+1)(l-m)!}{4\pi(l+m)! }}\mathcal{P}_{l}^{m}(\cos\theta)e^{im\varphi},\quad m\geq 0. \tag{5c}\] Integrating (5b) and (5c) in (5a), one finally obtains the expression for the inverse of a distance on the sphere: \[\frac{1}{|\mathbf{R}_{o}-\mathbf{r}|}=\sum_{l=0}^{\infty}\sum_{m=-1}^{l}\frac {l^{\prime}}{R_{o}^{l+1}}\frac{4\pi}{2l+1}\mathbf{Y}_{lm}^{\prime}(\Theta, \Phi)\mathcal{Y}_{lm}(\theta,\varphi) \tag{5d}\] Developing each term in equation (3a), one obtains the expression for the term of order \(l\) of the potential: \[\varphi^{(l)}=\frac{1}{R_{o}^{l+1}}\sum_{m=-l}^{l}\sqrt{\frac{4\pi}{2l+1}} \mathcal{Q}_{m}^{(l)}\mathcal{Y}_{(lm)}^{\prime}(\Theta,\Phi) \tag{5e}\] where the \(2l+1\) quantities \(\mathcal{Q}_{m}^{l}\) constitute the \(2^{l}\)-polar moment of the system of charges, defined by: \[\mathcal{Q}_{m}^{l}=\sum_{i}e_{i}r_{i}^{l}\sqrt{\frac{4\pi}{2l+1}}\mathcal{Y} _{lm}(\theta_{i},\varphi_{i}) \tag{5f}\] At this point, let us underline why we have taken the trouble to recall this (at least in large part) classical derivation, which is likely taught in all graduate and even undergraduate physics programs. In this section, we have seen that in the case of a static field \(\mathbf{E}\), Coulomb's law (2e) imposes itself and the field is radial. In the case of a system of charges, if one is too close to the system the interactions of the charges _forbid one to use the Lagrangian concept of moment_. Thus one must remain _far from the system_. But as found by Legendre ([32]) and Laplace ([33]) when attempting to define the shape of the attraction field of masses, it is seen that the \(\mathbf{E}\) field involves the same constraints, _ie_ is _electrostatic_. It is only because we are with a static field that the notion of the _inverse of a distance_ takes its full meaning and that we can develop it into _spherical harmonics_. We can now undertake the same analysis in the case of the magnetic field. ### The magnetostatic field As we have seen above, Maxwell's equations (1b) and (1c) imply that the magnetic field \(\mathbf{H}\), created by charges in finite motion, remaining in a finite region of space (1b), whose impulses always retain finite values, has a stationary character that we wish to analyze further. 
The two equations are now: \[\text{div}\ \mathbf{H}=0 \tag{6a}\] \[\text{rot}\ \mathbf{H}=\frac{4\pi}{c}\mathbf{j} \tag{6b}\] The vector potential \(\mathbf{A}\) associated to \(\mathbf{H}\) is defined by \(\text{rot}\ \mathbf{A}=\mathbf{H}\). Carried into (6b) it becomes: \[\text{grad}\ \text{div}\mathbf{A}-\Delta\mathbf{A}=\frac{4\pi}{c}\mathbf{j}\] \(\mathbf{A}\) being defined in a non unequivocal way, one can impose an arbitrary condition, such as \(\text{div}\ \mathbf{A}=0\). The previous line then becomes the Poisson equation: \[\Delta\mathbf{A}=-\frac{4\pi}{c}\mathbf{j} \tag{6c}\] (6c) is analogous to (2c), charge density \(\rho\) being replaced by current density \(\frac{\mathbf{j}}{c}\). By analogy with the electrical potential, we can write: \[\mathbf{A}=\frac{1}{c}\int\frac{\mathbf{j}}{R}dV \tag{6d}\] where \(R\) is the distance from observation point to volume element \(dV\). The integral in (6d) can be replaced by a sum and the current density by \(\rho\mathbf{v}\). By analogy with the scalar potential (3a), the vector potential becomes: \[\mathbf{A}=\frac{1}{c}\sum\frac{e_{i}\mathbf{v}_{i}}{R_{i}} \tag{6e}\] This is how charges are introduced in a vector linked to the magnetic field, without risking the mistake of writing that the magnetic field derives from a scalar potential. As done above for the electrostatic field, one can calculate the effect of the moving charges in a reference system with its origin within the charge distribution, with the same notations for vectors \(\mathbf{r}_{i}\) and distance \(\mathbf{R}_{o}\). (6e) becomes: \[\mathbf{A}=\frac{1}{c}\sum\frac{e_{i}\mathbf{v}_{i}}{|\mathbf{R}_{o}-\mathbf{ r}_{i}|} \tag{7a}\] The main difference between scalar and vector potentials is that for \(\mathbf{E}\) one only has the effect of fixed charges or motion as a rigid block, whereas for \(\mathbf{H}\) what counts is the uniform velocity of charges imposed by (1b). This is the reason why Poisson [27]'s title is: 'Du magnetisme en mouvement' (of magnetism in motion). And this is the main reason why one cannot write a physical description of a magnetic field as a series of spherical harmonics. However, some remarkable consequences can be derived. For instance, one can always write in a development as a Legendre ([32]) series analogous to (3b), to first order: \[\mathbf{A}=\frac{1}{cR_{o}}\sum e\mathbf{v}-\frac{1}{c}\sum e\mathbf{v}( \mathbf{r}\nabla\frac{1}{R_{o}})\] One can write \(\sum e\mathbf{v}=\frac{d}{dt}\sum e\mathbf{r}\), but the mean value of the derivative varying in a finite interval is 0. \[\mathbf{A}=-\frac{1}{c}\sum e\mathbf{v}(\mathbf{r}\nabla\frac{1}{R_{o}})= \frac{1}{cR_{o}^{3}}\sum e\mathbf{v}(\mathbf{r}\mathbf{R}_{o}) \tag{7b}\] Note that \(\mathbf{v}=\hat{\mathbf{r}}\), and since \(\mathbf{R}_{o}\) is a constant vector, \[\sum e(\mathbf{R}_{o}\mathbf{r})\mathbf{v}=\frac{1}{2}\frac{d}{dt}\sum e \mathbf{r}(\mathbf{R}_{o}\mathbf{r})+\frac{1}{2}\sum e[\mathbf{v}(\mathbf{r} \mathbf{R}_{o})-\mathbf{r}(\mathbf{v}\mathbf{R}_{o})]\] Carrying this expression in (7b), the mean value of the first term is again 0, thus: \[\mathbf{A}=\frac{1}{2cR_{o}^{3}}\sum e[\mathbf{v}(\mathbf{r}\mathbf{R}_{o})- \mathbf{r}(\mathbf{v}\mathbf{R}_{o})] \tag{7c}\] One recognizes a vector product in (7c). Let us introduce the magnetic moment \(\mathfrak{m}=\frac{1}{2c}\sum e\mathbf{r}\times\mathbf{v}\). 
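As an aside, the magnetic moment just introduced is straightforward to evaluate numerically (a sketch with illustrative values for the charge, radii and angular velocity; Gaussian-type units are assumed, and none of the numbers come from this work):

```python
import numpy as np

c = 2.99792458e10          # speed of light in cm/s (Gaussian-type units assumed)
e = 4.8e-10                # illustrative charge (statcoulomb)
omega = 1.0e3              # illustrative angular velocity of the rotation, rad/s

# Made-up charges in rigid rotation about the z axis.
radii = np.array([1.0, 2.0, 3.0])          # cm
angles = np.array([0.0, 2.1, 4.2])         # rad
r = np.stack([radii * np.cos(angles),
              radii * np.sin(angles),
              np.zeros_like(radii)], axis=1)
v = np.cross(np.array([0.0, 0.0, omega]), r)     # v = omega x r for a rigid rotation

# Magnetic moment  m = (1/2c) sum_i e_i r_i x v_i
m = (e / (2.0 * c)) * np.cross(r, v).sum(axis=0)
print(m)    # directed along +z, as expected for a rotation about the z axis
```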
Equation (7c) becomes: \[\mathbf{A}=\nabla\frac{1}{R_{o}}\times\mathfrak{m}=\frac{\mathfrak{m}\times \mathbf{R}_{o}}{R_{o}^{3}} \tag{7d}\] Whereas \(\frac{1}{R_{o}}\) verifies Laplace's equation, its modification by the rotational of \(\mathfrak{m}=\frac{1}{2c}\sum e\mathbf{r}\times\mathbf{v}\) does not. With the expression for the vector potential (7d), one can derive the magnetic field. Given the formula rot (\(a\nabla\)) \(b+a\) div \(b-b\) div \(a\), one finds: \[\mathbf{H}=\text{rot}\,\,\text{m}\times\frac{\mathbf{R}_{o}}{R_{o}^{3}}=\text{ div}\frac{\mathbf{R}_{o}}{R_{o}^{3}}-(\text{m}\nabla)\frac{\mathbf{R}_{o}}{R_{o}^{3}}\] Since with \(\mathbf{R}_{o}\neq 0\): \[\text{div}\frac{\mathbf{R}_{o}}{R_{o}^{3}}=\mathbf{R}_{o}\text{grad}\frac{1}{R _{o}^{3}}+\frac{1}{R_{o}^{3}}\text{div}\mathbf{R}_{o}=0,\] and: \[(\text{m}\nabla)\frac{\mathbf{R}_{o}}{R_{o}^{3}}=\frac{1}{R_{o}^{3}}(\text{m} \nabla)\mathbf{R}_{o}+\mathbf{R}_{o}(\text{m}\nabla\frac{1}{R_{o}^{3}})=\frac {\text{m}}{R_{o}^{3}}-\frac{3\mathbf{R}_{o}(\text{m}\mathbf{R}_{o})}{R_{o}^{ 5}},\] then: \[\mathbf{H}=\frac{3\mathbf{n}(\text{mm})-\text{m}}{R_{o}^{3}}, \tag{7e}\] where \(\mathbf{n}\) is the unit vector along direction \(\mathbf{R}_{o}\). If the ratio of mass to charge is the same for all charges in the system, then: \[\text{m}=\frac{1}{2c}\sum e\mathbf{r}\times\mathbf{v}=\frac{e}{2mc}\sum m \mathbf{r}\times\mathbf{v}\] Finally, if all velocities of all charges are such that \(v\ll c\), then \(m\mathbf{v}\) is the impulsion \(\mathbf{p}\) of the charge and: \[\text{m}=\frac{e}{2mc}\sum\mathbf{r}\times\mathbf{p}=\frac{e}{2mc}\mathcal{M}\] (8a) where \[\mathcal{M}=\sum\mathbf{r}\times\mathbf{p}\] is the kinetic moment of the system. Here the ratio of magnetic to mechanical moment is a constant. Let us now consider a system of charges placed in a constant, external magnetic field \(\mathbf{H}\). The (time averaged) force exerted on the system is the Lorentz force. Following Lorentz ([38]), it is known as Maxwell's 5ft equation and given by \(\mathcal{F}=\sum\frac{e}{c}\mathbf{v}\times\mathbf{H}=\frac{d}{dt}\sum\frac{e }{c}\mathbf{r}\times\mathbf{H}\), that is zero. On the other hand, the time average of the moment of forces is \(\mathcal{K}=\sum\frac{e}{c}\mathbf{r}\times(\mathbf{v}\times\mathbf{H})\)_which is not 0_. Writing explicitly the double vector product, \[\mathcal{K}=\sum\frac{e}{c}\{\mathbf{v}(\mathbf{r}\mathbf{H})-\mathbf{H}( \mathbf{v}\mathbf{r})\}=\sum\frac{e}{c}\{\mathbf{v}(\mathbf{r}\mathbf{H})- \frac{1}{2}\mathbf{H}\frac{d}{dt}\mathbf{r}^{2}\}\] one obtains simply: \[\mathcal{K}=\text{m}\times\mathbf{H}\] The Lagrangian of a system of charges placed in an external, constant and uniform magnetic field is: \[\mathcal{L}_{H}=\sum\frac{e}{c}\mathbf{Av}=\sum\frac{e}{2c}(\mathbf{H}\times \mathbf{r})\mathbf{v}=\sum\frac{e}{2c}(\mathbf{r}\times\mathbf{v})\mathbf{H} \tag{8b}\] Introducing the magnetic moment (8b) becomes simply, \[\mathcal{L}_{H}=\text{n}\mathbf{H} \tag{8c}\] By analogy, in a uniform electric field, the Lagrangian of a system with 0 total charge and a dipolar moment includes the term: \[\mathcal{L}_{E}=\mathbf{d}\mathbf{E} \tag{8d}\] Let us consider a system of charges undergoing a finite motion in a field \(\mathbf{E}\) with central symmetry, due to a motionless particle. Let us shift from a motionless system of coordinates to a system undergoing a uniform rotation about an axis passing through the motionless particle. 
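Before turning to the rotating coordinate system, the dipolar field (7e) just derived can be checked against the vector potential (7d) by taking a numerical curl (a minimal sketch; the moment and the evaluation point are arbitrary illustrative values):

```python
import numpy as np

m = np.array([0.0, 0.0, 1.0])            # arbitrary magnetic moment

def A(R):
    """Vector potential of a point dipole, A = m x R / R^3 (eq. 7d)."""
    Rn = np.linalg.norm(R)
    return np.cross(m, R) / Rn**3

def H_dipole(R):
    """Dipolar field, H = (3 n (m.n) - m) / R^3 (eq. 7e)."""
    Rn = np.linalg.norm(R)
    n = R / Rn
    return (3.0 * n * (m @ n) - m) / Rn**3

def curl(F, R, h=1e-5):
    """Numerical curl of a vector field F at point R (central differences)."""
    J = np.zeros((3, 3))
    for j in range(3):
        dR = np.zeros(3); dR[j] = h
        J[:, j] = (F(R + dR) - F(R - dR)) / (2.0 * h)
    return np.array([J[2, 1] - J[1, 2], J[0, 2] - J[2, 0], J[1, 0] - J[0, 1]])

R = np.array([1.0, 2.0, 1.5])            # arbitrary evaluation point
print(H_dipole(R))
print(curl(A, R))     # rot A reproduces the dipolar field to numerical precision
```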
The velocity \(\mathbf{v}\) of the particle in the new system is linked to its velocity \(\mathbf{v}\) in the old system by \(\mathbf{v}^{{}^{\prime}}=\mathbf{v}+\Omega\times\mathbf{r}\) where \(\mathbf{r}\) is the particle's vector radius and \(\Omega\) the angular velocity of the rotating coordinate system. In the fixed system, the Lagrangian of charges is: \[\mathcal{L}=\sum\frac{m\mathbf{v}^{{}^{\prime}2}}{2}-\mathcal{U},\] where \(\mathcal{U}\) is the potential energy of charges in the external field E. In the new system the Lagrangian becomes: \[\mathcal{L}=\sum\frac{m}{2}(\mathbf{v}+\Omega\times\mathbf{r})^{2}-\mathcal{U}.\] If the ratio \(\frac{e}{m}\) of charge to mass is the same for all particles, and if one writes: \[\Omega=\frac{e}{2mc}\mathbf{H}\] then, for sufficiently small values of \(\mathbf{H}\) so that one can neglect terms in \(\mathbf{H}^{2}\), the Lagrangian takes the form: \[\mathcal{L}=\sum\frac{m\nu^{2}}{2}+\frac{1}{2c}\sum e(\mathbf{H}\times\mathbf{ r})\mathbf{v}-\mathcal{U} \tag{8e}\] It is remarkable that the two Lagrangians in (8b) and (8c) are the same. In summary, the Lagrangian of charges in finite motion in an electric field produced by a motionless rotating particle (8c) is the same as the Lagrangian of a system of charges placed in a constant and uniform magnetic field (8b). In slightly different, more readable terms, the behavior of a system of charges with the same \(\frac{e}{m}\) ratio executing a finite motion in a field \(\mathbf{E}\) with central symmetry and a weak and uniform field \(\mathbf{H}\) is equivalent to the behavior of the same charge system in field \(\mathbf{E}\) with respect to a uniformly rotating coordinate system with angular velocity \(\Omega\). This is Larmor's theorem (_cf_[39], chapter VI, [40]). Let us now evaluate the variation of the mean kinetic moment \(\mathcal{M}\). The variation of the mean kinetic moment \(\mathcal{M}\) is equal to the moment \(\mathcal{K}\) of forces applied to the system. Thus \(\mathcal{K}=\mathfrak{m}\times\mathbf{H}=\frac{d\mathcal{M}}{dt}\). If the ratio \(\frac{e}{m}\) is the same for all the system's particles, then \(\mathcal{M}\) is proportional to \(\mathfrak{m}\) (_cf_8a), and: \[\frac{d\mathcal{M}}{dt}=-\Omega\times\mathcal{M}. \tag{8f}\] Vector \(\mathcal{M}\), and thus vector \(\mathfrak{m}\), both rotate with angular velocity \(-\Omega\) about the field direction; its absolute value and the angle it makes with respect to the field direction are constant. ## 3 Some further remarks on section 2 Most if not all modern studies of the geomagnetic field have it varying in time, keeping its first order dipolar configuration and'ready' to be analyzed with spherical harmonics. But a time variable \(\mathbf{E}\) or \(\mathbf{H}\) field implies wave propagation, hence physics described by Helmholtz equations, not Legendre. Let us describe what happens with a variable field in the equations of section 2. The only link between the field geometry and the physics of the problem, that is the e charges and their positions in the reference system, is (5f) that we recall here: \[Q_{m}^{l}=\sum_{i}e_{i}r_{i}^{l}\sqrt{\frac{4\pi}{2l+1}}\mathcal{Y}_{lm}( \theta_{i},\varphi_{i})\] One needs to visualize the shapes of spherical harmonics, that are "physically" represented by the coefficients \(\mathcal{Y}_{lm}\). 
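These shapes can also be evaluated directly. The following sketch computes the axial (\(m=0\)) harmonics \(\mathcal{Y}_{l,0}\propto\mathcal{P}_{l}(\cos\theta)\) for \(l=1\) to \(4\) with SciPy (the normalization convention used here is the usual real one and may differ from (5c) by constant factors):

```python
import numpy as np
from scipy.special import eval_legendre

def Y_l0(l, theta):
    """Axial (m = 0) spherical harmonic, Y_l0 = sqrt((2l+1)/(4 pi)) P_l(cos theta)."""
    return np.sqrt((2 * l + 1) / (4 * np.pi)) * eval_legendre(l, np.cos(theta))

theta = np.linspace(0.0, np.pi, 200)
for l in (1, 2, 3, 4):                 # the four axial multipoles shown in Figure 1
    y = Y_l0(l, theta)
    nodes = np.count_nonzero(np.diff(np.sign(y)))   # number of nodal circles, equal to l
    print(l, nodes)
```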
Figure 1 displays the first four axial multipoles, that is the eigenvectors \(\mathcal{Y}_{1,0}\), \(\mathcal{Y}_{2,0}\), \(\mathcal{Y}_{3,0}\) and \(\mathcal{Y}_{4,0}\) (from left to right and top to bottom) corresponding to the so-called Gauss coeffi Figure 1: Spherical harmonics (from left to right and top to bottom) \(\mathcal{Y}_{1,0}\), \(\mathcal{Y}_{2,0}\), \(\mathcal{Y}_{3,0}\) and \(\mathcal{Y}_{4,0}\). cients \(g_{1,0}\), \(g_{2,0}\), \(g_{3,0}\) and \(g_{4,0}\). The eigenvectors are a constant of the problem, as long as this representation retains a physical meaning. For example, a dipolar magnetic field must always have its principal axis aligned with the axis going from the origin to the geographic North pole. In this paradigm, a variable field would involve only modifications of the space-time physics of term \(e^{\prime}r_{i}^{I}\) in the sum (5f). But this constraint cannot be satisfied because either: 1. the position term \(r_{i}^{I}\) of each moving particle fluctuates with time, so that the Legendre-Laplace condition (05a) is not satisfied any more, that is the inverse distance \(\dfrac{1}{|\mathbf{R}_{o}-\mathbf{r}|}\) is no more a natural solution of the Laplacian. One would need to introduce time, but then the Laplacian would have to be replaced by a Dalembertian, i.e. a different problem; 2. or it is the number and/or quality of the charges that would change with time. But then the nature of the core would change with time and one would need to find a physical mechanism that would explain how the field intensity could decrease (as is the case at present), yet could have increased and even reversed in the past. Poisson [27] reasoning was simpler. As early as page 7 of his memoir, above his first equation, he linked gravitation and magnetism. So did Gauss ([28]), and this was much developed by Heaviside ([41]). This was more than a simple physical analogy. Recall that modern geophysicists use Talwani'algorithms ([42, 43]) to determine the mass (respectively dipolar) distributions based on microgravimetric (resp. magnetic) measurements using the same equation. Poisson ([27]) writes: _'The differential volume element, corresponding to point M' [The magnetic volume], will have \(h^{3}dxd\xi d\eta\) as its expression; we will write \(\mu^{i}h^{3}dxd\xi d\eta\) for the amount of free fluid it contains, \(\mu^{i}\) being positive or negative according to this fluid being boreal or austral. This coefficient will be a function of \(\chi,\xi,\eta\) depending on the distribution of the two fluids inside the magnetic elements. If they are moving, \(\mu^{i}\) will vary with time; but the total quantity of free fluid belonging to the same element must always be zero, so we will always have:_ \[\int\mu^{i}d\chi d\xi d\eta=0\qquad(I)\] _The integral extending to the entire volume of the magnetic element'._ This short quotation is from the very beginning of Poisson ([27]) derivation; in more modern terms div \(\mathbf{H}\) = 0. What enters a volume element and what leaves it is constant: the field is stationary. From Lagrange's standpoint ([36]), Poisson ([27]) implies that at the origin of the magnetic field is a rather undeformable object, a solid body consisting of a set of electromagnetic point sources whose respective distances do not vary, or that can be considered as such at the distance at which its effects are observed. 
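In modern notation this condition is \(\mathrm{div}\,\mathbf{H}=0\); as a small numerical illustration (a sketch with an arbitrary moment and arbitrary evaluation points, not values from this work), the dipolar field of section 2.2 is indeed divergence-free away from its source:

```python
import numpy as np

m = np.array([0.0, 0.0, 1.0])            # arbitrary dipole moment

def H_dipole(R):
    """Dipolar field H = (3 n (m.n) - m) / R^3 (eq. 7e)."""
    Rn = np.linalg.norm(R)
    n = R / Rn
    return (3.0 * n * (m @ n) - m) / Rn**3

def divergence(F, R, h=1e-5):
    """Finite-difference divergence of a vector field F at point R."""
    div = 0.0
    for j in range(3):
        dR = np.zeros(3); dR[j] = h
        div += (F(R + dR)[j] - F(R - dR)[j]) / (2.0 * h)
    return div

for R in ([1.0, 0.5, 2.0], [3.0, -1.0, 0.2], [0.1, 0.1, 5.0]):   # arbitrary points
    print(divergence(H_dipole, np.array(R)))    # ~0 everywhere away from the origin
```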
We note that the tensor of a quadrupolar electrical moment \(D_{ij}=\sum e(3x_{i}x_{j}-r^{2}\delta_{ij})\) is similar to the order 2 inertia moment tensor of a rotating solid body \(I_{ik}=\sum m(x_{i}^{2}\delta_{ik}-x_{i}x_{k})\). One then concludes that the space-time variations of electric charges that would not result in a constant field would make _it impossible to construct either an electric or a magnetic moment_. ## 4 Reconciling modern observations with Poisson's theory ### On the drift of the magnetic dipole In sections 2 and 3, we have shown that the only way a magnetic field can be written as the sum of multipolar potentials is that (similar to masses composing a solid rotating body) charges generating the electric and magnetic fields should move uniformly in space and time. In other words the field must be stationary. We also know that the field intensity fluctuates and that this secular variation is morphologically similar to the drift of the rotation pole, called the Markowitz (or Markowitz-Stoyko, [44, 45]) drift. The annual oscillations of the field are morphologically similar to those of the length of the day ([21, 22, 23, 24]). One can reconcile Poisson ([27]) and observations by picturing a dipole that oscillates about geographical North, this oscillation sharing the same excitation (forcings) that act on polar motion. Let us first illustrate this idea. In Figure 2 we show the eigenvector \(\mathbf{y}_{1,0}\) associated with the Gauss coefficient \(g_{1,0}\). **CLF** stands for the Chambon-La-Foret magnetic observatory. Point P is one of the elements of [27]'s magnetic volume composing the dipole. As seen in section 2, the dipole action at **CLF** is characterized by the distance **P-CLF** on the sphere. If the dipole is tilted (Figure 2, right) this distance changes, and so do the coordinates of **CLF** in the two dipole reference systems (Figure 2, left and right). Following Lagrange ([36]), a number of authors (_eg_[46, 47, 48, 49, 50, 35]) have shown how and how much astronomical forces influence Earth's rotation, in the same way its own weight perturbs the rotation of a spinning top, through an exchange of angular moments. The same astronomical forces perturb the number of sunspots, _ie_ solar activity (_eg_[51, 52, 53, 54]). Since [33], we know that masses at the surfaces of planets can re-organize under the influence of stresses acting on the rotation pole. The Liouville-Euler system of linear differential equations of first order runs this re-organization (_cf_[55], section 3 for more details). From sections 2 and 3, moving charges may be considered as a moving fluid in rotation about the dipole's symmetry axis.Any large scale fluid motions must correspond to a block rotation about the Earth's rotation axis, as shown by Laplace and Poincare. There are no other possibilities of natural motion at first order. Inside as well as at its surface the pattern of fluid motion must be the same (_cf_[33, 56]; see [57, 58], for some illustrations). Possibly the longest series of magnetic measurements is that of declination D compiled for Paris (France) by Alexandrescu _et al_. ([59, 60]). The series starts in 1541 but we select values from 1781 onward, when the sampling becomes more regular. The Brest sea level data start in 1807. The polar motion data are defined as the modulus of the horizontal displacement of the pole in a plane tangential to the Earth (\(m_{1}+i\ m_{2}\)); it starts in 1846 and therefore sets the length of our analyzes. 
We have applied Singular Spectrum Analysis (**SSA**, _eg_[61]) in order to extract the trend (this sub-section) and the annual and semi-annual components (next subsection) from the three time series of sea-level at Brest, magnetic field at **CLF** and length of day. **SSA** uses the mathematical properties of descending order diagonal matrices (Hankel, Toeplitz matrices, _eg_[62]) and their orthogonalization by singular value decomposition (**SVD**, _eg_[63]). In Figure 2(a), we superimpose the **SSA** trends of declination in **CLF** in red, sea level at Brest in blue and polar motion in black. We have dealt with the sea-level series in Le Mouel et _al_. ([64]), in which we compared variations of the sea-level trends with the variations of Markowitz-Stoyko polar drift. We have studied many aspects of polar motion in [65, 48, 66]. The time derivatives of the three trends are shown in Figure 2(b). We have also used the long data set of Stephens and Morrison ([67]) and Gross ([68]) for _lod_ Figure 2: Eigen vector \(\mathbf{y}_{1,0}\) associated with Gauss coefficient \(g_{1,0}\): to the left the dipole is axial and compare its trend with the first derivative of magnetic declination (Figure 2(c)). The trends of the three derivatives display very similar patterns. We have shown before that this pattern is linked to the ephemerids of Uranus ([64, 48]). The similarity between the derivative of the trend of the rotation pole and sea-level is expected: as the Earth shifts, its fluid envelope shifts as a solid in the same way. The two are almost in phase. For the magnetic field, there is a \(\sim\)20-year phase lag. It is in quadrature with polar motion around 1930, and catches up in the 1960s. This vindicates the mechanism advocated by Le Mouel et _al._ ([20, 21]) and Jault et _al._ ([22, 23, 24]), that is a transfer of moment at the core-mantle boundary. The phase lag would be due to the roughness of the **CMB**. Laplace ([33]) has shown theoretically that lod and pole motion are linked by a first order derivative operator. We have verified this with observations in Lopes et al. ([66]). Figure 2(c) compares the (5 yr smoothed) mean value of the derivative of declination in Paris to the corresponding (5 yr smoothed) mean value of length of day. The two series of mean values of solid Earth motions are in quadrature and show a 60 yr oscillation ([66], Figure 04). This 60 yr period is clearly present in the 2-humps pattern of lod as well as in the derivative of declination (blue curve, Figure 2(c)). As shown in section 2, the mechanical and magnetic moments that are at the origin of the geomagnetic field are linked linearly (equation 7(c)). On another hand, the mechanical moment and the polar rotation axis are linked through a first order time derivative (equation 7(d)). In the spirit of Larmor's theorem, we assume that the two phenomena (rotation and magnetism) are linked by the same external forcing ; thus the former should be compared to the derivative of the latter. In the lower part of Figure 2(c), the mean value of lod has been offset by 60 years, leading to an almost perfect match of the patterns. We have not (yet?) been able to explain that offset. 
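The trend extraction used above can be sketched in a few lines (an illustrative SSA implementation applied to a synthetic series; the window length and the grouping of eigentriples are arbitrary choices for the example, not those used for the observatory data):

```python
import numpy as np

def ssa_reconstruct(x, window, components):
    """Basic SSA: embed x in a trajectory (Hankel) matrix, take its SVD, and
    reconstruct the series from selected eigentriples by diagonal averaging."""
    n = len(x)
    k = n - window + 1
    X = np.column_stack([x[i:i + window] for i in range(k)])      # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in components)
    rec = np.zeros(n)
    counts = np.zeros(n)
    for i in range(window):               # Hankelization (anti-diagonal averaging)
        for j in range(k):
            rec[i + j] += Xr[i, j]
            counts[i + j] += 1
    return rec / counts

# Synthetic monthly series: slow trend + annual cycle + noise.
t = np.arange(600) / 12.0
x = 0.05 * t**2 + np.sin(2 * np.pi * t) + 0.3 * np.random.randn(t.size)

# Candidate groupings of eigentriples; in practice the grouping is chosen by
# inspecting the singular spectrum and the reconstructed components.
trend = ssa_reconstruct(x, window=120, components=[0])
annual = ssa_reconstruct(x, window=120, components=[1, 2])
```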
One cannot envision that the magnetic field (declination) variations would be due to intensity variations of components X, Y and Z since as seen in section 2 the field must be Figure 3: Comparison between the trends variations of the magnetic declination in Paris, of the mean sea level recorded in Brest, of the polar motion drift and of the length of day. constant (following Poisson ). Figure 2(b) is but an extension to sea-level of an observation made by Le Mouel [20]. ### On the forced quasi-cycles of the magnetic field Next, we extend the observation by Jault et Le Mouel ([24]) of a link between annual and semi-annual oscillations of _lod_ and the magnetic field. We have applied **SSA** in order to extract the annual and semi-annual components from the same three time series (sea-level at Brest, magnetic field at **CLF** and length of day; this sub-section and Figures 3(a), 3(b), 3(c)). Given their phase and amplitude modulation, the superimposition of the semi-annual and annual components generates fringe patterns (Figure 3(a), 3(b), 3(c)). These are better visualized in the enlarged Figure 4(a), 4(b) showing the 1980-1990 decade. In Figure 4(a) the two components generate a characteristic pattern with two humps that correlate between magnetic field and length of day, with constant phases. In Figure 4(b) one of the two humps for sea-level is subdued and looks more like a shifting step. The double hump pattern has already been recognized by [24], their Figure 1. Based on Figures 4 and 5, one can propose that the annual forcing of the double humps is driven by polar motion/rotation. As explained by Poisson ([27]), a moving charged fluid tends to replicate the pattern of the motion; the different processes seem to be in phase and constant. If these originate from fluid motion tangential to the core, there will be a transfer of moment according to the Liouville-Euler equations. Thus, based on Figures 4 and 5, one can propose that the annual forcing of the double humps is driven by polar motion/rotation. We next wish to check whether the same can be said of the other geomagnetic field components at **CLF**. Figure 5(a), 5(b) compares the pattern of _lod_ to those of X, Y and Z: Y behaves as X but Z does not have the two humps, only one strong maximum per year (_ie_ no semi-annual component). We have attempted to check whether the observations made with the couple "magnetic observatory-tide gauge" at **CLF** and nearby Brest could be extended to other couples. Unfortunately there are not many such couples, particularly since all data sets should be of sufficient length and quality. We have found five couples listed in Table 1 below and shown on a world map (Figure 7). 
\begin{table} \begin{tabular}{l l l} \hline \hline Magnetic observatory & \multicolumn{2}{l}{Tide gauge} \\ \hline \multirow{2}{*}{Chambon-La-Forêt (**CLF**, 2.26\({}^{\circ}\)E, 48.02\({}^{\circ}\)N)} & \multirow{2}{*}{Brest 48.38\({}^{\circ}\)N)} & (4.49\({}^{\circ}\)W, 48.38\({}^{\circ}\)N) \\ & Hartland (**HAD**, 4.48\({}^{\circ}\)W, 51\({}^{\circ}\)N) & \multirow{2}{*}{Newlyn 5.54\({}^{\circ}\)W, 50.10\({}^{\circ}\)N)} \\ Canberra & (**CNB**, 149.36\({}^{\circ}\)E, 35.32\({}^{\circ}\)S) & & \multirow{2}{*}{Newcuslte 151.78\({}^{\circ}\)E, 32.92\({}^{\circ}\)S)} \\ & Hermanus & (**HER**, 19.23\({}^{\circ}\)W, 34.43\({}^{\circ}\)S) & & \\ Kanozan & (**KNZ**, 139.95\({}^{\circ}\)E, 35.25\({}^{\circ}\)N) & & \\ \hline \end{tabular} \end{table} Table 1: List of couple “magnetic observatory-tide gauge” Figure 5: Comparison on the top, between seasonal components of lod (gray curve) and X component of the magnetic field recorded in **CLF** (red curve); on the bottom same comparison with seasonal component extracted from the Brest tide gauge (blue curve). Figure 6: Comparison of annual plus semi-annual components of all three geomagnetic components at **CLF** with those of _lod_. Figures (a)a to (c)c show the combined annual and semi-annual **SSA** components of X, Y and Z at the five magnetic observatories. These figures illustrate the different modulation ("wave") patterns associated with annual and semi-annual forcings. There is an ongoing debate as to their origins (eg [69, 70, 71, 72, 73]). Figures (a)a to (e)e allow one to compare the magnetic oscillations for X, Y and Z with the corresponding sea level oscillation, one for each observatory couple. Magnetic components X extracted from the Brest-**CLF** (Figure (a)a), Simons Bay-**HER** (Figure (b)b) and Newlyn-**HAD** (Figure (c)c) Figure 8: Time evolution of annual plus semi-annual components extracted from the 5 observatories listed in Table (1). Figure 7: Associated couples of a magnetic observatory (red diamond) and a tide gauge (blue circles). See Table 1. couples are in phase opposition with sea-level; the two other field components Y and Z are in phase with sea-level (with a small phase drift over the 40 years of the record). These three couples happen to be located on the same magnetic meridian. The same holds for the Newcastle-**CNB** couple (Figure 8(e)), with a slightly larger phase drift. For the Japanese couple Mera/**KNZ** (Figure 8(d)), X is in phase with sea-level in 1980, when Y and Z are in quadrature. After 40 years of slow drift, Z is in phase, X and Y in quadrature. Finally, for the Australian couple Newcastle/**CNB** (Figure 8(e)), X is in phase opposition, Y in phase and Z in quadrature in 1980 and the three drift respectively to opposition, quadrature and opposition in 2020. We note that in Hartland Z does not have a semi-annual component (Figure 8(c)), and it is quite small in Hermanus (Figure 8(b)). We have tested tens of potential tide gauge/magnetic observatory couples whose results are not good enough to be reported; we just note that some 80% of them have no semi-annual Z. The comparisons made in Figures 9 suggest strongly to us (though, granted, they do not prove) a link between annual variations of sea-level and the magnetic field. 
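The phase relations quoted above (in phase, phase opposition, quadrature) can be quantified, for instance, from the lag of maximum cross-correlation between two seasonal components (an illustrative sketch on synthetic annual signals, not the observatory or tide-gauge series themselves):

```python
import numpy as np

def relative_phase_days(a, b, dt_days=1.0, period_days=365.25):
    """Lag (in days) of b relative to a, from the maximum of their
    cross-correlation, wrapped into +/- half a period. Both series are
    assumed to be dominated by the same (here annual) period."""
    a = a - a.mean()
    b = b - b.mean()
    lags = np.arange(-len(a) + 1, len(a))
    cc = np.correlate(b, a, mode="full")
    lag = lags[np.argmax(cc)] * dt_days
    return ((lag + period_days / 2) % period_days) - period_days / 2

# Synthetic daily series: an annual cycle and a copy delayed by a quarter period.
t = np.arange(4 * 365)
x = np.sin(2 * np.pi * t / 365.25)
y = np.sin(2 * np.pi * (t / 365.25 - 0.25))
print(relative_phase_days(x, x))   # ~0    : in phase
print(relative_phase_days(x, y))   # ~ +91 : quadrature (a quarter of a year)
```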
The correlations are of better quality than we might have ex Figure 9: Comparison between forced components extracted from each couples observatory/tide gauge according listed on the Table (1) pected, despite differences in topography and geography in the vicinity of the gauges. The same can be said of the (**SSA** determined) trends. We are in a position to strengthen our physical understanding of this link. In section 2 we have seen that if a measured magnetic field has its origin in the uniform motion of charges in rotation about our planet's symmetry axis, the quantity and quality of charges cannot vary with time (or the dipole would vanish). Then there is a link between the magnetic and mechanical moments. It is expressed by equation (8a): \[\mathfrak{m}=\frac{e}{2mc}\sum\mathbf{r}\times\mathbf{p}=\frac{e}{2mc}\mathcal{M}\] Sea-level and magnetic variations are linked through polar motion (_eg_[20], and section 2), here the length of day. Polar motion is forced by the Earth's revolution about the Sun. If the field is constant and the link exists, the ratio \(\frac{e}{2mc}\) must be constant. The (geo-)physical consequences generated by these moments should be to first order the same for all torques around the globe, and the variations in amplitude of the geophysical phenomena involved should be proportional. In Table 2 below, we evaluate the amplitudes of the **SSA** annual components of sea level and magnetic components and their ratios for the five couples of stations of Figure 7. In the 1980-2022 period, these ratios are the same at 7mm/nT for **CLF**, **HAD** and **HER**, almost the same at 8 mm/nT for **CNB** and not so different at 10 mm/nT for **KNZ**. Given the complexity of sea-level physics and geomagnetism, there was no a priori reason why the ratios should be constant, unless equation (08) holds, which seems to be the case. This result vindicates Poisson's approach ([27]): fluid motions in the core are similar to those at the surface; because they are charged, the motifs of variations in the Earth's magnetic field are those of sea-surface, and atmosphere. ### On the 11-yr cycle and the magnetic field The 11-yr cycle is one of the better known variations of the Earth's magnetic field. It is considered as the same Schwabe ([74]) cycle that is found in sunspot numbers. In the minds and logic of Laplace ([33]), Lagrange ([36]) and Poisson ([27]), planets are responsible, through exchanges in moment, for a number of astrophysical and geophysical phenomena. The origin of this 11-yr cycle has likely its source in the revolution and moment of Jupiter. This moment is directly connected to variations of distances to Jupiter in the solar system. Jupiter exerts the largest torque of the 8 planets (and also larger than the Sun's, that is motionless at the time scales we are interested in Lopes et _al._[66]). This torque acts on (or modulates, or forces) sunspots (_eg_[54]) as well as on polar rotation (_eg_[65, 48]). In what we will call the Poisson - Le Mouel's paradigm, the fluid's rotation generates the field; this same rotation must then force solar activity in the form of sunspots. In the case of fluid mechanics, and as shown in Courtillot and Le Mouel ([75]) figure 45 and in Le Mouel _et al._ ([6, 76]), a law of natural turbulence appears, a [19] power law with exponent -5/3. 
This law is found in sunspots as well as in variations of geomagnetic intensity (the latter since as far \begin{table} \begin{tabular}{l|l|l} \hline \hline couple observatory- & ratio sea level / magnetic component & order of magnitude \\ tide gauge & & \(\sim\)100 mm / 15 nT & \(\sim\)7 mm/nT \\ **CLF**/Brest, between 1980 and 2000 & \(\sim\)100 mm / 15 nT & \(\sim\)7 mm/nT \\ **HAD**/Newlyn, between 1980 and 2005 & \(\sim\)100 mm / 15 nT & \(\sim\)7 mm/nT \\ **CNB**/Newcaste & V, between 1980 and 2022 & \(\sim\)80 mm / 10 nT & \(\sim\)8 mm/nT \\ **HER**/Simons & bay, between 1980 and 2000 & \(\sim\)100 mm / 14 nT & \(\sim\)7 mm/nT \\ **KNZ**/Mera, between 1980 and 1995 & \(\sim\)140 mm / 14 nT & \(\sim\)10 mm/nT \\ \hline \end{tabular} \end{table} Table 2: List of couple ”magnetic observatory-tide gauge” back as at least 1 Myr, see [75]). The angular coordinate \(\theta\) and \(\dfrac{d\psi}{dt}\), that define the location of the pole of rotation on the sphere are the solutions of a system of differential equations, one for \(\theta\), that describes pole motion, the other for \(\dfrac{d\psi}{dt}\) that describes length of day. One is the derivative of the other (see [50]). As is the case for all derivative operators, it amplifies high frequency components. Le Mouel et _al._ ([77]) have shown that the 11-yr component is a major component of length of day, whereas Lopes et _al._ ([65]) have shown that it was a minor one of polar motion. In order to retain the homogeneity of our data sets and their temporal resolution, we have integrated the 11-yr quasi-cycle extracted from length of day by **SSA** (Figure 10, black curve, bottom two rows). In the top row of Figure 10, we have superimposed the 11-yr component from sunspots (pink) with that of \(aa\). The two curves appear to be in quadrature: this is checked by offsetting the Schwabe cycle forward by exactly 11/4 yr (second row). The explanation for this observation is the following: the torque exerted by Jupiter acts directly on sunspots, while the aa index is the difference between two antipodal observatories. Thus the \(aa\) index is a derivative operator. This is likely why the 11-yr cycle is prominent in \(aa\) but minor in the X, Y and Z components. The same accounts for the phases of aa and lod: we integrate the 11-yr component of lod (black curve in 3rd row of Figure 10) and see it is in phase with aa. And finally, according to [36], Jupiter does act on the Earth's rotation, as shown by Lopes et _al._ ([48]) and the Jupiter-Earth distance (blue curve bottom row in Figure 10). This last result provides a good illustration of equation (8d): \[\dfrac{d\mathcal{M}}{dt}=-\Omega\times\mathcal{M}.\] Polar motion as well as length of day are linked to \(\Omega\) (_eg_ Lambeck [55], section 03). We have also seen in theory (and checked in the example above) that the variation in the magnetic field is linked to moment \(\mathcal{M}\). We could say that magnetic field components (X,Y,Z) are to polar motion (\(m_{1}\),\(m_{2}\)) what \(aa\) indices are to length of day. The link between sections 2 and 3 above is the length of day. ### On the international geomagnetic reference field Since the work of Gauss ([28]), the decomposition of the geomagnetic field into spherical harmonics has become normal practice ("routine"). We recall however as a caveat the fact that an electric field or a gravity field can be decomposed in SH because these fields are constant and their elementary sources all have the same sign. 
Such is not the case with a magnetic field, unless it is constant also (sections 2 and 3). In principle, the SH decomposition of a magnetic field does not have physical significance. Nevertheless a spherical harmonic decomposition of the International Geomagnetic Reference Field (**IGRF**) is published every five years ([78]). Given the fact that there is no magnetic monopole, the first source term (aka "Gauss coefficient") is the axial dipole \(g_{1,0}\), an imaginary source at the Figure 10: Eleven year quasi-cycles extracted by SSA from the geomagnetic index as (red; top 3 rows), the sunspot series (pink; top 2 rows), the length of day (black; bottom 2 rows) and the ephemeris of Jupiter (blue; bottom row marked as distance of Earth from Jupiter). center of the Earth. The other terms of the Fourier expansion on the base functions \(\cos\) and \(\sin\) are written as \(g_{l,m}\) and \(h_{l,m}\). Figure 10(a) shows the monotonous decay of the **IGRF** axial dipole \(g_{1,0}\) since 1900. With Poisson's ([27]) and Le Mouel's ([20]) hypothesis in mind, and given some of the results in the previous sections of this paper (similar behavior of the annual **SSA** components of sea level, rotation axis and magnetic field), it is natural to compare the behavior of the intensity of the **IGRF** dipole with polar motion (\(m_{1}\),\(m_{2}\)) or the equivalent parameter \(\theta\) of [33]. This is done in Figure 10(a). The time variations of \(g_{1}^{0}\) and \(\theta\) are indeed very close. The first derivative of polar motion is in quadrature with the first derivative of \(g_{1}^{0}\) (Figure 10(b)) and in phase (opposition) with the second derivative of \(g_{1}^{0}\) (Figure 10(c)).The variation of dipole intensity follows the variation of the angle \(\theta\) between the rotation axis of the planet and that of the dipole. Another example is given by the similarities between magnetic declination in Paris and rotation axis, and between their derivatives (Figures 2(a) and 2(b)). If the field is constant, we have seen that there is a link between the magnetic moment \(m\) and the kinetic moment \(\mathcal{M}\) \[m=\frac{e}{2mc}\sum\mathbf{r}\times\mathbf{p}=\frac{e}{2mc}\mathcal{M}\] \[\frac{d\mathcal{M}}{dt}=-\Omega\times\mathcal{M}\] We have checked this relation, using observations (Table 2) We have also shown that, in the case of a rotating system of charges about the axis \(Z\) (with velocity \(\Omega\)), this relation becomes: \[\Omega=\frac{e}{2mc}\mathbf{H}\] This does express the link between the variations of the Earth's rotation and the magnetic field \(\mathbf{H}\). Since \(\Omega\) is connected to \(m_{1}\) and \(m_{2}\) through the Liouville-Euler equations, then \(\mathbf{H}\) is also connected to \(m_{1}\) and \(m_{2}\). But the key coordinate is \(m_{3}\) that is the length of day. This is the reason why the second derivation of to \(g_{1,0}\) agrees better with the Markowitz-Stoyko drift, which is as we have seen the second derivative of _lod_. Figure 11: Comparison between (a) the evolution of \(g_{1,0}\) from ([78]) and the Markowitz-Stoyko drift and (b-c) their corresponding time derivatives. Conclusion Our aim in this paper was to assess Poisson's pioneering work ([27]) on the nature and origin of the Earth's magnetic field. These works started with the scientist's recognition that the field had to be constant, which may seem awkward to the modern physicist. Gauss ([28]) also considered that the magnetic field was constant. 
Le Mouil [20] observed that there was a strong morphological link between the secular variation of the field (\(\dfrac{d\mathbf{H}}{dt}\)) and the drift of the rotation pole (polar motion \(m_{1}\) and \(m_{2}\)), and another link between the forced oscillations of the magnetic field \(\mathbf{H}\) and the length of day \(\dot{m}_{3}\) (_ie_ rotation velocity). Based on these observational facts, [20] proposed a model, a rotating cylinder, that was in many ways identical to Poisson's sphere, though he did not state whether the field was constant or not. Poisson's proof that indeed the magnetic field had to be constant involved theoretical physics and mathematics but only a limited set of observations. Based on a large number of observations, Le Mouil ([20]) and collaborators built quasi "experimentally" the same theory as Poisson in a suite of papers. These papers are used significantly in the literature; it is not clear whether they are compatible with the complex dynamo models that are often preferred today. We have attempted in this paper to return to the original sources and to reconstruct the development of geomagnetism, using "modern" language. Starting from Maxwell's equations, we have derived the equations for the electrostatic and magnetostatic fields (section 2). We come to the same equations as Poisson with an important difference: Poisson chose the Lagrangian approach to gravity ([32, 36, 33]), to formally derive the equations for the magnetic field (_cf_[35]). To scientists of this epoch, there was no difficulty in reasoning in magnetism in the same "classical" way one reasoned in gravity (as a side note, this led to a beautiful piece by Heaviside [41], on the deep analogy between magnetism and gravity). We have pursued a more classical approach, and in section 2 we propose a synthesis with comments of the equations of electromagnetism that can be found in most physics graduate textbooks. One of the points we wish to emphasize is that the magnetic field need not derive from a scalar potential to be developed into spherical harmonics. However, most geomagnetists do make this hypothesis, invoking Stokes's theorem: since magnetic measurements are made at the surface where almost no charge circulates, one can assume that the field derives from a scalar, not a vector potential. It is more physical, hence logical to obtain the spherical harmonics from the electrostatic field, then using the vector potential (equation 6c) to return to their expression for the magnetic field. We have not implemented this last step because one can perform a decomposition in spherical harmonics and write about multi-poles if and only if the motions of charges are finite and uniform, two conditions that are not met in a dynamic field. From this, we have drawn a number of consequences for the magnetic field, the main one being that the magnetic moment of the charges that generate the field and their mechanical moment (thus the motion of the rotation pole) are linked by Larmor's relation. This is in agreement with the theoretical works of Laplace, Poisson ([33, 27]) and [20]. A magnetic field can be written on a basis of spherical harmonics only if this field is constant. Section 3 has dealt with the intrinsic theoretical consequences of a constant dipolar field. 
From the ideas presented in sections 2 and 3, we have concluded that in order to satisfy all hypotheses and all theoretical results, it is sufficient that the axis of symmetry of the magnetic dipole and the rotation axis can move one with respect to the other, which is never the case with a development in spherical harmonics (schematically illustrated in Figure 2). Section 4 has presented tests of our hypotheses on obser vations and other data. In Figure 3 we have shown that sea-level variations in Brest, variations of declination in Chambon-la-Foret and Markowitz-Stoyko polar drift were essentially the same, except for the phase of declination, that we interpret as resulting from the nature of core-mantle coupling. We know from other tide-gauges [79, 64], that the Brest gauge is representative of most northern hemisphere gauges. [24] discussed the correlation and the model that would link geomagnetic secular variations and polar drift; we extend the correlation and the model to sea-level. Next, we have focused on annual and semi-annual oscillations, as did previously Jault and Le Mouel ([24]), here again bringing in the information carried by sea-level. All these cycles, regardless of whether they correspond to geomagnetic field components (X,Y,Z), length of day or sea-level, are in phase or in phase opposition. This observation falsifies the hypothesis that the field would include seasonal variations (which invoked the Russel and McPherron [71] effect), unless the effect could also explain the presence of the same cycles in the sea-level and length of day variations. We have calculated the ratios of mean amplitude of variations in these cycles of magnetic field components to associated tide gauges. Independent of local and regional geography and topography, we find these ratios to be quasi constant at \(\sim\)8mm/nT. This also agrees with the concept of a constant field (relation 8a). We have next come to testing the famous Schwabe \(\sim\)11 yr cycle. Paradoxically it is rather weak in geomagnetic field components, and quite strong in magnetic indices, such as aa. In a parallel way, the cycle is weak in polar motion yet it is one of the main components of length of day. This is readily understood in Laplace's paradigm ([33]): polar motion and length of day describe the same phenomenon but differ by one order of derivation. In the same way, aa is also a derivative operator (difference between two antipodal observatories). We have shown that the 11yr component of the magnetic field is in phase with the corresponding component of polar motion, when the latter (_lod_) has been integrated (Figure 7). In a reciprocal way, polar motion is linked to the derivative of magnetic components. Our last test has been with the secular variation of the **IGRF**. Despite the caveat that a dynamic magnetic field derives from a vector potential, not a scalar potential, hence can in principle not be developed in spherical harmonics (Poisson, Gauss), regular analyses of magnetic field models have been produced at five year intervals for years since 1900. The secular variation of that field has been monotonous and decreasing, as far as the leading axial dipole field component \(g_{1,0}\) is concerned. This behavior is parallel to that of polar motion (\(m_{1},m_{2}\), see Figure 11a) or also \(\dot{m}_{3}\). 
These two lines of observations complement our series of tests of the validity (and foresight) of Poisson's derivation of Maxwell's equations, of Larmor's equation, of the Liouville-Euler system, and, in general, of the distinction that must be made between electric and magnetic fields. The parallel behaviors of magnetism and rotational mechanics, illustrated by the Larmor formula, have been put to the test with modern observations, with success, as we have endeavored to show in this paper. These results can be complemented by the analysis of the responses at a series of "commensurate periods" that have their origin in the ballet of the (primarily Jovian) planets: these are the "forcing factors" not only of variations in sea level and of a number of geophysical, atmospheric and heliophysical processes (_eg_[52, 53, 54]), but also of the Earth's magnetic field, as we have shown in this paper.
2308.04463
Weakly Semi-Supervised Detection in Lung Ultrasound Videos
Frame-by-frame annotation of bounding boxes by clinical experts is often required to train fully supervised object detection models on medical video data. We propose a method for improving object detection in medical videos through weak supervision from video-level labels. More concretely, we aggregate individual detection predictions into video-level predictions and extend a teacher-student training strategy to provide additional supervision via a video-level loss. We also introduce improvements to the underlying teacher-student framework, including methods to improve the quality of pseudo-labels based on weak supervision and adaptive schemes to optimize knowledge transfer between the student and teacher networks. We apply this approach to the clinically important task of detecting lung consolidations (seen in respiratory infections such as COVID-19 pneumonia) in medical ultrasound videos. Experiments reveal that our framework improves detection accuracy and robustness compared to baseline semi-supervised models, and improves efficiency in data and annotation usage.
Jiahong Ouyang, Li Chen, Gary Y. Li, Naveen Balaraju, Shubham Patil, Courosh Mehanian, Sourabh Kulhare, Rachel Millin, Kenton W. Gregory, Cynthia R. Gregory, Meihua Zhu, David O. Kessler, Laurie Malia, Almaz Dessie, Joni Rabiner, Di Coneybeare, Bo Shopsin, Andrew Hersh, Cristian Madar, Jeffrey Shupp, Laura S. Johnson, Jacob Avila, Kristin Dwyer, Peter Weimersheimer, Balasundar Raju, Jochen Kruecker, Alvin Chen
2023-08-08T02:36:41Z
http://arxiv.org/abs/2308.04463v1
# Weakly Semi-Supervised Detection in Lung Ultrasound Videos ###### Abstract Frame-by-frame annotation of bounding boxes by clinical experts is often required to train fully supervised object detection models on medical video data. We propose a method for improving object detection in medical videos through weak supervision from video-level labels. More concretely, we aggregate individual detection predictions into video-level predictions and extend a teacher-student training strategy to provide additional supervision via a video-level loss. We also introduce improvements to the underlying teacher-student framework, including methods to improve the quality of pseudo-labels based on weak supervision and adaptive schemes to optimize knowledge transfer between the student and teacher networks. We apply this approach to the clinically important task of detecting lung consolidations (seen in respiratory infections such as COVID-19 pneumonia) in medical ultrasound videos. Experiments reveal that our framework improves detection accuracy and robustness compared to baseline semi-supervised models, and improves efficiency in data and annotation usage. Keywords:Weakly Supervised Learning Semi-Supervised Learning Object Detection Medical Ultrasound. + Footnote †: Corresponding author. Email: [email protected] ## 1 Introduction Despite the remarkable performance of deep learning networks for object detection and other computer vision tasks [10, 19, 2], most models rely on large-scale annotated training examples, which are often unavailable or burdensome to generate in the medical imaging domain. This is especially true for video-based imaging modalities such as medical ultrasound, where frame-by-frame annotation of bounding boxes or other localization labels is extremely time-consuming and costly, and even more so if annotations must be done by clinical experts. Reducing the annotation burden of training object detectors on medical images has been a focus of much recent work. Semi-supervised and weakly supervised approaches have been proposed to address the annotation challenge, where unlabeled or inexactly/inaccurately labeled data are used to supplement training, often in combination with a small amount of fully labeled data [13, 9, 11, 15, 21, 1]. Examples of weak supervision for object detection include point annotations [22, 4, 17, 14, 3] and image-level class labels [17, 14, 12], both of which are applied on individual image frames. However, even these methods of weak supervision may not be practical in the video domain, where hundreds or thousands of image frames requiring interpretation may be collected in a single clinical exam. In this work, we propose a weakly semi-supervised framework for training object detection models based on video-level supervision, where only a single label is provided for each video. Video-level labels represent a significantly weaker form of supervision than instance- or image-level labels but can be generated much more efficiently. Our approach extends teacher-student models adopted for semi-supervised object detection [11] to the weakly semi-supervised video-based detection task. Our main contributions are as follows: 1. We introduce a simple mechanism during teacher-student training which aggregates individual detections from the teacher (pseudo-labels) into video-level confidence predictions. This allows video-level weak supervision using any standard classification loss. 2. 
We improve the reliability of pseudo-labels generated during the mutual learning stage by introducing techniques to re-weigh pseudo-labels based on video-level weak supervision. 3. We investigate the learning dynamics between the teacher and student, and propose several improvements to the underlying teacher-student mechanism to increase training stability. These include a method to better initialize models in the "burn-in" stage and a set of adaptive updating schemes to optimize knowledge transfer bidirectionally during mutual learning. We demonstrate the effectiveness of our approach on the task of detecting lung consolidations in medical ultrasound videos, which is an important step in aiding diagnosis and management of patients with respiratory infections such as bacterial and viral pneumonia (including COVID-19 infection). Computer-aided detection of lung consolidation in medical ultrasound is a uniquely challenging problem where the appearance of pathology varies dramatically across disease types, patient populations, and training levels of the personnel acquiring images. Experimental results on a large, multi-center, clinical ultrasound dataset for lung consolidation demonstrate that the proposed framework leads to improved detection performance, robustness, and training stability compared to existing semi-supervised methods. ## 2 Related Work **Semi-Supervised Object Detection:** Semi-supervised object detection aims to utilize large amounts of unlabeled data together with a small set of labeled data. These efforts generally fall into two categories: (1) consistency regularization, which regularizes the prediction of the detector for images undergoing different augmentations [6, 7], and (2) pseudo-labeling, where a teacher model is trained on labeled data to generate pseudo-labels for unlabeled data, and a student model is then trained on both the labeled and pseudo-labeled data [11, 23, 20, 16, 18]. Unbiased Teacher (UBT) [11] is one of the state-of-the-art methods in this category. Our work is inspired by the framework of UBT, but extends the method to the weakly semi-supervised scenario, where we leverage frame-level pseudo-labels and video-level weak supervision. **Weakly Semi-supervised Object Detection:** Weakly semi-supervised object detection is usually based on instance-level weak supervision, e.g., a point on the object [22, 4, 17, 14, 3, 8], or image-level supervision, e.g., the class of the image [17, 14, 12]. Video-level labels are a significantly weaker supervisory signal than instance- or frame-level labels, since the only information provided is that the object class exists somewhere in at least one frame of the video, but in which specific frame(s) and at what location(s) is unknown. ## 3 Method The basic intuition behind our approach is to simultaneously learn from both frame-level and video-level labels to improve object detection performance. We denote a fully labeled set \(\mathcal{D}_{f}\) and a weakly labeled set \(\mathcal{D}_{w}\) as our training data. The fully labeled set \(\mathcal{D}_{f}=\{x_{i}^{f},y_{i}^{f}\}_{i=1}^{N_{f}}\) comprises a set of \(N_{f}\) frames \(x_{i}^{f}\) and their paired frame-level annotations \(y_{i}^{f}\) (i.e., the coordinates of all bounding boxes present in each frame). 
The weakly labeled set \(\mathcal{D}_{w}=\{x_{j,1:T_{j}}^{w},z_{j}^{w}\}_{j=1}^{N_{w}}\) consists of \(N_{w}\) videos \(x_{j,1:T_{j}}^{w}\) and their video-level class labels \(z_{j}^{w}\) (a single label per video of \(T_{j}\) frames, indicating whether the video contains at least one instance of the object class). Due to its high accuracy, computational efficiency, and capacity for real-time inference, YOLO-v5 [5] is adopted as the backbone detector. Non-maximum suppression (NMS) is applied on the outputs of YOLO-v5 as the final output of the detector to remove duplicate predictions. Here, we use \(c\) to denote the confidence vector (a part of YOLO-v5's output) of all predicted boxes from a frame \(x\). ### Teacher-Student Training We adopt the UBT framework from [11] for the weakly semi-supervised detection task. UBT uses two training stages: a burn-in stage for model initialization and a mutual learning stage for teacher-student training (Fig. 1). **Burn-In Stage:** The burn-in stage aims to initialize the detector in a supervised manner using the fully labeled data \(\mathcal{D}_{f}\). The original burn-in method in UBT adopts model weights after a fixed number of early training epochs. However, we observed that detection performance can vary dramatically during training. Here, we improve the training stability during burn-in by applying hierarchical exponential moving average (EMA) updates at each iteration and epoch (yellow block in Fig. 1). Specifically, the iteration-based model \(\theta_{I}\) is introduced to transfer knowledge from an initial detection model \(\theta\) after each iteration (that is, the weights of \(\theta_{I}\) are updated per batch). The epoch-based model \(\theta_{E}\) is added to transfer information from \(\theta_{I}\) after each epoch (weights updated after all batches). This is given by: \[\begin{cases}\theta_{I}\leftarrow\alpha_{i}\theta_{I}+(1-\alpha_{i})\theta,&\text{for each iteration}\\ \theta_{E}\leftarrow\alpha_{e}\theta_{E}+(1-\alpha_{e})\theta_{I},&\text{for each epoch}\end{cases} \tag{1}\] where \(\alpha_{i}\) and \(\alpha_{e}\) are the iteration- and epoch-based EMA keep rates, respectively. The EMA keep rates define a trade-off between the rate of knowledge being transferred from the preceding model versus the stability of the succeeding model. Major oscillations in model performance are seen while training \(\theta_{I}\) alone, even with a carefully selected \(\alpha_{i}\) [5]. In contrast, the addition of \(\theta_{E}\) serves to stabilize the training and results in a better-initialized detector after burn-in. Figure 1: Overview of the proposed method. **Mutual Learning Stage:** The mutual learning stage combines the fully labeled data \(\mathcal{D}_{f}\) and weakly labeled data \(\mathcal{D}_{w}\) for teacher-student training. Both the student and teacher models are initialized from the last checkpoint of \(\theta_{E}\) trained in the burn-in stage. During mutual learning, the student \(\theta_{S}\) is optimized via backpropagation using a combination of full, weak, and semi-supervised losses (see Sections 3.3 and 3.4), and the teacher \(\theta_{T}\) is updated via a gradual EMA transfer of weights from the student. Analogous to the model updates in the burn-in stage, the student is updated with an iteration-based EMA during mutual learning, while the teacher is updated with an epoch-based EMA.
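For illustration, the hierarchical EMA update of Eq. 1 can be sketched in PyTorch-style code as below; the training-loop structure, the `detection_loss` callable, and the keep-rate values are placeholders for this sketch rather than the exact implementation.

```python
import copy
import torch

@torch.no_grad()
def ema_update(target, source, keep_rate):
    # target <- keep_rate * target + (1 - keep_rate) * source  (Eq. 1)
    for p_t, p_s in zip(target.parameters(), source.parameters()):
        p_t.mul_(keep_rate).add_(p_s, alpha=1.0 - keep_rate)

def burn_in(detector, loader, optimizer, detection_loss, epochs,
            alpha_i=0.999, alpha_e=0.95):  # illustrative keep rates
    theta_i = copy.deepcopy(detector)  # iteration-based EMA model (theta_I)
    theta_e = copy.deepcopy(detector)  # epoch-based EMA model (theta_E)
    for _ in range(epochs):
        for frames, boxes in loader:   # fully labeled data D_f
            loss = detection_loss(detector(frames), boxes)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            ema_update(theta_i, detector, alpha_i)  # update theta_I per iteration
        ema_update(theta_e, theta_i, alpha_e)       # update theta_E per epoch
    return theta_e  # better-initialized detector used to start mutual learning
```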
An additional mutual refinement scheme is introduced to adaptively adjust the EMA keep rate of the teacher, as well as to conditionally allow transfer of weights back to the student (see Section 3.5). As mutual learning progresses, the accuracy and stability of pseudo-labels produced by the teacher are continuously improved, which in turn improves knowledge distillation to and from the student. At the end of the training, only the teacher \(\theta_{T}\) is kept for evaluation and deployment. ### Weakly Semi-Supervised Learning **Frame-Level Full Supervision:** For both burn-in and mutual learning, fully labeled data \(\mathcal{D}_{f}\) are used in training, with supervision by a detection loss \(\mathcal{L}_{f\_sup}\): \[\mathcal{L}_{f\_sup}=\sum_{i=1}^{N_{f}}\lambda_{coord}\mathcal{L}_{coord}( \mathcal{T}_{s}(x_{i}^{f}),y_{i}^{f})+\lambda_{conf}\mathcal{L}_{conf}( \mathcal{T}_{s}(x_{i}^{f}),y_{i}^{f}), \tag{2}\] where \(\mathcal{L}_{coord}\) is the bounding box coordinate error, and \(\mathcal{L}_{conf}\) is the binary cross-entropy loss between predicted box confidences and corresponding box labels. \(\lambda_{coord}\) and \(\lambda_{conf}\) balance the two losses. \(\mathcal{T}_{s}\) denotes the data augmentation. **Frame-Level Semi-Supervision:** In the mutual learning stage, the weakly labeled data \(\mathcal{D}_{w}\) are added to allow frame-level semi-supervision based on pseudo-labels from the teacher \(\theta_{T}\). Specifically, we generate sub-clips \(\tilde{x}_{j,1:N_{fpv}}^{w}\) by uniformly sampling \(N_{fpv}\) frames from each video \(x_{j,1:T_{j}}^{w}\) in the weak dataset. These sub-clips are fed through the teacher with reduced augmentation \(\mathcal{T}_{r}\) to obtain box predictions \(\hat{y}_{j,1:N_{fpv}}^{T}\) with confidence scores \(c_{j,1:N_{fpv}}^{T}\) from each frame. Only predicted boxes with confidence above a threshold \(\beta\) are kept as pseudo-labels. We use a second detection loss, similar to Eq. 2, to train the student \(\theta_{S}\) against the teacher's pseudo-labels: \[\mathcal{L}_{f\_semi}=\sum_{j=1}^{N_{w}}\sum_{t=1}^{N_{fpv}}\left[\lambda_{ coord}\mathcal{L}_{coord}(\mathcal{T}_{s}(x_{j,t}^{w}),\hat{y}_{j,t}^{T})+ \lambda_{conf}\mathcal{L}_{conf}(\mathcal{T}_{s}(x_{j,t}^{w}),\hat{y}_{j,t}^{ T})\right] \tag{3}\] **Video-Level Weak Supervision:** Finally, we utilize the confidence of boxes predicted by the student \(c_{j,1:N_{fpv}}^{S}\) to obtain a frame-level confidence by computing the maximum confidence among all detected boxes in the frame, i.e., \(max(c_{j,t}^{S})\). A final video-level prediction \(\tilde{c}_{j}^{S}\) is computed as the averaged frame-level confidence over the sub-clip, i.e., \(\tilde{c}_{j}^{S}=\frac{1}{N_{fpv}}\sum_{t=1}^{N_{fpv}}max(c_{j,t}^{S})\). 
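As a concrete illustration of this aggregation step, a minimal sketch is given below; the tensor layout and function name are assumptions made for the example, and only the max-over-boxes / mean-over-frames logic mirrors the description above.

```python
import torch

def video_level_confidence(box_confidences):
    # box_confidences: one 1-D tensor per sampled frame, holding the confidences
    # of all boxes predicted by the student in that frame (possibly empty).
    frame_scores = []
    for c in box_confidences:
        # frame-level confidence = max box confidence (0 if no box is predicted)
        frame_scores.append(c.max() if c.numel() > 0 else c.new_zeros(()))
    # video-level prediction = mean frame-level confidence over the sub-clip
    return torch.stack(frame_scores).mean()

# The resulting scalar can then be compared against the video label z_j^w with a
# binary cross-entropy loss (Eq. 4), e.g.:
#   loss = torch.nn.functional.binary_cross_entropy(pred.view(1), z.view(1))
```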
Here, we apply a video-level binary cross-entropy classification loss to supervise the student \(\theta_{S}\) against the video-level labels \(z_{j}^{w}\) from the weak data \(\mathcal{D}_{w}\): \[\mathcal{L}_{v\_weak}=-\sum_{j=1}^{N_{w}}\left[z_{j}^{w}log(\tilde{c}_{j}^{S})+(1-z_{j}^{w})log(1-\tilde{c}_{j}^{S})\right] \tag{4}\] **Combined Loss:** The final loss function for training the student model combines the fully supervised detection loss \(\mathcal{L}_{f\_sup}\), the frame-level semi-supervised detection loss \(\mathcal{L}_{f\_semi}\), and the video-level weakly supervised loss \(\mathcal{L}_{v\_weak}\): \[\mathcal{L}=\lambda_{f\_sup}\mathcal{L}_{f\_sup}+\lambda_{f\_semi}\mathcal{L}_{f\_semi}+\lambda_{v\_weak}\mathcal{L}_{v\_weak} \tag{5}\] with \(\lambda_{f\_sup}\), \(\lambda_{f\_semi}\), and \(\lambda_{v\_weak}\) to balance the three loss components. ### Weighted Pseudo-labels A critical factor in the effectiveness of mutual learning is the quantity and quality of pseudo-labels from the teacher \(\theta_{T}\). We propose two pseudo-label re-weighting techniques to increase the number of high-quality pseudo-labels during training. **Weakly Supervised Pseudo-label Filtering:** The first approach utilizes the weak video-level label \(z_{j}^{w}\) to filter false pseudo-labels. For a negative video (\(z_{j}^{w}=0\)), we can simply remove all pseudo-labels from every frame, which could be considered as re-weighting the pseudo-labels to \(0\). For a positive video (\(z_{j}^{w}=1\)), if no pseudo-label confidence exceeds \(\beta\), we keep the pseudo-label with the highest confidence if it exceeds a lower threshold \(\beta_{l}\), where \(\beta_{l}<\beta\), which could be considered as re-weighting the pseudo-label from \(0\) to its confidence. **Soft Pseudo-labels:** The second approach assigns a weight to each pseudo-label based on its prediction confidence. That is, we re-weigh the loss component for each pseudo-label \(\hat{y}_{j,t}^{T}\) by the square of its confidence \((c_{j,t}^{T})^{2}\) to create "soft pseudo-labels" (\(\hat{y}_{j,t}^{T}\), \(c_{j,t}^{T}\)). The semi-supervised detection loss is reformulated as: \[\begin{split}\mathcal{L}_{coord}(\mathcal{T}_{s}(x_{j,t}^{w}),\hat{y}_{j,t}^{T})&=\sum_{k}^{N_{j,t}}(c_{j,t,k}^{T})^{2}\mathcal{L}_{coord}(\mathcal{T}_{s}(x_{j,t}^{w}),\hat{y}_{j,t}^{T})_{k}\\ \mathcal{L}_{conf}(\mathcal{T}_{s}(x_{j,t}^{w}),\hat{y}_{j,t}^{T})&=\sum_{k}^{N_{j,t}}(c_{j,t,k}^{T})^{2}\mathcal{L}_{conf}(\mathcal{T}_{s}(x_{j,t}^{w}),\hat{y}_{j,t}^{T})_{k}\end{split} \tag{6}\] where \(N_{j,t}\) is the number of pseudo-labels in frame \(t\) of video \(j\). \(c_{j,t,k}^{T}\) denotes the confidence of the \(k\)-th pseudo-label in a given frame. \(\mathcal{L}_{coord}(\mathcal{T}_{s}(x_{j,t}^{w}),\hat{y}_{j,t}^{T})_{k}\) and \(\mathcal{L}_{conf}(\mathcal{T}_{s}(x_{j,t}^{w}),\hat{y}_{j,t}^{T})_{k}\) are the bounding box coordinate and confidence loss components for the \(k\)-th pseudo-label. ### Bidirectional, Adaptive, Teacher-Student Mutual Refinement The learning dynamics between the teacher and student also play a significant role in determining training stability and model robustness. First, the teacher should be updated at a sufficient rate such that it can catch up to the student before the student overfits, i.e., the EMA keep rate \(\alpha_{e}\) cannot be too large. At the same time, the teacher should be characterized by gradual changes in the training curve as opposed to rapid oscillations.
Thus, the teacher also cannot be made to update too quickly, i.e., \(\alpha_{e}\) cannot be too small. Finally, the student's training curve should not have sudden drops, for example, due to a bad training batch. Here, we introduce two additional techniques to dynamically balance the rate and direction of knowledge transfer during mutual learning: **Adaptive EMA (Student \(\rightarrow\) Teacher):** When using a fixed EMA keep rate \(\alpha_{e}\), there is a trade-off between training stability and rate of knowledge transfer from the student. Instead, we propose an adaptive EMA keep rate that is conditioned on the relative performance of the teacher and student after each training epoch. We use a sigmoid-shaped function for \(\alpha_{e}\), given by: \[\alpha_{e}=\alpha_{e,min}+(\alpha_{e,max}-\alpha_{e,min})\cdot\frac{1}{1+e^{-\tau_{0}(m_{T}-m_{S})-\tau_{1}}} \tag{7}\] where \(\alpha_{e,min}\), \(\alpha_{e,max}\), \(\tau_{0}\) and \(\tau_{1}\) are hyper-parameters defining the function shape. \(m_{T}\) and \(m_{S}\) denote teacher and student performance on a validation set according to some evaluation metric. The adaptive scheme allows \(\alpha_{e}\) to be dynamically adjusted, that is, \(\alpha_{e}\) is decreased (higher rate of knowledge transfer) as the student outperforms the teacher, and increased (lower rate of knowledge transfer) as the student underperforms compared to the teacher. **Inverse Adaptive EMA (Teacher \(\rightarrow\) Student):** To avoid sudden drops in performance by the student during mutual learning, we further introduce a mechanism which allows knowledge transfer in the reverse direction, i.e., from the teacher to the student. We design a similar sigmoidal function for the inverse EMA keep rate \(\alpha_{inv}\), given by: \[\alpha_{inv}=\begin{cases}1,&m_{T}\leq m_{S}\\ \alpha_{inv,min}+(2-2\alpha_{inv,min})\cdot\frac{1}{1+e^{-\tau_{2}(m_{S}-m_{T})}},&m_{T}>m_{S}\end{cases} \tag{8}\] where \(\alpha_{inv,min}\) and \(\tau_{2}\) are hyper-parameters of the function. Here, knowledge transfer to the student is increased (lower \(\alpha_{inv}\)) when the teacher outperforms the student, and decreased (higher \(\alpha_{inv}\)) when the teacher underperforms. ## 4 Experiments ### Experimental Settings **Data.** An extensive retrospective, multi-center clinical dataset of 7,998 lung ultrasound videos was used in this work. The data were acquired from 420 patients with suspicion of lung consolidation or other related pathology (e.g., pneumonia, pleural effusion) from 8 U.S. clinical sites between 2017 and 2020. The videos were each at least 3 seconds in length and contained at least 60 frames. 385 (fully labeled training set), 337 (validation set) and 599 (test set) videos were annotated for lung consolidation regions using bounding boxes. All data were partitioned at the subject level. The remaining 6,677 videos were annotated only for the presence or absence of lung consolidation at the video level (weakly labeled set). Annotation was carried out by a multi-center team of expert physicians with medical training in lung ultrasound. Each video was annotated by two experts and adjudicated by a third expert when a disagreement between the first two annotators occurred. **Implementation Details.** We used the PyTorch Ultralytics implementation of the YOLO-v5 object detector [5] with default training settings (Adam optimizer with learning rate of 0.001). The weights for the confidence and coordinate losses were set to \(\lambda_{conf}=1.0\), \(\lambda_{coord}=0.05\).
The weights for the frame-level fully supervised, frame-level semi-supervised, and video-level weakly supervised losses were set to \(\lambda_{f\_sup}=\lambda_{f\_semi}=1\), and \(\lambda_{v\_weak}=0.05\). For training from pseudo-labels without re-weighting, the confidence threshold was set to \(\beta=0.5\). Otherwise, hyperparameters were set to \(\beta=\beta_{l}=0.1\) when using weighted pseudo-labels. To train with a fixed EMA keep rate, we used \(\alpha_{e}=0.95\). Otherwise, when applying bidirectional adaptive EMA for mutual teacher-student refinement, the hyperparameters were set to \(\alpha_{e,min}=0.75\), \(\alpha_{e,max}=0.99\), \(\alpha_{inv,min}=0.85\), \(\tau_{0}=180\), \(\tau_{1}=3\), and \(\tau_{2}=180\). **Experiments.** We first trained a baseline YOLO-v5 detector on the fully supervised data \(\mathcal{D}_{f}\), denoted as **YOLO**. We compared the baseline to a fully supervised training experiment with hierarchical (iteration- and epoch-based) EMA training during burn-in, as proposed in Section 3.2; this is denoted by \(+\)**HE**. All subsequent experiments involving teacher-student mutual learning were initialized from the same YOLO+HE model checkpoint. We implemented the semi-supervised approach from Unbiased Teacher [11] by adding all videos from the weakly supervised dataset \(\mathcal{D}_{w}\), but without providing video-level labels (i.e., treating these as unlabeled data); this is denoted as \(+\)**Unlabeled**. Note that, to demonstrate the effectiveness of the proposed method, \(+\)Unlabeled is a derived version of Unbiased Teacher [11]: it uses the same strategy for utilizing unlabeled data while keeping the same model settings and training strategy as the other competing methods. We then introduced our proposed method of weak semi-supervision, described in Section 3.3, by including the video-level labels from \(\mathcal{D}_{w}\), denoted as \(+\)**Weak**. Finally, experiments using our methods for pseudo-label re-weighting (Section 3.4) and bidirectional, adaptive, teacher-student mutual refinement (Section 3.5) are denoted as \(+\)**Pseudo** and \(+\)**TSMR**, respectively. We compared mean Average Precision (mAP) on the validation and test sets described above, with each experiment repeated five times to assess repeatability. Experimental results are summarized in Table 1. \begin{table} \begin{tabular}{l l|l l} \hline \hline Category & Method & Validation mAP & Test mAP \\ \hline \multirow{2}{*}{Fully supervised} & YOLO [5] & 0.435 \(\pm\) 0.012 & 0.412 \(\pm\) 0.013 \\ & YOLO+HE & 0.452 \(\pm\) 0.015 & 0.440 \(\pm\) 0.017 \\ \hline Semi-supervised & YOLO+HE+Unlabeled & 0.468 \(\pm\) 0.005 & 0.447 \(\pm\) 0.003 \\ \hline \multirow{4}{*}{Weakly semi-supervised} & YOLO+HE+Weak & 0.505 \(\pm\) 0.004 & 0.476 \(\pm\) 0.002 \\ & YOLO+HE+Weak+Pseudo & 0.508 \(\pm\) 0.004 & 0.479 \(\pm\) 0.004 \\ & YOLO+HE+Weak+TSMR & 0.515 \(\pm\) 0.005 & 0.480 \(\pm\) 0.004 \\ & YOLO+HE+Weak+Pseudo+TSMR & **0.519 \(\pm\) 0.003** & **0.484 \(\pm\) 0.003** \\ \hline \hline \end{tabular} \end{table} Table 1: Validation and test mAP for fully, semi-, and weakly semi-supervised models. Mean \(\pm\) standard deviation based on five repeated experiments.
All methods were significantly superior to YOLO (first row) and significantly inferior to the proposed method (last row) (paired two-way t-test, p-value \(<\) 0.05). ### Results & Discussion **Contribution of Hierarchical EMA Training:** The baseline, fully-supervised detector (YOLO) achieved validation and test mAP of 0.435 and 0.412, respectively. These improved to 0.452 and 0.440, respectively, with the inclusion of the hierarchical EMA training strategy during burn-in (YOLO+HE). **Contribution of Semi- and Weak (Video-level) Supervision:** The addition of unlabeled data and semi-supervision based on YOLO+HE+Unlabeled improved validation mAP from 0.452 to 0.468 and test mAP from 0.440 to 0.447, which was a statistically significant increase (p-value \(<\) 0.05). Furthermore, the standard deviation of mAP values over repeated experiments decreased (0.015 to 0.005 for validation, 0.017 to 0.003 for test), suggesting that the semi-supervised model is more stable and repeatable across runs. This was also reflected in the tighter 95% confidence intervals for validation mAP learning curves across repeated runs (Fig. 2). Model performance again increased with the introduction of weak supervision of video-level labels (YOLO+HE+Weak) (validation mAP 0.505, test mAP 0.476, p-value \(<\) 0.05), with corresponding decreases in mAP standard deviation (0.004 in validation, 0.002 in test) and 95% confidence intervals over repeated runs (Fig. 2). To further investigate the contribution of video-level supervision, we trained models with all fully labeled data \(\mathcal{D}_{f}\) but utilized a proportion of video labels \(z_{j}^{w}\) with the remainder of \(\mathcal{D}_{w}\) treated as unlabeled. mAP improved consistently with increased video-level supervision (Fig. 3). **Contribution of Weighted Pseudo-labels:** The use of weighted pseudo-labels (YOLO+HE+Weak+Pseudo) further improved validation mAP from 0.505 to 0.508 and test mAP from 0.476 to 0.479. The re-weighting mechanism eliminated all false positive pseudo-labels in negative videos and increased the number of pseudo-labels to better match the overall number of true labels. In comparison, fixed pseudo-label thresholds resulted in worse detection performance (test mAP 0.477, 0.474, 0.476, and 0.474 for thresholds of 0.1, 0.3, 0.5, and 0.7). Figure 2: Learning curves of teacher models \(\theta_{T}\) on validation set during teacher-student mutual learning. Solid lines show mean validation mAP across five repeated experiments. Ranges indicate 95% confidence intervals. **Contribution of Bidirectional Teacher-Student Mutual Refinement:** Teacher-student mutual refinement (YOLO+HE+Weak+TSMR) further boosted validation mAP from 0.508 to 0.515 and test mAP from 0.479 to 0.480. Model repeatability was improved, as shown in Fig. 2, where variation between experiments was greatly reduced (narrow 95% confidence intervals) and model convergence occurred more quickly. Ablation experiments confirmed that a fixed EMA keep rate \(\alpha_{e}\) was unable to achieve comparable detection performance compared to the proposed bidirectional adaptive EMA updates (\(\alpha_{e}=0.9\), lower fixed rate: 0.506 and 0.471; \(\alpha_{e}=0.95\), moderate fixed rate: 0.505 and 0.476; and \(\alpha_{e}=0.99\), high fixed rate: 0.489 and 0.470, for validation mAP and test mAP respectively).
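For reference, the adaptive schedules of Eqs. 7 and 8 can be written as the short sketch below, using the hyperparameter values listed in the implementation details and validation mAP for \(m_{T}\) and \(m_{S}\); the function names are illustrative rather than part of the actual implementation.

```python
import math

def adaptive_keep_rate(m_teacher, m_student,
                       a_min=0.75, a_max=0.99, tau0=180.0, tau1=3.0):
    # Eq. 7: student -> teacher keep rate; a lower value transfers knowledge faster
    s = 1.0 / (1.0 + math.exp(-tau0 * (m_teacher - m_student) - tau1))
    return a_min + (a_max - a_min) * s

def inverse_keep_rate(m_teacher, m_student, a_inv_min=0.85, tau2=180.0):
    # Eq. 8: teacher -> student transfer, active only when the teacher is ahead
    if m_teacher <= m_student:
        return 1.0  # no reverse transfer
    s = 1.0 / (1.0 + math.exp(-tau2 * (m_student - m_teacher)))
    return a_inv_min + (2.0 - 2.0 * a_inv_min) * s
```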
**Final Results for Proposed Method:** Finally, best-performing models incorporating all proposed components achieved validation mAP of 0.519 and test mAP of 0.484, which were statistically significant improvements over both the fully supervised (0.452 and 0.440) and semi-supervised (0.468 and 0.447) baselines (p-value \(<\) 0.05). Furthermore, an extra experiment suggested that a test mAP comparable to the baseline YOLO detector (0.412) could be achieved by the proposed method using merely one-third of the labeled data (0.395). The inference speed was 2.9ms per frame on an NVIDIA GeForce RTX 3090 GPU, enabling real-time detection. Examples of lung consolidation detection in ultrasound are seen in Fig. 4, where the proposed method demonstrates successful detection of challenging pathology not identified (or falsely identified) by the baseline YOLO detector. Figure 3: Contribution of video-level supervision during teacher-student mutual learning. Models were trained with varying proportions of video labels relative to unlabeled data. ## 5 Conclusion This is the first study to introduce a weakly semi-supervised framework for object detection on medical video data. Our method extends a teacher-student training strategy to provide weak supervision via a video-level loss. We also introduce improvements to the underlying teacher-student mutual learning mechanism, including methods to improve the quality of pseudo-labels and optimize knowledge transfer between the student and teacher. Empirical results on a lung ultrasound pathology detection task demonstrate that the framework leads to improved detection accuracy and robustness compared to existing baseline models, while also being more efficient in data and annotation usage. One limitation of the method is the need to empirically select hyperparameters, which could be resolved using adaptive hyperparameter tuning techniques as part of future work. Moreover, we consider the proposed components orthogonal to other pseudo-label refinement techniques in state-of-the-art semi-supervised detection methods, with which they could be combined to achieve further gains in performance. Lastly, the proposed improvements to the teacher-student mechanism could potentially be adapted for other semi-supervised and weakly supervised learning tasks, including classification and segmentation. #### 5.0.1 Acknowledgements We would like to acknowledge the contributions from the following people for their efforts in data curation and annotations: Zohreh Laverriere, Xinliang Zheng (Lia), Annie Cao, Katelyn Hostetler, Yuan Zhang, Amber Halse, James Jones, Jack Lazar, Devjani Das, Tom Kennedy, Lorraine Ng, Penelope Lema, Nick Avitabile.
2302.00886
Read It, Don't Watch It: Captioning Bug Recordings Automatically
Screen recordings of mobile applications are easy to capture and include a wealth of information, making them a popular mechanism for users to inform developers of the problems encountered in the bug reports. However, watching the bug recordings and efficiently understanding the semantics of user actions can be time-consuming and tedious for developers. Inspired by the conception of the video subtitle in movie industry, we present a lightweight approach CAPdroid to caption bug recordings automatically. CAPdroid is a purely image-based and non-intrusive approach by using image processing and convolutional deep learning models to segment bug recordings, infer user action attributes, and generate subtitle descriptions. The automated experiments demonstrate the good performance of CAPdroid in inferring user actions from the recordings, and a user study confirms the usefulness of our generated step descriptions in assisting developers with bug replay.
Sidong Feng, Mulong Xie, Yinxing Xue, Chunyang Chen
2023-02-02T05:35:31Z
http://arxiv.org/abs/2302.00886v1
# Read It, Don't Watch It: Captioning Bug Recordings Automatically ###### Abstract Screen recordings of mobile applications are easy to capture and include a wealth of information, making them a popular mechanism for users to inform developers of the problems encountered in bug reports. However, watching the bug recordings and efficiently understanding the semantics of user actions can be time-consuming and tedious for developers. Inspired by the conception of the video subtitle in the movie industry, we present a lightweight approach CAPdroid to caption bug recordings automatically. CAPdroid is a purely image-based and non-intrusive approach that uses image processing and convolutional deep learning models to segment bug recordings, infer user action attributes, and generate subtitle descriptions. The automated experiments demonstrate the good performance of CAPdroid in inferring user actions from the recordings, and a user study confirms the usefulness of our generated step descriptions in assisting developers with bug replay. bug recording, video captioning, android app ## I Introduction Software maintenance activities are known to be generally expensive and challenging [1], and one of the most important maintenance tasks is to handle bug reports [2]. A good bug report is detailed, with clear information about what happened and the steps to reproduce the bug. However, writing such clear and concise bug reports takes time, especially for non-developer or non-tester users who do not have that expertise and are not willing to spend that much effort [3, 4]. The emergence of screen recording significantly lowers the bar for bug documenting. First, it is easy to record the screen as there are many tools available, some of which are even embedded in the operating system by default, like iOS [5] and Android [6]. Second, video recording can include more detail and context such as configurations and parameters, bridging the understanding gap between users and developers. Unfortunately, in many cases, watching the bug recordings and understanding the user behaviors can be time-consuming and tedious for developers [7, 8]. First, the recording may play too fast to follow, and the developers have to pause the recording, or even replay it multiple times, to recognize the bug. Second, the watching experience can be further deteriorated by blurred video resolution, poor video quality, etc. Third, the recording usually contains a visual indicator (in Fig. 1) to help developers identify the user actions performed on the screen. However, those indicators are sometimes too small to be easily noticed, and developers have to play the recording back and forth to guess each action in order to repeat it in their testing environment. Besides bug recordings, those issues also apply to general videos (e.g., movies, drama, etc). To address those issues in normal video watching, captions or subtitles are provided to add clarity of details, better engage users, maintain concentration for longer periods, and translate across different languages [9, 10]. Inspired by the conception of video subtitles in the movie industry, we intend to generate the caption of an app recording to add analogous benefits for developers. Given a caption accompanying the recording, developers, especially novices, can more easily identify the user behaviors in the recording and shift their focus toward bug fixing.
Specifically, we segment recordings into clips to characterize the "scenes" in the movie and add action descriptions to each clip to guide developers. Existing work has investigated methods to generate a textual description for a GUI screenshot [11, 12, 13, 14, 15], which has been shown useful for various downstream tasks such as GUI retrieval, accessibility enhancement, code indexing, etc. Chen et al. [11] propose an image captioning model to apply semantic labels to GUI elements to improve the accessibility of mobile apps. Clarity [12] further considers multi-modal GUI sources to generate high-level descriptions for the entire GUI screen. However, none of them can generate descriptions for video recordings, which is a more challenging task of translating spatial and temporal information into a semantic natural language. To create good video subtitles, there are several standards [16], including caption synchronization with the videos, accurate content comprehension, compact and consistent word usage, etc. Similarly, we propose an image-based approach CAPdroid in this paper to non-intrusively caption each action step of a bug recording, including three phases: 1) _action segmentation_, 2) _action attribute inference_, and 3) _description generation_. Inspired by the previous work GIFdroid [4, 17] to localize keyframes in bug recordings, we first develop a heuristic method to segment the recording into a sequence of action clips (i.e., TAP, SCROLL, INPUT). Then, we adopt image-processing and deep-learning methods to model the spatial and temporal features across frames in the clips to infer action attributes, such as touch location, moving offset, and input text. A simple description based on the action attribute, e.g., tap on (x,y) coordinate, cannot express the action intuitively. Therefore, we first utilize off-the-shelf GUI models to non-intrusively gather the element information in the GUI. As the GUI elements of interest may not have enough context to be uniquely identified, we propose a novel algorithm using global information of GUI elements to generate high-level semantic descriptions. We first evaluate the performance of CAPdroid in obtaining user actions by _action segmentation_ and _action attribute inference_, through an automated method. We collect 439 Android apps from Google Play and leverage an automated app exploration tool to simulate user actions on the screen, meanwhile capturing a 10-min screen recording for each app. Results show that our tool achieves the best performance (0.84 Video F1-score and 0.93 accuracy) in action segmentation from the recordings compared with five commonly-used baselines. CAPdroid also achieves on average 91.46% accuracy in inferring action attributes, outperforming two state-of-the-art baselines. We further carry out a user study to evaluate the usefulness of the _description generation_ of CAPdroid in assisting bug replay, with 10 real-world bug recordings from GitHub. Results show that participants save 59.8% of the time reproducing the bugs with the help of the steps we described, compared with the steps written by users. Through questionnaires with participants, they also confirm the clearness, conciseness, and usefulness of our generated action descriptions. The contributions of this paper are as follows: * This is the first work to generate the caption of bug recordings to support developers in reproducing bugs.
* The first systematic approach CAPdroid, to non-intrusively segment recordings into clips, infer fine-grained user actions, and create action descriptions as subtitles, with examples in the online appendix1. Footnote 1: [https://github.com/sidongfeng/CAPdroid](https://github.com/sidongfeng/CAPdroid) * A comprehensive evaluation including automated experiments and a user study to demonstrate the accuracy and usefulness of our approach. ## II CAPdroid Approach Given an input GUI recording, we propose an automated approach to segment the recording into a sequence of clips based on user actions and subsequently localize the action positions to generate natural language descriptions. The overview of our approach is shown in Fig. 2, which is divided into three main phases: (i) the _Action Segmentation_ phase, which segments user actions from the GUI recording into a sequence of clips, (ii) the _Action Attribute Inference_ phase, which infers touch location, moving offset, and input text from action clips, and (iii) the _Description Generation_ phase, which utilizes off-the-shelf GUI understanding models to generate high-level semantic descriptions. Before discussing each phase in detail, we discuss some preliminary understanding of user actions in GUI recordings. ### _Preliminary Study_ To understand the recordings from end-users, we conducted a small pilot study of the GUI recordings from GitHub [18]. In detail, we built a crawler to automatically crawl the bug reports from GitHub issue repositories that contain GUI recordings with suffix names like .gif, .mp4, etc. To study more recent GUI recordings, we obtained the recordings from 2021. Overall, we obtained 5,231 GUI recordings from 1,274 apps. We randomly sampled 1,000 (11.5%) GUI recordings, and we recruited two annotators online to manually check the user actions from the recordings. The two students were recruited through the university's internal Slack channel and were compensated with $12 USD per hour. They both have annotation experience with GUI-related (e.g., GUI element bounding box) and video-related (e.g., video classification) datasets. To ensure accurate annotations, the process started with initial training. First, we gave them an introduction to our study and also an example set of annotated screen recordings where the labels had been annotated by the authors. Then, we asked them to pass an assessment test. The two annotators were assigned the experimental set of screen recordings to label the user actions independently without any discussion. After the initial labeling, the annotators met and resolved the subtle discrepancies. Any disagreement was handed over to the first author for the final decision. We observed that 89% of the recordings included a touch indicator, indicating that it is a common mechanism for end-users to depict their actions on the screen. We further classified those touch indicators into three categories, following the Card Sorting [19] method: * **default (68%).** As shown in Fig. 1(a), the touch indicator renders a small semi-transparent circle, which gives visual feedback when the user presses a finger on the device screen. This is the default touch indicator on Android. * **cursor (27%).** As shown in Fig. 1(b), users/developers may test the apps in the emulator and directly record the desktop, so that the user actions are captured by the desktop cursor. * **custom (5%).** As shown in Fig. 1(c), the touch indicator is customized by third-party screen recorders, such as DU Recorder [20], etc. Fig. 1: Examples of touch indicators. Those findings motivated us to develop a tailored approach, exploiting touch indicators to capture end-user intent, so as to generate semantic captions for GUI recordings. Considering the diversity of touch indicators in general GUI recordings, a more advanced approach to detect and infer user actions is required. ### _Phase 1: Action Segmentation_ A video consists of a sequence of frames that deliver the visual detail of the story for particular scenes. Different from the recognition of discontinuities in the visual-content flow of natural-scene videos, detecting clips in a GUI recording means inferring scenes of user actions that generally display significant changes in the GUIs. To that end, we leverage the similarity of consecutive frames to segment user actions (i.e., _TAP_, _SCROLL_, _INPUT_) from the GUI recording. #### Ii-B1 Consecutive Frame Comparison Inspired by signal processing [4, 17], we leverage image processing techniques to build a perceptual similarity score for consecutive frame comparisons based on Y-Difference (or Y-Diff). YUV is a color space usually used in video encoding, enabling transmission errors or compression artifacts to be more efficiently masked by human perception than with an RGB representation [21, 22]. Y-Diff is the difference in Y (luminance) values of two images in the YUV color space, used as a major input for the human perception of motion [23]. Consider a visual recording \(\{f_{0},f_{1},...,f_{N-1},f_{N}\}\), where \(f_{N}\) is the current frame and \(f_{N-1}\) is the previous frame. To calculate the Y-Diff of the current frame \(f_{N}\) with the previous \(f_{N-1}\), we first obtain the luminance masks \(Y_{N-1},Y_{N}\) by splitting the YUV color space converted from the RGB color space. Then, we apply the perceptual comparison metric, SSIM (Structural Similarity Index) [24], to produce a per-pixel similarity value related to the local difference in the average value, the variance, and the correlation of luminances. An SSIM score is a number between 0 and 1, and a higher value indicates a stronger level of similarity. #### Ii-B2 Action Classification To identify the user actions in the GUI recording, we look into the similarity scores of consecutive frames as shown in Fig. 3. The first step is to group frames belonging to the same atomic activity according to tailored pattern analysis. This procedure is necessary because discrete activities performed on the screen will persist across several frames, and thus need to be grouped and segmented accordingly. Consequently, we observe three patterns of user actions, i.e., _TAP_, _SCROLL_, and _INPUT_. _(a) TAP_: As shown in Fig. 3A (user taps a button), the similarity score starts to drop drastically, which reveals an instantaneous transition from one screen to another. In addition, one common case is that the similarity score becomes steady for a small period of time \(t_{s}\) between two drastic drops, as shown in Fig. 3C. The occurrence of this short steady duration \(t_{s}\) is because the GUI has not finished loading. While GUI layout rendering is fast, resource loading may take time. For example, rendering images from the web depends on device bandwidth, image loading efficiency, etc. _(b) SCROLL_: As shown in Fig.
3D (user scrolls up/down the screen), the similarity score starts with a drastic drop and then continues to increase slightly over a period of time, which indicates a continuous transition from one GUI to another. _(c) INPUT_: As shown in Fig. 3B (user inputs text), the similarity score starts to drop and rise multiple times, revealing the typing of characters and digits. However, the similarity score cannot reliably detect _INPUT_ actions, as it may coincide with the _TAP_ actions. To address this, we further apply an Optical Character Recognition (OCR) technique [25] (a detailed description is given in Section II-C3) to detect whether there is a virtual keyboard in the GUI. Note that we focus on English apps, and it may take additional effort to extend our approach to other languages. Fig. 2: The overview of CAPdroid. Fig. 3: An illustration of consecutive frame similarity. Fig. 4: Examples of keyboard detection. In detail, we first extract the characters from the frames, and concatenate them into text-based (\(ocr_{text}\)) and number-based (\(ocr_{num}\)) strings. As the OCR may not infer the text perfectly, we discern the keyboard frame by keyboard-specific substrings. For example, Fig. 4(a) is a frame with an English keyboard that contains _"qwert"_ in \(ocr_{text}\), and Fig. 4(b) is a frame with a numeric keypad that contains _"123"_ in \(ocr_{num}\). Therefore, the frame of a keyboard is discriminated by \[frame=\begin{cases}\exists\{\text{qwert, asdfg, zxcvb}\}\in lowercase(ocr_{text})\\ \exists\{\text{123, 456, 789}\}\in ocr_{num}\end{cases} \tag{1}\] where \(lowercase\) converts the uppercase characters into lowercase, in order to detect a capital English keyboard. Note that we do not adopt keyboard template matching, as keyboards vary in appearance, such as customized backgrounds, different device layouts, etc. ### _Phase 2: Action Attribute Inference_ Given a recording clip of a user action segmented by the previous phase, we then infer its detailed attributes, including the touch location of a _TAP_ action, the moving offset of a _SCROLL_ action, and the input text of an _INPUT_ action, to reveal where the user interacts on the screen. The overview of our methods is shown in Fig. 5. The prediction of _TAP_ location requires a semantic understanding of the GUI transition captured in the clip, such as touch indicators (in Section II-A), transition animation, GUI semantic relation, etc. Therefore, we propose a deep-learning-based method that models the spatial and temporal features across frames to infer the _TAP_ location. To infer the moving offset of _SCROLL_, we adopt an off-the-shelf image-processing method to detect the continuous motion trajectory of GUIs, thus measuring the user's scrolling direction and distance. To infer the input text of _INPUT_, we leverage the OCR technique to identify the text difference between the frames of keyboard opening (i.e., where the user starts entering text) and keyboard closing (i.e., where the user ends entering). #### Iii-C1 Inferring TAP location Convolutional Neural Networks of 2D (Conv2ds) [26, 27] have demonstrated remarkable success in efficiently capturing the hypothesis of spatial locality in two-dimensional images. A video, encoded as a sequence of 2d images, aggregates another dimension: spacetime. To predict the touch location from a GUI recording clip, we adopt a Conv3d-based model X3D [28], which simultaneously models spatial features of single-frame GUIs and temporal features of multi-frame optical flow. The architecture of our X3D model is shown in Fig. 5(a). Given a video clip \(V^{T\times H\times W\times C}\) where \(T\) is the time length of the clip, \(W\), \(H\), and \(C\) are the width, height, and channel of the frame, usually \(C=3\) for RGB frames. We first apply 3d convolution layers, consisting of a set of learnable filters to extract the spatio-temporal features of the video. Specifically, the convolution uses a 3d kernel, i.e., \(t\times d\times d\) where \(t\) and \(d\) denote the temporal and spatial kernel size, to slide around the video and calculate kernel-wise features by matrix dot products. After the convolutional layers, the video \(V\) will be abstracted as a 3d feature map, preserving features along both the spatial and the temporal information. We then apply a 3d pooling layer to eliminate unimportant features and enhance spatial variance of rotation and distortion. After blocks of convolutional and pooling layers, we flatten the feature map and apply a fully connected layer to infer the logits of the _TAP_ location. For the detailed implementation, we adopt the convolutional layers from ResNet-50 [29] and borrow the idea of residual connection to improve the model performance and stability between layers. We use MaxPooling [30] as the pooling layer, where the highest value from the kernel is taken, for noise suppression during abstraction. The output of the fully connected layer is 2 neurons, representing \((x,y)\) coordinates. To accelerate the training process [31], we standardize the coordinate relative to the width and height of the frame. Although the frames are densely recorded (i.e., 30fps), the GUI renders slowly. To extract discriminative features from the recording, we uniformly sample 16 frames at 5 frame intervals (\(T=16\)) as suggested in [28]. Note that if the length of the recording clip is smaller than the sample rate \(16\times 5\), we will sample the frames based on nearest neighbor interpolation. To make our training more stable, we adopt Adam as the optimizer [32] and MSELoss as the loss function [33]. Moreover, to optimize the model, we apply an adaptive learning scheduler, with an initial rate of 0.01 and decay to half after 10 iterations. The hyperparameter settings are determined empirically by a small-scale experiment. #### Iii-C2 Inferring SCROLL offset To infer the scrolling direction (i.e., upward, downward) and distance (i.e., amount of movement) from the GUI recording clip, we measure the motion trajectory of GUI elements. Since the elements may scroll off-screen [34], we adopt the K-folds template matching method as shown in Fig. 5(b). Consider a GUI recording clip \(\left\{f_{0},f_{1},..,f_{N-1},f_{N}\right\}\), where \(f_{N}\) is the current frame and \(f_{N-1}\) is the previous frame. We first divide the previous GUI \(f_{N-1}\) into K pieces vertically. We set K to 10 by a small pilot study to mitigate the off-screen issue and preserve sufficient features for template matching. And then, we match the template of each fold in the current frame \(f_{N}\) to compute the scrolling offset between consecutive frames. At the end, we derive the scrolling distance by summing the offsets (\(\sum_{n=0}^{N}\textit{offset}_{n}^{n-1}\)), and infer the scrolling direction by the sign of the distance, e.g., positive for downward, otherwise upward. #### Iii-C3 Inferring INPUT text Detecting input text based on user actions on the keyboard can be error-prone, as the user may edit text from the middle of the text, switch to capital, delete text, etc.
Therefore, we leverage a practical OCR technique, PP-OCRv2 [25], to detect the text difference between the first frame (opening keyboard) and the last frame (closing keyboard) of the _INPUT_ recording clip, as shown in Fig. 5(c). Given a GUI frame, PP-OCRv2 detects the text areas in the GUI by using an image segmentation network and then applies a sequence and classification model to recognize the text. As the GUI text is similar to scene text [35], we directly use the pre-trained PP-OCRv2 without any fine-tuning on GUI text; its overall performance reaches a state-of-the-art accuracy of 84.3%. After deriving the text from the frames of keyboard opening and keyboard closing, we first remove all the text on the keyboard to keep the text concise. Then, we detect the text difference between the frames using SequenceMatcher [36]. Despite the good performance of PP-OCRv2, it may still make wrong text recognitions, e.g., missing spaces. To address this, SequenceMatcher measures text similarity by computing the longest contiguous matching subsequence (LCS). Finally, we extract the text that appears only in the frame where the keyboard is closed, as the input text. ### _Phase 3: Description Generation_ Once the attributes of the action are derived from the previous phases, we proceed by generating in-depth and easy-to-understand natural language descriptions. To accomplish this, we first leverage mature GUI understanding models to obtain GUI information non-intrusively. Then, we propose a novel algorithm to phrase actions into descriptions and embed them as subtitles in the recording, as shown in Fig. 6. #### Iii-D1 GUI understanding To understand the GUI, we adopt non-intrusive approaches to obtain GUI information, to avoid the complexity of app instrumentation or handling of the diverse software stack, especially for closed-source systems where no underlying instrumentation support is accessible [37]. An example of GUI understanding is shown in Fig. 7. Specifically, we first implement the state-of-the-art object detection model Faster-RCNN with ResNet-101 [29] and Feature Pyramid Networks [38] to detect 11 GUI element classes on the screen: button, checkbox, icon, imageview, textview, radio button, spinner, switch, toggle button, edittext, and chronometer. We train the model on the Rico dataset [39], which contains 66k GUIs from 9.7k apps. Following the previous work [40], we split the GUIs in the training:validation:testing dataset by apps in the ratio of 8:1:1. As a result, the model achieves an overall Mean Average Precision (MAP) of 51.45% on the test set. For each GUI element, we adopt the OCR technique (the detailed implementation is elaborated in Section II-C3) to detect the text (if any). For icons, annotation based on common human understanding can enhance the GUI understanding. For example, in Fig. 7, the icon of a group of people conveys the semantic of "_Friend_". To achieve this, we adopt a transformer-based model from the existing work [11] to caption the icon image. We follow the implementation in their original paper to train the model and achieve 60.7% accuracy on the test set. Besides understanding the information of individual GUI elements, we also attempt to obtain their global information relative to the GUI, including absolute positioning and element relationships. Absolute positioning describes the element as a spatial position in the GUI, which is particularly useful to represent an element in an image [41].
To accomplish this, we uniformly segment the GUI into \(3\times 3\) grids, delineating horizontal position (i.e., _left_, _right_), vertical position (i.e., _top_, _bottom_), and _center_ position. For example, in Fig. 7, the "100m" spinner is at the _top right_ corner. The GUI element relationship aims to transform the "flat" structure of GUI elements into connected relationships. A natural way of representing the relationship is using a graph structure, where elements are linked to the nearest elements. To accomplish this, we first compute the horizontal and vertical distances between GUI elements by Euclidean pixel measurement. And then, we construct the graph of the GUI elements by finding the nearest elements (neighbors) in four directions, including _left_, _right_, _top_, and _bottom_. Note that we set up a threshold to prevent the neighbors from being too far apart. Ultimately, it will generate a graph representing the relationships between the elements in the GUI. For example, in Fig. 7, the "100" spinner has two neighbors: the "Advanced" element at the _top_, and the "None" element at the _bottom_. Note that the "Audio cue settings" element is omitted due to large spacing, which is consistent with human viewing. Fig. 5: Approaches of Action Attribute Inference. Fig. 6: Subtitle and textual steps in the GUI recording. #### Iii-D2 Subtitle Creation The main instruction of interest is to create a clear and concise subtitle description based on \(\left\{action,object\right\}\). The global GUI information is further used to complement the description by \(\left\{position,relationship\right\}\). Based on the \(action\) obtained in Section II-B, the attribute of \(object\) inferred in Section II-C, and the corresponding GUI element information retrieved in Section II-D1, we propose description templates for _TAP_, _SCROLL_, and _INPUT_, respectively. A summary of the description templates can be seen in Table I. For a _TAP_ action, the goal of the description should be clear and concise, e.g., tap "OK" button. However, we find that this simple description may not articulate all _TAP_ actions, due to two reasons. First, the text and caption of the _object_ are prone to errors or may be undetected, as the OCR-obtained text and the caption-obtained annotation are not 100% accurate. Second, there may be multiple _objects_ with the same text on the GUI. To resolve this, we set up an _object_ confidence value _objconf_ as: \[\textit{objconf}=\begin{cases}\textit{OCR}_{\textit{confid}}&\text{if }\textit{obj}_{\textit{text}}\text{ is unique in GUI}\\ 0&\text{otherwise}\end{cases} \tag{2}\] where _OCRconfid_ denotes the confidence predicted by OCR. Note that the confidence value of an icon _object_ is calculated likewise by captioning. The smaller the confidence value, the less intuitive the _object_ is. Therefore, only the _object_ with the highest confidence value (_objconf_\(>\alpha\)) will apply the simplest and most straightforward description (Template 1); otherwise, we add the context of absolute position to help locate the _object_ (Template 2). For the _object_ whose text is not detected or is recognized with low confidence, we leverage the context of its _neighbor_ to help locate the target _object_ (Template 3), e.g., tap the checkbox next to "Dark Mode". It is easy to describe a _SCROLL_ action by its scrolling direction and offset (Template 5), e.g., scroll up a quarter of the screen. However, such an offset description is not precise and intuitive.
To address this, if a new element with text appears by scrolling, we add this context to help describe where to scroll to (Template 4), e.g., scroll down half of the screen to "Advanced Setting". The description of _INPUT_ is similar to _TAP_. For the high-confidence _object_ with text (Template 6), it generates: Input [_text_] in the [_obj text_] edittext. Different from the _TAP_ descriptions, we do not apply the context of absolute position to help locate the low-confidence _object_. This is because the _objects_ gather at the top when the keyboard pops up, so the absolute positioning may not help. Instead, we use the relative position of a _neighbor_ to describe the input _object_ whose text is not detected or is recognized with low confidence (Template 7), e.g., Input "John" in the edittext below "Name". After generating the natural language description for each action clip, we embed the description into the recording as subtitles, as shown in Fig. 6. In detail, we create the subtitles by using the Wand image annotation library [42] and synchronize the subtitle display at the beginning of each action clip. \begin{table} \begin{tabular}{l|l|l|l|l} \hline **Action** & **Id** & **Condition** & **Template** & **Example** \\ \hline \hline \multirow{3}{*}{_TAP_} & 1 & text detected \(\land\) _objconf_ \(>\alpha\) & Tap [_obj text_] [_obj type_] & Tap “OK” button \\ \cline{2-5} & 2 & text detected \(\land\) \(\beta<\) _objconf_ \(\leq\alpha\) & Tap [_obj text_] [_obj type_] at [_position_] & Tap “menu” icon at top left corner \\ \cline{2-5} & 3 & text undetected \(\lor\) _objconf_ \(\leq\beta\) & Tap the [_obj type_] [_relation_] [_neighbor_] & Tap the checkbox next to “Dark Mode” \\ \hline \multirow{2}{*}{_SCROLL_} & 4 & a new element with text appears & Scroll [_direction_] [_offset_] of the screen to [_obj text_] & Scroll down half of the screen to “Advanced Setting” \\ \cline{2-5} & 5 & otherwise & Scroll [_direction_] [_offset_] of the screen & Scroll up a quarter of the screen \\ \hline \multirow{2}{*}{_INPUT_} & 6 & _objconf_ \(>\alpha\) & Input [_text_] in the [_obj text_] edittext & Input “100” in the “Amount” edittext \\ \cline{2-5} & 7 & otherwise & Input [_text_] in the edittext [_relation_] [_neighbor_] & Input “John” in the edittext below “Name” \\ \hline \end{tabular} \end{table} TABLE I: Description templates for _TAP_, _SCROLL_, and _INPUT_ actions. ## III Automated Evaluation In this section, we describe the procedure we used to evaluate CAPdroid in terms of its performance automatically. Since our approach consists of two main automated steps to obtain the actions from the recordings, we evaluate these phases accordingly, including Action Segmentation (Section II-B) and Action Attribute Inference (Section II-C). Consequently, we formulated the following two research questions: * **RQ1**: How accurate is our approach in segmenting action clips from GUI recordings? * **RQ2**: How accurate is our approach in inferring action attributes from clips? To perform the evaluation automatically, we leveraged the existing automated app exploration tool Droidbot [43] to collect GUI recordings with ground-truth actions.
In detail, we first collected 439 top-rated Android apps from Google Play covering 14 app categories (e.g., news, tools, finance, etc.). Each app was run for 10 minutes by Droidbot to automatically explore app functionalities by simulating user actions on the GUI. The simulated actions, including operation time, types, locations, etc., were dumped as metadata, representing the ground truth. Meanwhile, we captured a screen recording of the actions for each app at 30 fps. As discussed in Section II-A, users may use different indicators to depict their touches. To make our recordings as similar to real-world recordings as possible, we adopted different touch indicators to record actions, including 181 default, 152 cursor, and 106 custom. In total, we obtained 439 10-min screen recordings as the experimental dataset for the evaluation. ### _RQ1: Accuracy of Action Segmentation_ **Experimental Setup.** To answer RQ1, we evaluated the ability of our CAPdroid to precisely segment the recordings into action clips and accurately classify the actions. To accomplish this, we utilized the metadata of action operation time as the ground-truth. During preliminary observation of many recordings, we found that, due to the delay between commands and operations on the device, there may be small time-frame differences between the ground-truth and the recorded actions. To account for these small differences, we broadened the ground-truth of the actions by 5 frames. In total, we obtained 12k _TAP_, 4k _SCROLL_, and 1k _INPUT_ clips from 439 screen recordings. **Metrics.** We employed two widely-used evaluation metrics, i.e., video segmentation F1-score and accuracy. To evaluate the precision of segmenting the action clips from recordings, we adopted the video segmentation F1-score [44], which is a standard video segmentation metric to measure the difference between two sequences of clips that properly accounts for the relative amount of overlap between corresponding clips. Consider the clips segmented by our method (\(c_{our}\)) and the ground truth (\(c_{gt}\)); the vs-score is computed as \(\frac{2|c_{our}\cap c_{gt}|}{|c_{our}|+|c_{gt}|}\), where \(|c|\) denotes the duration of the clip. The higher the score value, the more precisely the method can segment the video. We further adopted accuracy to evaluate the performance of our approach in discriminating action types from clips. The higher the accuracy score, the better the approach can classify the actions. **Baselines.** To demonstrate the advantage of using SSIM as the image similarity metric to segment actions from GUI recordings, we compared it with 5 image-processing baselines, including pixel level (e.g., absolute differences ABS [45], color histogram HIST [46]), structural level (e.g., SIFT [47], SURF [48]), and motion-estimation level (e.g., edge detection EDGE [49]). Due to the page limit, we omitted the details of these well-known methods. **Results.** Table II shows the overall performance of all baselines. The performance of our method is much better than that of the other baselines, i.e., a 20% and 17% boost in video segmentation F1-score and accuracy compared with the best baseline (HIST). Although HIST achieves the best performance among the baselines, it does not perform well as it is sensitive to pixel values. This is because the recordings can often have image noise due to fluctuations of color or luminance. The image similarity metrics based on the structural level (i.e., SIFT, SURF) are not sensitive to image pixels; however, they are not robust for comparing GUIs.
This is because, unlike images of natural scenes, features in the GUIs may not distinct. For example, a GUI contains multiple identical checkboxes, and the duplicate features of checkboxes can significantly affect similarity computation. Besides, motion-estimation baseline (EDGE) cannot work well in segmenting actions from GUI recordings, as GUI recordings are artificial artifacts with different rendering processes. In contrast, our method using SSIM achieves better performance as it takes similarity measurements in many aspects from spatial and pixel, which allows for a more robust comparison. Our method also makes mistakes in action segmentation due to two reasons. First, we wrongly segment one action clip into multiple ones due to the unexpected slow resource loading, e.g., one clip for the GUI transition of a user action, and the other clip for the GUI's resource loading. Second, some GUIs may contain animated app elements such as advertisements or movie playing, which will change dynamically, resulting in mistake action segmentation and classification. ### _RQ2: Accuracy of Action Attribute Inference_ **Experimental Setup.** To answer RQ2, we evaluated the ability of our CAPdroid to accurately infer the action attributes from the segmented clips. To accomplish this, we \begin{table} \begin{tabular}{l|c|c|c|c|c|c||c|c} \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c|}{**TAP**} & \multicolumn{2}{c|}{**SCROLL**} & \multicolumn{2}{c||}{**INPUT**} & \multicolumn{2}{c}{**Overall**} \\ \cline{2-9} & VS & Acc & VS & Acc & VS & Acc & VS & Acc \\ \hline ABS & 0.56 & 0.69 & 0.59 & 0.69 & 0.67 & 0.73 & 0.61 & 0.71 \\ HIST & 0.71 & 0.80 & 0.62 & 0.71 & 0.75 & 0.84 & 0.70 & 0.79 \\ SIFT & 0.61 & 0.71 & 0.60 & 0.73 & 0.63 & 0.79 & 0.62 & 0.75 \\ SURF & 0.55 & 0.71 & 0.59 & 0.72 & 0.60 & 0.77 & 0.58 & 0.74 \\ EDGE & 0.61 & 0.75 & 0.55 & 0.70 & 0.66 & 0.78 & 0.61 & 0.75 \\ **Ours** & **0.81** & **0.89** & **0.83** & **0.92** & **0.90** & **0.97** & **0.84** & **0.93** \\ \hline \end{tabular} \end{table} TABLE II: Performance comparison of action segmentation. “VS” denotes the video segmentation F1-score, and “Acc” denotes the accuracy of action classification. leveraged the metadata of action attributes as the ground-truth. Since our approach employs a deep-learning-based model (Section II-C1) to infer _TAP_ location, we trained and tested our model based on the metadata of _TAP_ actions. Note that a simple random split cannot evaluate the model's generalizability, as tapping on the screens in the same app may have very similar visual appearances. To avoid this data leakage problem [50], we split the _TAP_ actions in the dataset by apps, with the 8:1:1 app split for the training, validation, and testing sets, respectively. We also ensure a similar number of three types of touch indicators (i.e. default, cursor, custom) in the split dataset. The resulting split has 9k actions in the training dataset, 1.5k in the validation dataset, and 1.5k in the testing dataset. The model was trained in an NVIDIA GeForce RTX 2080Ti GPU (16G memory) with 30 epochs. In total, we obtained 1.5k _TAP_ locations, 4k _SCROLL_ offsets, and 1k _INPUT_ text as the attributes of testing data. **Metrics.** We employed accuracy as the evaluation metric to measure the performance of our approach in inferring _TAP_, _SCROLL_, and _INPUT_ action attributes, respectively. As one element occupies a certain area, tapping any specific point within that area can successfully trigger the action. 
So, we measured whether our predictions are within the ground-truth element. For _SCROLL_ actions, we measured whether our inferred scroll offset is the same as the ground-truth. For _INPUT_ actions, we measured whether our approach can infer the correct input text. The higher the accuracy score, the better the approach to infer action attributes. **Baselines.** We set up 2 state-of-the-art methods as our baselines to compare with our CAPdroid. _V2S_[8] proposed the first GUI video analysis technique, that utilizes deep-learning models to detect the touch indicator for each frame in a video and then classify them to user actions. As V2S only detects the default touch indicator, we followed their procedure to train corresponding deep-learning models to detect cursor and custom indicators. _GIFdroid_[4] developed a novel lightweight tool to detect the user actions by first extracting the keyframes from the GUI recording and then mapping it to the GUI transition graph (UTG) to extract the execution actions. We also followed the details in their paper to obtain the UTG graph. **Results.** Table III shows the overall performance of all methods. Our method outperforms in all actions, e.g., on average 91.33%, 94.87%, 88.19% for _TAP_, _SCROLL_, and _INPUT_, respectively. Our method is on average 30.2% more accurate compared with V2S in action attribute inference, due to three main reasons. First, our method models the features from both the spatial (i.e., touch indicator) and temporal (i.e., GUI animation) across the frames to enhance the performance of the model in inferring _TAP_ actions, i.e., on average 91.33% vs 63.32% for CAPdroid and V2S respectively. Second, our method achieves better performance in inferring action attributes even for the recordings with different touch indicators. This is because, CAPdroid proposes a novel touch indicator-independent method by leveraging the similarity of consecutive frames to identify actions, while V2S leverages the opacity of the indicator, e.g., a fully solid touch indicator represents the user first touches the screen, and it fades to less opaque when a finger is lifted off the screen. The opacity of the indicator works well for the default touch indicator (on average 84.69%), but not for the others (on average 66.48%, 32.55% for cursor and custom). Third, CAPdroid can accurately (on average 88.19%) infer the input text from the clips, while V2S cannot detect semantic actions. CAPdroid is on average 28% (91.46% vs 63.35%) more accurate even compared with the best baseline (GIFdroid). This is because, the content in GUIs of some apps (e.g., financial, social, music apps) are dynamic, causing the keyframes wrongly map to the states in the UTG. This issue further exacerbates input text inference, as the input text from the recording is specific but the input text in UTG is randomly generated. Albeit the good performance of our approach, we still make wrong inferences about some actions. We manually check those wrong cases and find two common causes. First, as shown in Fig. 8, the overlap of similar colors between the touch indicators and icons leads to less distinct features of the indicators, causing false-positive action localization. Second, although the good performance of our OCR method, it still makes wrong text recognition, especially missing spaces. We believe the emergence of advanced OCR methods can further improve the accuracy of our approach. 
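For reference, the two geometric checks behind the automated metrics in this section reduce to a few lines. The sketch below assumes the pairing between predicted and ground-truth items is already established, and the data layouts are illustrative rather than taken from the implementation.

```python
# A minimal sketch of the two geometric checks behind the metrics above: clip
# boundaries are frame indices, elements are (left, top, right, bottom) boxes,
# and the pairing of predictions with ground truth is assumed to be given.
def vs_score(pred_clip, gt_clip):
    """Video-segmentation score: 2 * |overlap| / (|pred| + |gt|)."""
    overlap = max(0, min(pred_clip[1], gt_clip[1]) - max(pred_clip[0], gt_clip[0]))
    return 2.0 * overlap / ((pred_clip[1] - pred_clip[0]) + (gt_clip[1] - gt_clip[0]))

def tap_hit(pred_xy, gt_box):
    """A predicted TAP location counts as correct if it falls inside the
    ground-truth element's bounding box."""
    x, y = pred_xy
    left, top, right, bottom = gt_box
    return left <= x <= right and top <= y <= bottom

# e.g. vs_score((120, 150), (118, 152)) -> 0.9375
#      tap accuracy = mean(tap_hit(p, g) for p, g in zip(preds, gt_boxes))
```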
## IV Usefulness Evaluation In this section, we conducted a user study to evaluate the usefulness of our generated descriptions (reproduction steps) for replaying bug recordings in real-world development environments. **Procedure:** We recruited another 8 participants including 6 graduate students (4 Master, 2 Ph.D) and 2 software developers Fig. 8: Examples of bad cases in action localization. \begin{table} \begin{tabular}{l|c|c|c|c|c|c|c|c|c} \hline \multirow{2}{*}{**Methods**} & \multicolumn{3}{c|}{**TAP**} & \multicolumn{3}{c|}{**SCROLL**} & \multicolumn{3}{c|}{**INPUT**} & \multirow{2}{*}{**Overall**} \\ \cline{2-2} \cline{6-11} & default & cursor & custom & default & cursor & custom & default & cursor & custom \\ \hline V2S [8] & 84.19\% & 69.66\% & 36.10\% & 85.19\% & 63.31\% & 29.00\% & - & - & 61.24\% \\ GIFdroid [4] & 85.78\% & 88.01\% & 87.16\% & 72.84\% & 71.01\% & 69.77\% & 35.13\% & 32.11\% & 28.39\% & 63.35\% \\ **CAPdroid** & **91.06\%** & **90.28\%** & **92.67\%** & **94.87\%** & **94.63\%** & **95.12\%** & **87.86\%** & **88.62\%** & **88.11\%** & **91.46\%** \\ \hline \end{tabular} \end{table} TABLE III: Performance comparison of action attribute inference. to participate in the experiment. All students have at least one-year experience in developing Android apps and have worked on at least one Android apps project as interns in the company. Two software developers are more professional and have two-year working experience in a large company in Android development. Given that they all have experience in Android app development and bug replay, they are recognized as substitutes for developers in software engineering research experiments [51]. To mitigate the threat of user distraction, we conducted the experiment in a quiet room individually without mutual discussion. We first gave them an introduction to our study and also a real example to try. Each participant was then asked to reproduce the same set of 10 randomly selected bug recordings from real-world issue reports in GitHub, on average, 3.6 TAP, 1.2 SCROLL, and 1.0 INPUT per recording. The experimental bug recordings can be seen in our online appendix2. The study involved two groups of four participants: the control group \(P_{1}\), \(P_{2}\), \(P_{3}\), \(P_{4}\) who gets help with the reproduction steps written by reporters from GitHub, and the experimental group \(P_{5}\), \(P_{6}\), \(P_{7}\), \(P_{8}\) who gets help with the natural language description generated by our tool. Each pair of participants \(\langle P_{x}\), \(P_{x+4}\rangle\) has comparable development experience, so the experimental group has similar capability to the control group in total. Note that we did not ask participants to finish half of the tasks with our tool while the other half without assisting tool to avoid potential tool bias. We recorded the time used to reproduce the bug recordings in Android. Participants had up to 10 minutes for each bug replay. To minimize the impact of stress, we gave a few minutes break between each bug replay. At the end of the tasks, we provided 5-point Likert-scale questions to collect their feedback, in terms of clearness, conciseness, and usefulness. We further collected participants' feedback through a few open-ended questions, which can help us bring more insight into our tool, including how could the subtitles be improved, are there any software engineering tasks that would benefit from subtitles, etc. 
Footnote 2: [https://github.com/sidongfeng/CAPdroid](https://github.com/sidongfeng/CAPdroid) **Results:** Overall, participants appreciate the usefulness of our approach for providing them with clear and concise step descriptions to describe the actions performed on the bug recordings, so that they can easily replay them. Box plot in Fig. 9 shows that, given our generated reproduction steps, the experimental group reproduces the bug recording much faster than that of the control group (with an average of 3.46min versus 5.53min, saving 59.8% of the time). This brings a preliminary insight of the usefulness of our generated reproduction steps to help participants locate and replay the actions. Table IV shows the overall results received from participants. All participants admit that our approach can provide more easy-to-understand step descriptions for them, in terms of 4.25 vs 2.50 in clearness, and 4.50 vs 1.75 in conciseness, compared with the control group. In addition, they demonstrate several advantages of our reproduction steps, such as complete steps, region/text of interest, technical language, etc. Since the steps we generate are matched with each action one-to-one, participants can easily track each step, while the missing steps in the control group may confound participants: whether the step description corresponds to the current GUI. \(P_{5}\) also finds the absolute positioning and element relationship description particularly useful to him, because such description can narrow down the spatial regions in GUI and easily locate the GUI element in which a bug occurs. \(P_{3}\) reports that some users may use inconsistent words to describe the steps. For example, users may use "play the film" to describe the button with the text "movie", making the developers hard to reproduce in practice. In contrast, the descriptions we generate are entirely based on GUI content, so it is easy to find the GUI elements. The participants strongly agree (4.75) with the usefulness of our approach due to two reasons. One is the potential of our structured text to benefit short- and long-term downstream tasks, such as bug triaging, test migration, etc. The potential downstream is discussed in Section V. The other is the usefulness of the subtitle in the recording, revealing the action segmentation of our approach. \(P_{2}\) in the control group finds the touch indicator to be inconspicuous and sometimes GUI transitions are too abrupt to realize. In contrast, with the help of our approach, \(P_{6}\) praises the subtitle in the recording as it informs the timing of each action. To understand the significance of the differences, we further carry out the Mann-Whitney U test [52] (specifically designed for small samples) on the replaying time, clearness, conciseness, and usefulness between the experimental and the control group respectively. The test results suggest that our approach does significantly help the participants reproduce bug recordings more efficiently (\(p<0.01\)). There is also some valuable feedback provided by the participants to help improve the CAPdroid. For example, participants want higher-level semantic step descriptions, e.g., tap the first item in the list group, which can lead to more insights into the bugs. 
We will \begin{table} \begin{tabular}{l|c|c} \hline **Measures** & **Control** & **Experiment** \\ \hline Clearness & 2.50 & 4.25\({}^{*}\) \\ Conciseness & 1.75 & 4.50\({}^{*}\) \\ Usefulness & - & 4.75 \\ \hline \end{tabular} \end{table} TABLE IV: Performance comparison between the experimental and control group. \({}^{*}\) denotes \(p<\) 0.01. Fig. 9: Bug replay time. investigate the possible solution as our future work. ## V Discussion We have discussed the limitations of our approach at the end of each subsection of the evaluation in Section III, such as errors due to slow rendering in action segmentation (Section III-A), low contrast between touch indicators and icons in action attribute inference (Section III-B), etc. In this section, we discuss the implication of our approach and future work. **Downstream tasks supported by video captioning.** There are many downstream tasks based on the textual bug reports, such as automated bug replay [53, 54], test migration [55, 56], duplicate bug detection [57, 58, 59], etc. Few of them can be applied to visual bug recordings. Our approach to automatically caption bug recording provides a semantic bridge between textual and visual bug reports. In detail, CAPdroid complement the existing methods, as the first process of these downstream tasks is usually to employ natural language processing (NLP) techniques to extract the representations of bug steps into a structural grammar, such as action, object, and position, which can be automatically extracted by our approach in visual bug recording. **Generality across platforms.** Results in the usefulness evaluation in Section IV have demonstrated the usefulness of our approach in generating high-quality descriptions for Android bug recordings to help developers with bug replay in real-world practice. Supporting bug recordings of different platforms (e.g., iOS, Web) can bring analogous benefits to developers [60]. As the actions from different platforms exert almost no difference, and our approach is purely image-based and non-intrusive, it can be generalized to caption bug recordings for other platforms with reasonable customization efforts to our approach. In the future, we will conduct thorough experiments to evaluate the performance of CAPdroid in supporting those platforms. **Accessibility of GUI recording.** Tutorial videos (e.g., app usage recordings) are widely used to guide users to access unfamiliar functionalities in mobile apps. However, it is hard for people with vision impairments (e.g., the aged or blind) to understand those videos unless asking for caregivers to describe the action steps from the tutorial videos to help them access the video content [61]. Our approach might be applied to enhance the accessibility of tutorial videos by generating clear and concise subtitles for reproduction steps, enabling people with vision impairments to easily access information and service of the mobile apps for convenience. ## VI Related Work Vision to Language semantically bridges the gap between visual information and textual information. The most well-known task is image captioning, describing the content of an image in words. Many of the studies proposed novel methods to generate a textual description for GUI image, in order to enhance app accessibility [11, 62, 14, 15], screen navigation [63, 64], GUI design search [65, 66, 67], automate testing [68, 69, 70, 71, 72, 73], etc. Chen et al. 
[13] designed an approach that uses a machine translator to translate a GUI screenshot into a GUI skeleton, a functional natural language description of GUI structure. Moran et al. [12] proposed image captioning methods Clarity to describe the GUI functionalities in varying granularity. In contrast, we focused on a more difficult task - video captioning, generating natural language to describe the semantic content of a sequence of images. To the best of our knowledge, this is the first work translating the GUI recording into textual descriptions. Earlier works [74, 75] proposed sequence-to-sequence video captioning models that extract a sequence of image features to generate a sequence of text. These models showed their advantage in video summarization, but it was hard to achieve the goal of generating multiple concrete captions with their temporal locations from the video (a.k.a dense video captioning). Intuitively, dense video captioning can be decomposed into two phases: event segmentation and event description. Existing methods tackled these two sub-problems using event proposal and captioning modules, and exploited two ways to combine them for dense video captioning. We borrowed the two-phase idea to generate a natural language description for GUI recording, denoting events as user actions. To segment the events from the videos, Krishna et al. [76] proposed the first segmentation method by using a multi-scale proposal module. Some of the following works [77, 78] aimed to enrich the event representations by context modeling, event-level relationships, or multi-modal feature fusion, enabling more accurate event segmentation. However, these methods were designed for general videos which contain more natural scenes like human, plants, animals, etc. Different from those videos, our GUI recordings belonged to artificial artifacts with different image motions (i.e., GUI rendering). While some previous studies worked on domain-specific GUI recordings, they focused on high-level GUI understanding, such as duplicate bug detection [60], GUI animation listing [79, 80], etc. In contrast, we focused on the fine-grained user actions in the GUI recording. To analyse and segment actions from the GUI recording, many record-and-replay tools were developed based on different types of information, including the runtime information [81] and app artifacts [82, 4, 17]. Nurmuradov et al. [83] introduced an advanced lightweight tool to record user interactions by displaying the device screen in a web browser. Feng et al. [4, 17] proposed an image processing method to extract the keyframes from the recording and mapped them to states in the GUI transitions graph to replay the execution trace. However, they required the installation of underlying frameworks, or instrumenting apps which is too heavy and time-consuming. Bernal et al. [8] implemented a deep learning-based tool named V2S to detect and classify user actions from specific recordings, a high-resolution recording with a default Android touch indicator. But more than 32% of end-users cannot meet that requirement in real-world bug reports according to our analysis in Section II-A. In contrast, considering the diversity of touch indicators in the general GUI recordings from end-users, we propose a more advanced approach to capture the spatial features of touch indicators and the temporal features of touch effects, to achieve better performance on user action identification. 
To generate video captions, many works [78, 84] started using one single unified deep-learning model (one-fit-all). Recent works infused knowledge about objects in the video by using object detectors to generate more informative captions. For example, Zhang et al. [85] adopted an object detector to augment the object feature to yield object-specific video captioning. Different from the natural scenes, generating action-centric descriptions for GUI recording requires a more complex GUI understanding, as there are many aspects to consider, such as the elements in the GUI, their relationships, the semantics of icons, etc. Therefore, we modeled GUI-specific features by using mature methods, and then proposed a tailored algorithm to automatically generate natural language descriptions for GUI recordings. ## VII Conclusion The bug recording is trending in bug reports due to its easy creation and rich information. However, watching the bug recordings and understanding the user actions can be time-consuming. In this paper, we present a lightweight approach CAPdroid to automatically generate semantic descriptions of user actions in the recordings, without requiring additional app instructions, recording tools, or restrictive video requirements. Our approach proposes image-processing and deep-learning models to segment bug recordings, infer user actions, and generate natural language descriptions. The automated evaluation and user study demonstrate the accuracy and usefulness of CAPdroid in boosting developers' productivity. In the future, we will keep improving our method for better performance in terms of action segmentation and action attribute inference. According to user feedback, we will also improve the understanding of GUI to achieve higher-level semantic descriptions.
2301.08752
Optimized learned entropy coding parameters for practical neural-based image and video compression
Neural-based image and video codecs are significantly more power-efficient when weights and activations are quantized to low-precision integers. While there are general-purpose techniques for reducing quantization effects, large losses can occur when specific entropy coding properties are not considered. This work analyzes how entropy coding is affected by parameter quantizations, and provides a method to minimize losses. It is shown that, by using a certain type of coding parameters to be learned, uniform quantization becomes practically optimal, also simplifying the minimization of code memory requirements. The mathematical properties of the new representation are presented, and its effectiveness is demonstrated by coding experiments, showing that good results can be obtained with precision as low as 4 bits per network output, and practically no loss with 8 bits.
Amir Said, Reza Pourreza, Hoang Le
2023-01-20T18:01:31Z
http://arxiv.org/abs/2301.08752v1
# Optimized Learned Entropy Coding Parameters for Practical Neural-Based Image and Video Compression ###### Abstract Neural-based image and video codecs are significantly more power-efficient when weights and activations are quantized to low-precision integers. While there are general-purpose techniques for reducing quantization effects, large losses can occur when specific entropy coding properties are not considered. This work analyzes how entropy coding is affected by parameter quantizations, and provides a method to minimize losses. It is shown that, by using a certain type of coding parameters to be learned, uniform quantization becomes practically optimal, also simplifying the minimization of code memory requirements. The mathematical properties of the new representation are presented, and its effectiveness is demonstrated by coding experiments, showing that good results can be obtained with precision as low as 4 bits per network output, and practically no loss with 8 bits. Amir Said Reza Pourreza Hoang Le Qualcomm AI Research,1 San Diego, CA, USA Learned image and video compression, neural network quantization, entropy coding. Footnote 1: Qualcomm AI Research is an initiative of Qualcomm Technologies, Inc. ## 1 Introduction After years of research showing the advantages of neural-based techniques for video coding [1, 2], and recent demonstrations of real-time decoding in mobile devices [3, 4], there is growing interest in their standardization and commercial deployment [5], and a main practical implementation challenge is the minimization of decoding power requirements. Implementing neural network (NN) computations with low-precision integers significantly reduces power needs, but requires minimizing the effects of quantizing networks weights and activations [6, 7, 8]. For neural image and video codecs, efficacy is strongly defined by the entropy coding stage and related networks, and can be severely degraded. Analyzing quantization effects in NNs can be difficult because they implement non-linear processes, and learned variables have unknown properties. Techniques based on re-training under quantization constraints help to reduce the losses, but have limited efficacy. In this work it is shown that NN outputs used for entropy coding have well-defined meaning and properties, which can be used for predicting quantization effects. Furthermore, those properties can be used for modifying the loss function used while learning, and it is demonstrated that networks can be trained to be minimally sensitive to quantization. Further analysis shows that this optimized representation is remarkably independent of the quantization, and thus networks do not need to be retrained even after significant changes to entropy coding design choices. Thus, in those networks the uniform quantization can also be extended beyond the hardware requirements, and used to minimize the memory needed for storing entropy coding tables. Experimental results show that the proposed solution can guarantee good performance (\(\approx\) 2% redundancy) with precision as low as 4 bits, and with relative redundancy reduction by a factor of 4 for each additional bit, reaching less than 0.01% for 8 bits. ## 2 Entropy Coding for NN-Based Codecs Entropy coding in conventional codecs is based on estimating the probabilities of variables to be coded. For example, in video standards like HEVC and VVC [9, 10, 11], adaptive binary arithmetic coding contexts are used for estimating probabilities related to data elements. 
Neural-based codecs use a different principle: a parameterized probability distribution (pdf) is chosen for training, and a learned scheme (table or NN) is used to determine the parameter value to be used for coding a quantized random variable. Fig. 1 shows an example of the commonly used normal distribution, and unit-step quantization. The sequence of probabilities of random variable \(y\sim\mathcal{N}(0,\sigma^{2})\), quantized to value \(n\in\mathbb{Z}\), is \[p_{n}(\sigma)=\frac{1}{2}\left[\mathrm{erfc}\!\left(\frac{2n-1}{\sqrt{8}\sigma }\right)-\mathrm{erfc}\!\left(\frac{2n+1}{\sqrt{8}\sigma}\right)\right], \tag{1}\] Figure 1: Quantized values from normal distribution with std. dev. \(\sigma\), commonly used in neural-based codecs. with entropy \[H(\sigma)\stackrel{{\text{\tiny def}}}{{=}}-\sum_{n=-\infty}^{\infty}p _{n}(\sigma)\log_{2}(p_{n}(\sigma))\,, \tag{2}\] where the _complementary error function_ is \[\operatorname{erfc}(x)\stackrel{{\text{\tiny def}}}{{=}}\frac{2} {\sqrt{\pi}}\int_{x}^{\infty}e^{-t^{2}}\,\mathrm{d}t. \tag{3}\] In practical implementations special symbols are used to code values in _underflow_ and _overflow_ infinite sets, which have very small but nonzero probabilities, as shown in Fig. 1. Within those sets the values are sub-optimally coded with, for example, Elias universal codes [12]. Also shown in Fig. 1 are the elements of _code vector_\(\mathbf{c}(\sigma)\) (CV), which contain the probabilities to be used for arithmetic coding (AC), within a given range \(R\) \[c_{i}(\sigma)=\begin{cases}\sum_{n=-\infty}^{-R}p_{n}(\sigma),&i=0,\\ p_{i-R}(\sigma),&i=1,2,\ldots,2R-1,\\ \sum_{n=R}^{\infty}p_{n}(\sigma),&i=2R.\end{cases} \tag{4}\] To simplify the notation we ignore implementation details like conversions to integer-valued cumulative sums for AC [13, 14]. It is assumed that \(R\) and AC precision are chosen to obtain compression very close to entropy. This type of entropy coding is used within a NN-based codec as shown in Fig. 2, corresponding to the scale hyperprior architecture [15]. There are other codec configurations, but our main interest is in the per-element coding stages shown in Fig. 3, which is used with all factorized priors. The latent \(y\) to be coded is quantized with \(Q_{y\to q}\), which is taken into account during floating-point training. The problems addressed here are caused by the quantization stage \(Q_{\sigma\rightarrow\hat{\sigma}}\), representing the use of low-precision network outputs during coding, but not during training. Since code vectors must be bitwise identical at the encoder and decoder [16], given a range \(\mathcal{U}=[\sigma_{\min},\sigma_{\max}]\), we assume they share a monotonic sequence \(\{t_{k}\}_{k=0}^{N}\) partitioning \(\mathcal{U}\), and that latent \(y\) is coded using \(\mathbf{c}(\rho_{k}^{*})\) whenever \(\hat{\sigma}\in[t_{k},t_{k+1})\). To determine the optimal values of \(\rho_{k}^{*}\) for code vector computation we measure the _average redundancy_, defined by the Kullback-Leibler divergence \[R(\sigma,\rho)\stackrel{{\text{\tiny def}}}{{=}}\sum_{n=-\infty}^ {\infty}p_{n}(\sigma)\log_{2}\!\left(\frac{p_{n}(\sigma)}{p_{n}(\rho)}\right). \tag{5}\] Since practical image and video codecs are meant to be efficient in a wide range of bit rates, it is interesting to have entropy coding designed to minimize _relative redundancy_ \[L(\sigma,\rho)\stackrel{{\text{\tiny def}}}{{=}}\frac{R(\sigma, \rho)}{H(\sigma)}. 
\tag{6}\] and to enable good performance for all use cases use \[\rho_{k}^{*}=\operatorname*{arg\,min}_{\rho}\int_{t_{k}}^{t_{k+1}}L(\sigma, \rho)\,\mathrm{d}\sigma. \tag{7}\] ## 3 New PDF parameterization From these definitions it is possible to optimize non-uniform quantization schemes as shown in the example of Fig. 4, where similarly to other quantization solutions, all intervals have the same maximum relative redundancy and averages \[\bar{L}_{k}\stackrel{{\text{\tiny def}}}{{=}}\frac{1}{t_{k+1}-t_ {k}}\int_{t_{k}}^{t_{k+1}}L(\sigma,\rho_{k}^{*})\,\mathrm{d}\sigma. \tag{8}\] With sufficiently fine quantization of \(\sigma\), \(L(\sigma,\rho_{k}^{*})\) is approximately quadratic, and we can use \[\rho_{k}^{*} \approx \left\{\rho:L(t_{k},\rho)=L(t_{k+1},\rho)\right\}, \tag{9}\] \[\bar{L}_{k} \approx \frac{L(t_{k},\rho_{k}^{*})}{3}=\frac{L(t_{k+1},\rho_{k}^{*})}{3}. \tag{10}\] Neural-based codecs using precise floating-point computations can be implemented with arrays of pre-computed Figure 4: Example of nonuniform quantization of parameter \(\sigma\), designed for constant maximum relative redundancy. Figure 3: Per-element entropy coding, including quantization of pdf parameter \(\sigma\). Figure 2: Example of a neural-based codec architecture, including scale hyperprior networks for entropy coding. code vectors, and the approach shown in the example of Fig. 4. However, for practical codecs, which must handle a wide range of bit rates, it is necessary to support, for example, \(\sigma\in[0.1,1000]\) with the precision of hyperprior network output \(\hat{\sigma}\) constrained to only 8 bits. This means that \(\{t_{k}\}_{k=0}^{N}\) must also use only 8 bits, severely constraining the design and resulting in significant losses. To solve this problem we can exploit the fact that \(\sigma\) is only used for rate estimation during training, and while computing code vectors. The proposed solution is shown in Fig. 5: hyperprior networks are trained to compute a new pdf parameter \(u\in[0,1]\) that is converted as \(\sigma=T(u)\). For analysis we define \(u\in\mathbb{R}\), but note that during actual coding the quantized networks only need to generate integer \(k\). This approach is used in [16], where the conversion is chosen to be in the form \(\sigma=e^{\alpha u+\beta}\). Fig. 6 shows how optimal \(T(u,N)\) can be defined, for a given \(N\), to obtain constant maximum relative redundancy, and the following algorithm can be used for its computation. 1. Given \(N\) and \([\sigma_{\min},\sigma_{\max}]\), choose initial maximum redundancy \(\epsilon\); initialize \(t_{0}=\sigma_{\min}\). 2. For \(n=1,2,\ldots,N\): 1. Set \(\rho_{n-1}^{*}=\{\rho>t_{n-1}:L(t_{n-1},\rho)=\epsilon\}\); 2. Set \(t_{n}=\{t>\rho_{n-1}^{*}:L(t,\rho_{n-1}^{*})=\epsilon\}\). 3. If \(t_{N}\approx\sigma_{max}\) then stop. 4. If \(t_{N}<\sigma_{max}\) then increase \(\epsilon\), otherwise decrease \(\epsilon\), using a method for unidimensional search. 5. Go to step 2. The surprising conclusion from computing \(T(u,N)\) for several values of \(N\), and interpolating between the sampled values, is that all results are practically identical. This means that if we compute \[T(u)=\lim_{N\rightarrow\infty}T(u,N), \tag{11}\] we obtain a single reference function that can be used for all network trainings, and the value of \(N\) can be chosen and modified later, without the need for re-training. The rationale for determining \(T(u)\) can be seen from the lower part of Fig. 6. 
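The quantities defined above are straightforward to reproduce numerically, which also makes the per-interval rule of eq. (9) easy to verify. The following is a minimal sketch under the normal model of eq. (1); the truncated summation range and the helper names are illustrative choices, not the paper's implementation.

```python
# A small numerical sketch of eqs. (1)-(6) and the interval rule of eq. (9),
# assuming a normal latent with unit-step quantization; the summation range is
# truncated for illustration only.
import numpy as np
from scipy.special import erfc
from scipy.optimize import brentq

def probs(sigma, R=256):
    n = np.arange(-R, R + 1)
    p = 0.5 * (erfc((2 * n - 1) / (np.sqrt(8) * sigma))
               - erfc((2 * n + 1) / (np.sqrt(8) * sigma)))
    return np.clip(p, 1e-30, None)              # keep log2 finite in the tails

def entropy(sigma):                              # eq. (2), bits per sample
    p = probs(sigma)
    return float(-np.sum(p * np.log2(p)))

def rel_redundancy(sigma, rho):                  # eqs. (5)-(6)
    p, q = probs(sigma), probs(rho)
    return float(np.sum(p * np.log2(p / q))) / entropy(sigma)

def interval_rep(t_lo, t_hi):                    # eq. (9): equalize end losses
    return brentq(lambda r: rel_redundancy(t_lo, r) - rel_redundancy(t_hi, r),
                  t_lo * 1.0001, t_hi * 0.9999)

# e.g. interval_rep(1.0, 1.2) returns the rho* used to build the code vector
# shared by all scales falling in [1.0, 1.2).
```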
For increasing values of \(N\) the redundancy curves in all intervals should approximate quadratic equations on \(u\), with same second derivatives. This can be achieved if we can find \(T(u)\) and constant \(\alpha\) such that \[\frac{\partial^{2}}{\partial u^{2}}\,L(T(u),\rho)\bigg{|}_{\rho=T(u)}=\alpha, \quad u\in[0,1]. \tag{12}\] Defining \[\psi(\sigma)\stackrel{{\text{\tiny def}}}{{=}}\left.\frac{ \partial^{2}}{\partial\sigma^{2}}\,L(\sigma,\rho)\right|_{\rho=\sigma} \tag{13}\] it can be shown that, thanks to properties of \(L(\sigma,\rho)\), eq. (12) is equivalent to \[\frac{\mathrm{d}}{\mathrm{d}u}\,T(u)=\sqrt{\frac{\alpha}{\psi(T(u))}}, \tag{14}\] and thus function \(T(u)\) can be determined by solving this ordinary differential equation (ODE) using boundary conditions \(T(0)=\sigma_{\min},\ T(1)=\sigma_{\max}\). For any sequence of differentiable parameterized probabilities it can be shown that \[\psi(\sigma)=\frac{1}{\ln(2)H(\sigma)}\sum_{n=-\infty}^{\infty}\frac{1}{p_{n} (\sigma)}\ \left[\frac{\mathrm{d}p_{n}(\sigma)}{\mathrm{d}\sigma}\right]^{2}, \tag{15}\] and for normal pdf we have \[\psi(\sigma) = \frac{1}{\ln(2)H(\sigma)}\left\{\frac{[\phi_{1}(\sigma)]^{2}}{1- \omega_{1}(\sigma)}+\right.\] \[\left.\sum_{n=1}^{\infty}\frac{[\phi_{2n-1}(\sigma)-\phi_{2n+1}( \sigma)]^{2}}{\omega_{2n-1}(\sigma)-\omega_{2n+1}(\sigma)}\right\},\] where \[\omega_{k}(\sigma)\stackrel{{\text{\tiny def}}}{{=}}\mathrm{erfc} \!\left(\frac{k}{\sqrt{8}\sigma}\right),\quad\phi_{k}(\sigma)\stackrel{{ \text{\tiny def}}}{{=}}\frac{k\,e^{-k^{2}/(8\sigma^{2})}}{\sqrt{2\pi} \sigma^{2}}. \tag{17}\] ## 4 Experimental Results The ODE of eq. (14) was solved using the 4th-order Runge-Kutta method [17, SS17.1], and boundary conditions \(\sigma\in[0.1,1000]\). The solution \(T(u)\) is shown in Fig. 7. A quick observation may lead to the incorrect conclusion that it can be approximated by a function like \(10^{4u^{2}-1}\) (red line). Since \(T(u)\) is defined by a differential equation, the quality of the approximation must be measured by relative errors of derivatives. Those are shown in Fig. 8, where we can Figure 5: Proposed modification of codec in Fig. 3, replacing pdf parameter \(\sigma\) with parameter \(u\) optimized for quantization. Figure 6: Mapping from non-uniform to uniform quantization. also see a much better approximation \(T_{\pi}(u)=10^{\pi(u)}\), where polynomial \[\pi(u)=2.49284\,u^{3}+0.93703\,u^{2}+0.57013\,u-1, \tag{18}\] minimizes the maximum relative derivative error. Fig. 9 shows how the average relative redundancy \(\bar{L}(\sigma)\) varies with \(\sigma\) when uniform quantization is applied to pdf parameter \(u\), and \(N\) code vectors are used for entropy coding latent variables \(y\), as shown in Fig. 5. Following the condition defined by eq. (13), \(\bar{L}(\sigma)\) does not change when \(T(u)\) is used for pdf parameter conversion. On the other hand, there are relatively small variations when approximation \(T_{\pi}(u)\) is used instead, and those deviations correspond to errors in approximating the derivatives (cf. Fig. 8). We also observe that the curves are nearly perfectly parallel, confirming the predominance of the quadratic term in the Taylor series of the relative redundancy. This is also verified by the fact that \(\bar{L}(\sigma)\) quadruples when the \(N\) is halved. Non-uniform quantizations defined by transformation \(T(u)\) were used for entropy coding values from a floating-point implementation of the scale hyperprior codec [15], in the Kodak image dataset [18]. 
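Before turning to the measured redundancies, note that the fitted polynomial of eq. (18) gives a closed-form, decoder-side mapping from the uniformly quantized parameter back to a scale. The sketch below assumes an interval-midpoint convention, which is an illustrative choice rather than the paper's exact rule.

```python
# Closed-form approximation of T(u) from eq. (18), consistent with the
# boundary conditions sigma in [0.1, 1000]: T_pi(0) = 0.1, T_pi(1) = 1000.
def T_pi(u):
    return 10.0 ** (2.49284 * u**3 + 0.93703 * u**2 + 0.57013 * u - 1.0)

def sigma_for_interval(k, N):
    """Representative scale for the k-th of N uniform intervals of u
    (midpoint convention assumed for this sketch)."""
    return T_pi((k + 0.5) / N)

# The same trained network can serve coarser tables, e.g. by dropping the
# three least-significant bits of an 8-bit output (N = 256 -> 32) before the
# lookup, as discussed with Table 1 below.
```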
Redundancy results are shown in Table 1, for average bit rates between 0.25 and 2.5 bits/pixel. For values of \(N\) larger than about 128 (7 bit representation) the redundancy from quantization is 0.01% or below, which is within the experimental variation. For smaller values of \(N\) we can observe that * The redundancies are practically constant for all bit rates, meaning that the proposed coding method achieves the design objective of having constant relative redundancies. * The average redundancy levels match those of Fig. 9, showing that features predicted by theory match the practical implementation. The second column of Table 1 contains the total amount of memory needed to store the code vectors, when using 16 bits/element. This shows that the proposed method can be directly used to minimize memory. For example, even if the network is able to support 8 bit outputs, and redundancy below 0.5% is acceptable, the pdf parameter can be further quantized to 5 bits, by simply discarding the 3 least significant bits, to reduce CV memory from 51.6 to 6.5 Kbytes. ## 5 Conclusions It is shown that the effects of quantization on the entropy coding can be analyzed by evaluating coding redundancy, and this can be used for changing the pdf parameter to be learned so that average relative redundancy becomes practically constant. In consequence, networks do not need to be re-trained when the pdf parameter quantization is changed, which we show can also be done intentionally to minimize memory for code tables. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \(N\) & CV mem. & \multicolumn{3}{c|}{Rel. redundancy (\%) @ bit rate} \\ \cline{2-7} & (Kbytes) & 0.25 & 0.50 & 1.00 & 1.50 & 2.50 \\ \hline \hline 256 & 51.6 & 0.00 & 0.00 & 0.00 & 0.00 & 0.01 \\ 192 & 38.7 & 0.00 & 0.00 & 0.00 & 0.01 & 0.01 \\ 128 & 25.8 & 0.04 & 0.04 & 0.04 & 0.03 & 0.03 \\ 96 & 19.4 & 0.03 & 0.04 & 0.04 & 0.04 & 0.05 \\ 64 & 12.9 & 0.13 & 0.12 & 0.12 & 0.12 & 0.11 \\ 48 & 9.7 & 0.24 & 0.23 & 0.22 & 0.22 & 0.20 \\ 32 & 6.5 & 0.38 & 0.38 & 0.40 & 0.41 & 0.42 \\ 24 & 4.9 & 0.85 & 0.81 & 0.81 & 0.82 & 0.79 \\ 16 & 3.4 & 1.79 & 1.76 & 1.76 & 1.78 & 1.75 \\ \hline \end{tabular} \end{table} Table 1: Coding results obtained using scale hyperprior codec applied to 24 images of Kodak dataset. Figure 8: Derivatives of pdf parameter conversion function \(T(u)\) and its approximations. Figure 7: The optimal pdf parameter conversion function \(T(u)\) and two approximations. Figure 9: Average relative redundancies from \(T(u)\) (dashed) and approximation \(10^{\pi(u)}\) (solid), for different number of quantization intervals \(N\).
2310.10884
Diamond-lattice photonic crystals assembled from DNA origami
Colloidal self-assembly allows rational design of structures on the micrometer and submicrometer scale. One architecture that can generate complete 3D photonic band gaps is the diamond cubic lattice, which has remained difficult to realize at length scales comparable to the wavelength of visible or ultraviolet light. Here, we demonstrate three-dimensional photonic crystals self-assembled from DNA origami that act as precisely programmable patchy colloids. Our DNA-based nanoscale tetrapods crystallize into a rod-connected diamond cubic lattice with a periodicity of 170 nm. This structure serves as a scaffold for atomic layer deposition of high refractive index materials such as TiO$_2$, yielding a tunable photonic band gap in the near-ultraviolet.
Gregor Posnjak, Xin Yin, Paul Butler, Oliver Bienek, Mihir Dass, Ian D. Sharp, Tim Liedl
2023-10-16T23:33:22Z
http://arxiv.org/abs/2310.10884v2
# Diamond photonic crystals assembled from DNA origami ###### Abstract Colloidal self-assembly allows rational design of structures on the micron and submicron scale, potentially leading to physical material properties that are rare or non-existent in nature. One of the architectures that can generate complete 3D photonic band gaps is the diamond cubic lattice, which has remained difficult to realize at length scales comparable to the wavelength of visible light. Here, we demonstrate 3D photonic crystals self-assembled from DNA origami that act as precisely programmable patchy colloids. Our DNA-based nanoscale tetrapods crystallize into a rod-connected diamond cubic lattice with a periodicity of 170 nm that serves as a scaffold for atomic layer deposition of high refractive index materials such as TiO\({}_{2}\), yielding a tunable photonic band gap in the near UV range. ## Introduction Control over the structure of materials is of fundamental importance as it can lead to the emergence of non-trivial physical properties, from topological semiconductors on the atomistic scale to metamaterials on micron and larger scales. Design and assembly of periodic structures at the intermediate length scale of hundreds of nanometers is of particular interest for optics and photonics since it allows construction of three-dimensional (3D) photonic crystals. Photonic crystals are materials possessing periodicity on length scales comparable to the wavelength of light, which together with the symmetry of their Brillouin zone leads to the formation of forbidden bands of energies for photons - the photonic band gap (PBG) [1, 2, 3]. This makes them optical analogues of electron semiconductors and results in remarkable effects such as omnidirectional reflections, lossless waveguiding, and suppressed emission. While such systems have been fabricated in 3D for longer wavelengths [4, 5, 6], their realization for visible and UV light remains limited because of the difficulty of their manufacturing [7, 8, 9]. Moreover, most realizations of photonic crystals to date are based on surface-based microfabrication in two dimensions or posses only a limited number of layers forming the third dimension. A reliable method of assembling tunable 3D structures would open a new design paradigm and allow the full potential of optical circuits to be harnessed. A promising approach for fabrication of 3D structures with sub-micron features is self-assembly of colloidal crystals, with which materials can be structured on the length scale of several 100 nm [10, 11, 12]. However, colloids typically form close-packed crystals of different symmetries yielding a narrower and harder to achieve photonic band gaps, which limits their potential for photonic applications. While this can be circumvented by back-filling the voids within the colloidal crystals and selectively etching away the colloidal particles [6, 8], there remain limitations on achievable volume fill ratios and symmetries of the structures with this approach. Recently, significant progress has been made by sequential synthesis and etching with different metals to generate cage-like particles that form close-packed crystals of different symmetries with much larger voids than in the case of spherical colloids [13]. However, the lossy metallic lattices are more suitable for different types of metamaterials rather than photonic crystals and there are still considerable constraints in the types of structures that can be produced. 
One of the most demanding geometries for colloidal crystallization is the diamond cubic lattice with its non-close packed structure. The symmetry of the lattice requires that each building block has four neighbours in a tetrahedral configuration, which can be theoretically achieved with directional bonds [14, 15, 16], e.g. with patchy particles that are difficult to realize experimentally. Growth of such crystals is not favored because the hexagonal diamond lattice has the same free energy as the diamond cubic lattice, which causes the resulting material to be a mix of both structures [15]. In principle, this degeneracy can be broken either by introducing an orientation-dependent bonding potential on the binding patches [14] or by precisely controlling the connectivity of the four patches on each monomer [17] - both of which have yet to be demonstrated experimentally. Recently, several realizations of diamond-like structures have been achieved through assembly of gold nanoparticles into a diamond lattice with DNA origami connectors [18], by directly connecting DNA origami frameworks into a diamond cubic lattice with 86 nm periodicity [19, 20], and by forming a close-packed diamond crystal of micron-sized tetrahedral clusters of spheres [21]. However, none of these realizations has been experimentally demonstrated to lead to a photonic band gap. As a bottom-up approach, DNA origami [22] is uniquely suited for the task of 3D assembly, enabling unprecedented control over structure and corresponding physical properties. This versatile method combines the sequence-dependent selective binding of DNA, which has already been utilized in many colloidal crystal realizations [12], with unique particle design flexibility [23, 24] and control over binding avidity and orientation [25, 26, 27, 28, 29]. The selective binding of adenine (A) to thymine (T) and cytosine (C) to guanine (G) enables \(\sim\)200 units of 20 - 50 nucleotide long staples to fold an \(\sim\)8000 nucleotide long scaffold strand into pre-designed shapes by formation of double-stranded helices [22]. DNA origami has been used to assemble patchy particle-like objects, which form non-close packed crystals through DNA-covered nanoparticle connectors [18], shape complementarity [30], or selective binding of short sequences of DNA [19]. Here, we demonstrate the use of DNA origami as patchy colloids with a programmed torsional potential on their binding patches to reliably orient neighboring monomers and assemble them into the diamond cubic lattice. For this purpose we designed a DNA origami tetrapod in which each of its four arms serves as a connecting patch to its neighbors. The pattern of binding extensions on the end surface of each arm provides the torsional binding potential favoring a 60\({}^{\circ}\) rotation between tetrapods (Fig. 1A). The tetrapods crystallize into a rod-connected diamond cubic lattice (Fig. 1B) which is predicted to show one of the widest and most robust photonic band gaps [31, 32]. The 170 nm periodicity of our lattice leads to a photonic band gap in the UV, which only opens when the refractive index of the dielectric material in the lattice is higher than \(\sim\)2 [31, 32]. Because of this requirement, we first silicified our structure to ensure mechanical stability during drying and then used atomic layer deposition (ALD) [33] to grow varying thicknesses of high-refractive index materials on the surface of the lattice (Fig. 1C). 
We observed a strong reflection in the UV that is specific to the structure and tunable with the thickness of the high-refractive index cladding. With this approach, we realized a rod-connected diamond cubic lattice on the scale of a few hundred nanometers, as well as demonstrated a photonic effect with a DNA origami-based material. ## Crystal growth We designed the DNA origami tetrapod structure in cadnano [34] (for design details see fig. S1 and S2). Each tetrapod has 4 equivalent 35 nm long "arms", oriented at the tetrahedral angle of 109.5\({}^{\circ}\) with respect to each other (Fig. 1A). The cross-section of the arms is a 24-helix bundle (24hb) with an approximately circular cross section and a diameter of 15 nm. In the central part of the tetrapod, each arm splits into three 8-helix bundles (8hb) that bend by \(\sim\)70\({}^{\circ}\) due to insertions and deletions of base pairs that induce over- and under- twisting of the DNA double helices [35] (fig. S2). Each of these bent 8hbs merges with two other 8hbs after the bend to form the neighboring arms of the tetrapod. The double-stranded helices in each of the arms terminate at the same distance from the center of the tetrapod, resulting in a flat end surface. These end surfaces are modified with single-stranded DNA extensions of the staple strands to control the interaction between the monomers. With no extensions and hence blunt ends, the monomers tend to aggregate uncontrollably and form disordered networks even at temperatures close to the melting temperature of the tetrapods (around 55 \({}^{\circ}\)C [36], fig. S5). Therefore, we extended 18 of the end staples of each arm with 3 cytosines (C\({}_{3}\); Fig. 1A) to lower the temperature at which the monomers begin to form lattices. Bonds between the tetrapods are formed by extending the staple strands on 6 of the helices on each end surface with a binding sequence TTTGGAAGG. Even though the tetrapods with the binding sequences have the same binding sequences, the binding sequences are not formed by the monomers. Figure 1: Design and growth of the diamond cubic crystals. (**A**) A model of two DNA origami tetrapods in the staggered configuration. Each gray cylinder in the model represents a double-stranded DNA helix. The binding sequence and the extensions of staples are shown in red and blue. The left inset shows the positions of extension on one of the end surfaces of the tetrapod. The right inset shows a TEM image of three tetrapods, with each showing only three arms, as the fourth arm is pointing out of the plane of the image. (**B**) Unit cell of the designed rod-connected diamond cubic lattice with a periodicity of 170 nm. (**C**) The unit cell, covered in layers of SiO\({}_{2}\) and ALD-grown high refractive index material. (**D**) Two DNA origami cubic diamond crystals, covered with a layer of SiO\({}_{2}\). (**E**) A 25 \(\mu\)m DNA origami diamond cubic crystal, covered with layers of SiO\({}_{2}\) and TiO\({}_{2}\). The octahedrons in the lower right corners of (D) and (E) illustrate the shape and orientation of the crystals. correct symmetry to form the desired diamond cubic lattice, this is not a sufficient condition because the they can form crystalline lattices by binding in two different conformations - the staggered conformation, where two neighbouring tetrapods are rotated by 60\({}^{\circ}\) (fig. S5A), and the eclipsed conformation, where there is no rotation between two neighbours (fig. S5B). 
While the diamond cubic lattice is formed by tetrapods with only staggered conformations (Fig. 1B), in the hexagonal diamond lattice each tetrapod is in the eclipsed configuration with one of its neighbors and in the staggered with the other three (fig. S5C,D). Crystallization of the diamond cubic lattice is further complicated by the fact that both types of diamond lattice have the same free energy, which means the tetrapods would crystallize in a mix of both lattices with many defects [15, 17]. To break this degeneracy, we placed the 6 binding sequences on each surface in a pattern with a three-fold symmetry so that the binding between two neighbouring tetrapods will be strongest when their terminal surfaces are rotated by 60\({}^{\circ}\) and all 12 binding extensions can form bonds between their guanines and the C\({}_{3}\) extensions on the other tetrapod (inset in Fig. 1A ). Because all four arms of the tetrapod are equivalent, this means that each pair of tetrapod neighbours is rotated by 60\({}^{\circ}\), which leads to the diamond cubic structure (Fig. 1B). In our initial experiments, the folded and purified monomers were slowly annealed with an 80 h ramp from 52 \({}^{\circ}\)C to 20 \({}^{\circ}\)C to form crystal lattices. This protocol already yielded crystals of moderate quality (fig. S6). After growth, all crystals were coated with a thin layer of silica (SiO\({}_{2}\)) to increase their mechanical stability before drying (detailed protocol in Materials & Methods). After the silicification procedure, the thickness of our DNA origami structures is \(\sim\) 20 nm and the pores in the crystal are \(\sim\) 100 nm in diameter. The designed center-to-center distance of the tetrapods is 75 nm, which corresponds to a lattice periodicity of 170 nm. The quality of the crystal structure was improved significantly by incubating the crystal growth solution for 8 h at 40 \({}^{\circ}\)C after the first annealing ramp, pipetting away the top \(70\%\) of the supernatant and then repeating the ramp. Under optimized conditions, diamond cubic single crystals exhibit regular octahedral crystal habits with all eight facets comprising equilateral triangles of {111} planes and dihedral angles over shared edges of 109.5\({}^{\circ}\). Our crystal growth protocol results in most crystals having sizes between 5 and 10 \(\mu\)m (Fig. 1D and figs. S7 and S8), though some can grow to as large as 25 \(\mu\)m (Fig. 1E). Based on the lattice periodicity of 170 nm and 8 tetrapods per unit cell, this means these crystals incorporate \(10^{5}\) - \(10^{7}\) tetrapods. The {111} planes can have different appearances depending on the angle at which they are observed. When one of the faces of the octahedron is viewed along the {111} direction, it presents a hexagonal pattern in which the tetrapods of lower layers are also visible (Fig. 2A). If the same face is viewed at a slight tilt, connections between the tetrapods from the lower layers become visible (Fig. 2B). At an even larger tilt angle, the hexagonal grid appears as two overlapping rectangular grids (Fig. 2C). The crystals we observe have surfaces with only a few steps between different {111} planes (Fig. 2D) and well-defined edges (Fig. 2E). However, the edges and especially the vertices of the octahedra are often ragged with missing tetrapods and facets that exhibit irregular steps (figs. S9 and S10). The steps on the facets depend on the quality and speed of crystal growth. 
The non-uniform edges and vertices could additionally be caused by damage during handling of the crystals, e.g. pipetting. In rare cases we can observe twinning of crystals, where the structure of the crystals is mirrored across a single plane (Fig. 2F). Closer inspection reveals that the mirror plane is an interface across which the tetrapods are not rotated by 60\({}^{\circ}\), or in other words, a layer of hexagonal diamond (Fig. 2G). This can be understood as a hexagonal diamond defect in the otherwise diamond cubic structure and it appears to be stable only if it forms an extended plane since we do not observe localized defects of this type. Such twinning planes appear only in a few percent of the crystals, which suggests they are relatively unstable. Under non-optimal crystal growth conditions, where there are more disordered structures without well-defined habits, these hexagonal diamond defects are much more prevalent (fig. S11). ## Optical properties To open a complete photonic band gap in an inverse diamond structure the refractive index contrast between the air gaps and the dielectric material needs to be higher than \(\sim\) 2 [31, 32]. Since silica has a refractive index of \(\sim\)1.5 we had to coat the silicified structure with a higher refractive index material. As a starting point, we used MPB [37] to simulate the photonic band structure of our crystal with an additional high-refractive index cladding of varying thickness. Figure 2: Structure of silica-coated diamond crystals. (**A** - **C**) SEM images of \(\{111\}\) facets of the crystal viewed at (A) normal orientation or tilted by (B) 20\({}^{\circ}\) and (C) 40\({}^{\circ}\) from the normal as shown in the upper insets. The lower insets show the appearance of a model of the crystal at the same orientation. (**D**) A zoomed-in view of steps of \(\{111\}\) planes on a crystal facet. (**E**) A well-defined edge between two crystal facets. (**F** and **G**) An example of a twinned crystal (F) where the mirror plane is a single plane of hexagonal diamond, indicated with the red dashed lines in the zoomed-in SEM image and the model shown in the inset of panel (G). The location and width of the photonic band gap for different thicknesses of cladding changes considerably with the refractive index of the cladding material and can be tuned across a relatively wide range of wavelengths, from 220 nm to 350 nm (fig. S12). Experimentally, we chose to coat our crystals with TiO\({}_{2}\) (titania) as this high refractive index material can be coated with atomic layer deposition (ALD). The main advantage of ALD in our application is that the material grows layer by layer, guaranteeing a conformal coating on the structure with uniform thickness throughout the crystal. Titania absorbs light below \(\sim\)350 nm, which consequently means its refractive index varies considerably across the range of wavelengths where we would expect a photonic band gap for our structure (fig. S13). Figure 3B shows a composite graph of calculated photonic band gaps for different thicknesses of TiO\({}_{2}\) coatings, where for each wavelength the relevant refractive index, measured experimentally using spectroscopic ellipsometry on a thin film reference sample, was used in the simulation (fig. S13). 
The relatively large variation of the refractive index of TiO\({}_{2}\) results in a non-linear variation of the photonic band gap (PBG) with varying volume fill ratio: at lower thicknesses of TiO\({}_{2}\) cladding, the simulations show a very wide PBG that shifts to longer wavelengths and becomes progressively narrower with increasing thickness of TiO\({}_{2}\), influenced by the lower refractive index of titania at longer wavelengths. The measured extinction coefficient of titania (red curve in Fig. 3B) shows that a large part of the possible photonic band gap is affected by relatively strong absorption. This could be mitigated by using a different ALD-compatible material, such as Ta\({}_{2}\)O\({}_{5}\), which absorbs at shorter wavelengths than TiO\({}_{2}\) (fig. S14). However, it also has a lower refractive index, which leads to a narrower band gap with a more limited range of tunability of its central wavelength (fig. S15). Films of diamond crystals coated with titania, as shown in Fig. 3A, were characterized by optical reflection measurements (Fig. 3C).

Figure 3: Optical properties of DNA origami photonic crystals. (**A**) Wide-field and zoomed-in (inset) SEM images of a deposited film of silicified DNA origami diamond crystals, covered with a 13 nm layer of titania (TiO\({}_{2}\)). (**B**) Calculated photonic band gaps for different thicknesses of TiO\({}_{2}\) cladding on the diamond cubic structure and the measured extinction coefficient (k) of ALD-grown TiO\({}_{2}\). The photonic band gap is calculated using measured values of the refractive index of ALD-deposited films. (**C**) Measured reflection spectra of DNA origami diamond crystal films with different thicknesses of titania coating.
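As a quick consistency check on the quoted \(10^{5}\) - \(10^{7}\) tetrapods per crystal, the volume of an octahedral crystal can be divided by the volume of the cubic unit cell (eight tetrapods per cell). The Python sketch below is our own back-of-the-envelope estimate and assumes that the reported 5 - 25 \(\mu\)m crystal sizes refer to the octahedron edge length:

```python
import math

# Rough estimate of how many tetrapods a single octahedral crystal contains,
# assuming a cubic unit cell of 170 nm with 8 tetrapods per cell and treating
# the reported crystal size as the octahedron edge length.
A_NM = 170.0              # lattice periodicity (nm)
TETRAPODS_PER_CELL = 8

def tetrapod_count(edge_um):
    edge_nm = edge_um * 1e3
    octahedron_volume = (math.sqrt(2) / 3.0) * edge_nm**3   # regular octahedron
    cells = octahedron_volume / A_NM**3
    return cells * TETRAPODS_PER_CELL

for size in (5, 10, 25):
    print(f"{size:>2} um crystal  ->  ~{tetrapod_count(size):.1e} tetrapods")
# ~1e5 for 5 um, ~8e5 for 10 um and ~1e7 for 25 um crystals,
# consistent with the 1e5-1e7 range quoted in the text.
```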
2310.01751
A nonmonotone proximal quasi-Newton method for multiobjective optimization
This paper proposes a nonmonotone proximal quasi-Newton algorithm for unconstrained convex multiobjective composite optimization problems. To design the search direction, we minimize the max-scalarization of the variations of the Hessian approximations and nonsmooth terms. Subsequently, a nonmonotone line search is used to determine the step size, in which sufficient decrease is only required relative to a convex combination of recent function values. Under the assumption of strong convexity of the objective function, we prove that the sequence generated by this method converges to a Pareto optimal point. Furthermore, based on the strong convexity, Hessian continuity and Dennis-Mor\'{e} criterion, we use a basic inequality to derive the local superlinear convergence rate of the proposed algorithm. Numerical results demonstrate the feasibility and effectiveness of the proposed algorithm on a set of test problems.
Xiaoxue Jiang
2023-10-03T02:21:07Z
http://arxiv.org/abs/2310.01751v1
# A nonmonotone proximal quasi-Newton method for multiobjective optimization

###### Abstract

This paper proposes a nonmonotone proximal quasi-Newton algorithm for unconstrained convex multiobjective composite optimization problems. To design the search direction, we minimize the max-scalarization of the variations of the Hessian approximations and nonsmooth terms. Subsequently, a nonmonotone line search is used to determine the step size, in which sufficient decrease is only required relative to a convex combination of recent function values. Under the assumption of strong convexity of the objective function, we prove that the sequence generated by this method converges to a Pareto optimal point. Furthermore, based on the strong convexity, Hessian continuity and Dennis-Moré criterion, we use a basic inequality to derive the local superlinear convergence rate of the proposed algorithm. Numerical results demonstrate the feasibility and effectiveness of the proposed algorithm on a set of test problems.

Keywords: Multiobjective optimization; Composite optimization; Proximal gradient; Quasi-Newton; Superlinear convergence

MSC: 90C29 90C30

## 1 Introduction

In this paper, we consider the following multiobjective composite optimization problems (MCOPs): \[\min_{x\in\mathbb{R}^{n}}\ F(x)=(F_{1}(x),F_{2}(x),...,F_{m}(x))^{T},\] (MCOP) where \(F_{j}:\mathbb{R}^{n}\rightarrow\mathbb{R},j=1,2,...,m\), have the following special separable structure: \[F_{j}(x)=f_{j}(x)+g_{j}(x),\] where \(f_{j}(x)\) is convex and continuously differentiable, the gradient \(\nabla f_{j}\) is Lipschitz continuous, and \(g_{j}(x)\) is convex and continuous but not necessarily differentiable. Applications of MCOPs include problems in the areas of image processing [22], robust optimization [20] and machine learning [8].

In the past two decades, descent methods have attracted extensive attention in the multiobjective optimization community (see, e.g., [1; 6; 18; 19; 21; 26; 29; 34] and references therein). Initially, Fliege et al. [18] proposed the multiobjective steepest descent method, which obtains the descent direction by solving an auxiliary, non-parametric scalar subproblem at each iteration, eliminating the need for predefined parameters. Since then, the steepest descent method has led to a series of multiobjective optimization algorithms. Recently, Tanabe et al. [32] proposed a multiobjective proximal gradient algorithm (MPGA) for solving MCOPs. Subsequently, in [33], they used a merit function to analyze the convergence rate of MPGA under nonconvex, convex, and strongly convex conditions. These results are consistent with their counterparts in scalar optimization. The convergence rate analysis in [33] revealed that MPGA exhibits slow convergence on ill-conditioned problems. In order to deal with this problem, Ansary [2] uses the idea of the multiobjective Newton method introduced by Fliege et al. [19] and proposes a Newton-type proximal gradient algorithm (NPGA) for MCOPs, which extends the corresponding method in scalar optimization [24]. Specifically, in each iteration of NPGA, a subproblem is constructed using the second-order approximation of the smooth term, followed by an Armijo line search strategy to determine the step size. Under reasonable assumptions, Ansary proved that every accumulation point of the sequence generated by NPGA is a critical point. However, the convergence rate of NPGA was not obtained in [2]. Subsequently, Chen et al.
[11] analyzed the convergence rate of NPGA for MCOPs by using a fundamental inequality, and obtained the quadratic convergence rate of NPGA under the strongly convex assumption. A challenge associated with this method, however, is that NPGA uses exact Hessian information, which is usually not easy to obtain. In the conclusion of [2], Ansary pointed out that NPGA is restricted to convex multiobjective optimization problems and that the Hessian of each smooth function is used in each iteration, which is computationally expensive. He also pointed out that overcoming this limitation by using the idea of the quasi-Newton method is a promising research direction: in such cases, approximate information can be used in place of the exact second-order Hessian. Lately, Peng et al. [28] proposed a proximal quasi-Newton algorithm (PQNA) for MCOPs. The subproblem of PQNA is constructed by using a BFGS quadratic approximation of the Hessian of the smooth term and an additional quadratic regularization term; a monotone line search is then used to determine the step size. Under mild assumptions, they showed that each accumulation point of the sequence generated by PQNA, if it exists, is a Pareto stationary point. However, it is worth mentioning that the superlinear convergence rate of PQNA was not obtained in [28]. It is well known that the single-objective proximal quasi-Newton algorithm leverages the non-expansiveness of the proximal operator to establish superlinear convergence [24]. In the multiobjective case, however, when the subproblem is formulated as a proximal operator involving dual variables, the dual variables change from one iteration to the next. Consequently, unlike in scalar optimization, it is not feasible to prove the superlinear convergence of the multiobjective proximal quasi-Newton algorithm via the properties of the proximal operator. Meanwhile, in the realm of multiobjective optimization, nonmonotone line search techniques are extensively employed and have been demonstrated to be more effective than monotone algorithms [1].

In this paper, we revisit PQNA for MCOPs. Taking inspiration from the quasi-Newton method, which approximates second-derivative matrices instead of explicitly evaluating them, and incorporating the nonmonotone line search technique, we propose a nonmonotone proximal quasi-Newton algorithm (NPQNA) for MCOPs. It must be pointed out that the NPQNA for MCOPs differs from the PQNA for MCOPs proposed by Peng et al. in [28] in several ways. Firstly, we use a nonmonotone line search strategy to find the step size, whereas the latter uses a monotone line search strategy. Secondly, if the Hessian approximation matrix is positive definite, the subproblem is already strongly convex, so we do not require the additional quadratic regularization term considered by Peng et al. [28] to achieve strong convexity; consequently, we omit the regularization term in the construction of the subproblem. Under standard assumptions, we establish that the sequence generated by NPQNA converges to a Pareto optimal point. Furthermore, under the assumptions of Hessian continuity, the Dennis-Moré criterion and strong convexity, we use a basic inequality to obtain the local superlinear convergence rate of NPQNA, which was not obtained in Peng et al. [28]. Numerical experiments show that NPQNA outperforms NPGA [2] and PQNA [28] in terms of the number of iterations, the number of function evaluations and CPU time on some test problems. The remainder of this paper is organized as follows.
Section 2 gives some preliminary remarks. In Section 3, we recall the nonmonotone line search. Section 4 presents the nonmonotone proximal quasi-Newton method. In Section 5, we establish the global convergence, and the local superlinear convergence rate of NPQNA. Section 6 presents the results of our numerical experiments. ## 2 Preliminaries Throughout this paper, the \(n\)-dimensional Euclidean space \(\mathbb{R}^{n}\) is equipped with the inner product \(\langle\cdot,\cdot\rangle\) and the induced norm \(\|\cdot\|\). For a positive definite matrix \(H\), the notation \(\|x\|_{H}=\sqrt{\langle x,Hx\rangle}\) is used to represent the norm induced by \(H\) on vector \(x\). For simplicity, we utilize the notation \(\Lambda_{m}=\{1,2,...,m\}\) for any \(m\in\mathbb{N}\), and define \[\Delta_{m}:=\left\{\lambda:\sum_{j\in\Lambda_{m}}\lambda_{j}=1,\lambda_{j}\geq 0 \right\}.\] Let \(\mathbb{R}^{m}\) be the \(m\)-dimensional Euclidean space with the partial order "\(\preceq\)" in \(\mathbb{R}^{m}\) induced by the Paretian cone \(\mathbb{R}^{m}_{+}\), given by \(y\preceq z\) (or \(z\succeq y\)) if and only if \(z-y\in\mathbb{R}^{m}_{+}\) with its associate relation "\(\prec\)", given by \(y\prec z\) (or \(z\succ y\)) if and only if \(z-y\in\mathbb{R}^{m}_{++}\), where \[\mathbb{R}^{m}_{+}:=\{x\in\mathbb{R}^{m}|x_{i}\geq 0,\ \forall i\in\Lambda_{m}\},\] and \[\mathbb{R}^{m}_{++}:=\{x\in\mathbb{R}^{m}|x_{i}>0,\ \forall i\in\Lambda_{m}\}.\] Since no unique solution which minimizes all objective functions simultaneously exists, we must decide which objective to improve. Hence, the concept of optimality has to be replaced by the concept of Pareto optimality or efficiency. Given a vector-valued function \(F:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\), we analyze the NPQNA for finding a Pareto optimum of (MCOP). A point \(x^{*}\in\mathbb{R}^{n}\) is said to be an Pareto optimum of the \(F\) if there does not exist \(x\in\mathbb{R}^{n}\) such that \(F(x)\leq F(x^{*})\) and \(F(x)\neq F(x^{*})\). A feasible point \(x^{*}\in\mathbb{R}^{n}\) is said to be a weakly Pareto optimum of the \(F\) if there does not exist \(x\in\mathbb{R}^{n}\) such that \(F(x)<F(x^{*})\). It is clear that every efficient solution of the \(F\) is a weakly Pareto solution, but the converse is not true. Definition 2.1 ([30]) \(x^{*}\in\mathbb{R}^{n}\) is a critical point of the (MCOP), if \[R(\partial F_{j}(x^{*};d))\cap(-\mathbb{R}^{m}_{++})=\emptyset,\] where \(R(\partial F_{j}(x^{*};d))\) denotes the range or image space of the generalized Jacobian of the function \(F_{j}\) at \(x^{*}\). The classic directional derivative of \(F_{j}\) at \(x^{*}\) in direction \(d\) is defined as \[F^{{}^{\prime}}_{j}(x^{*};d):=\lim_{\alpha\downarrow 0}\frac{F_{j}(x^{*}+ \alpha d)-F_{j}(x^{*})}{\alpha}.\] Definition 2.1 shows that \[(F^{{}^{\prime}}_{1}(x^{*};d),F^{{}^{\prime}}_{2}(x^{*};d),...,F^{{}^{\prime} }_{m}(x^{*};d))\notin-\mathbb{R}^{m}_{++},\ \text{for all}\ d\in\mathbb{R}^{n}.\] That is \[\max_{j\in\Lambda_{m}}F^{{}^{\prime}}_{j}(x^{*};d)\geq 0,\ \text{for all}\ d\in \mathbb{R}^{n}. \tag{1}\] Definition 2.2 ([2]) A function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is said to be a \(m\)-strongly convex if for every \(x,y\in\mathbb{R}^{n}\) and \(\alpha\in[0,1]\), \[f(\alpha x+(1-\alpha)y)\leq\alpha f(x)+(1-\alpha)f(y)-\frac{1}{2}m\alpha(1- \alpha)\|x-y\|^{2}.\] Definition 2.3 ([17]) Suppose \(h:\mathbb{R}^{n}\rightarrow(-\infty,+\infty]\) be a proper function and \(x\in\text{dom}(h)\). 
Then subdifferential of \(h\) at \(x\) is denoted by \(\partial h(x)\) and defined as \[\partial h(x):=\{\xi\in\mathbb{R}^{n}|h(y)\geq h(x)+\xi^{\top}(y-x),\ \text{for all}\ y\in\mathbb{R}^{n}\}.\] If \(x\notin\text{dom}(h)\) then we define \(\partial h(x)=\emptyset\). Next we review some properties of subdifferentials. Theorem 2.1 (Proposition 2.82, [17]): _Let \(h:\mathbb{R}^{n}\rightarrow(-\infty,\infty]\) be a proper convex function, and assume that \(x\in\operatorname{int}(\operatorname{dom}h)\). Then \(\partial h(x)\) is nonempty and bounded. Moreover, if \(h\) is continuous at \(x\in\operatorname{dom}(h)\), then \(\partial h(x)\) is compact._ Theorem 2.2 (Theorem 2.91, [17]): _Consider two proper convex functions \(h_{i}:\mathbb{R}^{n}\rightarrow\mathbb{R},i=1,2\). Suppose that \(\operatorname{ri}\,\operatorname{dom}(h_{1})\cap\operatorname{ri}\, \operatorname{dom}(h_{2})\neq\emptyset\). Then_ \[\partial(h_{1}(x)+h_{2}(x))=\partial h_{1}(x)+\partial h_{2}(x),\] _for every \(x\in\operatorname{dom}(h_{1}+h_{2})\)._ Theorem 2.3 (Theorem 2.96, [17]): _Consider convex functions \(h_{j}:\mathbb{R}^{n}\rightarrow\mathbb{R},j\in\Lambda_{m}\) and let \(h(x)=\max\{h_{1}(x),h_{2}(x),...,h_{m}(x)\}\). Then_ \[\partial h(x)=Co\mathop{\cup}\limits_{j\in I(x)}\partial h_{j}(x),\] _where \(I(x)=\{j\in\Lambda_{m}|h(x)=h_{j}(x)\}\) is the active index set and \(Co\) is the convex hull._ According to the properties of the subdifferential, we recall the necessary conditions for the critical point of (MCOP). Lemma 2.1 ([2] Lemma 1): _Suppose \(0\in Co\mathop{\cup}\limits_{j\in\Lambda_{m}}\partial F_{j}(x^{*})\) for some \(x^{*}\in\mathbb{R}^{n}\), then \(x^{*}\) is a critical point of the (MCOP)._ Proposition 2.1: _[_19_]_ _The following statements hold:_ 1. _If_ \(x^{*}\) _is locally weakly Pareto optimal, then_ \(x^{*}\) _is a critical point for_ \(F\)_._ 2. _If_ \(F\) _is_ \(\mathbb{R}^{m}_{+}\)_-convex and_ \(x^{*}\) _is critical for_ \(F\)_, then_ \(x^{*}\) _is weakly Pareto optimal._ 3. _If_ \(F\) _is_ \(\mathbb{R}^{m}_{+}\)_-strictly convex and_ \(x^{*}\) _is critical for_ \(F\)_, then_ \(x^{*}\) _is Pareto optimal._ Define \[Q_{j}(x,d):=\langle\nabla f_{j}(x),d\rangle+\frac{1}{2}d^{T}B_{j}(x)d+g_{j}(x+d )-g_{j}(x),j\in\Lambda_{m}.\] The matrix \(B_{j}(x)\) aims to approximate the Hessian matrix of the function \(f_{j}\) at iteration \(x\). Define \[Q(x,d):=\max_{j\in\Lambda_{m}}Q_{j}(x,d).\] For any fixed \(x\in\mathbb{R}^{n}\), \(Q_{j}\) is continuous for \(d\), and hence \(Q\) is continuous for \(d\). Denote \[I(x,d):=\{j\in\Lambda_{m}\ |\ Q(x,d)=Q_{j}(x,d)\}.\] For \(x\in\mathbb{R}^{n}\), we solve the following subproblem to find a suitable descent direction of the (MCOP) \[\min_{d\in\mathbb{R}^{n}}Q(x,d)\Leftrightarrow\min_{d\in\mathbb{R}^{n}}\max_{ j\in\Lambda_{m}}Q_{j}(x,d). 
\tag{2}\] Denote \[\begin{split} d(x)&=\operatorname*{arg\,min}_{d\in \mathbb{R}^{n}}Q(x,d)\\ &=\operatorname*{arg\,min}_{d\in\mathbb{R}^{n}}\max_{j\in\Lambda_{ m}}\left\{\nabla f_{j}(x)^{T}d+\frac{1}{2}d^{T}B_{j}(x)d+g_{j}(x+d)-g_{j}(x)\right\},\end{split} \tag{3}\] and \[\theta(x) =Q(x,d(x)) \tag{4}\] \[=\max_{j\in\Lambda_{m}}\left\{\nabla f_{j}(x)^{T}d(x)+\frac{1}{2} d(x)^{T}B_{j}(x)d(x)+g_{j}(x+d(x))-g_{j}(x)\right\}.\] Remark 2.1: If \(g_{j}=0\) for all \(j\in\Lambda_{m}\), then (2) coincides with the subproblem in [1], if \(m=1\) then (2) coincides with the subproblem used in Step 3 of Algorithm 1 in [24], and if \(B_{j}(x)=\nabla^{2}f_{j}(x)\) then (2) coincides with the subproblem used in Step 2 of Algorithm 1 in [2]. We can see that if the matrix \(B_{j}(x)\succeq mI\) for all \(j\), then \(Q_{j}(x,d)\) is \(m\)-strongly convex function for \(d\). Hence \(Q(x,d)\) is \(m\)-strongly convex function for \(d\). This implies that (2) has a unique finite minimizer. Clearly for every \(x\in\mathbb{R}^{n}\), \[\theta(x)=Q(x,d(x))\leq Q(x,0)=0. \tag{5}\] Since \(d(x)\) is the solution of (2), according to the Lemma 2.1, we have \[0\in\partial_{d}Q_{j}(x,d(x)).\] The original problem (2) can be transformed into: \[\min t \tag{6}\] \[\text{s.t.} \nabla f_{j}(x)^{T}d+\frac{1}{2}d^{T}B_{j}(x)d+g_{j}(x+d)-g_{j}(x )-t\leq 0,\ j\in\Lambda_{m},\] \[(t,d)\in\mathbb{R}\times\mathbb{R}^{n},\] which is a convex quadratic problem, we assume that the \(B_{j}(x),j\in\Lambda_{m}\) are positive definite. The Lagrangian function of the problem (6) is: \[L(t,d,\lambda)=t+\sum_{j\in\Lambda_{m}}\lambda_{j}\left(\nabla f_{j}(x)^{T}d+ \frac{1}{2}d^{T}B_{j}(x)d+g_{j}(x+d)-g_{j}(x)-t\right).\] As reported in [21], in the smooth case, there are the well-known Karush-Kuhn-Tucker (KKT) conditions, which are based on the gradients of the objective functions. In case the objective functions are nonsmooth, the KKT conditions can be generalized using the concept of subdifferentials. Thus, there exist \(\lambda\in\Delta_{m}\) and \(\xi_{j}\in\partial_{d}g_{j}(x+d)\), such that \[\sum_{j\in\Lambda_{m}}\lambda_{j}\left(\nabla f_{j}(x)+B_{j}(x)d+\xi_{j} \right)=0, \tag{7}\] \[\nabla f_{j}(x)^{T}d+\frac{1}{2}d^{T}B_{j}(x)d+g_{j}(x+d)-g_{j}(x)\leq t,\ j\in \Lambda_{m}, \tag{8}\] \[\lambda_{j}\left(\nabla f_{j}(x)^{T}d+\frac{1}{2}d^{T}B_{j}(x)d+g_{j}(x+d)-g_{j }(x)-t\right)=0,\ j\in\Lambda_{m}. \tag{9}\] Thus, if \(d(x)\) is the solution of (6) and \(t=\theta(x)\), then there exists \(\lambda\in\triangle_{m}\) such that \((d(x),\theta(x))\) satisfy the condition (7)-(9). Theorem 2.4: _Suppose \(B_{j}(x)\) is a positive definite matrix for all \(x\in\mathbb{R}^{n}\) and consider \(\theta(x)\) as defined by equality (4). Then,_ _(a)For all \(x\in\mathbb{R}^{n}\), \(\theta(x)\leq 0\)._ _(b)The following conditions are equivalent:_ _(1) The point \(x\) is not critical._ _(2) \(\theta(x)<0\)._ _(3) \(d(x)\neq 0\)._ _(c)The function \(\theta\) is continuous._ _Proof_ Note that, by equality (4), \[\theta(x) =\min_{d\in\mathbb{R}^{n}}\max_{j\in\Lambda_{m}}\nabla f_{j}(x)^ {T}d+\frac{1}{2}d^{T}B_{j}(x)d+g_{j}(x+d))-g_{j}(x)\] \[\leq\max_{j\in\Lambda_{m}}\nabla f_{j}(x)^{T}0+\frac{1}{2}0^{T}B_ {j}(x)0+g_{j}(x)-g_{j}(x)\] \[=0.\] So, item (a) holds. Let us now prove the equivalences of item (b). First, assume that (1) holds, i.e. 
\(R(\partial F_{j}(x))\bigcap(-\mathbb{R}^{m}_{++})\neq\emptyset\), which in turn means that there exists \(\widetilde{d}\in\mathbb{R}^{n}\) such that \(\partial F_{j}(x)^{T}\widetilde{d}<0\), Thus, using equality (4), for all \(t>0\), we have \[\theta(x) \leq\max_{j\in\Lambda_{m}}\nabla f_{j}(x)^{T}t\widetilde{d}+ \frac{1}{2}t^{2}\widetilde{d}^{T}B_{j}(x)\widetilde{d}+g_{j}(x+t\widetilde{d })-g_{j}(x)\] \[=t\max_{j\in\Lambda_{m}}\nabla f_{j}(x)^{T}\widetilde{d}+\frac{t }{2}\widetilde{d}^{T}B_{j}(x)\widetilde{d}+\frac{g_{j}(x+t\widetilde{d})-g_{j }(x)}{t}\] \[\leq t\max_{j\in\Lambda_{m}}\nabla f_{j}(x)^{T}\widetilde{d}+ \frac{t}{2}\widetilde{d}^{T}B_{j}(x)\widetilde{d}+\xi_{j}^{T}\widetilde{d},\ (\xi_{j}\in\partial g_{j}(x+t \widetilde{d}))\] \[=t\max_{j\in\Lambda_{m}}\nabla f_{j}(x)^{T}\widetilde{d}-\nabla f _{j}(x+t\widetilde{d})^{T}\widetilde{d}+\nabla f_{j}(x+t\widetilde{d})^{T} \widetilde{d}+\frac{t}{2}\widetilde{d}^{T}B_{j}(x)\widetilde{d}+\xi_{j}^{T} \widetilde{d},\] \[\leq t\max_{j\in\Lambda_{m}}L(-t\widetilde{d})\widetilde{d}+ \nabla f_{j}(x+t\widetilde{d})^{T}\widetilde{d}+\frac{t}{2}\widetilde{d}^{T}B_ {j}(x)\widetilde{d}+\xi_{j}^{T}\widetilde{d},\] \[=t\max_{j\in\Lambda_{m}}-tL\widetilde{d}^{2}+(\nabla f_{j}(x+t \widetilde{d})+\xi_{j})^{T}\widetilde{d}+\frac{t}{2}\widetilde{d}^{T}B_{j}(x) \widetilde{d}.\] Therefore, for \(t>0\) sufficiently small, the third term on the right hand side of the above inequality can be viewed as an infinitesimal of \(t\), and \(\nabla f_{j}\) is Lipschitz continuous, the right hand side of the above inequality is negative and (2) holds. To prove that (2) implies (3), recall that \(\theta(x)\) is the optimal valve of problem (2) and so, being negative, the solution to this problem cannot be \(d(x)=0\). Finally, let us see that (3) implies (1). For the purpose of contradiction that, let \(x\) be a critical point. Since \(B_{j}(x)\) is a positive definite matrix for all \(x\in\mathbb{R}^{n}\), from \(\theta(x)\leq 0\), \[\nabla f_{j}(x)^{T}d(x)+g_{j}(x+d(x))-g_{j}(x)\leq-\frac{1}{2}d(x)^{T}B_{j}(x) d(x)<0. \tag{10}\] Since \(g_{j}\) is convex, for any \(\alpha\in(0,1)\) \[g_{j}(x+\alpha d(x))-g_{j}(x) =g_{j}\left(\alpha(x+d(x))+(1-\alpha)x\right)-g_{j}(x)\] \[\leq\alpha g_{j}(x+d(x))+(1-\alpha)g_{j}(x)-g_{j}(x) \tag{11}\] \[=\alpha(g_{j}(x+d(x))-g_{j}(x)).\] From inequality(10) and (11), \[\alpha\nabla f_{j}(x)^{T}d(x)+g_{j}(x+\alpha d(x))-g_{j}(x) \leq\alpha\nabla f_{j}(x)^{T}d(x)+\alpha\left(g_{j}(x+d(x))-g_{j}( x)\right)\] \[=\alpha\left(\nabla f_{j}(x)^{T}d(x)+g_{j}(x+d(x))-g_{j}(x)\right)\] \[<0.\] This implies \[\frac{1}{\alpha}\left(\alpha\nabla f_{j}(x)^{T}d(x)+g_{j}(x+\alpha d(x))-g_{j }(x)\right)<0.\] Taking the limit \(\alpha\to 0^{+}\) in the above inequality we have \(F_{j}^{{}^{\prime}}(x,d(x))<0\) for all \(j\in\Lambda_{m}\). This contradicts with \(x\) is a critical point. Therefore, \(x\) is not a critical point. We now prove item (c). It is easy to see that the function \(\max\limits_{j\in\Lambda_{m}}\nabla f_{j}(x)^{T}d(x)+\frac{1}{2}d(x)^{T}B_{j} (x)d(x)+g_{j}(x+d(x))-g_{j}(x)\) is continuous with respect to \(x\) and \(d(x)\). Therefore, the optimal value function \(\theta(x)\) is also continuous from [(5], Maximum Theorem]. 
Moreover, since the optimal set mapping \(d(x)\) is unique, \(d(x)\) is continuous from [(23], Corollary 8.1] Lemma 2: _Suppose \(B_{j}(x)\succeq mI\), a point \(x\in\mathbb{R}^{n}\) is critical for \(f\) if and only if \(d(x)=0\), or equivalently, if and only if \(\theta(x)=0\)._ Proof: The proof of sufficiency is the previous Theorem 2.4. Conversely suppose \(d(x)=0\). Then from condition (7) - (9), there exists \(\lambda\in\Delta_{m}\) such that \(\sum\limits_{j\in\Lambda_{m}}(\nabla f_{j}(x)+\xi_{j})=0\). Where \(\xi_{j}\in\partial g_{j}(x)\) for \(j\in\Lambda_{m}\). This implies \[0\in Co\underset{j\in\Lambda_{m}}{\cup}\partial F_{j}(x).\] Hence, \(x\) is a critical point of the (MCOP). We develop a nonmonotone line search technique to find a suitable step length that allows the objective function values to increase in some iterations. We introduce a classical average-type nonmonotone line search. ## 3 Nonmonotone Line Search In nonmonotone line search, some growth in the function value is permitted. The well-known nonmonotone line search is proposed by Zhang and Hager [35], which takes the average value of the continuous function. We introduce an average-type nonmonotone line search extended to the multiobjective case in the [27]. \[F_{j}(x^{k}+\alpha_{k}d^{k})\leq C_{j}^{k}+\tau\alpha_{k}\theta(x^{k}),\mbox{ for all }j, \tag{12}\] with \(\tau\in(0,1)\) and \(C_{j}^{k}\geq F_{j}(x^{k})\), where \[q_{k+1}=\eta q_{k}+1, \tag{13}\] \[C_{j}^{k+1}=\frac{\eta q_{k}}{q_{k+1}}C_{j}^{k}+\frac{1}{q_{k+1}}F_{j}(x^{k+1}). \tag{14}\] We prove that the average-type nonmonotone line search for proximal quasi-Newton method of MCOPs is well defined. Theorem 3.1: _Suppose \(B_{j}(x)\succeq mI\) for every \(j\) and \(d(x)\) is the solution of (2). Then_ \[\theta(x)\leq-\frac{m}{2}\|d(x)\|^{2}.\] _Further, if \(x\) is noncritical, the inequality (12) holds for every \(\alpha>0\) sufficiently small._ Proof: Suppose \(d(x)\) is the solution of (2) and \(\theta(x)=Q(x,d(x))\). Then there exists \(\lambda\in\Delta_{m}\) such that \((d(x),\theta(x),\lambda)\) satisfies conditions (7) - (9). Since \(g_{j}\) is convex and \(\xi_{j}\in\partial g_{j}(x+d(x))\), we have \[g_{j}(x+d(x))-g_{j}(x)\leq\xi_{j}^{T}d(x). \tag{15}\] Then, multiplying both sides of (7) by \(d(x)\), we obtain \[\sum_{j\in\Lambda_{m}}\lambda_{j}\left(\nabla f_{j}(x)^{T}d(x)+d(x)^{T}B_{j}( x)d(x)+\xi_{j}^{T}d(x)\right)=0.\] Hence from inequality (15), we get \[\sum_{j\in\Lambda_{m}}\lambda_{j}\left(\nabla f_{j}(x)^{T}d(x)+d (x)^{T}B_{j}(x)d(x)+g_{j}(x+d(x))-g_{j}(x)\right)\] \[\leq\sum_{j\in\Lambda_{m}}\lambda_{j}\left(\nabla f_{j}(x)^{T}d( x)+d(x)^{T}B_{j}(x)d(x)+\xi_{j}^{T}d(x)\right)\] \[=0.\] Take sum over \(j\in\Lambda_{m}\) in (9) and using \(\sum\limits_{j\in\Lambda_{m}}\lambda_{j}=1\), we have \[\sum\limits_{j\in\Lambda_{m}}\lambda_{j}\left(\nabla f_{j}(x)^{T}d (x)+d(x)^{T}B_{j}(x)d(x)+g_{j}(x+d(x))-g_{j}(x)\right)\] \[=\frac{1}{2}\sum\limits_{j\in\Lambda_{m}}\lambda_{j}d(x)^{T}B_{j} (x)d(x)+\theta(x)\] \[\leq 0.\] Using the above two formulas can be obtained \[\theta(x)\leq-\frac{1}{2}\sum\limits_{j\in\Lambda_{m}}\lambda_{j}d(x)^{T}B_{j} (x)d(x). \tag{16}\] Since \(B_{j}(x)\succeq mI\) for every \(j\), \(d(x)^{T}B_{j}(x)d(x)\geq m\|d(x)\|^{2}\) holds for every \(j\). Then from inequality (16) and \(\sum\limits_{j\in\Lambda_{m}}\lambda_{j}=1\), we have \[\theta(x)\leq-\frac{m}{2}\left\|d(x)\right\|^{2}.\] Suppose \(x\) is noncritical, then from Theorem 2.4, we get \(d(x)\neq 0\). 
So, \[\theta(x)\leq-\frac{m}{2}\|d(x)\|^{2}<0.\] Since \(g_{j}\) is convex, for any \(\alpha\in[0,1]\), we have \[F_{j}(x+\alpha d(x))-F_{j}(x) =f_{j}(x+\alpha d(x))-f_{j}(x)+g_{j}(x+\alpha d(x))-g_{j}(x)\] \[\leq\alpha\nabla f_{j}(x)^{T}d(x)+(1-\alpha)g_{j}(x)+\alpha g_{j} (x+d(x))\] \[\quad-g_{j}(x)+o(\alpha^{2})\] \[=\alpha\left(\nabla f_{j}(x)^{T}d(x)+g_{j}(x+d(x))-g_{j}(x) \right)+o(\alpha^{2})\] \[\leq\alpha\theta(x)+o(\alpha^{2})\quad\ (\frac{1}{2}d(x)^{T}B_{j}(x)d(x)>0).\] Then for each \(j\in\Lambda_{m}\), subtract \(\tau\alpha\theta(x)\) on both sides of the about inequality, we obtain \[F_{j}(x+\alpha d(x))-C_{j}^{k}-\tau\alpha\theta(x) <F_{j}(x+\alpha d(x))-F_{j}(x)-\tau\alpha\theta(x)\] \[<\alpha(1-\tau)\theta(x)+o(\alpha^{2}) \tag{17}\] \[<0.\] Since \(\tau\in(0,1)\) and \(\theta(x)<0\), the right hand side term in inequality (17) becomes nonpositive for every \(\alpha>0\) sufficiently small. This implies that the average-type Armijo condition holds for every \(\alpha>0\) sufficiently small, and the proof is complete. ## 4 Nonmonotone proximal quasi-Newton method for MCOPs The framework for a complete NPQNA for MCOPs is described as follows: \begin{tabular}{l} \hline **Algorithm 1** NPQNA for MCOPs \\ \hline **Step 1** Choose initial approximation \(x^{0}\in\mathbb{R}^{n}\), scalars, \(\rho,\eta,\tau\in(0,1)\), \\ and \(\epsilon>0\), \(\mu>0\), \(q_{0}=1\), \(C_{j}^{0}=F_{j}(x^{0})\), \(B_{j}(x^{0})=I,j\in\Lambda_{m}\), \\ \(0<\eta_{min}\leq\eta_{max}<1\), set \(k:=0\). \\ **Step 2** Solve the subproblem (2) to find \(d^{k}\) and \(\theta(x^{k})\). \\ Where, for \(k\geq 1\), \\ \(B_{j}(x^{k})=\left\{\begin{array}{ll}B_{j}(x^{k-1})-\frac{B_{j}(x^{k-1})S_{k- 1}S_{k-1}^{T}B_{j}(x^{k-1})}{S_{k-1}^{T}B_{j}(x^{k-1})S_{k-1}}+\frac{y_{j}^{k-1 }(y_{j}^{k-1})^{T}}{S_{k-1}^{T}y_{j}^{k-1}},&S_{k-1}^{T}y_{j}^{k-1}>0,\\ B_{j}(x^{k-1}),&otherwise.\end{array}\right.\) \\ With \(S_{k-1}=x^{k}-x^{k-1}\) and \(y_{j}^{k-1}=\nabla f_{j}(x^{k})-\nabla f_{j}(x^{k-1})\). \\ **Step 3** If \(|\theta(x^{k})|<\epsilon\), then stop. Else, go to step 4. \\ **Step 4** Take \(\alpha_{k}=\mu\rho^{h_{k}}\), where \(h_{k}\) is the smallest nonnegative integer such that the average-type condition (12) holds. \\ Where choose \(\eta_{min}\leq\eta\leq\eta_{max}\), the update of \(q_{k}\) and \(C_{j}^{k}\) is in (13) and (14). \\ **Step 5** Update \(x^{k+1}=x^{k}+\alpha_{k}d^{k}\). \\ **Step 6** Set \(k=k+1\) and return to Step 2. \\ \hline \end{tabular} ## 5 Convergence Analysis Before proving the convergence of NPQNA for MCOPs, we give the following related technical lemma. Lemma 5.1: _[_27_]_ _Let \(\{x^{k}\}\) be the sequence generated by Algorithm 1, and \(F_{j}\) is bounded from below. Then, \(\{C_{j}^{k}\}\) is nonincreasing and admits a limit when \(k\rightarrow\infty\)._ Lemma 5.2: _[_27_]_ _For a sequence of iterates \(\{x^{k}\}\) and search directions \(\{d^{k}\}\). Then there exist positive constants \(\Gamma_{1}\in(0,1)\) such that_ \[\max_{j\in\Lambda_{m}}\{\nabla f_{j}(x^{k})^{T}d^{k}+g_{j}(x^{k}+d^{k})-g_{j} (x^{k})\}\leq-\Gamma_{1}|\theta(x^{k})|. \tag{18}\] Lemma 5.3: _Let \(\{x^{k}\}\) be the sequence generated by Algorithm 1. Assume that \(\eta<1\), Then, we have_ \[\lim_{k\rightarrow\infty}\alpha_{k}|\theta(x^{k})|=0.\] _proof_ Recall that from Lemma 5.1. \(\{C_{j}^{k}\}_{k}\) admits a limit for \(k\to\infty\). 
From the definition of \(C_{j}^{k+1}\) in (14), we get \[C_{j}^{k+1} =\frac{\eta q_{k}}{q_{k+1}}C_{j}^{k}+\frac{1}{q_{k+1}}F_{j}(x^{k+1})\] \[\leq\frac{\eta q_{k}}{q_{k+1}}C_{j}^{k}+\frac{1}{q_{k+1}}(C_{j}^{ k}+\tau\alpha_{k}\theta(x^{k}))\] \[=(\frac{\eta q_{k}}{q_{k+1}}+\frac{1}{q_{k+1}})C_{j}^{k}+\frac{1} {q_{k+1}}\tau\alpha_{k}\theta(x^{k})\] \[=C_{j}^{k}+\frac{1}{q_{k+1}}\tau\alpha_{k}\theta(x^{k}),\] where the above inequality holds from inequality (12) and the last equality follows from inequality (13). Since \(\frac{\tau\alpha_{k}}{q_{k+1}}\geq 0\) and \(\theta(x^{k})\leq 0\), since \(\alpha_{k}=\rho^{h_{k}}\), as \(k\to\infty\), \(\alpha_{k}\to 0\), while \(q_{k+1}\in(0,1)\). So we have \[\lim_{k\to\infty}\frac{\alpha_{k}}{q_{k+1}}\theta(x^{k})=0. \tag{19}\] Furthermore, from inequality (18) in Lemma 5.2, we obtain \[\frac{\alpha_{k}}{q_{k+1}}\left(\nabla f_{j}(x^{k})^{T}d^{k}+g_{ j}(x^{k}+d^{k})-g_{j}(x^{k})\right)\] \[\leq-\frac{\alpha_{k}}{q_{k+1}}\Gamma_{1}|\theta(x^{k})| \tag{20}\] \[\leq 0.\] Now, observe that inequality (13) yields \[q_{k+1}=1+\sum_{j=0}^{k}\eta^{j+1}\leq\sum_{j=0}^{+\infty}\eta^{j}=\frac{1}{1- \eta}.\] Since \(0<\eta<1\), we have that \(\{\frac{1}{q_{k+1}}\}\) is bounded from below. So, we can take the limit in inequality (20) and use inequality (19) to show that \[\lim_{k\to\infty}\alpha_{k}|\theta(x^{k})|=0,\] and the conclusion follows. We will establish the convergence of the NPQNA for MCOPs. Theorem 5.1: _Suppose that \(\{x^{k}\}\) is a sequence generated by Algorithm 1, \(B_{j}(x)\succeq mI\), \(f_{j}(x)\) is strongly convex with module \(m>0\) and bounded from below, \(\nabla f_{j}\) is Lipschitz continuous. Then every accumulation point of \(\{x^{k}\}\) is a Pareto optimal of the (MCOP)._ _proof_ Since \(\nabla f_{j}\) is Lipschitz continuous for every \(j\) with Lipschitz constant \(L_{j}\), where \(L=\max\limits_{j\in\Lambda_{m}}L_{j}\), from the so-called descent Lemma (7, Proposition A.24) for any \(\alpha\) we have \[f_{j}(x^{k}+\alpha d^{k})\leq f_{j}(x^{k})+\alpha\nabla f_{j}(x^{k})^{T}d^{k}+ \frac{L}{2}\alpha^{2}\|d^{k}\|^{2},\ \forall j\in\Lambda_{m}, \tag{21}\] also since \(g_{j}\) is convex, for any \(\alpha\in(0,1)\), we have \[\begin{split} g_{j}(x^{k}+\alpha d^{k})-g_{j}(x^{k})& =g_{j}\left(\alpha(x^{k}+d^{k})+(1-\alpha)x^{k}\right)-g_{j}(x^{k} )\\ &\leq\alpha g_{j}(x^{k}+d^{k})+(1-\alpha)g_{j}(x^{k})-g_{j}(x^{k} )\\ &=\alpha\left(g_{j}(x^{k}+d^{k})-g_{j}(x^{k})\right).\end{split} \tag{22}\] From inequality (21) and inequality (22) for any \(\alpha\in(0,1)\), we get \[\begin{split} F_{j}(x^{k}+\alpha d^{k})-F_{j}(x^{k})& =f_{j}(x^{k}+\alpha d^{k})+g_{j}(x^{k}+\alpha d^{k})-f_{j}(x^{k})- g_{j}(x^{k})\\ &\leq\alpha\nabla f_{j}(x^{k})^{T}d^{k}+\frac{L}{2}\alpha^{2}\|d ^{k}\|^{2}+\alpha\left(g_{j}(x^{k}+d^{k})-g_{j}(x^{k})\right)\\ &\leq\alpha\left(\nabla f_{j}(x^{k})^{T}d^{k}+g_{j}(x^{k}+d^{k})- g_{j}(x^{k})\right)+\frac{L}{2}\alpha^{2}\|d^{k}\|^{2}\\ &\leq\alpha\left(\nabla f_{j}(x^{k})^{T}d^{k}+\frac{1}{2}(d^{k})^ {T}B_{k}d^{k}+g_{j}(x^{k}+d^{k})-g_{j}(x^{k})\right)\\ &\quad+\frac{L}{2}\alpha^{2}\|d^{k}\|^{2}\\ &\leq\alpha\theta(x^{k})+\frac{L}{2}\alpha^{2}\|d^{k}\|^{2},\ j\in\Lambda_{m},\end{split}\] where the third inequality holds since \(B_{j}(x)\succeq mI\). From Step 4 of Algorithm 1, either \(\alpha_{k}\geq\rho\) or there exists \(h_{k}\) such that \(\alpha_{k}=\rho^{h_{k}}<\rho\) satisfies inequality (12). 
Since \(F_{j}(x^{k})\leq C_{j}^{k}\), if \(\alpha=\frac{\alpha_{k}}{\rho}\) doesn't satisfy inequality (12), we can get \[\begin{split} F_{j}(x^{k}+\frac{\alpha_{k}}{\rho}d^{k})& >C_{j}^{k}+\tau\frac{\alpha_{k}}{\rho}\theta(x^{k})\\ &\geq F_{j}(x^{k})+\tau\frac{\alpha_{k}}{\rho}\theta(x^{k}),\ j\in \Lambda_{m}.\end{split}\] From the above two inequalities, we have \[\tau\frac{\alpha_{k}}{\rho}\theta(x^{k})\leq\frac{\alpha_{k}}{\rho}\theta(x^{ k})+\frac{L}{2}(\frac{\alpha_{k}}{\rho})^{2}\|d^{k}\|^{2}.\] Simplify the above inequality \[(\tau-1)\theta(x^{k})\leq\frac{L}{2}\frac{\alpha_{k}}{\rho}\|d^{k}\|^{2}.\] Therefore \[\alpha_{k}\geq\frac{2\rho(1-\tau)}{L}\frac{|\theta(x^{k})|}{\|d^{k}\|^{2}}.\] Furthermore, by inequality (12), we have \[F_{j}(x^{k+1})\leq C_{j}^{k}+\tau\frac{2\rho(1-\tau)}{L}\frac{|\theta(x^{k})|}{ \|d^{k}\|^{2}}\theta(x^{k}),\ j\in\Lambda_{m}.\] So, \[F_{j}(x^{k+1})\leq C_{j}^{k}-\frac{2\tau\rho(1-\tau)}{L}\frac{|\theta(x^{k})|^{ 2}}{\|d^{k}\|^{2}},\ j\in\Lambda_{m}.\] By Theorem 3.1, we have \[|\theta(x^{k})|\geq\frac{m}{2}\|d(x^{k})\|^{2}.\] Finally, we have \[\begin{split} F_{j}(x^{k+1})&\leq C_{j}^{k}-\frac{ m}{2}\frac{2\tau\rho(1-\tau)}{L}|\theta(x^{k})|\\ &=C_{j}^{k}-\frac{m\tau\rho(1-\tau)}{L}|\theta(x^{k})|,\ j\in \Lambda_{m}.\end{split} \tag{23}\] Denote \(\frac{m\tau\rho(1-\tau)}{L}\) as \(\beta\), combining the relations (13) and (14) with inequality (23), we obtain: \[\begin{split} C_{j}^{k+1}&=\frac{\eta q_{k}C_{j}^{ k}+F_{j}(x^{k+1})}{q_{k+1}}\\ &\leq\frac{\eta q_{k}C_{j}^{k}+C_{j}^{k}-\beta|\theta(x^{k})|}{q _{k+1}}\\ &=C_{j}^{k}-\frac{\beta|\theta(x^{k})|}{q_{k+1}},\ \forall j\in\Lambda_{m}.\end{split} \tag{24}\] Since \(F_{j},j\in\Lambda_{m}\) are bounded from below and \(F_{j}(x^{k})\leq C_{j}^{k}\), for all \(k\), we conclude that \(C_{j}^{k},j\in\Lambda_{m}\) are bounded from below. It follows from (24) and Lemma (5.1) that \[\sum_{k=0}^{+\infty}\frac{|\theta(x^{k})|}{q_{k+1}}\leq C_{j}^{0}-C_{j}^{ \infty}<\infty. \tag{25}\] Since \(f_{j}\) is strongly convex and continuously differentiable, \(g_{j}\) is convex and continuous, the level set \(\mathcal{L}_{F}(x^{0})\subset\{x:F_{j}(x)\leq C_{j}^{0}\}\) is compact. We can get \(\{x^{k}\}\subset\mathcal{L}_{F}(x^{0})\) by the fact that \(F_{j}(x^{k})\leq C_{j}^{k}\leq C_{j}^{0}\). From the compactness of \(\mathcal{L}_{F}(x^{0})\), it follows that \(\{x^{k}\}\) has an accumulation point. Now, suppose that \(x^{*}\) is an accumulation point of the sequence \(\{x^{k}\}\). So, there exists a subsequence \(\{x^{k}\}_{k\in K}\) which converges to \(x^{*}\). Then we can prove \(\theta(x^{*})=0\), by contradiction. Assume \(\theta(x^{*})<0\), which implies that there are \(\epsilon>0\) and \(\delta_{0}>0\) so that for all \(0<\delta<\delta_{0}\) and for all \(k\in K\) such that \(|x^{k}-x^{*}|\leq\delta\), we have \[|\theta(x^{k})|\geq\epsilon>0.\] This means that \[\sum_{k=0}^{+\infty}\frac{|\theta(x^{k})|}{q_{k+1}}\geq\sum_{k\in\{k\in K\|\|x^{k }-x^{*}\|\leq\delta\}}\frac{\epsilon}{q_{k+1}}. 
\tag{26}\] According to inequality (13) and \(\eta_{max}<1\), we have \[q_{k+1} =1+\sum_{i=0}^{k}\prod_{l=0}^{i}\eta^{k-l}\leq 1+\sum_{i=0}^{k} \eta_{max}^{i}\] \[\leq\sum_{i=0}^{+\infty}\eta_{max}^{i}=\frac{1}{1-\eta_{max}}.\] Consequently, following inequality (26), we have \[\sum_{k=0}^{+\infty}\frac{|\theta(x^{k})|}{q_{k+1}} \geq\sum_{k\in\{k\in K\|\|x^{k}-x^{*}\|\leq\delta\}}\frac{\epsilon }{q_{k+1}}\] \[\geq\sum_{k\in\{k\in K\|\|x^{k}-x^{*}\|\leq\delta\}}(1-\eta_{max})\epsilon\] \[=+\infty.\] This contradicts inequality (25). Hence, we have \(\theta(x^{*})=0\) and it follows from Theorem 2.4 and Lemma 2.2 that \(x^{*}\) is critical for \(F\), since \(F\) is \(\mathbb{R}_{+}^{m}\)-strongly convex, on the basis of Proposition 2.1, so \(x^{*}\) is Pareto optimal for \(F\). We can get the strong convergence of NPQNA for MCOPs. Theorem 5.2: _Suppose \(f_{j}\) is strongly convex with module \(m>0\) for all \(j\in\Lambda_{m}\). Let \(\{x^{k}\}\) be the sequence generated by Algorithm 1. Then \(\{x^{k}\}\) converges to some Pareto optimal \(x^{*}\)._ proof According to Theorem 5.1, we have that \(x^{*}\) is a Pareto optimal of \(F\). By Lemma 5.1, we have \(\{C^{k}\}\) is nonincreasing and admits a limit when \(k\to\infty\). Let \(\lim_{k\to\infty}C^{k}=F^{*}\). According to the definition of \(C^{k}\) in (14) and the definition of \(q_{k}\) in (13), we have \[F(x^{k})=q_{k}C^{k}-\eta q_{k-1}C^{k-1},\] Since the limits of \(\{C^{k}\}\) exist, we take the limits on both sides of the above equation and obtain \[\lim_{k\to\infty}F(x^{k}) =\lim_{k\to\infty}q_{k}C^{k}-\eta q_{k-1}C^{k-1}\] \[=\frac{1}{1-\eta}F^{*}-\frac{\eta}{1-\eta}F^{*}\] \[=F^{*}\] Next, we prove the uniqueness of \(x^{*}\) by proof by contradiction. Suppose instead that there is another accumulation point \(x_{1}^{*}\). Due to the strong convexity of \(F\), the following inequality holds: \[F(\lambda x^{*}+(1-\lambda)x_{1}^{*})\prec\lambda F(x^{*})+(1-\lambda)F(x_{1}^{* })=F^{*},\] where the equality follows from the convergence of \(\{F(x^{k})\}\). However, this contradicts the fact that \(x^{*}\) is a Pareto optimal. The uniqueness of accumulation point of \(\{x^{k}\}\) implies that \(\{x^{k}\}\) converges to \(x^{*}\). Referring to the proof of the convergence rate of the Newton-type proximal gradient method proposed by Chen et al. in [11], we establish the local superlinear convergence of the NPQNA for MCOPs. In the same way, we denote by \[h_{\lambda}(x) :=\sum_{j\in\Lambda_{m}}\lambda_{j}h_{j}(x),\] \[\nabla h_{\lambda}(x) :=\sum_{j\in\Lambda_{m}}\lambda_{j}\nabla h_{j}(x),\] \[\nabla^{2}h_{\lambda}(x) :=\sum_{j\in\Lambda_{m}}\lambda_{j}\nabla^{2}h_{j}(x).\] According to Sion's minimax theorem [31], there exists \(\lambda^{k}\in\Delta_{m}\) such that \[d^{k}=\operatorname*{arg\,min}_{d\in\mathbb{R}^{n}}\left\{\nabla f_{\lambda^{ k}}(x^{k})^{T}d+\frac{1}{2}d^{T}B_{\lambda^{k}}(x^{k})d+g_{\lambda^{k}}(x^{k}+d)-g_ {\lambda^{k}}(x^{k})\right\},\] by means of the first-order optimality condition, we have \[-\nabla f_{\lambda^{k}}(x^{k})-B_{\lambda^{k}}(x^{k})d^{k}\in\partial g_{ \lambda^{k}}(x^{k}+d^{k}). \tag{27}\] Similar to the modified fundamental inequality in [11], we propose the following inequality. Proposition 5.1: _Suppose \(f_{j}\) is strictly convex for all \(j\in\Lambda_{m}\). Let \(\{x^{k}\}\) be the sequence generated by Algorithm 1. 
If \(\alpha_{k}=1\), then there exists \(\lambda^{k}\in\Delta_{m}\) such that_ \[F_{\lambda^{k}}(x^{k+1})-F_{\lambda^{k}}(x)\] \[\leq\frac{1}{2}\|x^{k}-x\|_{B_{\lambda^{k}}}^{2}-\frac{1}{2}\|x^ {k+1}-x^{k}\|_{B_{\lambda^{k}}}^{2}-\frac{1}{2}\|x^{k+1}-x\|_{B_{\lambda^{k}}} ^{2} \tag{28}\] \[\quad+\langle x^{k+1}-x^{k},\int_{0}^{1}\int_{0}^{1}\nabla^{2}f_{ \lambda^{k}}(x^{k}+st(x^{k+1}-x^{k}))ds(t(x^{k+1}-x^{k}))dt\rangle^{(28)}\] \[\quad-\langle x-x^{k},\int_{0}^{1}\int_{0}^{1}\nabla^{2}f_{ \lambda^{k}}(x^{k}+st(x-x^{k}))ds(t(x-x^{k}))dt\rangle.\] _proof_ In the light of the twice continuity of \(f_{j}\), we can deduce that \[F_{j}(x^{k+1})-F_{j}(x)\] \[=f_{j}(x^{k+1})-f_{j}(x^{k})-(f_{j}(x)-f_{j}(x^{k}))+g_{j}(x^{k+1}) -g_{j}(x)\] \[=\langle\nabla f_{j}(x^{k}),x^{k+1}-x^{k}\rangle+\langle\nabla f_ {j}(x^{k}),x^{k}-x\rangle+g_{j}(x^{k+1})-g_{j}(x)\] \[\quad+\langle x^{k+1}-x^{k},\int_{0}^{1}\int_{0}^{1}\nabla^{2}f_{ j}(x^{k}+st(x^{k+1}-x^{k}))ds(t(x^{k+1}-x^{k}))dt\rangle\] \[\quad-\langle x-x^{k},\int_{0}^{1}\int_{0}^{1}\nabla^{2}f_{j}(x^{k }+st(x-x^{k}))ds(t(x-x^{k}))dt\rangle\] \[=\langle\nabla f_{j}(x^{k}),x^{k+1}-x\rangle+g_{j}(x^{k+1})-g_{j}(x)\] \[\quad+\langle x^{k+1}-x^{k},\int_{0}^{1}\int_{0}^{1}\nabla^{2}f_{ j}(x^{k}+st(x^{k+1}-x^{k}))ds(t(x^{k+1}-x^{k}))dt\rangle\] \[\quad-\langle x-x^{k},\int_{0}^{1}\int_{0}^{1}\nabla^{2}f_{j}(x^{ k}+st(x-x^{k}))ds(t(x-x^{k}))dt\rangle.\] On the other hand, from (27), we have \[-\nabla f_{\lambda^{k}}(x^{k})-B_{\lambda^{k}}(x^{k})d^{k}\in\partial g_{ \lambda^{k}}(x^{k}+d^{k}),\lambda^{k}\in\Delta_{m}.\] This, together with the fact that \(\alpha_{k}=1\), implies \[g_{\lambda^{k}}(x^{k+1})-g_{\lambda^{k}}(x) \leq\langle-\nabla f_{\lambda^{k}}(x^{k})-B_{\lambda^{k}}(x^{k}) d^{k},x^{k+1}-x\rangle\] \[=\langle-\nabla f_{\lambda^{k}}(x^{k})-B_{\lambda^{k}}(x^{k})(x^{ k+1}-x^{k}),x^{k+1}-x\rangle.\] with the help of the last two relations, we have \[F_{\lambda^{k}}(x^{k+1})-F_{\lambda^{k}}(x)\] \[\leq\langle B_{\lambda^{k}}(x^{k})(x^{k}-x^{k+1}),x^{k+1}-x\rangle\] \[\quad+\langle x^{k+1}-x^{k},\int_{0}^{1}\int_{0}^{1}\nabla^{2}f_{ \lambda^{k}}(x^{k}+st(x^{k+1}-x^{k}))ds(t(x^{k+1}-x^{k}))dt\rangle\] \[\quad-\langle x-x^{k},\int_{0}^{1}\int_{0}^{1}\nabla^{2}f_{ \lambda^{k}}(x^{k}+st(x-x^{k}))ds(t(x-x^{k}))dt\rangle\] \[=\frac{1}{2}\|x^{k}-x\|_{B_{\lambda^{k}}}^{2}-\frac{1}{2}\|x^{k+1} -x^{k}\|_{B_{\lambda^{k}}}^{2}-\frac{1}{2}\|x^{k+1}-x\|_{B_{\lambda^{k}}}^{2}\] \[\quad+\langle x^{k+1}-x^{k},\int_{0}^{1}\int_{0}^{1}\nabla^{2}f_{ \lambda^{k}}(x^{k}+st(x^{k+1}-x^{k}))ds(t(x^{k+1}-x^{k}))dt\rangle\] \[\quad-\langle x-x^{k},\int_{0}^{1}\int_{0}^{1}\nabla^{2}f_{ \lambda^{k}}(x^{k}+st(x-x^{k}))ds(t(x-x^{k}))dt\rangle.\] Again, assuming that \(f_{j}\) is twice continuously differentiable and strongly convex with constant \(m\), and \(\nabla^{2}f_{j}\) are continuous, and the following assumption holds. 
**Assumption 5.1**: _[_1_]_ _For all \(\epsilon>0\) there exist \(k^{0}\in\mathbb{N}\) such that for all \(k\geq k^{0}\), we have_ \[\frac{\left\|(\nabla^{2}f_{j}(x^{*})-B_{j}(x^{k}))d^{k}\right\|}{\|d^{k}\|}\leq \epsilon,j\in\Lambda_{m}, \tag{29}\] _where \(\{x^{k}\}\) is a sequence generated by Algorithm 1 and \(\{B_{j}(x^{k})\},j\in\Lambda_{m},\) are the sequences obtained by the BFGS updates._ \[B_{j}(x^{k+1})=B_{j}(x^{k})-\frac{B_{j}(x^{k})s^{k}(s^{k})^{T}B_{j}(x^{k})}{(s^ {k})^{T}B_{j}(x^{k})s^{k}}+\frac{y_{j}^{k}(y_{j}^{k})^{T}}{(y_{j}^{k})^{T}s^{k }},\] _where_ \[s^{k}=x^{k+1}-x^{k},\ y_{j}^{k}=\nabla f_{j}(x^{k+1})-\nabla f_{j}(x^{k}).\] _The Assumption 5.1 is also called Dennis-More criterion._ **Lemma 5.4**: _Suppose \(f_{j}\) is twice continuously differentiable and strongly convex with constant \(m\), \(\nabla^{2}f_{j}(x^{k})\) is continuous, the sequence \(\{B_{j}(x^{k})\}\) satisfies the Assumption (5.1). Let \(\{x^{k}\}\) be the sequence generated by Algorithm 1. Then, for any \(0<\epsilon\leq\frac{m(1-\tau)}{3}\), the unit step length satisfies the nonmonotone line search conditions (12) after sufficiently many iterations. proof_ Since \(f_{j}\) is twice continuously differentiable, we have \[f_{j}(x^{k}+d^{k})\leq f_{j}(x^{k})+\nabla f_{j}(x^{k})^{T}d^{k}+\frac{1}{2}(d ^{k})^{T}\nabla^{2}f_{j}(x^{k})d^{k}+\frac{\epsilon}{2}\left\|d^{k}\right\|^{ 2},\] we add \(g_{j}(x^{k}+d^{k})\) to both sides to obtain \[F_{j}(x^{k}+d^{k})\leq f_{j}(x^{k})+\nabla f_{j}(x^{k})^{T}d^{k}+\frac{1}{2}(d ^{k})^{T}\nabla^{2}f_{j}(x^{k})d^{k}+\frac{\epsilon}{2}\left\|d^{k}\right\|^{ 2}+g_{j}(x^{k}+d^{k}),\] we then add and subtract \(g_{j}(x^{k})\) from the right-hand side to obtain \[F_{j}(x^{k}+d^{k})\leq f_{j}(x^{k})+g_{j}(x^{k})+\nabla f_{j}(x^{k})^{T}d^{k}+g_{j}(x^ {k}+d^{k})-g_{j}(x^{k})\] \[+\frac{1}{2}(d^{k})^{T}\nabla^{2}f_{j}(x^{k})d^{k}+\frac{\epsilon }{2}\left\|d^{k}\right\|^{2}+\frac{1}{2}(d^{k})^{T}B_{j}(x^{k})d^{k}\] \[-\frac{1}{2}(d^{k})^{T}B_{j}(x^{k})d^{k}\] \[\leq F_{j}(x^{k})+\theta(x^{k})+\frac{1}{2}(d^{k})^{T}\left( \nabla^{2}f_{j}(x^{k})-\nabla^{2}f_{j}(x^{*}))\right)d^{k}\] \[+\frac{1}{2}(d^{k})^{T}\left(\nabla^{2}f_{j}(x^{*})-B_{j}(x^{k}) \right)d^{k}+\frac{\epsilon}{2}\left\|d^{k}\right\|^{2} \tag{30}\] \[\leq F_{j}(x^{k})+\theta(x^{k})+\frac{3\epsilon}{2}\left\|d^{k} \right\|^{2}\] \[\leq F_{j}(x^{k})+\tau\theta(x^{k})+(1-\tau)\theta(x^{k})+\frac{3 \epsilon}{2}\left\|d^{k}\right\|^{2}\] \[\leq F_{j}(x^{k})+\tau\theta(x^{k})+(\frac{3\epsilon}{2}-\frac{m( 1-\tau)}{2})\left\|d^{k}\right\|^{2}\] \[\leq C_{j}^{k}+\tau\theta(x^{k}),\] where the third inequality comes from the Assumption 5.1 and \(\nabla^{2}f_{j}\) is continuous. The second to last inequality is derived from the Theorem 3.1, and \(0<\epsilon\leq\frac{(1-\tau)m}{3}\), we have \[\frac{3\epsilon}{2}-\frac{m(1-\tau)}{2}\leq 0,\] so, the last inequality holds. therefore, the nonmonotone line search conditions hold for \(\alpha_{k}=1\). Theorem 5.3: _Suppose \(f_{j}\) is strongly convex with module \(m>0\), and its Hessian is continuous for \(j\in\Lambda_{m}\), and \(\{B_{j}(x^{k})\}\) satisfies the Assumption 5.1 and \(B_{j}(x^{k})\succeq mI\) for some \(m>0\). Let \(\{x^{k}\}\) denote a bounded sequence generated by Algorithm 1. 
Then, for any \(0<\epsilon\leq\frac{m(1-\tau)}{3}\), there exists \(K_{\epsilon}>0\) such that_ \[\|x^{k+1}-x^{*}\|\leq\sqrt{\frac{3\epsilon(1+\tau_{k}^{2})}{m}}\|x^{k}-x^{*}\|.\] _holds for all \(k\geq K_{\epsilon}\), where \(\tau_{k}:=\frac{\|x^{k+1}-x^{k}\|}{\|x^{k}-x^{*}\|}\in[\frac{m-\sqrt{6m \epsilon-9\epsilon^{2}}}{m-3\epsilon},\frac{m+\sqrt{6m\epsilon-9\epsilon^{2}} }{m-3\epsilon}]\). Furthermore, the sequence \(\{x^{k}\}\) converges superlinearly to \(x^{*}\)._ _proof_ Since the assumptions of Lemma 5.4 are satisfied, unit step lengths satisfy the nonmonotone line search conditions after sufficiently many iterations: \[\alpha_{k}=1,\] \[x^{k+1}=x^{k}+d^{k}.\] Substituting \(x=x^{*}\) into inequality (28), we obtain \[0 \leq F_{\lambda^{k}}(x^{k+1})-F_{\lambda^{k}}(x^{*})\] \[\leq \frac{1}{2}\|x^{k}-x^{*}\|_{B_{\lambda^{k}}(x^{k})}^{2}-\frac{1} {2}\|x^{k+1}-x^{k}\|_{B_{\lambda^{k}}(x^{k})}^{2}-\frac{1}{2}\|x^{k+1}-x^{*}\| _{B_{\lambda^{k}}(x^{k})}^{2}\] \[+\langle x^{k+1}-x^{k},\int_{0}^{1}\int_{0}^{1}\nabla^{2}f_{ \lambda^{k}}(x^{k}+st(x^{k+1}-x^{k}))ds(t(x^{k+1}-x^{k}))dt\rangle^{31}\] \[-\langle x^{*}-x^{k},\int_{0}^{1}\int_{0}^{1}\nabla^{2}f_{ \lambda^{k}}(x^{k}+st(x^{*}-x^{k}))ds(t(x^{*}-x^{k}))dt\rangle.\] where the first inequality comes from the fact \(F(x^{*})\preceq F(x^{k})\) for all \(k\). On the other hand, \[\frac{1}{2}\|x-x^{k}\|_{B_{\lambda^{k}}(x^{k})}^{2}=\langle x-x^{k},\int_{0}^ {1}\int_{0}^{1}B_{\lambda^{k}}(x^{k})ds(t(x-x^{k}))dt\rangle,\] then, inserting \(x=x^{*}\) and \(x=x^{k+1}\) respectively into the above formula, we have \[\frac{1}{2}\|x^{*}-x^{k}\|_{B_{\lambda^{k}}(x^{k})}^{2}=\langle x^{*}-x^{k}, \int_{0}^{1}\int_{0}^{1}B_{\lambda^{k}}(x^{k})ds(t(x^{*}-x^{k}))dt\rangle,\] \[\frac{1}{2}\|x^{k+1}-x^{k}\|^{2}_{B_{\lambda^{k}}(x^{k})}=\langle x^{k+1}-x^{k}, \int_{0}^{1}\int_{0}^{1}B_{\lambda^{k}}(x^{k})ds(t(x^{k+1}-x^{k}))dt\rangle.\] Since the accumulation point of \(\{x^{k}\}\) is Pareto optimal, there exists \(K_{\epsilon}\geq K_{\epsilon}^{1}\) such that, for all \(k\geq K_{\epsilon}\), \[\|\nabla^{2}f_{\lambda^{k}}(x^{k}+st(x^{k+1}-x^{k})-\nabla^{2}f_{\lambda^{k}}( x^{k})\|\leq\epsilon,\forall s,t\in[0,1],\] \[\|\nabla^{2}f_{\lambda^{k}}(x^{k}+st(x^{*}-x^{k})-\nabla^{2}f_{\lambda^{k}}(x^ {k})\|\leq\epsilon,\forall s,t\in[0,1].\] Substituting the above relation into (31), we obtain \[\frac{1}{2}\|x^{k+1}-x^{*}\|^{2}_{B_{\lambda^{k}}(x^{k})}\] \[\leq\langle x^{k+1}-x^{k},\int_{0}^{1}\int_{0}^{1}(\nabla^{2}f_{ \lambda^{k}}(x^{k}+st(x^{k+1}-x^{k}))-B_{\lambda^{k}}(x^{k}))ds(t(x^{k+1}-x^{ k}))dt\rangle\] \[\quad-\langle x^{*}-x^{k},\int_{0}^{1}\int_{0}^{1}(\nabla^{2}f_{ \lambda^{k}}(x^{k}+st(x^{*}-x^{k}))-B_{\lambda^{k}}(x^{k}))ds(t(x^{*}-x^{k}))dt\rangle\] \[\leq\frac{\epsilon}{2}\|x^{k+1}-x^{k}\|^{2}+\frac{\epsilon}{2}\| x^{*}-x^{k}\|^{2}\] \[\quad+\langle x^{k+1}-x^{k},\int_{0}^{1}\int_{0}^{1}(\nabla^{2}f_ {\lambda^{k}}(x^{k})-B_{\lambda^{k}}(x^{k}))ds(t(x^{k+1}-x^{k}))dt\rangle\] \[\quad-\langle x^{*}-x^{k},\int_{0}^{1}\int_{0}^{1}(\nabla^{2}f_{ \lambda^{k}}(x^{k})-B_{\lambda^{k}}(x^{k}))ds(t(x^{*}-x^{k}))dt\rangle\] \[\leq\epsilon\|x^{k+1}-x^{k}\|^{2}+\epsilon\|x^{*}-x^{k}\|^{2}\] \[\quad+\langle x^{k+1}-x^{k},\int_{0}^{1}\int_{0}^{1}(\nabla^{2}f_ {\lambda^{k}}(x^{*})-B_{\lambda^{k}}(x^{k}))ds(t(x^{k+1}-x^{k}))dt\rangle\] \[\quad-\langle x^{*}-x^{k},\int_{0}^{1}\int_{0}^{1}(\nabla^{2}f_{ \lambda^{k}}(x^{*})-B_{\lambda^{k}}(x^{k}))ds(t(x^{*}-x^{k}))dt\rangle\] \[\leq\frac{3\epsilon}{2}\|x^{k+1}-x^{k}\|^{2}+\frac{3\epsilon}{2} \|x^{*}-x^{k}\|^{2}.\] 
where the last inequality holds because \(\nabla^{2}f_{j}\) is continuous and the Assumption (5.1) holds. This together with \(B_{j}\succeq mI\), implies \[m\|x^{k+1}-x^{*}\|^{2}\leq 3\epsilon\|x^{k+1}-x^{k}\|^{2}+3\epsilon\|x^{*}-x^{k} \|^{2}. \tag{32}\] By direct calculation, we have \[3\epsilon\|x^{k+1}-x^{k}\|^{2}+3\epsilon\|x^{*}-x^{k}\|^{2}\] \[\geq m\|x^{k+1}-x^{*}\|^{2}\] \[=m\|x^{k+1}-x^{k}+x^{k}-x^{*}\|^{2}\] \[\geq m\|x^{k+1}-x^{k}\|^{2}+m\|x^{k}-x^{*}\|^{2}-2m\|x^{k+1}-x^{k} \|\|x^{k}-x^{*}\|.\] Rearranging, we have \[2m\|x^{k+1}-x^{k}\|\|x^{k}-x^{*}\|\geq(m-3\epsilon)\|x^{k+1}-x^{k}\|^{2}+(m-3 \epsilon)\|x^{k}-x^{*}\|^{2},\] dividing by \(\|x^{k}-x^{*}\|^{2}\), we get \[2m\frac{\|x^{k+1}-x^{k}\|}{\|x^{k}-x^{*}\|}\geq(m-3\epsilon)\frac{\|x^{k+1}-x^{k} \|^{2}}{\|x^{k}-x^{*}\|^{2}}+(m-3\epsilon),\] let \(\tau_{k}=\frac{\|x^{k+1}-x^{k}\|}{\|x^{k}-x^{*}\|}\), so we get \[(m-3\epsilon)\tau_{k}^{2}-2m\tau_{k}+m-3\epsilon\leq 0.\] Since \(\epsilon\leq\frac{m(1-\tau)}{3}\), we deduce that \(\tau_{k}\in[\frac{m-\sqrt{6m\epsilon-9\epsilon^{2}}}{m-3\epsilon},\frac{m+ \sqrt{6m\epsilon-9\epsilon^{2}}}{m-3\epsilon}]\). Substituting \(\tau_{k}\) into relation (32), we derive that \[\|x^{k+1}-x^{*}\|\leq\sqrt{\frac{3\epsilon(1+\tau_{k}^{2})}{m}}\|x^{k}-x^{*}\|.\] Furthermore, since \(\epsilon\) tends to \(0\) as \(k\) tends to infinity, it follows that \[\lim_{k\rightarrow\infty}\tau_{k}\in\lim_{\epsilon\to 0}[\frac{m-\sqrt{6m \epsilon-9\epsilon^{2}}}{m-3\epsilon},\frac{m+\sqrt{6m\epsilon-9\epsilon^{2}} }{m-3\epsilon}]=\{1\}.\] We use the relation to get \[\lim_{k\rightarrow\infty}\frac{\|x^{k+1}-x^{*}\|}{\|x^{k}-x^{*}\|}=0.\] This concludes the proof. ## 6 Numerical Experiment In this section, we presents some numerical experiments in order to illustrate the applicability of our Algorithm NPQNA for MCOPs. Based on it, we compare: * A nonmonotone proximal quasi-Newton algorithm (NPQNA) for MCOPs. * A proximal quasi-Newton algorithm (PQNA) for MCOPs in [28]. * A Newton-type proximal gradient algorithm (NPGA) for MCOPs in [2]. We use python 3.9 to program the Algorithm 1, and run on a computer with CPU Intel Core i7 2.90GHz and 32GB of memory, and the subproblems is solved by the solver 'SLSQP' in python. \(\|d^{k}\|\leq 10^{-6}\) or maximum 300 iterations is considered as stopping criteria. Solution of a multiobjective optimization problem is not isolated optimum points, but a set of efficient solutions. To generate an approximate set of efficient solutions we have considered multi-start technique. Following steps are executed in this technique. (1) A set of 100 uniformly distributed random initial points between lower bounds \(lb\) and upper bounds \(ub\) are considered, where \(lb\), \(ub\in\mathbb{R}^{n}\) and \(lb<ub\). (2) Algorithm 1 is executed individually. (3) We implemented the methods using the nonmonotone line search step size strategy with parameters \(\eta=10^{-4}\). **Test Problems**: We have constructed a set of nonlinear 2 or 3 objective optimization problems to illustrate the effectiveness of our proposed NPQNA for MCOPs and found a good pareto front. Some differentiable multiobjective test problems \(f_{j}(x)\) are paired with the nonsmooth multiobjective problems listed below. Details are provided in Table 1. The dimension of variables and objective functions are presented in the second columns. 
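To illustrate how such a subproblem can be handed to SLSQP, the sketch below solves the epigraph form (6) of the direction-finding problem directly with SciPy. This is our own minimal example rather than the authors' implementation: in the paper the nonsmooth terms \(g_{j}\) are treated through the dual reformulation described below, whereas here each \(g_{j}\) is simply passed as a callable (an \(\ell_{1}\) term in the toy usage), which SLSQP handles only heuristically since it assumes smooth constraints.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch (ours) of the direction-finding subproblem in epigraph form (6):
#   min_{t,d} t   s.t.  grad f_j(x)^T d + 0.5 d^T B_j d + g_j(x+d) - g_j(x) <= t,  j = 1..m.
# grads: list of gradient vectors; Bs: list of Hessian approximations B_j(x);
# gs: list of callables for the (possibly nonsmooth) terms g_j.
def solve_subproblem(x, grads, Bs, gs):
    n = x.size
    z0 = np.zeros(n + 1)                       # decision variable z = (t, d)

    constraints = []
    for grad, B, g in zip(grads, Bs, gs):
        def c(z, grad=grad, B=B, g=g):         # SLSQP convention: c(z) >= 0
            t, d = z[0], z[1:]
            q = grad @ d + 0.5 * d @ B @ d + g(x + d) - g(x)
            return t - q
        constraints.append({"type": "ineq", "fun": c})

    res = minimize(lambda z: z[0], z0, method="SLSQP", constraints=constraints)
    d, theta = res.x[1:], res.x[0]
    return d, theta                            # search direction d(x) and theta(x)

# Toy usage with two quadratic objectives and g_j = l1-norm (illustrative only).
x = np.array([1.0, -2.0])
grads = [np.array([2.0, -4.0]), np.array([1.0, 1.0])]
Bs = [np.eye(2), 2.0 * np.eye(2)]
gs = [lambda v: np.abs(v).sum()] * 2
d, theta = solve_subproblem(x, grads, Bs, gs)
print("d(x) =", d, " theta(x) =", theta)
```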
Since the subproblem is a convex nonsmooth quadratic constrained optimization problem, where the objective function is a linear function and the constraint is a quadratic constraint function with a nonsmooth term. \[\min\ t\] \[s.t. \nabla f_{j}(x)^{T}d+\frac{1}{2}d^{T}B_{j}(x)d+g_{j}(x+d)-g_{j}(x )-t\leq 0,\ j\in\Lambda_{m}, \tag{33}\] \[(t,d)\in\mathbb{R}\times\mathbb{R}^{n}.\] We refer to the way of dealing with nonsmooth term \(g_{j}\) in [4], when we similar consider the linear programming with box constraints in the nonsmooth part of the original multiobjective optimization problem. For each \(j\in J\), we assume that \(\text{dom}(g_{j})=\{x\in\mathbb{R}^{n}|lb\preceq x\preceq ub\}=:C\), where define \(g_{j}:C\rightarrow\mathbb{R}\) by \[g_{j}(x)=\max_{z\in Z_{j}}\langle x,z\rangle, \tag{34}\] where \(Z_{j}\in\mathbb{R}^{n}\) is the uncertainty set. Let \(B_{j}\in\mathbb{R}^{n}\times\mathbb{R}\) a nonsingular matrix and \(\delta>0\) be given. We set \[Z_{j}:=\{z\in\mathbb{R}^{n}|-\delta e\preceq B_{j}z\preceq\delta e\}, \tag{35}\] where \(e=(1,...,1)^{T}\in\mathbb{R}^{n}\). Since \(Z_{j}\) is a nonempty and compact, \(g_{j}(x)\) is well-defined. In our tests, the elements of the matrix \(B_{j}\) were randomly chosen between 0 and 1. In turn, given an arbitrary point \(\bar{x}\in C\), parameter \(\delta\) was taken as \[\delta:=\bar{\delta}\|\bar{x}\|, \tag{36}\] where \(0.02\leq\bar{\delta}\leq 0.10\) was also chosen at random. Next, we refer to the method of dealing with the nonsmooth term \(g_{j}\) in the subproblem of the generalized conditional gradient for MCOPs proposed by Assuncao in [4] to deal with the nonsmooth term in the subproblem of our algorithm NPQNA for MCOPs. We define \(A_{j}=[B_{j};-B_{j}]\in\mathbb{R}^{2n\times n}\) and \(b_{j}:=\delta e\in\mathbb{R}^{2n}\), then (34) and (35) can be rewritten as \[\max_{z}\langle x,z\rangle \tag{37}\] \[s.t. A_{j}z\preceq b_{j}.\] Since the variables in (33) are \(t\) and \(d\), we convert the nonsmooth term \(g_{j}(x+d)\) into a dual form. \[\min_{w}\langle b_{j},w\rangle \tag{38}\] \[s.t. A_{j}^{T}w=x+d,\] \[w\succeq 0.\] By using duality theory, substituted the expression of (38) into (6), we can get \[\begin{split}&\min\ t\\ s.t.&\ \nabla f_{j}(x)^{T}d+\frac{1}{2}d^{T}B_{j}(x)d+b_{j }^{T}w-g_{j}(x)-t\leq 0,\\ &\ A_{j}^{T}w_{j}=x+d,\\ &\ w_{j}\succeq 0,\ j\in\Lambda_{m},\\ &\ (t,d)\in\mathbb{R}\times\mathbb{R}^{n}.\end{split} \tag{39}\] We briefly review the subproblem of the proximal Newton-type algorithm NPGA for MCOPs. \[\min_{d\in\mathbb{R}^{n}}\max_{j\in\Lambda_{m}}\nabla f_{j}(x)^{T}d+\frac{1}{2 }d^{T}\nabla^{2}f_{j}(x)d+g_{j}(x+d)-g_{j}(x).\] Likewise, the subproblem of the NPGA for MCOPs can be reformulated as the following quadratic programming problem \[\begin{split}&\min\ t\\ s.t.&\ \nabla f_{j}(x)^{T}d+\frac{1}{2}d^{T}\nabla^{2}f_{j} (x)d+b_{j}^{T}w-g_{j}(x)-t\leq 0,\\ &\ A_{j}^{j}w_{j}=x+d,\\ &\ w_{j}\succeq 0,\ j\in\Lambda_{m},\\ &\ (t,d)\in\mathbb{R}\times\mathbb{R}^{n}.\end{split} \tag{40}\] And the subproblem of the proximal quasi-Newton algorithm PQNA for MCOPs. 
\[\min_{d\in\mathbb{R}^{n}}\max_{j\in\Lambda_{m}}\nabla f_{j}(x)^{T}d+\frac{1}{2}d^{T}B_{j}(x)d+g_{j}(x+d)-g_{j}(x)+\frac{w}{2}\|d\|^{2}.\] Likewise, the subproblem of the PQNA for MCOPs can be reformulated as the following quadratic programming problem \[\begin{split}&\min\ t\\ s.t.&\ \nabla f_{j}(x)^{T}d+\frac{1}{2}d^{T}B_{j}(x)d+\frac{w}{2}\|d\|^{2}+b_{j}^{T}w_{j}-g_{j}(x)-t\leq 0,\\ &\ A_{j}^{T}w_{j}=x+d,\\ &\ w_{j}\succeq 0,\ j\in\Lambda_{m},\\ &\ (t,d)\in\mathbb{R}\times\mathbb{R}^{n}.\end{split} \tag{41}\]

In our code, we use the 'SLSQP' solver for convex quadratic programming in Python 3.9 to solve problems (39), (40) and (41). Pareto fronts obtained by NPQNA, PQNA and NPGA for two-objective and three-objective test problems (Problems 1, 3, 7, 9 and 14 in Table 1) are provided in Figures 1, 2, 3, 4 and 5, respectively. Similarly, we have generated approximate Pareto fronts for all test problems mentioned in Table 1.

Table 1: Details of test problems (columns: Sl. No., (m, n), f, lb\({}^{T}\), ub\({}^{T}\), Ref).

Figure 1: Approximate Pareto fronts of problem 1. (a) Approximate Pareto fronts by NPQNA; (b) Approximate Pareto fronts by PQNA; (c) Approximate Pareto fronts by NPGA.

Figure 2: Approximate Pareto fronts of problem 3. (a) Approximate Pareto fronts by NPQNA; (b) Approximate Pareto fronts by PQNA; (c) Approximate Pareto fronts by NPGA.

Figure 3: Approximate Pareto fronts of problem 9. (a) Approximate Pareto fronts by NPQNA; (b) Approximate Pareto fronts by PQNA; (c) Approximate Pareto fronts by NPGA.

Figure 4: Approximate Pareto fronts of problem 7. (a) Approximate Pareto fronts by NPQNA; (b) Approximate Pareto fronts by PQNA; (c) Approximate Pareto fronts by NPGA.

Figure 5: Approximate Pareto fronts of problem 14. (a) Approximate Pareto fronts by NPQNA; (b) Approximate Pareto fronts by PQNA; (c) Approximate Pareto fronts by NPGA.

Numerical experiments show that our algorithm NPQNA requires fewer iterations, fewer function evaluations and less CPU time than NPGA [2] and PQNA [28] on most test problems. Although NPQNA needs more iterations than NPGA and PQNA on some test problems, it still uses less CPU time, which also demonstrates the advantage of our algorithm. By analysing the problems on which NPQNA needs more iterations than NPGA, we find that if the Hessian matrix of the objective function has an explicit expression, the Newton update requires fewer iterations than the quasi-Newton update. However, if the Hessian matrix of the objective function is not easy to obtain, our NPQNA algorithm has a clear advantage.

Table 2: Computational details (for each test problem: number of iterations (it), number of function evaluations (f) and CPU time for NPQNA, PQNA and NPGA).

## 7 Conclusions

In this paper, we have developed a nonmonotone proximal quasi-Newton algorithm for unconstrained convex MCOPs. Under standard assumptions, we proved that the sequence generated by this method converges to a Pareto optimal point.
Furthermore, under the assumptions of Hessian continuity, the Dennis-Moré criterion, and strong convexity, a local superlinear convergence rate of the algorithm is obtained. We have considered convex MCOPs, but many MCOPs arising in practice are nonconvex, so it is of practical value to extend the algorithm to nonconvex MCOPs. Moreover, the present method only builds a quadratic approximation of the smooth part and does not take full advantage of the structure of the nonsmooth term. Lin et al. [25] proposed a proximal quasi-Newton algorithm for scalar optimization that fully exploits the properties of the nonsmooth function: it applies a Moreau envelope to the nonsmooth term and then uses the properties of the Moreau envelope to perform quasi-Newton acceleration on the resulting smooth problem. This is an appealing approach, but it has not yet been studied for MCOPs. In future work, we plan to study such smoothing techniques for MCOPs as well as algorithms for nonconvex MCOPs.
2303.01669
Learning Common Rationale to Improve Self-Supervised Representation for Fine-Grained Visual Recognition Problems
Self-supervised learning (SSL) strategies have demonstrated remarkable performance in various recognition tasks. However, both our preliminary investigation and recent studies suggest that they may be less effective in learning representations for fine-grained visual recognition (FGVR), since many features helpful for optimizing SSL objectives are not suitable for characterizing the subtle differences in FGVR. To overcome this issue, we propose learning an additional screening mechanism to identify discriminative clues commonly seen across instances and classes, dubbed common rationales in this paper. Intuitively, common rationales tend to correspond to the discriminative patterns from the key parts of foreground objects. We show that a common rationale detector can be learned by simply exploiting the GradCAM induced from the SSL objective, without using any pre-trained object part or saliency detectors, allowing it to be seamlessly integrated into the existing SSL process. Specifically, we fit the GradCAM with a branch of limited fitting capacity, which allows the branch to capture the common rationales and discard the less common discriminative patterns. At the test stage, the branch generates a set of spatial weights to selectively aggregate features representing an instance. Extensive experimental results on four visual tasks demonstrate that the proposed method can lead to a significant improvement in different evaluation settings.
Yangyang Shu, Anton van den Hengel, Lingqiao Liu
2023-03-03T02:07:40Z
http://arxiv.org/abs/2303.01669v2
Learning Common Rationale to Improve Self-Supervised Representation for Fine-Grained Visual Recognition Problems ###### Abstract Self-supervised learning (SSL) strategies have demonstrated remarkable performance in various recognition tasks. However, both our preliminary investigation and recent studies suggest that they may be less effective in learning representations for fine-grained visual recognition (FGVR) since many features helpful for optimizing SSL objectives are not suitable for characterizing the subtle differences in FGVR. To overcome this issue, we propose learning an additional screening mechanism to identify discriminative clues commonly seen across instances and classes, dubbed as common rationales in this paper. Intuitively, common rationales tend to correspond to the discriminative patterns from the key parts of foreground objects. We show that a common rationale detector can be learned by simply exploiting the GradCAM induced from the SSL objective without using any pre-trained object parts or saliency detectors, making it seamlessly to be integrated with the existing SSL process. Specifically, we fit the GradCAM with a branch with limited fitting capacity, which allows the branch to capture the common rationales and discard the less common discriminative patterns. At the test stage, the branch generates a set of spatial weights to selectively aggregate features representing an instance. Extensive experimental results on four visual tasks demonstrate that the proposed method can lead to a significant improvement in different evaluation settings.1 Footnote 1: The source code will be publicly available at: [https://github.com/GAWerf/LCR](https://github.com/GAWerf/LCR) ## 1 Introduction Recently, self-supervised representations Learning (SSL) has been shown to be effective for transferring the learned representations to different downstream tasks [2, 4, 5, 14, 18]. Methods such as contrastive learning [7, 9, 16, 18] have demonstrated state-of-the-art feature learning capability and have been intensively studied recently. However, recent studies [11] suggest that contrastive learning may have a "coarse-grained bias" and could be less effective for fine-grained classification problems whose goal is to distinguish visually similar subcategories of objects under the basic-level category. This phenomenon is rooted in fine-grained visual recognition (FGVR) properties and the training objective of SSL. SSL tries to minimize a pretext task, e.g., contrastive learning minimizes the distance between same-instance features while maximizing the distance among different-instance features, and any visual patterns that could contribute to loss minimization will be learned. On the other hand, the discriminative patterns for FGVR can be subtle. It is more likely to reside on key object parts. Thus, the feature learned Figure 1: GradCAM [30] visualization for MoCo v2 [18], VICReg [4] and our method on the CUB-200-2011, Stanford Cars and FGVC Aircraft datasets. Compared with the existing method MoCo v2 and VICReg, which are prone to be distracted by background patterns, our method can identify features from the foreground and potentially the key parts of the object. from SSL may not necessarily be useful for FGVR. Figure 1 shows our investigation of this issue. As seen, the existing SSL paradigms, such as MoCo [18] and VICReg [4] are prone to learn patterns from irrelevant regions2. 
Existing works [31, 38, 42] usually handle this issue by recoursing to pre-trained object part detectors or saliency detectors to regularize SSL using patterns from valid regions to achieve the training objective. However, both the part detectors and saliency detectors could restrict the applicability of SSL for FGVR since part detectors are trained from human annotations, and the saliency regions are not always coincident with the discriminative regions. Footnote 2: The weakness of existing SSL methods on FGVR can also be quantitively demonstrated by their performance on retrieval tasks shown in Section 4.3. As will be seen from our experiments, using only the features learned from SSL and without any further supervision from the target data, existing SSL methods achieve very poor performance in the retrieval task, i.e., based on feature similarity to identify same-class images. Therefore, this work aims to directly solve the problem from the target domain data. Specifically, we propose to learn an additional screening mechanism in addition to the contrastive learning process. The purpose of the screening mechanism is to filter out the patterns that might contribute to SSL objectives but are irrelevant to FGVR. Somehow surprisingly, we find that such a screening mechanism can be learned from the GradCAM [30] of the SSL loss via an extremely simple method. The whole process can be described as a "fitting and masking" procedure: At the training time, we use an additional branch with limited fitting capacity (see more discussion about it in Section 3.2) to fit the GradCAM calculated from the SSL objective. At the testing time, we apply this additional branch to predict an attention mask to perform weighted average pooling for the final presentation. The motivation for such a design is that the GradCAM fitting branch tends to characterize the discriminative patterns that commonly occur across samples due to its limited fitting capacity, and those common patterns, dubbed common rationale in this paper, are more likely corresponding to the discriminative clues from key object parts or at least foreground regions. We implement our method based on MoCo v2 [8], one of the state-of-the-art approaches in unsupervised feature learning, which also produces the best performance on FGVR in our setting. Our implementation uses the training objective of MoCo v2 to learn feature representations and derive the GradCAM. Through extensive experiments, we show that our approach can significantly boost the quality of the learned feature. For example, when evaluating the learned feature for retrieval tasks, our method achieves 49.69% on the CUB-200-2011 retrieval task, which is 32.62% higher than our baseline method (MoCo v2 [8]). In the linear evaluation phase, the proposed method achieves new state-of-the-art 71.31% Top-1 accuracy, which is 3.01% higher than MoCo v2. ## 2 Related Work Self-supervised learning aims to learn feature representation from unlabeled data. It relies on a pretext task whose supervision could be derived from the data itself, e.g., image colorization [41], image inpainting [28], and rotation [24] prediction. Contrastive learning is recently identified as a promising framework for self-supervised learning and has been extensively studied [5, 7, 10, 13, 18, 36]. Despite the subtle differences, most contrastive learning approaches [15, 37, 43] try to minimize the distance between different views of the same images and push away the views of different images. 
The representative methods are SimCLR [7] and MoCo [18]. Besides contrastively learning, consistency-based approaches, such as BYOL [16], SimSiam [9] and masking-and-prediction-based approaches, such as MAE [17], BEiT [3], and ADIOS [32] are also proven effective for SSL. **Improving SSL via Better Region Localization.** A pipeline to improve the distinguishing ability is to design better data augmentations of SSL. Three such methods were recently proposed: DiLo [42], SAGA [39], and Contrastive-Crop [29]. DiLo uses a copy-and-pasting approach as a kind of augmentation to create an image with a different background. In such a way, the proposed method distills localization via learning invariance against backgrounds and improves the ability of SSL models to localize foreground objects. SAGA adopts self-augmentation with a guided attention strategy to augment input images based on predictive attention. In their method, an attention-guided crop is used to enhance the robustness of feature representation. ContrastiveCrop shows a better crop as an augmentation to generate better views in SSL and keep the semantic information of an image. All of these works locate the object by improving the data augmentations for SSL. In this work, our method is adaptive to locate the key regions for self-supervised learning without needing external augmentations. Another family of approaches tries to target the same problem as ours: making the learned feature capture more of the foreground region. CVSA [38] proposed a cross-view saliency alignment framework that first crops and swaps saliency regions of images as a novel view generation. Then it adopts a cross-view saliency alignment loss to encourage the model to learn features from foreground objects. CAST [31] encourages Grad-CAM attention to fit the salient image regions detected by a saliency detector. Those methods often rely on pre-trained saliency detection. This implicitly assumes that salient regions are more likely to be discriminative regions. This assumption, however, does not always hold, especially for FGVR. ## 3 Method In this section, we first briefly review self-supervised contrastive learning [8] and gradient-weighted class activation mapping (GradCAM) [30] as preliminary knowledge. Then, we introduce the proposed approach. ### Preliminary **Self-Supervised Contrastive Learning.** Given an image \(I\) from a batch of samples, \(x=t(I)\) and \(x^{\prime}=t^{\prime}(I)\) are the two augmented views, where the \(t\) and \(t^{\prime}\) are two different transformations sampled from a set of data augmentations \(\mathcal{T}\). Then the views \(x\) and \(x^{\prime}\) are used as the input of an encoder \(f_{\theta}\) to generate their feature representations \(u=f_{\theta}(x)\) and \(u^{\prime}=f_{\theta}(x^{\prime})\). Finally, the projector heads \(g_{\theta}\) are used to map \(u\) and \(u^{\prime}\) onto the embeddings \(z=g_{\theta}(u)\) and \(z^{\prime}=g_{\theta}(u^{\prime})\). Let \((z,z^{\prime})\) be embeddings from the same image and are used as a positive pair. Let \(z_{k}\) be the embedding from a different image, and \((z,z_{k})\) thus composes a negative pair. SimCLR [7] adopts a contrastive loss to maximize the agreement of positive pairs over those of negative pairs. The MoCo [1, 10, 18] family adopts the same contrastive loss but adds a queue to store the image embeddings to alleviate the memory cost due to the large batch size. 
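As a concrete reference for the queue-based contrastive objective just described (written out formally in Eq. (1) below), here is a minimal PyTorch-style sketch. It is an assumed illustration rather than the authors' MoCo v2 code; the tensor names `z`, `z_pos`, `queue` and the temperature value are placeholders.

```python
# Minimal sketch (assumption, not the authors' code) of a MoCo-style
# contrastive loss with a memory queue; cf. Eq. (1) below.
import torch
import torch.nn.functional as F

def contrastive_loss(z, z_pos, queue, t=0.2):
    """z, z_pos: (B, D) embeddings of two views of the same images;
    queue: (Q, D) embeddings of other images, used as negatives."""
    z = F.normalize(z, dim=1)
    z_pos = F.normalize(z_pos, dim=1)
    l_pos = (z * z_pos).sum(dim=1, keepdim=True)   # (B, 1): z . z'
    l_neg = z @ queue.t()                          # (B, Q): z . z_k
    logits = torch.cat([l_pos, l_neg], dim=1) / t
    labels = torch.zeros(z.size(0), dtype=torch.long, device=z.device)
    return F.cross_entropy(logits, labels)         # -log softmax of the positive pair
```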
Formally, the contrastive loss takes the following form \[\mathcal{L}_{CL}=-\log\frac{\exp(z\cdot z^{\prime}/t)}{\sum_{i=0}^{Q}\exp(z \cdot z_{i}/t)}, \tag{1}\] where \(t\) denotes a temperature parameter, and \(z_{i}\) is the embedding in the queue. **Gradient-weighted Class Activation Mapping (GradCAM)** is a commonly used way to produce a visual explanation. It identifies the important image regions contributing to the prediction by using the gradient of the loss function with respect to the feature map or input images. In this paper, we consider the gradient calculated with respect to the last convolutional layer feature maps. Formally, we consider the feature map of the last convolutional layer denoting as \(\phi(I)\in\mathbb{R}^{H\times W\times C}\), \(H\), \(W\) and \(C\) are the height, width and number of channels of the feature map, respectively. In standard C-way classification, the GradCAM is calculated by: \[[\mathrm{Grad-CAM}(\hat{y})]_{i,j}=ReLU\left(\alpha_{\hat{y}}^{ \top}[\phi(I)]_{i,j}\right)\] \[where,\,\alpha_{\hat{y}}=\frac{\partial\mathcal{L}_{CE}(P(y),\hat {y})}{\partial[\phi(I)]_{i,j}}\in\mathbb{R}^{C}, \tag{2}\] where \(\mathcal{L}_{CE}(P(y),\hat{y})\) is the cross-entropy loss measuring the compatibility between the posterior probability \(P(y)\) and ground-truth class label \(\hat{y}\)3. \([\phi(I)]_{i,j}\in\mathbb{R}^{C}\) denotes the feature vector located at the \((i,j)\)-th grid. \(\mathrm{Grad-CAM}(\hat{y})\) denotes GradCAM for the \(\hat{y}\)-th class. \([\mathrm{Grad-CAM}(\hat{y})]_{i,j}\) refers to the importance value of the \((i,j)\)th spatial grid for predicting the \(\hat{y}\)-th class. Footnote 3: \(\mathcal{L}_{CE}(P(y),\hat{y})=\log P(\hat{y})\) for the multi-classification problem and the gradient of \(\log P(y)\) is proportional to the gradient of the corresponding logit. Those equivalence forms lead to the different definitions of GradCAM in the literature. Note that although GradCAM is commonly used for supervised classification problems, it can be readily extended to other problems by changing the corresponding loss function. ### Our Method: Learning Common Rationale Figure 2 gives an overview of the proposed method. Our method is extremely simple: we merely add one GradCAM fitting branch (GFB) and fit the GradCAM generated from the CL loss at the training time. At the test time, we let this GFB predict (normalized) GradCAM and use the prediction as an attention mask to perform weighted average pooling over the convolutional feature map. The details about this framework are elaborated as follows. **GradCAM Calculation.** For self-supervised learning, we do not have access to ground-truth class labels. Therefore, we use the contrastive learning loss \(\mathcal{L}_{CL}\) in Eq. 1 to replace \(\mathcal{L}_{CE}\) in Eq. 2: \[[\mathbf{G}]_{i,j}=ReLU\left(\frac{\partial\mathcal{L}_{CL}(\psi(\phi(I)))}{ \partial[\phi(I)]_{i,j}}^{\top}[\phi(I)]_{i,j}\right), \tag{3}\] where \(\psi()\) denotes the feature extractor. Note that in the Figure 2: The overview of our method. At the structure level, it has the following components: 1)_Encoder_: MoCo-based contrastive learning is used to create the image-level representation. 2) _GradCAM fitting branch_, which is a convolutional layer with \(k\) filters followed by a channel-wise max-out operator. 3) _Inference_: at the inference time, the prediction from GFB is used as a spatial attention mask, and it is used to replace the GAP operation by weighted average pooling to obtain the image representation. 
original GradCAM, the GradCAM weight indicates how important each region contributes to classifying the image into the \(y\)-th class. Similarly, the GradCAM weight from \(\mathcal{L}_{CL}\) indicates the contribution of each region to the instance discrimination task in the contrastive learning objective. From Figure 1, we can see that existing Contrastive Learning approaches learn a diverse set of visual patterns, and not all of them are relevant to the FGVR task. **Architecture of the GFB.** The structure of the GFB plays an important role in our method. We expect this branch has a limited fitting capability, such that the branch will not overfit the GradCAM but only capture the commonly occurring discriminative patterns, i.e., common rationales. Inspired by [33], we use a convolutional layer with \(K\) filter and \(1\times 1\) kernel size followed by the max-out operation [33] as the fitting branch. Formally, such a branch applies the following operation to the local feature \([\phi(I)]_{i,j}\)at the \((i,j)\)-th grid of the feature map: \[A_{i,j}=\max_{k}\{\mathbf{w}_{k}^{\top}[\phi(I)]_{i,j}\}, \tag{4}\] where \(A_{i,j}\) will be the predicted GradCAM weight at the \((i,j)\)-th grid and \(\mathbf{w}_{k}\) is the \(k\)-th filter (projector) in the convolutional layer. Intuitively, the convolutional layer can be seen as a collection of \(K\) detectors and the above operation can be understood as follows: each projection vector \(\mathbf{w}_{k}\) detects an object part \(\mathcal{P}_{k}\); the max-out operation takes the maximum of \(K\) projections at a given location, which will result in a high value if _any of the \(K\) object parts are detected_. Varying \(K\) could adjust the size of the detector pool and influence the fitting capability. **Training Loss.** In addition to the contrastive learning loss, we require the GFB to produce a similar attention map as the one produced from the GradCAM of \(\mathcal{L}_{CL}\). We follow [33] to normalize the GradCAM into a probability distribution and adopt KL divergence as the loss function. Formally, we normalize the GradCAM \(\mathcal{G}\) and \(\mathcal{A}\) via the softmax function: \[[\mathbf{\bar{G}}]_{i,j} =\frac{\exp{([\mathbf{G}]_{i,j}/\tau)}}{\sum_{i=1}^{H}\sum_{j=1}^{ W}\exp{([\mathbf{G}]_{i,j}/\tau)}},\] \[[\mathbf{\bar{A}}]_{i,j} =\frac{\exp{([\mathbf{A}]_{i,j}/\tau)}}{\sum_{i=1}^{H}\sum_{j=1}^{ W}\exp{([\mathbf{A}]_{i,j}/\tau)}}, \tag{5}\] where \([\cdot]_{i,j}\) denotes the \(i,j\)-th element of the feature map. \(\tau\) is an empirical temperature parameter and we set it to 0.4 here. Thus, the objective can be expressed as follows: \[\mathcal{L}_{KL}=\sum_{i=1}^{H}\sum_{j=1}^{W}[\mathbf{\bar{A}}]_{i,j}\log\frac {[\mathbf{\bar{A}}]_{i,j}}{[\mathbf{\bar{G}}]_{i,j}}, \tag{6}\] Thus, the final loss function taken on all images over an unlabelled dataset is shown as followings. \[\mathcal{L}=\lambda\mathcal{L}_{CL}+\nu\mathcal{L}_{KL}. \tag{7}\] **Inference.** At the inference time, the GradCAM fitting branch will be firstly used to produce an attention mask, i.e., prediction of GradCAM. This attention mask is firstly normalized using \(A^{\prime}_{i,j}=\frac{A_{i,j}-\min(A)}{1e^{-\tau}+\max(A)}\). Then we use the normalized attention to perform weighted average pooling of features from the last-layer convolutional feature map. Formally, it calculates: \[\mathbf{f}=\sum_{i,j}[A^{\prime}]_{i,j}[\phi(I)]_{i,j}\in\mathbb{R}^{C}. \tag{8}\] \(\mathbf{f}\) is used for the downstream tasks. 
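Putting Eqs. (3)-(8) together, a minimal PyTorch-style sketch of the GradCAM fitting branch (a 1x1 convolution with K filters followed by channel-wise max-out), the KL fitting loss, and the weighted-average-pooling inference step might look as follows. This is an illustrative reconstruction under the stated equations, assuming a feature map \(\phi(I)\) of shape (B, C, H, W); the module and function names are our own labels, not the released implementation.

```python
# Illustrative sketch of the GFB and losses (Eqs. 3-8); not the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GFB(nn.Module):
    """1x1 conv with K filters followed by channel-wise max-out (Eq. 4)."""
    def __init__(self, in_channels, k=32):
        super().__init__()
        self.proj = nn.Conv2d(in_channels, k, kernel_size=1, bias=False)

    def forward(self, feat):                        # feat: (B, C, H, W)
        return self.proj(feat).max(dim=1).values   # (B, H, W) predicted GradCAM A

def gradcam_from_cl_loss(feat, cl_loss):
    """GradCAM induced by the contrastive loss (Eq. 3)."""
    grad = torch.autograd.grad(cl_loss, feat, retain_graph=True)[0]  # dL_CL/dphi(I)
    return F.relu((grad * feat).sum(dim=1))                          # (B, H, W) map G

def kl_fitting_loss(pred_cam, cam, tau=0.4):
    """Softmax-normalize both maps over space and match them (Eqs. 5-6)."""
    b = cam.size(0)
    p = F.softmax(cam.view(b, -1) / tau, dim=1)           # target \bar{G}
    q = F.log_softmax(pred_cam.view(b, -1) / tau, dim=1)  # prediction \bar{A} (log)
    # KL(\bar{A} || \bar{G}) as in Eq. (6); total loss (Eq. 7) is lam*L_CL + nu*L_KL
    return (q.exp() * (q - torch.log(p + 1e-12))).sum(dim=1).mean()

def weighted_average_pool(feat, pred_cam, eps=1e-7):
    """Inference-time pooling with the normalized attention mask (Eq. 8)."""
    a = pred_cam - pred_cam.amin(dim=(1, 2), keepdim=True)
    a = a / (eps + pred_cam.amax(dim=(1, 2), keepdim=True))
    return (feat * a.unsqueeze(1)).sum(dim=(2, 3))        # (B, C) representation f
```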
**Discussion.** Our approach is inspired by the SAM method in [33]. However, there are several important differences: * SAM method was proposed for supervised FGVC under a low-data regime, while our approach is designed for self-supervised learning. * Most importantly, our study discovers that for self-supervised feature learning, the best strategy to interact with the GFB and the feature learning branch is different from what has been discovered in [33]. Table 1 summarizes the main differences. As seen, unlike either SAM or SAM-bilinear, we do not introduce any interaction between the feature learning branch and the GradCAM fitting branch during training but allow them to perform weighted average pooling at the test stage. In fact, we find that applying cross-branch interaction at the training stage will undermine the feature learning process. This is because applying cross-branch interaction, e.g., using \(\mathbf{A}\) to weighted pool features will prevent CL from exploring discriminative patterns, especially when \(\mathbf{A}\) has not been properly learned. * For multiple projections, SAM uses bilinear pooling to aggregate \(\mathbf{A}\) and the original feature map, resulting in high-dimensional feature representation. Our work performs max-out on multiple projections, resulting in a single attention mask for performing weighted average pooling. Consequently, we could achieve a significant reduction of the feature dimensionality. To make our study comprehensive, we also explore several variants as baseline approaches. From the comparison with those methods, the benefit of our design could become more evident. For comparison, we use the following architectures as baselines for self-supervised pre-training: _SAM-SSL:_ this baseline extends SAM by changing its objective function from cross-entropy to the contrastive loss in MoCo V2. It shares the same architecture as SAM [33], where a projection is used as the GFB, and GradCAM fitting is trained as an auxiliary task to contrastive learning. _SAM-SSL-Bilinear:_ this baseline extends SAM-bilinear by using the contrastive loss in MoCo V2. The cross-branch interaction of _SAM-SSL-Bilinear_ follows the original SAM-Bilinear method. Besides the above two extensions of the SAM methods, we also consider two variants of our method. The first is called _Ours-MultiTask_, which does not perform weighted average pooling at the test stage but merely uses Grad-CAM fitting as an auxiliary task. Another is called _Ours-DualPooling_, which performs weighted average pooling at both training and testing stages. Those two variants are summarized in Table 1. ## 4 Experiments In this section, we will evaluate our proposed method on three widely used fine-grained visual datasets (Caltech-UCSD Birds (CUB-200-2011) [35], Stanford Cars [25] and FGVC-Aircraft [27]), and a large-scale fine-grained dataset (iNaturalist2019 [34]). Our experiments aim to understand the effectiveness and the components of the proposed algorithm. ### Datasets and Settings **Datasets.**CUB-200-2011 contains 11,788 images with 200 bird species, where 5994 images are used for training and 5794 images for testing. Stanford Cars contains 16,185 images with 196 categories, where 8144 images are for training and 8041 images for testing. FGVC-Aircraft contains 10,000 images with 100 categories, where 6667 images are for training, and 3333 images are for testing. iNaturalist2019 in its 2019 version contains 1,010 categories, with a combined training and validation set of 268,243 images. 
Note that fine-grained visual task focus on distinguishing similar subcategories within a super-category, while there are six super-categories in iNaturalist2019. **Implementation Details.** We adopt the ResNet-50 [19] as the network backbone, which is initialized using ImageNet-trained weights, and build our method on top of MoCo v2 [8]. Therefore the SSL loss term is identical to MoCo v2. The momentum value and memory size are set similarly to MoCo v2, i.e., 0.999 and 65536, respectively. The projector head \(g_{\theta}\) in MoCo v2 is composed of two fully-connected layers with ReLU and a third linear layer with batch normalization (BN) [21]. The size of all three layers is 2048 \(\times\)2048\(\times\)256. We set the mini-batch size as 128 and used an SGD optimizer with a learning rate of 0.03, a momentum of 0.9, and a weight decay of 0.0001. 100 epochs are used to train the feature extractor. The images from the four FGVR datasets are resized to 224\(\times\)224 pixels during training times. During testing time, images are firstly resized to 256 pixels and then are center cropped to 224\(\times\)224 on these four FGVR datasets. ### Evaluation Protocols We evaluate the proposed method in two settings: linear probing and image retrieval. Linear probing is a commonly used evaluation protocol in SSL. In linear probing, the feature extractor learned from the SSL algorithm will be fixed and a linear classifier will be trained on top of the learned features. The classification performance of the linear classifier indicates the quality of the learned feature. Besides linear probing, we also use image retrieval (also equivalent to the nearest neighbor classification task) task to evaluate the learned features, which was also explored in the literature [6, 22, 23]. This task aims to find the images with the same category as query images based on the \begin{table} \begin{tabular}{c|c|c c|c|c} \hline \multirow{2}{*}{Method} & GradCAM & \multicolumn{2}{c|}{Cross-branch Interaction} & Feature & Loss \\ & Fitting Branch & \multicolumn{2}{c|}{train} & test & Dimension & Function \\ \hline \hline SAM & 1 proj & \(\times\) & \(\times\) & \(C\) & CrossEntropy \\ SAM-Bilinear & max[\(K\) projs] & bilinear pooling & bilinear pooling & \(C\)*\(K\) & CrossEntropy \\ Ours & max[\(K\) projs] & \(\times\) & weighted average pooling & \(C\) & Contrastive \\ \hline _Ours-DualPooling_ & max[\(K\) projs] & weighted average pooling & weighted average pooling & \(C\) & Contrastive \\ _Ours-MultiTask_ & max[\(K\) projs] & \(\times\) & \(\times\) & \(C\) & Contrastive \\ \hline \end{tabular} \end{table} Table 1: Upper part: summary of the major difference between our method and SAM method [33]. Lower part: two variants that are also investigated in this work. Figure 3: Comparison of ResNet-50, MoCo v2 and Ours on the retrieval rank-1 on the iNaturalist2019 dataset. “Amphibians”, “Birds”, “Fungi”, “Reptiles”, and “Insects” are the superclasses in iNaturalist2019. learned feature. Note that, unlike linear probing, the image retrieval task does not involve a large amount of labeled data - which could be used to suppress the less relevant features. In this sense, succeeding in the image retrieval task imposes higher feature quality requirements. Moreover, image retrieval is a practically useful task, and unsupervised feature learning is an attractive solution to image retrieval since the whole retrieval system can be built without any human annotation and intervention. 
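Since the retrieval protocol is simply nearest-neighbour search in the learned feature space, a minimal sketch of how rank-k retrieval accuracy could be scored from cosine similarity is given below; it is an assumed illustration, not the authors' evaluation script, and the helper name `rank_k_accuracy` and its inputs are placeholders.

```python
# Minimal sketch (assumption, not the authors' script) of rank-k retrieval
# accuracy computed from L2-normalized features.
import torch
import torch.nn.functional as F

def rank_k_accuracy(features, labels, ks=(1, 5)):
    """features: (N, D) learned representations; labels: (N,) class ids."""
    f = F.normalize(features, dim=1)
    sim = f @ f.t()                                   # cosine similarity matrix
    sim.fill_diagonal_(float('-inf'))                 # a query cannot retrieve itself
    topk = sim.topk(max(ks), dim=1).indices           # (N, max_k) nearest neighbours
    hits = labels[topk] == labels.unsqueeze(1)        # (N, max_k) correct-class flags
    return {k: hits[:, :k].any(dim=1).float().mean().item() for k in ks}
```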
For the retrieval task, we use rank-1, rank-5, and mAP as evaluation metrics. ### Main Results **The Effectiveness of the Proposed Method.** Our method is based on MoCo v2. We first compare our result against MoCo v2 to examine the performance gain. The results are reported in Table 2 and Figure 3. From Table 2, we can see that our method has led to a significant improvement over MoCo v2. It achieves 71.31% top-1 accuracy and 92.03% top-5 accuracy on the CUB-200-2011 dataset with 100% label proportions, which is a 3.01% improvement on top-1 and 1.18% improvement on top-5 over MoCo v2. Similarly, significant improvement can also be found on the Stanford Cars and FGVC Aircraft datasets. This demonstrates that our methods improve the quality of the feature representations learned from the original MoCo v2. Notably, with the label proportions reduced from 100% to 20%, there is generally a better improvement in the performance of the proposed method, which demonstrates that the proposed method learns less noisy features, so that training with fewer data can generalize better. The advantage of the proposed method becomes more pronounced when evaluating the image retrieval task. As seen from Table 2, our method leads to a significant boost in performance on all datasets. Specifically, the rank-1, rank-5 and mAP of ours (\(K\)=32) are 49.69%, 75.23%, and 24.01%, respectively on the CUB-200-2011 dataset, which is 32.62%, 33.77%, and 15.88% higher than the method of MoCo v2. Similar improvement can also be observed on the Stanford Cars and FGVC Aircraft datasets. On the large-scale fine-grained dataset, i.e., iNaturalist2019, our method also performs better than MoCo v2 shown in Figure 3. These indicate the proposed method is particularly good for retrieval tasks, which might be due to its ability to filter out less relevant patterns. **Comparison with Other SSL Frameworks.** In addition to the comparison on MoCo v2, we also compare the proposed method against other commonly used SSL approaches. We report the Top-1 accuracy, the running time, and the peak memory with two different batch sizes (32 \(\&\) 128). The results are shown in Table 3. All methods are run on 4 V100 GPUs, each with 32G memory. We report the training speed, GPU memory usage, and performance. For classification performance, the blue marks mean the best classification results, and the green marks mean the second-best classification results. It is clear to see that our method achieves the best Top-1 performance on the CUB-200-2011, Stanford Cars, and FGVC Aircraft datasets. Also, when the GPU resources are limited, e.g., only 1 V100 GPU is available, the proposed method reduces the batch size but still remains a competitive classification performance compared to other SSL methods. Regarding training speed and GPU memory usage, the proposed method has the same running time as MoCo v2 and SimSiam, slightly more peak memory than SimSiam, but less peak memory, and quicker running time than SimCLR and BYOL. Although DINO uses the least training time and GPU memory usage, their performances are also the worst compared to ours and other self-supervised learning methods. 
Barlow Twins and VICReg \begin{table} \begin{tabular}{c|c|c c c|c c c} \hline \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Method} & \multicolumn{5}{c|}{Classification} & \multicolumn{3}{c}{Retrieval} \\ \cline{3-8} & & Top 1/Top 5(100) & Top 1/Top 5(50) & Top 1/Top 5(20) & rank-1 & rank-5 & mAP \\ \hline \hline \multirow{4}{*}{CUB-200-2011} & ResNet-50 & 68.17/90.42 & 58.99/85.90 & 46.54/77.09 & 10.65 & 29.32 & 5.09 \\ & MoCo v2 & 68.30/90.85 & 60.96/87.00 & 46.91/76.59 & 17.07 & 41.46 & 8.13 \\ & Ours & **71.31/92.03** & **66.52/90.06** & **55.33/83.52** & **49.69** & **75.23** & **24.01** \\ \hline \multirow{4}{*}{Stanford Cars} & ResNet-50 & 57.41/83.55 & 46.23/74.31 & 31.19/58.67 & 4.91 & 16.98 & 2.34 \\ & MoCo v2 & 58.43/84.85 & 50.17/77.38 & 35.14/64.10 & 10.94 & 29.57 & 3.12 \\ & Ours & **60.75/86.44** & **53.87/81.72** & **40.88/69.18** & **34.56** & **60.75** & **8.87** \\ \hline \multirow{4}{*}{FGVC Aircraft} & ResNet-50 & 47.38/74.73 & 37.83/67.12 & 28.20/54.73 & 5.16 & 14.22 & 2.61 \\ & MoCo v2 & 52.54/80.74 & 45.52/73.85 & 35.17/65.08 & 19.38 & 39.90 & 6.30 \\ \cline{1-1} & Ours & **55.87/84.73** & **48.22/77.14** & **38.55/68.53** & **34.33** & **61.09** & **15.43** \\ \hline \hline \end{tabular} \end{table} Table 2: Classification and retrieval of our method evaluated on the CUB-200-2011, Stanford Cars and FGVC Aircraft datasets. “ResNet-50” represents pre-training in the ImageNet dataset [12] in a supervised manner, then freezing the ResNet-50 backbone and only optimizing the supervised linear classifier in the classification task. We report the Top 1 and Top 5 (in %) on the classification task, rank-1, rank-5, and mAP (in %) on the retrieval task. 100, 50, and 20 are the three different label proportions (in %) in the classification task. have a quicker running time and less GPU memory than our proposed method with a batch size of 128, but their performances are much worse than ours. Our method also can be implemented with other self-supervised learning methods, e.g., BYOL, referring to "BYOL+Ours" presented in Table 3. As we can see, our method applied to the BYOL objective is consistently superior to the baseline of BYOL on three fine-grained datasets. ### Comparison with Alternative Solutions Our method is featured by its capability of discovering the key discriminative regions, which is vital for FGVR. In this section, we compared the proposed method with nine alternative solutions to take object localization into account. The first is to simply use a bilinear network [26] for MoCo V2. Specifically, we follow the similar bilinear structure as in [33], which implicitly learns \(K\) parts and aggregates features from those \(K\) parts, and we set \(K=32\) to make a fair comparison to us (since we use 32 projections). We still use MoCo v2 as the SSL framework and denote this method MoCo v2 -Bilinear. The second and third are the methods of _SAM-SSL_ and _SAM-SSL-Bilinear_ introduced in Section 3.2 since our method is extended from the self-boosting attention mechanism (SAM) proposed in [33]. The fourth and fifth are the two variants considered in Section 3.2. The other four comparing methods are the very recently localization-based self-supervised methods, DiLo [42], CVSA [38], LEWEL [20], and ContrastiveCrop [29]. DiLo uses a copy-and-pasting approach as a kind of augmentation to create an image with a different background. The work of CVSA targets a similar problem as ours. 
They exploit self-supervised fine-grained contrastive learning via cross-view saliency alignment to crops and swaps saliency regions of images. LEWEL adaptively aggregates spatial information of features using spatial aggregation operation between feature map and alignment map to guide the feature learning better. The work of ContrastiveCrop is based on the idea of using attention to guide image cropping, which localizes the object and improves the data augmentation for self-supervised learning. The comparison to those nine alternatives is shown in Table 4. As seen, by comparing the _SAM-SSL_ and _SAM-SSL-Bilinear_, we observe that the proposed method can lead to overall better performance, achieving a significant boost on some datasets. Occasionally, _SAM-SSL-Bilinear_ (\(K\)=32) can achieve comparable performance as ours, but at the cost of using a much higher feature dimensionality. This clearly shows the advantage of the proposed scheme over the scheme in [33]. Furthermore, our method and combined with ContrastiveCrop (i.e., ours+ContrastiveCrop) with the lowest feature dimensions but achieve the best Top 1 and rank-1 performance on the CUB-200-2011, Stanford Cars and FGVC Aircraft datasets, compared to those nine alternatives. Also, compared with the other localization-aware SSL methods, our method shows a clear advantage, especially for the retrieval task, e.g., ours vs. LEWEL. Finally, we find that the variant of our method does not produce a good performance. For example, when applying weighted average pooling at both training and testing, i.e., _Ours-DualPooling_, will make the feature learning fail completely, and the representations will collapse. On the other hand, not performing cross-branch interaction, i.e., _Ours-MultiTask_, does not bring too much improvement over the MoCo v2 baseline. ### The Impact of Number of Projections To explore the impact of the number of linear projections in our method, we conduct experiments with the different \begin{table} \begin{tabular}{c c c c c c c} \hline Method & Batch Size & Top 1(CUB) & Top 1(Cars) & Top 1(Aircraft) & time(CUB) & GPU Memory(CUB) \\ \hline Supervised & - & 81.34 & 91.02 & 87.13 & - & - \\ \hline DINO [6] & 32/128 & 12.37/16.66 & 9.27/10.51 & 8.52/12.93 & 1.5h/1.5h & 4.9G/8.4G \\ SimCLR [7]* & 32/128 & 33.49/38.39 & 44.31/49.41 & 40.56/45.22 & 2.5h/2.5h & 7.1G/23.8G \\ BYOL [16]* & 32/128 & 36.64/39.27 & 43.66/45.21 & 34.90/37.62 & 4.0h/4.0h & 7.4G/24.6G \\ SimSiam [9] & 32/128 & 35.82/39.97 & 56.87/58.89 & 41.59/43.06 & 2.0h/2.0h & 4.4G/8.9G \\ MoCo v2 [8] & 32/128 & 68.03/68.30 & 52.61/58.43 & 42.51/52.54 & 2.0h/2.0h & 6.1G/10.3G \\ BarlowTwins [40] & 32/128 & 28.58/33.45 & 23.34/31.91 & 28.35/34.77 & 1.5h/1.5h & 7.0G/8.9G \\ VICReg [4] & 32/128 & 30.07/37.78 & 19.29/30.80 & 29.97/36.00 & 1.0h/1.0h & 7.0G/9.0G \\ \hline Ours & 32/128 & 70.43 /71.31 & 56.90/60.75 & 46.92/55.87 & 2.0h/2.0h & 6.1G/10.3G \\ BYOL+Ours* & 32/128 & 45.79/51.20 & 48.53 /50.64 & 40.08/45.94 & 4.0h/4.0h & 7.4G/24.6G \\ \hline \end{tabular} * Due to computational constraints, we are unable to evaluate on the batch size of 4096 as used in the original paper; we leave this for future work. \end{table} Table 3: Compared to other state-of-the-art self-supervised learning frameworks with Top 1 accuracy on the CUB-200-2011, Stanford Cars and FGVC Aircraft datasets; running time and peak memory on the CUB-200-2011 dataset. The training time is measured on 4 Tesla V100 GPUs with 100 epochs, and the peak memory is calculated on a single GPU. 
Top 1 accuracy (%) is reported on linear classification with the frozen representations of their feature extractor. For fairness, all following models use the ResNet-50 as the network backbone and initialize the ResNet-50 architecture with ImageNet-trained weights. blue=best, green=second best. numbers of \(K\). Figure D1 shows the retrieval results of rank-1 w.r.t. eight different projections on the fine-grained dataset CUB-200-2011. As we can see, with the increase of linear projections \(K\), the rank-1 gradually increases--the final rank-1 peaks at 49.69% with \(K\) around 32. After \(K\) reaches 64, the performance decreases. When the number of projections is very large, the combination of \(K\)-part detectors becomes spurious and overfits the correlated pattern, thus resulting in a drop in performance. From the curve, we can also see that choosing any value between 4 and 64 can lead to similar performance. So our method is not very sensitive to \(K\) once it falls into a reasonable range. ### Alternative Structure for GFB In this section, we investigate alternative designs for GFB. In particular, we consider using a two-layer (with 32 as the intermediate feature dimension) multi-layer perception (MLP) to replace the maximized projections in the proposed method. Compared with our GFB structure, an MLP has better fitting capacity due to the extra linear layer. We conduct experiments on the Stanford Cars dataset, and the result is shown in Table E1. We find that the performance dramatically decreases when using MLP. This demonstrates the importance of our GFB module design. ## 5 Conclusion, Further Results (Appendix), Limitation and Future Work In this paper, we introduce a simple-but-effective way of learning an additional screening mechanism to self-supervised feature learning for fine-grained visual recognition. The idea is to identify discriminative clues commonly seen across instances and classes by fitting the GradCAM of the SSL loss with a fitting-capability-limited branch. Our method achieves state-of-the-art in the classification task and is particularly pronounced in retrieval tasks on fine-grained visual recognition problems. More experimental results, including adopting our method for non-fine-grained visual recognition problems, and visualizing each projection in the GFB, can be found in the Appendix. So far, the proposed method seems to be most effective for FGVR, which could be a limitation and we plan to extend the applicability of the proposed method in our future work. 
\begin{table} \begin{tabular}{c|c|c c c|c c c} \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Feature} & \multicolumn{3}{c|}{Classification} & \multicolumn{3}{c}{Retrieval} \\ \cline{3-8} & & & CUB & Cars & Aircraft & CUB & Cars & Aircraft \\ \hline \hline MoCo v2 [8] & \(C\) & 68.30 & 58.43 & 52.54 & 17.07 & 10.94 & 19.38 \\ MoCo v2 -Bilinear & \(C\)*\(K\) & 68.44 & 58.06 & 53.01 & 41.27 & 30.89 & 30.80 \\ _SAM-SSL_ & \(C\) & 68.59 & 58.49 & 52.97 & 18.38 & 14.26 & 21.72 \\ _SAM-SSL-Bilinear_ & \(C\)*\(K\) & 71.56 & 59.12 & 55.12 & 44.20 & 35.38 & 32.10 \\ DiLo [38, 42] & \(C\) & 64.14 & - & - & - & - & - \\ CVSA [38] & \(C\) & 65.02 & - & - & - & - & - \\ LEWEL [20] & \(C\)*\(K\) & 69.27 & 59.02 & 54.33 & 19.23 & 12.01 & 20.67 \\ ContrastiveCrop [29] & \(C\) & 68.82 & 61.66 & 54.40 & 18.71 & 13.61 & 20.88 \\ \hline _Ours-DualPooling_ & \(C\) & \multicolumn{3}{c|}{collapse} & \multicolumn{3}{c}{collapse} \\ _Ours-MultiTask_ & \(C\) & 68.56 & 58.55 & 52.87 & 17.62 & 13.98 & 22.23 \\ Ours & \(C\) & 71.31 & 60.75 & 55.87 & 49.69 & 34.56 & 34.33 \\ Ours+ContrastiveCrop & \(C\) & 72.84 & 63.71 & 56.08 & 49.36 & 33.55 & 34.95 \\ \hline \end{tabular} \end{table} Table 4: The linear Top 1 (%) and retrieval rank-1 (%) performance comparisons of recent alternative solutions and Ours on the CUB-200-2011, Stanford Cars and FGVC Aircraft datasets. blue=best, green=second best. Collapse means the model fails to produce meaningful performance. Figure 4: Comparison of MoCo v2 (the blue plot) baseline with our method (the red plot) w.r.t. \(K\) on the CUB-200-2011. \begin{table} \begin{tabular}{c c c c c} \hline Dataset & Architecture & rank-1 & rank-5 & mAP \\ \hline \hline \multirow{2}{*}{Cars} & Ours & 34.56 & 60.75 & 8.87 \\ & MLP & 25.57 & 50.91 & 5.92 \\ \hline \end{tabular} \end{table} Table 5: Retrieval performance (%) of our methods and using MLP as the alternative GFB branch. The evaluation is on the Stanford Cars dataset.
2305.05781
Rare Isotope-Containing Diamond Color Centers for Fundamental Symmetry Tests
Detecting a non-zero electric dipole moment (EDM) in a particle would unambiguously signify physics beyond the Standard Model. A potential pathway towards this is the detection of a nuclear Schiff moment, the magnitude of which is enhanced by the presence of nuclear octupole deformation. However, due to the low production rate of isotopes featuring such "pear-shaped" nuclei, capturing, detecting, and manipulating them efficiently is a crucial prerequisite. Incorporating them into synthetic diamond optical crystals can produce defects with defined, molecule-like structures and isolated electronic states within the diamond band gap, increasing capture efficiency, enabling repeated probing of even a single atom, and producing narrow optical linewidths. In this study, we used density functional theory (DFT) to investigate the formation, structure, and electronic properties of crystal defects in diamond containing $^{229}$Pa, a rare isotope that is predicted to have an exceptionally strong nuclear octupole deformation. In addition, we identified and studied stable lanthanide-containing defects with similar electronic structures as non-radioactive proxies to aid in experimental methods. Our findings hold promise for the existence of such defects and can contribute to the development of a quantum information processing-inspired toolbox of techniques for studying rare isotopes.
Ian M. Morris, Kai Klink, Jaideep T. Singh, Jose L. Mendoza-Cortes, Shannon S. Nicley, Jonas N. Becker
2023-05-09T22:01:38Z
http://arxiv.org/abs/2305.05781v1
# Rare Isotope-Containing Diamond Color Centers for Fundamental Symmetry Tests ###### Abstract Detecting a non-zero electric dipole moment (EDM) in a particle would unambiguously signify physics beyond the Standard Model. A potential pathway towards this is the detection of a nuclear Schiff moment, the magnitude of which is enhanced by the presence of nuclear octupole deformation. However, due to the low production rate of isotopes featuring such "pear-shaped" nuclei, capturing, detecting, and manipulating them efficiently is a crucial prerequisite. Incorporating them into synthetic diamond optical crystals can produce defects with defined, molecule-like structures and isolated electronic states within the diamond band gap, increasing capture efficiency, enabling repeated probing of even a single atom, and producing narrow optical linewidths. In this study, we used density functional theory (DFT) to investigate the formation, structure, and electronic properties of crystal defects in diamond containing \({}^{229}Pa\), a rare isotope that is predicted to have an exceptionally strong nuclear octupole deformation. In addition, we identified and studied stable lanthanide-containing defects with similar electronic structures as non-radioactive proxies to aid in experimental methods. Our findings hold promise for the existence of such defects and can contribute to the development of a quantum information processing-inspired toolbox of techniques for studying rare isotopes. ## I Introduction The search for charge-conjugation-parity (_CP_) symmetry-violating interactions is a critical aspect of modern physics that aims to answer some of the universe's most fundamental questions. In particular, _CP_ violations can help explain the observed baryon asymmetry in the universe [1]. However, current observations of _CP_ violations are not significant enough to account for such phenomena. Recently, the measurement of a non-zero permanent electric dipole moment (EDM) within atomic nuclei induced by the nuclear Schiff moment has garnered considerable attention as a potential solution. The existence of a permanent EDM requires breaking of both time-reversal symmetry (_T_) and parity symmetry (_P_), which, by the _CPT_ theorem, implies that it breaks _CP_ symmetry as well [2]. Therefore, the study of permanent EDMs in atomic nuclei provides an exciting avenue for detecting _CP_-violating phenomena and addressing some of the most pressing questions in physics today. Measuring a permanent EDM poses a significant challenge due to its extremely weak signature. However, certain pear-shaped (octupole-deformed) nuclei, such as \({}^{223}\)Fr, \({}^{225}\)Ra, and \({}^{229}\)Pa, have shown to be particularly sensitive to EDM measurements, making them ideal candidates for further study [3; 4]. In particular, \({}^{229}\)Pa is predicted to provide over six orders of magnitude more sensitivity than the current experimental limit on EDM measurements taken with \({}^{199}\)Hg [5; 6]. Despite its potential, the limited global production of \({}^{229}\)Pa has hindered its experimental study. However, the newly-opened Facility for Rare Isotope Beams (FRIB) at Michigan State University is expected to produce a significant amount of \({}^{229}\)Pa within the decade [7]. This will provide a host of opportunities to study \({}^{229}\)Pa. One such opportunity that has been proposed is to implant \({}^{229}\)Pa nuclei within an optical crystal, thereby enhancing the signal for EDM measurements. 
This approach provides numerous advantages, such as high number densities, efficient optical probing, and large internal electric fields for oriented non-inverton symmetric crystal defects in optical crystals. However, one other factor that has limited the study of \({}^{229}\)Pa is its extreme toxicity and radioactivity. As such, stable nuclear surrogates are necessary for the development of experimental and testing schemes prior to use of \({}^{229}\)Pa. \({}^{141}\)Pr is an excellent candidate for this as it is expected to be isoelectronic with \({}^{229}\)Pa and has the same nuclear spin \(I=\frac{5}{2}\). Moreover, \({}^{141}\)Pr is not as toxic and not radioactive. Thus, \({}^{141}\)Pr can serve as a stable nuclear surrogate to \({}^{229}\)Pa, allowing for method development and testing. This approach can pave the way for future experiments in detecting EDMs in atoms and addressing some of the most profound questions in modern physics. Diamond is a highly suitable host material for EDM-sensitive isotopes such as \({}^{229}\)Pa. It possesses exceptional radiation hardness, making it more resistant to damage from implantation and the decay of incorporated radioactive species than most other host materials [8; 9]. Additionally, its wide band gap (5.5 eV) increases the probability of defect formation within the gap, as demonstrated by the existence of thousands of optically active crystal defects in diamond [10]. Furthermore, synthetic diamond can be made nuclear spin-free using \({}^{12}\)C enriched precursors in chemical vapor deposition (CVD) growth, eliminating a significant source of spin decoherence and effectively creating an almost perfect spin vacuum [11]. The extensively studied nitrogen vacancy center in diamond can also be utilized as a highly sensitive quantum magnetometer and can be used for in-situ co-magnetometry [12]. Overall, these properties make diamond a highly attractive material for hosting isotopes such as \({}^{229}\)Pa [13]. This paper presents a study on the geometric structure, thermodynamic stability, and electronic properties of \({}^{229}\)Pa and \({}^{141}\)Pr defects in diamond using density functional theory (DFT). Specifically, we investigate a variety of different defect configurations, including substitutional defects as well as defects with one to four vacancies introduced nearby. The paper is organized as follows: Section II outlines the computational details and methods involved in the DFT calculations; Section III presents results of these calculations, including geometric structure, formation energies, and charge transition levels along with electronic structure and EDM sensitivity. Finally, in Section IV, we draw conclusions based on our findings. Figure 1: Relaxed defect structures of (a) \({}^{229}\)Pa\({}_{\text{sub}}\), (b) \({}^{229}\)PaV, (c) \({}^{229}\)PaV\({}_{2}\), (d) \({}^{229}\)PaV\({}_{3}\), and (e) \({}^{229}\)PaV\({}_{4}\). For clarity, only the defect ion and the nearest neighbor carbon atoms are displayed. The larger green atom is the \({}^{229}\)Pa ion. The black atoms represent carbon, while the white atoms represent vacancies. For \({}^{229}\)PaV\({}_{2}\), \({}^{229}\)PaV\({}_{3}\), and \({}^{229}\)PaV\({}_{4}\) the ”extra” white vacancy ball that can be seen through the \({}^{229}\)Pa is the initial position of the \({}^{229}\)Pa at a lattice site. 
## II Methods

Spin-polarized DFT was employed using the projector augmented wave method [14; 15] as implemented in VASP 6.2.1 [16] to characterize isotopic \({}^{229}\)Pa and \({}^{141}\)Pr defects in diamond. The exchange and correlation behavior of the valence electrons (\(2s^{2}2p^{2}\), \(6d^{2}7s^{2}5f^{1}\), and \(4f^{3}6s^{2}\) electrons for C, Pa, and Pr, respectively) during structure optimization was described using the Perdew-Burke-Ernzerhof (PBE) generalized gradient approximation (GGA) [17]. To account for the strongly correlated behavior of the \(f\)-electrons in actinides and lanthanides, a Hubbard-U-type correction (DFT+U) was included for the Pa and Pr \(f\)-electrons in all PBE-level calculations. The implementation suggested by Liechtenstein et al. [18] was used with an on-site Coulomb parameter U = 7 eV and an on-site exchange parameter J = 1 eV for Pa and Pr, as has been used by others to study lanthanide defects in diamond [19; 20; 21; 22]. Additionally, the Heyd-Scuseria-Ernzerhof (HSE06) hybrid functional [23; 24] was used for the calculation of highly accurate electronic structures. This range-separated hybrid functional can accurately reproduce experimental band gaps and charge transition levels in diamond and other group-IV semiconductors to within 0.1 eV [25; 26] and has successfully described a variety of defects in diamond [26; 27; 28; 29; 30; 31; 32]. A variety of defect configurations were studied, including defect ions placed on a substitutional lattice site as well as on substitutional sites with one to four vacant sites adjacent to them. Calculations were performed on a 3x3x3 diamond supercell containing 216 atoms, and the Brillouin zone was sampled at the \(\Gamma\) point. The excited states were calculated using the constrained-occupation DFT method (\(\Delta\)-SCF) [26], with zero-phonon lines (ZPL) obtained by taking the energy difference between ground and excited states. The initial geometries of the models are depicted in Figure 1. The supercell defects were relaxed at constant volume using a conjugate gradient method to ensure that the defect formation energies are comparable. The plane-wave energy cutoff was set to 370 eV. Ionic optimization was performed until forces were less than 10\({}^{-2}\) eV/Å, and the break condition for the electronic self-consistency loop was set to 10\({}^{-6}\) eV. To account for the isotopic nature of \({}^{229}\)Pa and \({}^{141}\)Pr, the mass value in the POTCAR file was changed accordingly. The PBE functional was chosen for geometry relaxation due to its lower computational cost and its ability to predict the structures of a variety of defects in diamond with sufficient accuracy [33; 34; 35; 36]. Furthermore, using a smaller 2x2x2 supercell of 64 atoms, we relaxed the geometry using both the PBE+U and HSE06 functionals and found that the difference in atomic positions between the two relaxed structures was less than 10\({}^{-5}\) Å on average, demonstrating that the accuracy of PBE+U is comparable to that of HSE06 for geometry relaxation. To assess which defect configuration was most stable, formation and cohesive energies were calculated for each defect studied.
The formation energy for a defect \(X\) with charge state \(q\) can be calculated according to: \[E^{\mathrm{f}}[X^{q}]=E_{\mathrm{tot}}[X^{q}]-E_{\mathrm{tot}}[\mathrm{bulk} ]-\sum_{i=1}^{k}n_{i}\mu_{i}+q(\epsilon_{\mathrm{VBM}}+E_{\mathrm{F}})+E_{ \mathrm{corr}}, \tag{1}\] where \(E_{\mathrm{tot}}[X^{q}]\) and \(E_{\mathrm{tot}}[\mathrm{bulk}]\) are the total energies of the bulk material with and without the defect, respectively; \(n_{i}\) is the number of atoms of species \(i\) that have been added to or removed from the supercell (for example, \({}^{229}\)Pa\({}_{\mathrm{sub}}\) removed 1 C and added 1 \({}^{229}\)Pa); \(\mu_{i}\) is the chemical potential corresponding to atomic species \(i\); \(\epsilon_{\mathrm{VBM}}\) is the valence band maximum of the bulk material; \(E_{\mathrm{F}}\) is the Fermi level, which can have values within the material's band gap; and \(E_{\mathrm{corr}}\) is the finite-size electrostatic correction [37]. \(E_{\mathrm{corr}}\) was obtained using the scheme proposed by Freysoldt, Neugebauer and Van de Walle (FNV) [38] as implemented in the Spinney code package [39]. The chemical potential for C was obtained by dividing the total energy of the pristine diamond supercell by the number of atoms. The chemical potential of Pa was calculated as the total energy of metallic Pa (with a bcc tetragonal structure, I4/mmm, no. 139) divided by the number of Pa atoms. Similarly, the chemical potential of Pr was calculated using the total energy of metallic Pr (with a hexagonal structure, P63/mmc, no. 194) divided by the number of Pr atoms. For the chemical potential, the k-point sampling was increased to 9x9x9 due to the small crystal structure. Cohesive energies were calculated according to: \[E_{\mathrm{c}}=\frac{1}{n}\left(\sum_{i=1}^{k}n_{i}E_{\mathrm{atom},i}-E_{ \mathrm{tot}}\right), \tag{2}\] where \(n\) is the total number of atoms, \(E_{\mathrm{tot}}\) is the total energy of the defect system, \(n_{i}\) is the number of atoms of species \(i\), and \(E_{\mathrm{atom},i}\) is the energy per atom for species \(i\)[40]. In order to evaluate the cohesive energies of the structures, it was necessary to calculate the total energies of the corresponding isolated atoms in the structures (using the same exchange functionals and calculation quality settings). For the calculation of the C atom, a 10x10x10 A cube with a single C atom in the center was used, giving enough space around the atom for it to be considered as an isolated atom. For the Pa and Pr atoms, a slightly larger cube of 15x15x15 A was used to ensure isolation of the atoms. To determine the most stable charge state for a given defect, its charge transition levels (CTLs) are calculated. The CTL is the Fermi level at which a transition between two charge states becomes energetically favorable [41]. It is calculated using the formula: \[\epsilon(q_{1}/q_{2})=\frac{E_{q_{1}}^{\text{tot}}+E_{q_{1}}^{\text{corr}}-E_{q_ {2}}^{\text{tot}}-E_{q_{2}}^{\text{corr}}}{q_{2}-q_{1}} \tag{3}\] where \(E_{q}^{\text{tot}}\) is the total energy of the supercell calculation in charge state \(q\) and \(E_{q}^{\text{corr}}\) is the corresponding charge correction that accounts for the periodic interaction of charges between neighboring supercells [38; 42; 43]. The formation and cohesive energies were evaluated for the neutral charge states of the different defect configurations to determine which configuration is most stable. 
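For reference, the bookkeeping in Eqs. (1)-(3) reduces to simple arithmetic once the supercell total energies, chemical potentials, and finite-size corrections are available. The short sketch below illustrates this; it is an illustrative helper under that assumption, not part of the authors' workflow, and the function names and numbers in the usage example are placeholders.

```python
# Illustrative sketch of Eqs. (1)-(3); not the authors' analysis scripts.
def formation_energy(E_defect, E_bulk, delta_n_mu, q, eps_vbm, E_fermi, E_corr):
    """Eq. (1): delta_n_mu = sum_i n_i * mu_i for atoms added (+) or removed (-)."""
    return E_defect - E_bulk - delta_n_mu + q * (eps_vbm + E_fermi) + E_corr

def cohesive_energy(E_tot, atom_energies):
    """Eq. (2): atom_energies lists the isolated-atom energies, one entry per atom."""
    return (sum(atom_energies) - E_tot) / len(atom_energies)

def charge_transition_level(E_q1, corr_q1, q1, E_q2, corr_q2, q2):
    """Eq. (3): Fermi level (relative to the VBM) where q1 and q2 are degenerate."""
    return (E_q1 + corr_q1 - E_q2 - corr_q2) / (q2 - q1)

# Usage with placeholder energies in eV (illustration only):
ctl = charge_transition_level(E_q1=-1735.20, corr_q1=0.05, q1=0,
                              E_q2=-1733.95, corr_q2=0.21, q2=-1)
print(f"(0/-1) transition level: {ctl:.2f} eV above the VBM")
```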
From there, we limited our analysis to the most stable structure and plotted the formation energies for different charge states as a function of Fermi level to determine which charge state is most stable (determined by which charge state has the lowest formation energy for any given Fermi level). The crossing points of these formation energy lines represent charge transition levels, where one charge state becomes more favorable than another. Zero-field splitting (ZFS), magnetic hyperfine, and electric field gradient tensors were all calculated within VASP. For the ZFS tensor in particular, we use the method by [44] with the PBE functional, which has been demonstrated to be sufficiently accurate [34]. For all of these calculations, a higher cut-off energy of 700 eV was used. VESTA [45] was used to visualize the defect structures in addition to the wave functions, whose plane wave coefficients we extracted using the Python class PyVaspwfc [46]. Similarly, transition dipole moments were calculated directly using said plane wave coefficients. ## III Results and Discussion ### Structure and Stability First, the structure of each of the defect configurations was studied to determine which is thermodynamically most likely to form during ion implantation and subsequent annealing. All defect ions were initially placed at a substitutional lattice site with nearest neighbor C atoms removed to create vacancies. The final relaxed structures for each defect configuration are shown in Figure 1. Interestingly, we find that \({}^{229}\)Pa and \({}^{141}\)Pr defects form qualitatively identical structures for all defect models considered. As such, the following descriptions and images for each defect configuration apply to both. For the substitutional defect with no vacancies, the defect ion did not move, but the nearest neighbor carbon atoms were displaced outwards. For the single vacancy, the ion moved into the split vacancy configuration, while for the higher-order vacancy complexes, it moved into a position that filled the void created by the removed carbons. It is worth noting that while the split vacancy is inversion symmetric, the higher-order vacancy complexes are not, resulting in a permanent electric dipole moment and thus static internal electric field. While this is usually avoided for quantum information processing applications as it makes defects susceptible to environmental field fluctuations, it is in fact desirable for EDM experiments, as it increases the sensitivity of EDM measurements [6]. The calculated formation and cohesive energies shed light on which configuration is most energetically favorable to form. Our analysis reveals that, for both \({}^{229}\)Pa and \({}^{141}\)Pr, the substitutional model is less favorable than those containing vacancies, as it generally has a higher formation energy despite having a marginally higher cohesive energy in certain cases. This is in agreement with other first principles studies of defects in diamond that feature large ions, which can introduce significant strain [20; 22; 36; 47]. The introduction of vacancies helps to offset this by creating additional room for the dopant atom. Among the defects with vacancies, the double and triple vacancies are the most energetically favorable in terms of formation energy and have comparable or larger cohesive energy values compared to the single and quadruple vacancy defects. 
Between the double and triple vacancy, however, we find that the double vacancy is the most stable, as there are diminishing returns from adding yet another vacancy [19]. Moreover, higher-order vacancy complexes are kinetically less likely to form due to the low mobility of substitutional defects in diamond at typical processing temperatures [48]. Additionally, a similar ab-initio study was done on Ce defects in diamond, and CeV\({}_{2}\) was found to be most stable [20; 49]. Therefore, we conclude that the most stable structure for both \({}^{229}\)Pa and \({}^{141}\)Pr defects in diamond is a defect ion accompanied by two vacancies. To further assess the probability of these defects forming, we compare our calculated results with values reported for other defects in diamond. A combined experimental and theoretical study of Er\({}^{3+}\) ions implanted in diamond calculated similar cohesive and formation energies. Importantly, they also experimentally observed the characteristic telecom band emission from the erbium ions after implantation and annealing [36]. Formation energies calculated for nickel complexes in diamond are also similar. Lastly, we compare the formation energies of the various charge states with those of the common NV center (see Figure 3), and they are of the same order of magnitude. These examples demonstrate the feasibility of lanthanides and actinides forming luminescent centers in diamond. Notably, non-inversion-symmetric defect configurations are preferred. As noted above, while this is typically not ideal for quantum information processing applications [50], the opposite is true for EDM measurements [51; 52]: the broken inversion symmetry produces a permanent electric dipole moment, which gives rise to linear Stark shifts. These appear as symmetric doublet splittings in the optical transitions when two states with oppositely oriented dipole moments are degenerate in a non-inversion-symmetric site, making it possible to selectively address ions with a specific direction of electric polarization. Because the inversion-breaking configurations are energetically preferred, the majority of defects formed in any given sample will not be inversion symmetric and will thus retain the advantage of enhanced EDM measurement sensitivity. Based on these findings, we focused subsequent calculations on \({}^{229}\)Pa and \({}^{141}\)Pr defects with two vacancies.

Figure 2: Top panels are formation energies for different defect configurations using different functionals. Bottom panels are cohesive energies for different defect configurations and different functionals. For the bottom panels, the solid lines denote the cohesive energy for pristine diamond without any defects using both PBE and HSE06 functionals.

### Charge State Formation Energies

Since \({}^{229}\)PaV\({}_{2}\) and \({}^{141}\)PrV\({}_{2}\) appear to be the most thermodynamically favorable defects, and also feature promising geometries for EDM measurements, we will focus on these configurations from here on. We start by determining their potential charge states and charge transition levels. Our findings indicate that both defects can potentially take on charge states ranging from -3 to +1. The formation energy as a function of Fermi level is displayed in Figure 3.
As mentioned above, the calculated formation energy values are comparable to those of other defects in diamond that contain large defect ions [19; 36], indicating that, at least thermodynamically, defect formation is possible. Additionally, for the purposes of NV co-magnetometry, the -1, -2, and -3 charge states for both defects all land squarely within the Fermi-level range where negatively charged NV centers are likely to form. Furthermore, since NV co-magnetometry requires a relatively high donor concentration, the negative charge states are preferred. In terms of which charge state is most likely to form in diamond without the need for careful doping, the -1 charge state is nearest the Fermi level expected for natural diamond. Charge transition levels (CTLs) were calculated using the information shown in the formation energy diagram (Figure 3). Calculating the CTLs for different doping levels allowed us to determine which charge state is most stable at each Fermi level. This information is important for understanding the behavior of the defects in diamond under different doping conditions. For example, it provides information on how different dopant species affect the defects' charge stability. Here, the negative charge states act as electron acceptors and require compensatory electron donors in the system, such as substitutional N, which has a deep donor level located 1.7 eV below the conduction band minimum [53; 54].

### Electronic Structure

In this section, we present a detailed analysis of the electronic structure of the defects using group theory and DFT calculations. Both defects of interest, \({}^{141}\)PrV\({}_{2}\) and \({}^{229}\)PaV\({}_{2}\), belong to the C\({}_{2v}\) point group. Using this, we derive a defect molecular orbital diagram to make predictions for the optical transitions and fine structure. First, we calculate the spin-polarized level structure of the single-electron orbitals using DFT and derive which irreducible representation of C\({}_{2v}\) they belong to by applying the respective symmetry operators to the calculated wave functions. Figure 4 shows the visualizations of the defect wave functions obtained from the DFT calculations, along with the single-electron orbital levels. From these single-electron orbitals, we construct the many-electron (molecular) orbital configurations shown in Figure 4. This diagram displays the single-electron Kohn-Sham energy levels and their corresponding irreducible representations, allowing possible optical transitions to be identified. From the results, the +1 and -1 charge states feature S=1 spin triplets; the neutral charge state has an S=3/2 quartet; and the -2 and -3 charge states feature an S=1/2 doublet and an S=0 singlet, respectively. With this, we can identify charge states of interest based on their spin. To detect a nuclear Schiff moment, there needs to be hyperfine coupling to an electron spin, ruling out the -3 charge state as a candidate since there is no unpaired electron spin for the nuclear spin to couple to. That leaves the positive and neutral charge states as well as the remaining two negative charge states. We focus on the -1 and -2 charge states as they are more likely to form within natural diamond while also falling within the same Fermi-level regions as the negatively charged NV center. One additional observation from the Kohn-Sham orbitals is that the defect ions introduce occupied bands within the band gap of diamond.
This differs from other color centers in diamond such as the group-IV vacancies or nickel vacancies where the Kohn-Sham orbitals are situated below the valence band edge [56; 57]. For \({}^{229}\)PaV\({}_{2}\) and \({}^{141}\)PrV\({}_{2}\), however, both the ground and excited state levels for the minority spin channel are located within the band gap. This separation of the defect states from the bulk bands reduces the probability of single-photon transitions from the defect to bulk states during defect-defect transitions, potentially enhancing the excitation efficiency for the studied centers. In general, our goal is to identify ground and excited state spin and orbital configurations that lead to qualitatively identical interactions, splitting patterns, and optical transitions for both \({}^{141}\)PrV\({}_{2}\) and \({}^{229}\)PaV\({}_{2}\), so that the stable \({}^{141}\)PrV\({}_{2}\) defect can be used as a test bed for method development. To identify these transitions, we first need to know which optical transitions are electric dipole allowed. In C\({}_{2v}\), the dipole moment vector transforms as \((\mathrm{B}_{1},\mathrm{B}_{2},\mathrm{A}_{1})\). With that, we can calculate the matrix elements for various transitions using the irreducible representations to determine which transitions are allowed. Because the electric dipole operator does not act on the spin part of the wavefunction, we only consider non-spin-flipping excitations within the spin-up and spin-down channels, respectively, to identify possible optically allowed transitions. Based on the ground and excited state wavefunctions, we calculate transition dipole moments to determine the strength of the transitions. Using this, transitions which match for both \({}^{141}\)PrV\({}_{2}\) and \({}^{229}\)PaV\({}_{2}\) and which have a transition dipole moment (TDM) \(>\) 1 Debye were selected for further study. The results are displayed in Table 1 along with the calculated ZPL for each transition. We start our analysis with the -1 charge states for both defects, which have \({}^{3}\)B\({}_{2}\rightarrow{}^{3}\)A\({}_{2}\) transitions. These states transform as orbital singlets, due to the fact that the defects have C\({}_{2\mathrm{v}}\) symmetry, so there are no spin-orbit or Jahn-Teller contributions to the ground or excited states. The ground state \({}^{3}\)B\({}_{2}\) features an \(S=1\) electron spin triplet and a \({}^{229}\)Pa nuclear spin \(I=\frac{5}{2}\), leading to the fine and hyperfine structure shown in Figure 6. Five interactions were considered when analyzing the possible spin state splittings: the electron-electron magnetic dipolar interaction \(\mathbf{D}\), the hyperfine interaction \(\mathbf{A}\), the nuclear quadrupole interaction \(\mathbf{Q}\), and the electronic and nuclear Zeeman interactions: \[\hat{H}=\mu_{B}g_{e}\vec{B}\cdot\vec{S}+\mu_{N}g_{n}\vec{B}\cdot\vec{I}+\vec{S}\cdot\mathbf{D}\cdot\vec{S}+\vec{S}\cdot\mathbf{A}\cdot\vec{I}+\vec{I}\cdot\mathbf{Q}\cdot\vec{I}, \tag{4}\] where \(\mu_{B}\) and \(\mu_{N}\) are the Bohr and nuclear magnetons, \(\vec{B}\) is the magnetic field vector, \(g_{e}\) and \(g_{n}\) are the electron and nuclear g-factors, and \(\vec{S}\) and \(\vec{I}\) are the total electron and nuclear spin angular momenta. In the principal axis coordinates, the latter three terms in Eq.
(4) can be converted to: \[\hat{H}_{D}=D[S_{z}^{2}-\frac{1}{3}S(S+1)+\frac{\epsilon}{3}(S_{+}^{2}+S_{-}^{2})] \tag{5}\] \[\hat{H}_{Q}=\frac{eQ_{I}V_{zz}}{4I(2I-1)}[3I_{z}^{2}-I(I+1)+\eta(I_{x}^{2}-I_{y}^{2})] \tag{6}\] \[\hat{H}_{A}=A_{zz}S_{z}I_{z}+A_{xx}S_{x}I_{x}+A_{yy}S_{y}I_{y} \tag{7}\] where \(D=\frac{3}{2}D_{zz}\), \(Q_{I}\) is the nuclear quadrupole moment, \(e\) is the electric charge, and \(\epsilon=(D_{xx}-D_{yy})/D_{zz}\) and \(\eta=(V_{xx}-V_{yy})/V_{zz}\) are the asymmetry parameters.

Figure 3: Charged formation energy as a function of Fermi level for PaV\({}_{2}\) and PrV\({}_{2}\) defects in diamond. Additionally, formation energies calculated by [55] for the NV center in diamond are included for comparison.

It should be noted that VASP has been found to underestimate the zero-field tensor, \(\mathbf{D}\), even when using hybrid functionals, so the true values may be larger than those calculated here [58]. The quadrupole moment of \({}^{229}\)Pa has not been measured experimentally, so the theoretically calculated value from [59] was used. The calculated values are shown in Table 2 for the defect ions of interest as well as for the nearest neighbor carbons which surround the defects. The calculation can be simplified by selecting the \(C_{2}\) axis of the defects as the z-direction and accounting for the fact that \(\hat{H}_{D}\gg\hat{H}_{A},\hat{H}_{Q}\). This reduces the effective Hamiltonian to \[\hat{H}=\mu_{B}g_{e}B_{z}S_{z}+\mu_{N}g_{n}B_{z}I_{z}+DS_{z}^{2}+QI_{z}^{2}+A_{zz}S_{z}I_{z}. \tag{8}\] For the level structure, this means that there is an initial zero-field splitting originating from \(\mathbf{D}\) which splits the \(m_{s}=0\) and \(m_{s}=\pm 1\) states. Then, the electron Zeeman interaction lifts the degeneracy of the \(m_{s}=\pm 1\) states. The quadrupole interaction splits each of these branches further into \(m_{I}=\pm\frac{1}{2},\pm\frac{3}{2},\pm\frac{5}{2}\) levels. Finally, the hyperfine and nuclear Zeeman terms lift the degeneracy of these levels. The excited state has the same fundamental splitting pattern. The -2 charge state has one unpaired electron and therefore has spin \(S=\frac{1}{2}\) due to the half-occupied molecular orbital that can be spin up or down, resulting in a spin multiplicity of 2. The ground state has the electronic configuration \([a_{1}]^{2}[b_{2}]^{2}[b_{1}]^{2}[a_{1}]^{1}[b_{2}]^{2}\), which transforms as the irreducible representation A\({}_{2}\) based on the direct product of the irreducible representations that constitute the state. Similar to the -1 transitions, the excited and ground states are orbital singlets, resulting in the absence of Jahn-Teller instability and spin-orbit coupling. Similarly, there is no spin-spin interaction because there is only one unpaired electron. Thus, we start our analysis with the application of a magnetic field, which lifts the degeneracy of the \(m_{s}=\pm\frac{1}{2}\) states for both the ground and excited states, as they are both spin doublets. The electronic Zeeman interaction term was given above. Similar to the -1 charge state, there are 6 splittings for each branch from the nuclear Zeeman and magnetic hyperfine interaction terms, which split the \(m_{I}=\pm\frac{1}{2},\pm\frac{3}{2},\pm\frac{5}{2}\) levels. See Figure 5 for the full level structure along with calculated values.
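To make the level structure implied by Eq. (8) concrete, the short script below builds the spin Hamiltonian for an \(S=1\) electron coupled to an \(I=5/2\) nucleus and diagonalizes it. It is an illustrative sketch, not the analysis code used here: the zero-field splitting and axial hyperfine value are taken at the scale of Table 2, while the quadrupole coupling, nuclear g-factor, and applied field are placeholders. Note that Eq. (8) itself is diagonal in the \(|m_{S},m_{I}\rangle\) basis; the transverse hyperfine terms of Eq. (7) are included so that the diagonalization is non-trivial.

```python
import numpy as np

def spin_ops(s):
    """Return (Sx, Sy, Sz) matrices for spin quantum number s, basis ordered m = s..-s."""
    m = np.arange(s, -s - 1, -1)
    sz = np.diag(m)
    sp = np.zeros((len(m), len(m)), dtype=complex)   # raising operator S+
    for i in range(1, len(m)):
        sp[i - 1, i] = np.sqrt(s * (s + 1) - m[i] * (m[i] + 1))
    return (sp + sp.conj().T) / 2, (sp - sp.conj().T) / 2j, sz

# Couplings in MHz; D and Azz are at the scale of Table 2, Q/gn/Bz are placeholders.
D   = 1.5 * 1611.0        # D = (3/2) D_zz
Axx, Ayy, Azz = -76.7, -46.1, -88.3
Q   = 60.0                # illustrative nuclear quadrupole coupling
ge_muB, gn_muN = 2.80e4, 10.0   # electron / nuclear Zeeman factors (MHz per Tesla, approximate)
Bz  = 0.01                # applied field along the C2 (z) axis, in Tesla

Sx, Sy, Sz = spin_ops(1.0)
Ix, Iy, Iz = spin_ops(2.5)
Id_S, Id_I = np.eye(3), np.eye(6)
H = (ge_muB * Bz * np.kron(Sz, Id_I) + gn_muN * Bz * np.kron(Id_S, Iz)
     + D * np.kron(Sz @ Sz, Id_I) + Q * np.kron(Id_S, Iz @ Iz)
     + Azz * np.kron(Sz, Iz) + Axx * np.kron(Sx, Ix) + Ayy * np.kron(Sy, Iy))

levels = np.linalg.eigvalsh(H)           # 18 levels of the S=1, I=5/2 manifold
print(np.round(levels[:6], 2), "MHz")    # lowest few levels
```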
### Estimated EDM Sensitivity

To provide a first estimate of the sensitivity of these defects for EDM measurements, we focus on the effective internal field generated by their non-inversion-symmetric structure. To do this, we first estimate the differential dipole moment, \(\Delta\mu\), and polarizability, \(\Delta\alpha\), between the ground and excited states. We take an approach proposed by [60; 61] where the Stark shift of the ZPL can be modeled by: \[\Delta E_{\text{ZPL}}=-\frac{1}{\epsilon_{s}}\Delta\mu E-\frac{1}{2\epsilon_{s}^{2}}\Delta\alpha E^{2}, \tag{9}\] where \(E\) is an applied electric field perturbing the defect, and \(\epsilon_{s}\) is the static dielectric constant of the material, which we take as 5.7 for diamond [62]. Using this, we calculate the ZPL at varying applied electric field strengths and fit the equation above to the resulting ZPL energy changes. We performed these calculations for both the -1 and -2 charge states of the \({}^{229}\)PaV\({}_{2}\) defects. The VASP applied electric field is in units of eV/Å, so the resulting \(\Delta\mu\) is initially in units of eÅ and \(\Delta\alpha\) is in units of m\({}^{2}\)e/V. We also provide these values in the other typically quoted units of Debye for the differential dipole moment and a\({}_{0}^{3}\) for the differential polarizability (see Table 3).

\begin{table} \begin{tabular}{c|c c c c} \hline \hline & Spin Channel & Transition & ZPL & TDM \\ \hline \hline & & & (nm) & (Debye) \\ \({}^{229}\)PaV\({}_{2}^{2-}\) & up & \({}^{2}\)A\({}_{1}\rightarrow{}^{2}\)B\({}_{2}\) & 533 & 1.37 \\ \({}^{141}\)PrV\({}_{2}^{2-}\) & up & \({}^{2}\)A\({}_{1}\rightarrow{}^{2}\)B\({}_{2}\) & 764 & 1.61 \\ & down & \({}^{2}\)A\({}_{1}\rightarrow{}^{2}\)B\({}_{2}\) & 1765 & 1.16 \\ \({}^{229}\)PaV\({}_{2}^{1-}\) & down & \({}^{3}\)B\({}_{2}\rightarrow{}^{3}\)A\({}_{2}\) & 1187 & 6.64 \\ \({}^{141}\)PrV\({}_{2}^{1-}\) & down & \({}^{3}\)B\({}_{2}\rightarrow{}^{3}\)A\({}_{2}\) & 1259 & 7.72 \\ \end{tabular} \end{table} Table 1: Matching optical transitions for the \(-2\) and \(-1\) charge states for \({}^{229}\)Pa\(V_{2}\) and \({}^{141}\)Pr\(V_{2}\) defects. Spin channel refers to whether a spin up or down electron was promoted to a higher band (i.e., excited state). Transition shows the symmetry of the ground and excited state. TDM is the transition dipole moment, which corresponds to the strength of the transition. \(|\Delta D|\) is the magnitude of the difference in induced dipole moment from ground to excited state.

With these values in hand, we can make an estimate for the internal effective electric field within the crystal.
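The fitting step just described can be illustrated in a few lines of code. This is a minimal sketch under stated assumptions: the field strengths and ZPL shifts below are synthetic placeholders (not values from this work), and a simple quadratic least-squares fit stands in for whatever fitting procedure was actually used. With \(\Delta\mu\) extracted in this way, one can then estimate the effective internal field.

```python
import numpy as np

eps_s = 5.7                                   # static dielectric constant of diamond
E = np.linspace(-0.02, 0.02, 9)               # applied field strengths (V/Angstrom)

# Synthetic ZPL shifts generated from assumed values (placeholders, illustration only).
dmu_true, dalpha_true = 0.5, 30.0             # differential dipole and polarizability
dZPL = -(dmu_true / eps_s) * E - 0.5 * (dalpha_true / eps_s**2) * E**2

# Fit Eq. (9): a quadratic in E whose linear/quadratic coefficients give dmu and dalpha.
c2, c1, c0 = np.polyfit(E, dZPL, deg=2)
dmu_fit = -c1 * eps_s
dalpha_fit = -2.0 * c2 * eps_s**2
print(dmu_fit, dalpha_fit)                    # recovers the assumed 0.5 and 30.0
```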
To do this, we use the following equation: \[E_{\rm eff}=\left(\frac{1}{4\pi\epsilon_{0}}\right)\left(\frac{\Delta\mu}{Z_{\rm scale}^{3}}\right), \tag{10}\] where \(\epsilon_{0}=8.85\times 10^{-12}\) F/m is the permittivity of free space and \(Z_{\rm scale}=1\) Å is a length scale estimate for the volume. Importantly, the differential dipole moment entering this equation is not the EDM that is measured for fundamental symmetry breaking; it is the overall induced dipole arising from the effective electric field within the non-inversion-symmetric defect. With this, we can obtain a rough estimate for the effective electric field experienced by an electron within the defect. This value is then further reduced by a conservative factor of 100 to account for shielding by the crystal, as described by the permittivity tensor. Beyond these values for the effective field and differential polarizability, the defects have several other advantages. One such advantage is that the orbital angular momentum is zero, which greatly reduces coupling to the lattice and enables narrow optical linewidths. Furthermore, the -1 charge state features a spin-1 triplet, which should extend its coherence time because there will be limited coupling to the spin-1/2 bath within diamond, similar to the NV center.

\begin{table} \begin{tabular}{c|c c c c c c c c} \hline \hline & Symmetry & \(A_{xx}\) & \(A_{yy}\) & \(A_{zz}\) & \(V_{zz}\) & \(\eta\) & \(D_{zz}\) & \(\epsilon\) \\ \({}^{229}\)PaV\({}_{2}^{2-}\) & \({}^{2}\)A\({}_{1}\) & -50.874 & -48.39 & -116.319 & 613.191 & 0.595 & – & – \\ \({}^{141}\)PrV\({}_{2}^{2-}\) & \({}^{2}\)A\({}_{1}\) & -195.352 & -160.946 & -201.971 & 287.623 & 0.744 & – & – \\ \({}^{229}\)PaV\({}_{2}^{1-}\) & \({}^{3}\)B\({}_{2}\) & -76.684 & -46.146 & -88.272 & -683.679 & 0.776 & 1.611 & 0.115 \\ \({}^{141}\)PrV\({}_{2}^{1-}\) & \({}^{3}\)B\({}_{2}\) & -106.872 & -104.617 & -113.123 & 370.684 & 0.77 & 12.931 & 0.275 \\ \end{tabular} \end{table} Table 2: Values for the zero-field splitting (D\({}_{zz}\)) in GHz (for S=1), hyperfine coupling parameters (A\({}_{xx}\), A\({}_{yy}\), A\({}_{zz}\)) in MHz, electric field gradients (V\({}_{zz}\)) in V/Å\({}^{2}\), and symmetry labels for the ground state of each defect in the -2 and -1 charge states, all of which were calculated using HSE06 aside from the zero-field splitting tensor, which used the PBE functional.

Figure 4: Ground state electronic structure for the charge states of \({}^{229}\)PaV\({}_{2}\) and \({}^{141}\)PrV\({}_{2}\). The single-electron orbitals are labeled with their corresponding irreducible representations.

## IV Conclusion

We have identified the \({}^{229}\)PaV\({}_{2}\) defect in diamond as a promising candidate for tests of fundamental symmetry violations. It lacks inversion symmetry, which allows for heightened EDM sensitivity, and it can also inhabit a number of negatively charged states that are stable over similar Fermi-level ranges to the NV center, enabling co-magnetometry with NV centers. Multiple optical transitions which can be captured with laser spectroscopy techniques were identified. Furthermore, a large effective electric field was calculated.
Moreover, while production of \({}^{229}\)Pa will occur at the Facility for Rare Isotope Beams, we have also identified a stable lanthanide-containing defect, \({}^{141}\)PrV\({}_{2}\) in diamond, whose ground to excited state configurations and transitions are qualitatively identical to those of \({}^{229}\)PaV\({}_{2}\). This will facilitate experimental method development. While not considered here, the effect of applied electric fields or strain could also serve to enhance the dipole moment, and these Hamiltonian terms should be explored in the future.

## V Acknowledgements

IMM acknowledges support from an Alfred J. and Ruth Zeits Research Fellowship at MSU. KK acknowledges support from a Hantel Endowed Fellowship at MSU. JNB acknowledges support by the Cowen Family Endowment at MSU.
2308.14670
Symmetric Models for Visual Force Policy Learning
While it is generally acknowledged that force feedback is beneficial to robotic control, applications of policy learning to robotic manipulation typically only leverage visual feedback. Recently, symmetric neural models have been used to significantly improve the sample efficiency and performance of policy learning across a variety of robotic manipulation domains. This paper explores an application of symmetric policy learning to visual-force problems. We present Symmetric Visual Force Learning (SVFL), a novel method for robotic control which leverages visual and force feedback. We demonstrate that SVFL can significantly outperform state of the art baselines for visual force learning and report several interesting empirical findings related to the utility of learning force feedback control policies in both general manipulation tasks and scenarios with low visual acuity.
Colin Kohler, Anuj Shrivatsav Srikanth, Eshan Arora, Robert Platt
2023-08-28T15:55:57Z
http://arxiv.org/abs/2308.14670v1
# Symmetric Models for Visual Force Policy Learning ###### Abstract While it is generally acknowledged that force feedback is beneficial to robotic control, applications of policy learning to robotic manipulation typically only leverage visual feedback. Recently, symmetric neural models have been used to significantly improve the sample efficiency and performance of policy learning across a variety of robotic manipulation domains. This paper explores an application of symmetric policy learning to visual-force problems. We present Symmetric Visual Force Learning (SVFL), a novel method for robotic control which leverages visual and force feedback. We demonstrate that SVFL can significantly outperform state of the art baselines for visual force learning and report several interesting empirical findings related to the utility of learning force feedback control policies in both general manipulation tasks and scenarios with low visual acuity. Keywords:Force Feedback, Policy Learning, Manipulation ## 1 Introduction There are a variety of manipulation tasks where it is essential to use both vision and force feedback as part of the control policy. Peg insertion with tight tolerances, for example, is a task that is nearly impossible to solve without leveraging force feedback in some form. The classical approach is to use an admittance controller with a remote center of compliance to help the peg slide into the hole [1]. However, this is a very limited use of force feedback and it seems like it should be possible to use force information in a more comprehensive way. Nevertheless, after decades of research, it is still not clear how to accomplish this. One of the core obstacles is the difficulty in simulating the complex force interactions that happen at the robot end effector. These depend upon the complex mechanics of the robotic drive train - harmonic drives or planetary gearheads that cannot be modeled with any accuracy. While there have been major efforts in the past to circumvent this challenge with series elastic drives [2] or direct drives [3], each of these approaches comes with its own set of challenges. An obvious alternative approach is to leverage machine learning, i.e. model free reinforcement learning (RL), to obtain force feedback assisted policies. This is in contrast to vision-only RL where the policy only takes visual feedback [4, 5]. In _visual force_ RL, there is the possibility to adapt control policies directly to the mechanical characteristics of the system as they exist in the physical world, without the need to model those dynamics first. However, this assumes that we can run RL online directly in the physical world, something that is hard to do due to the poor sample efficiency of RL. RL is well known to require an enormous amount of data in order to learn even simple policies effectively. While visual force RL might, in principle, be able to learn effective policies, this sample inefficiency prevents us from learning policies directly on physical equipment. In order to improve the sample efficiency of RL in visual force problems, one common approach is to learn a helpful latent representation during a pretraining phase [6; 7; 8; 9]. This generally takes the form of self-supervised robot "play" in the domain of interest that must precede actual policy learning. Unfortunately, this is both cumbersome and brittle as the latent representation does not generalize well outside the situations experienced during the play phase. 
This is especially prevalent in the visual force domain, as the noisy nature of force sensors means there will be many force observations not experienced during pretraining, leading to poor latent predictions during policy learning. This paper develops an alternative approach to the problem of visual force learning based on exploiting domain symmetries using equivariant learning [10]. Recently, symmetric neural networks have been shown to dramatically improve the sample efficiency of RL in robotic manipulation domains [11; 12]. However, this work has focused exclusively on visual feedback and has not yet been applied to visual force learning. Here, there are several open questions. Can symmetric neural models improve sample efficiency in problems with force feedback? What might the model architecture look like to accomplish that? On what sorts of manipulation tasks might this approach be most helpful? This paper makes three main contributions. First, we propose a novel method for visual force policy learning called Symmetric Visual Force Learning (SVFL) which exploits the underlying symmetry of manipulation tasks to improve sample efficiency and performance. Second, we empirically evaluate the importance of force-feedback-assisted control across a variety of manipulation domains and find that force feedback is helpful for nearly all tasks, not just contact-rich domains like peg insertion where we would expect it to be important. Finally, we explore the role of force-assisted policies in domains with low visual acuity and characterize the degree to which force models can compensate for poor visual information.

## 2 Related Work

**Contact-Rich Manipulation.** Contact-rich manipulation tasks, e.g. peg insertion, screw fastening, edge following, etc., are well-studied areas in robotic manipulation due to their prevalence in manufacturing domains. These tasks are often solved by hand-engineered policies which utilize force feedback and very accurate state estimation [1], resulting in policies that perform well in structured environments but do not generalize to the large range of real-world variability. More recent work has proposed the use of reinforcement learning to address these variations [4; 13; 14] by training neural network policies which combine vision and proprioception. However, while these methods have been shown to perform well across a variety of domains and task variations, they require a high level of visual acuity, such that the task is solvable solely using image observations. In practice, this means these methods are unsuitable for a large portion of contact-rich manipulation tasks which require a high degree of precision and often include visual obstructions. **Multimodal Learning.** A common approach to multimodal learning is to first learn a latent dynamics model which compactly represents the high-dimensional observations and then use this model for model-based learning. This technique has recently been adapted for use in various robotics domains to combine various types of heterogeneous data. Li et al. [15] combine vision and haptic information using a GAN but do not utilize their latent representation for manipulation policies. Fazeli et al. [9] first learn a physics model using both vision and force data and use this model as input to a handcrafted policy to play a game of Jenga. Similarly, Zheng et al. [8] propose a model which learns a cross-modal visual-tactile model for a series of tasks, reusing past knowledge to perform lifelong learning.
However, similar to [15], they do not use this learned representation for either a hand-crafted policy or policy learning. Our work is most closely related to [7; 6], which we use as baselines in this work. Lee et al. [7] combine vision, force, and proprioceptive data using a variational latent model learned from self-supervision and use this model to learn a policy for peg insertion. Chen et al. [6] learn a multimodal latent heatmap using a cross-modal visual-tactile transformer (VTT) which distributes attention spatially. They show that by combining VTT with stochastic latent actor critic (SLAC), they can learn policies that can solve a number of manipulation tasks. In comparison to these works, we propose a sample-efficient deterministic multimodal representation that is learned end-to-end without the need for pretraining. This is achieved through the use of a fully equivariant model which exploits the symmetry inherent in the \(SO(2)\) domain to improve sample efficiency. Furthermore, we remove the need for the heavily structured, dense reward functions used in these previous works. **Equivariant Neural Networks.** Equivariant networks were first introduced as G-Convolutions [16] and Steerable CNNs [10; 17; 18]. Since their inception they have been applied across varied datatypes including images [17], spherical data [19], and point clouds [20]. More recent work has expanded the use of equivariant networks to reinforcement learning [12; 5; 21] and robotics [22; 23; 24; 25]. Compared to these prior works which focus on a single data modality, this work studies the effectiveness of combining various heterogeneous datatypes while preserving the symmetry inherent in each of these data modalities.

## 3 Background

**Equivariant Neural Networks.** A function is equivariant if it respects the symmetries of its input and output spaces. Specifically, a function \(f:X\to Y\) is _equivariant_ with respect to a symmetry group \(G\) if it commutes with all transformations \(g\in G\), \(f(\rho_{x}(g)x)=\rho_{y}(g)f(x)\), where \(\rho_{x}\) and \(\rho_{y}\) are the _representations_ of the group \(G\) that define how the group element \(g\in G\) acts on \(x\in X\) and \(y\in Y\), respectively. An equivariant function is a mathematical way of expressing that \(f\) is symmetric with respect to \(G\): if we evaluate \(f\) for various transformed versions of the same input, we should obtain transformed versions of the same output. Although this symmetry can be learned [26], in this work we require the symmetry group \(G\) and representation \(\rho_{x}\) to be known at design time. For example, in a convolutional model, this can be accomplished by tying the kernel weights together to satisfy \(K(gy)=\rho_{out}(g)K(y)\rho_{in}(g)^{-1}\), where \(\rho_{in}\) and \(\rho_{out}\) denote the representation of the group operator at the input and output of the layer [27]. End-to-end equivariant models can be constructed by combining equivariant convolutional layers and equivariant activation functions. In order to leverage symmetry in this way, it is common to transform the input so that standard group representations work correctly, e.g., to transform an image to a top-down view so that image rotations correspond to object rotations. **Extrinsic Equivariance.** Often real-world problems contain symmetry corruptions such as oblique viewing angles and occlusions. This is particularly prevalent in robotics domains where the state of the world is rarely fully observable.
In these domains we consider the symmetry to be _latent_ where we know that some symmetry is present in the problem but cannot easily express how that symmetry acts in the input space. We refer to this relationship as _extrinsic equivariance_[21], where the equivariant constraint in the equivariant network enforces equivariance to out-of-distribution data. While extrinsic equivariance is not ideal, it does not necessarily increase error and has been shown to provide significant performance improvements in reinforcement learning [21]. ## 4 Approach ### Problem Statement We model the visual force control problem as a discrete time finite horizon Markov decision process (MDP), \(\mathcal{M}=(S,A,T,R,\gamma)\), where states \(s\in S\) encode visual, force, and proprioceptive data and actions \(a\in A\) command small end effector displacements. This MDP transitions at a frequency of \(20\)_Hz_ and the commanded hand displacements are provided as positional inputs to a lower level Cartesian space admittance controller that runs at \(500\)_Hz_ with a fixed stiffness. The hand is constrained to point straight down at the table (along the \(-z\) direction). State is a tuple \(s=(I,f,e)\in S\). \(I\in\mathbb{R}^{4\times h\times w}\) is a 4-channel RGB-D image captured from a fixed camera pointed at the workspace. \(f=(f_{xy},f_{z},m_{xy},m_{z})\in\mathbb{R}^{T\times 6}\) is a \(T\times 6\) time series of the last \(T\) measurements from a six-axis wrist force-torque sensor transformed into the robot base frame. \(e=(e_{\lambda},e_{xy},e_{z},e_{\theta})\in\mathbb{R}^{5}\) is the configuration of the end effector where \(e_{\lambda}\in E_{\lambda}\) is the hand open width, \((e_{xy},e_{z})\) are the Cartesian coordinates of the hand, and \(e_{\theta}\) is the orientation of the hand about the \(-z\) axis. Actions are represented by \(a=(\lambda,\Delta p)\in A\subseteq\mathbb{R}^{5}\) where \(\lambda\in\mathbb{R}\) is the desired gripper open width and \(\Delta p=(\Delta p_{xy},\Delta p_{z},\Delta p_{\theta})\in\mathbb{R}^{4}\) is the desired delta pose of the gripper with respect to the current pose \(p\). As we discuss in the next section, we assume that the problem is \(O(2)\)-symmetric in the sense that the transition and reward functions are invariant with respect to planar rotations and reflections, for an appropriate definition of the action of \(O(2)\) on \(S\) and \(A\). ### \(\mathbf{O(2)}\) Symmetries in Visual Force Domains In order to leverage symmetric models for visual force policy learning, we utilize the group invariant MDP framework. A group invariant MDP is an MDP with reward and transition functions that are invariant under the group action, \(R(s,a)=R(\rho_{s}(g)s,\rho_{a}(g)a)\) and \(T(s,a,s^{\prime})=T(\rho_{s}(g)s,\rho_{a}(g)a,\rho_{s}(g)s^{\prime})\), for elements of an appropriate symmetry group \(g\in G\)[11]. \(\rho_{s}\) and \(\rho_{a}\) are representations of the group \(G\) that define how group elements act on state and action. This paper focuses on discrete subgroups of \(O(2)\) such as the dihedral groups \(D_{4}\) or \(D_{8}\) that represent rotations and reflections in the \(xy\) plane, i.e. the plane of the table. We utilize the \(D_{8}\) group in our experiments (see Appendix 7.4.1 for ablations on the effect of group size). In order to express visual force manipulation as a group invariant MDP, we must define how the group operates on state and action such that the transition and reward invariance equalities described above are approximately satisfied. 
State is \(s=(I,f,e)=(I,f_{xy},f_{z},m_{xy},m_{z},e_{xy},e_{z},e_{\lambda})\). Since we are focused on rotations and reflections in the plane about the \(z\) axis, only the \(xy\) variables are affected. Therefore, the group \(g\in SO(2)\) acts on \(s\) via \(\rho_{s}(g)s=(\rho_{0}(g)I,\rho_{1}(g)f_{xy},f_{z},\rho_{1}(g)m_{xy},m_{z},\rho_{1}(g)e_{xy},e_{z},e_{\lambda})\) where \(\rho_{0}(g)\) is a linear operator that rotates/reflects the pixels in an image by \(g\) and \(\rho_{1}(g)\) is the standard representation of rotation/reflection in the form of a \(2\times 2\) orthonormal matrix. Turning to action, \(a=(\lambda,\Delta p_{xy},\Delta p_{z},\Delta p_{\theta})\), we define \(\rho_{a}(g)a=(\lambda,\rho_{1}(g)\Delta p_{xy},\Delta p_{z},\Delta p_{\theta})\). Given these definitions, visual force manipulation satisfies the transition and reward invariance constraints, \(R(s,a)=R(\rho_{s}(g)s,\rho_{a}(g)a)\) and \(T(s,a,s^{\prime})=T(\rho_{s}(g)s,\rho_{a}(g)a,\rho_{s}(g)s^{\prime})\). This is illustrated for transition invariance in Figure 1.

### Model Architecture

As we discuss in the next section, we do policy learning using SAC which requires both a critic (a \(Q\)-function) and an actor. In our method, both actor and critic employ the same encoder architecture which encodes state into a latent representation. Since our state \(s=(I,f,e)\in S\) is multimodal (i.e. vision, force, and proprioception) our backbone is actually three encoders, the output of which is concatenated (Figure 2). The image encoder (top left in Figure 2) is a series of seven equivariant convolutional layers. The force encoder (middle left) is a single equivariant self-attention layer. The proprioceptive encoder (bottom left) is a four-layer equivariant MLP. More details on each of these encoders can be found in the Appendix in Section 7.2. In each of these encoders, the model respects the equivariance and invariance of each data modality corresponding to the relationships described in Section 4.2.

Figure 1: \(\mathbf{O(2)}\) **Symmetries.**

Figure 2: **High level model architecture.**

The force encoder is of particular note due to its use of single-headed self-attention. The input is a set of \(T\) tokens, \(f\in\mathbb{R}^{T\times 6}\), that encode the most recent \(T\) measurements from the force-torque sensor. In order to make this model equivariant, we simply convert each of the key, query, and value networks into equivariant models. For the standard implementation of self-attention, \(\text{Attention}=\text{softmax}(fW^{Q}(fW^{K})^{T})fW^{V}\), the resulting group self-attention operation is equivariant [28]: \[\text{Attention}(X_{f}\Gamma) =\text{softmax}(X_{f}\Gamma W^{Q}(X_{f}\Gamma W^{K})^{T})X_{f}\Gamma W^{V}\] \[=\text{softmax}(X_{f}W^{Q}\Gamma(X_{f}W^{K}\Gamma)^{T})X_{f}W^{V}\Gamma\] \[=\text{softmax}(X_{f}W^{Q}\Gamma\Gamma^{T}(X_{f}W^{K})^{T})X_{f}W^{V}\Gamma\] \[=\text{softmax}(X_{f}W^{Q}(X_{f}W^{K})^{T})X_{f}W^{V}\Gamma=\text{Attention}(X_{f})\Gamma,\] where, for simplicity of this analysis, we define \(\Gamma\) to be the linear representation of the action of a group element \(g\in G\) and \(X_{f}\in\mathbb{R}^{T\times 6\times|G|}\). 1 We informally explored alternative force-torque encoder models but found that this self-attention approach worked best. Footnote 1: Although we omit the positional encoding here, this does not affect the result [28].
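This equivariance property is easy to check numerically. The snippet below is an illustrative sketch rather than the paper's implementation: it uses a cyclic-shift permutation matrix as the (regular) representation \(\Gamma\) of a rotation, and circulant weight matrices, which commute with \(\Gamma\), stand in for the equivariant key, query, and value maps. In the actual SVFL model the corresponding constraints are enforced by the equivariant layers themselves; the circulant construction is simply the simplest commuting choice for a cyclic group.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(X, Wq, Wk, Wv):
    # Single-headed self-attention: softmax(X Wq (X Wk)^T) X Wv
    return softmax(X @ Wq @ (X @ Wk).T, axis=-1) @ X @ Wv

rng = np.random.default_rng(0)
T, d = 16, 8                                  # sequence length, channel dimension
P = np.roll(np.eye(d), 1, axis=1)             # cyclic shift: regular rep. of a rotation

def circulant(v):
    # Circulant matrices commute with the cyclic shift P (an "equivariant" linear map).
    return np.stack([np.roll(v, i) for i in range(len(v))], axis=0)

Wq, Wk, Wv = (circulant(rng.normal(size=d)) for _ in range(3))
X = rng.normal(size=(T, d))

lhs = attention(X @ P, Wq, Wk, Wv)            # transform the input, then attend
rhs = attention(X, Wq, Wk, Wv) @ P            # attend, then transform the output
print(np.allclose(lhs, rhs))                  # True: Attention(X Gamma) = Attention(X) Gamma
```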
### Equivariant SAC

For policy learning, we use Soft Actor Critic (SAC) [29] combined with the model architecture described above. This can be viewed as a variation of Equivariant SAC [11] that is adapted to visual force control problems. The policy is a network \(\pi:S\to A\times A_{\sigma}\), where \(A_{\sigma}\) is the space of action standard deviations. We define the group action on the action space of the policy network \(\bar{a}\in A\times A_{\sigma}\) as: \(\rho_{\bar{a}}(g)\bar{a}=(\rho_{a}(g)a,a_{\sigma})\), where \(a_{\sigma}\in A_{\sigma}\) and \(g\in G\). The actor network \(\pi\) is defined as a mapping \(s\mapsto\bar{a}\) that satisfies the following equivariance constraint: \(\pi(\rho_{s}(g)s)=\rho_{\bar{a}}(g)(\pi(s))\). The critic is a Double Q-network: \(q:S\times A\rightarrow\mathbb{R}\) that satisfies an invariance constraint: \(q(\rho_{s}(g)s,\rho_{a}(g)a)=q(s,a)\).

## 5 Experiments

We performed a series of experiments both in simulation and on physical hardware to validate our approach, Symmetric Visual Force Learning (SVFL). First, we benchmark SVFL's performance in simulation against alternative approaches in the literature. Second, we perform ablations that measure the contributions of different input modalities for different tasks under both ideal and degraded visual observations. Finally, we validate the approach on physical hardware.

### Simulated Experiments

**Tasks.** We evaluate SVFL across nine manipulation tasks from the BulletArm benchmark [30] which uses the PyBullet [31] simulator: Block Picking, Block Pushing, Block Pulling, Block Corner Pulling, Mug Picking, Household Picking, Peg Insertion, Drawer Opening, and Drawer Closing (Figure 7). For all tasks, a sparse reward function is used where a reward of \(+1\) is given at task completion and \(0\) otherwise. Further task details can be found in the Appendix (Section 7.1, 7.3). **Baselines.** We benchmark our method against two prominent alternative methods for visual force (or visual tactile) learning that have been proposed recently: Visual-Tactile Transformers (VTT) [6] and Product of Experts (PoE) [7]. We also compare against a non-symmetric version of our model that is the same in every way except that it does not use equivariant layers (CNN). Both PoE and VTT are latent representation methods which rely on a self-supervised pretraining phase to build a compact latent representation of the underlying states, providing increased sample efficiency. Due to this pretraining, these methods represent attractive options for on-robot policy learning. While our method does not use any pretraining, and is therefore at a disadvantage relative to these two methods, we maintained this pretraining phase as originally proposed in [7] and [6] as it is a core component of latent representation learning. In both baselines we used the encoder architectures proposed in [6] which were shown to outperform those in [7]. PoE encodes the different input modalities independently using separate encoders and combines them using product of experts [7]. VTT combines modalities by using self and cross-modal attention to build latent representations that focus attention on important task features in the visual state space [6]. For further details on these baselines, see [6; 7]. The latent encoders are pretrained for \(10^{4}\) steps on expert data to predict the reconstruction of the state, contact and alignment embeddings, and the reward. All methods use Prioritized Experience Replay (PER) [32] pre-loaded with \(50\) episodes of the expert data. For more details, see the Appendix (Section 7.3).
**Results.** We compared our method (SVFL) against the two baselines (PoE and VTT) and the non-symmetric model (CNN) on the nine domains described above. Results from six representative domains are shown in Figure 3 and results for all nine can be found in the Appendix (Figure 14). All results are averaged over five runs starting from independent random seeds. When compared to the baselines, SVFL has significantly higher success rates and sample efficiency in all cases.

Figure 3: **Baseline Comparison. Comparison of SVFL (gray) with baselines. Greedy evaluation policy is shown in terms of success rate. In all of our experiments, results are averaged over 5 random seeds and the evaluation is performed every \(500\) training steps. Shading denotes standard error.**

### Sensor Modality Ablation

Although it is intuitive that force data should help learn better policies on manipulation tasks, especially on contact-rich tasks like peg insertion, it is important to validate this assumption and to measure the benefits that can be gained by using both vision and force feedback rather than vision alone. Recall that our state representation can be factored into three modalities, \(s=(I,f,e)\), where \(I\) is an image (vision), \(f\) is force, and \(e\) is the configuration of the robot hand (proprioception). Here, we compare the performance of SVFL with all three modalities against a vision-only model, a vision/force model, and a vision/proprioception model on the same tasks as in Section 5.1. Results for six tasks are shown in Figure 4 and complete results are given in Figure 15 in the Appendix. The results indicate that the inclusion of each additional sensor modality improves sample efficiency and performance for policy learning, with all three sensor modalities performing best in most cases. However, notice that the degree to which force (and proprioceptive) data helps depends upon the task. For example, the addition of force feedback drastically improves performance in Peg Insertion but has almost no effect in Block Pulling. There are, however, many tasks between these extremes. In Drawer Opening and Block Picking the force-aware policy converges to a slightly higher success rate than the non-force assisted policies. The fact that force feedback is usually helpful, even in tasks where one might not expect it, is interesting. This suggests that there is real value in incorporating force feedback into a robotic learning pipeline, even when there is a non-trivial cost to doing so.

### Role of Force Feedback When Visual Acuity is Degraded

We also perform experiments in the context of degraded visual acuity to determine what happens if the visual input to our model is scaled down significantly. Specifically, we evaluate the model on RGB-D images rescaled (bilinear interpolation) to four different sizes: \(64\times 64\), \(32\times 32\), \(16\times 16\), and \(8\times 8\). Aside from the rescaling, all other aspects of the model match the SVFL method detailed in the previous section. This experiment gives an indication of how force data can compensate for low resolution cameras, cloudy environments, or smudged camera lenses. Figure 5 shows performance at convergence for six of the tasks at the four different levels of visual resolution (see Figure 13 in the supplementary material for corresponding results on all nine domains). We note several interesting observations. First, the importance of visual acuity is dependent on the task, e.g.
high visual acuity is very important for Block Picking but not very important for Block Pulling. Second, force information generally tends to help the most in low visual acuity scenarios. Finally, while force data generally improves performance, it cannot compensate for the loss of information in extreme visual degradation in tasks which require high visual acuity.

Figure 4: **Sensor Modality Ablation. Comparison of the full SVFL model (gray) versus SVFL with subsets of the data modalities.**

Figure 5: **Performance Under Degraded Visual Acuity. Comparison of the full SVFL model (gray) versus SVFL with subsets of the data modalities under visual acuity degradation. Performance is given after all models are trained to convergence.**

### Real-World On-Robot Policy Learning

We repeat the simulated Block Picking policy learning experiment from Section 5.2 in the real world to evaluate our method's performance. Figure 6 shows the experimental setup which includes a UR5e robotic arm, a Robotiq Gripper, a wrist-mounted force-torque sensor, and an Intel RealSense camera. The block is a \(5mm\) wooden cube that is randomly posed in the workspace. We utilize AprilTags to track the block for use in reward/termination checking and to automatically reset the workspace by moving the block to a new pose at the start of each episode. These tags are not utilized during policy learning. In order to facilitate faster learning, we modify a number of environmental parameters in our real-world setup. We use a workspace size of \(0.3m\times 0.3m\times 0.25m\) and a sparse reward function. We increase the number of expert demonstrations to \(100\) (from \(50\)) and reduce the maximum number of steps per episode to \(25\) (from \(50\)). Additionally, we reduce the action space by removing control of the gripper orientation and increase the maximum amount of movement the policy can take in one step to \(5cm\) (from \(2.5cm\)). We utilize the same model architecture as in Section 5.1. Figure 6 shows the learning curve of the full SVFL model alongside the various subsets of data modalities available to our method. We train all models for \(3000\) steps, taking around \(4\) hours. As in the simulation results, the full SVFL model is more sample efficient than, and outperforms, the reduced-modality variants. Additionally, we see that force sensing is a vital component in this setting, with the force-aware models achieving a \(90\%\) success rate compared to the \(60\%\) success rate of the non-force aware models (at \(3000\) training steps).

## 6 Discussion & Limitations

This paper proposes Symmetric Visual Force Learning (SVFL), an approach to policy learning with visual force feedback that incorporates \(SE(2)\) symmetries into the model. Our experiments demonstrate that SVFL outperforms two recent high profile benchmarks, PoE [7] and VTT [6], by a significant margin both in terms of learning speed and final performance. We also report a couple of interesting empirical findings. First, we find that force feedback is helpful across a wide variety of policy learning scenarios, not just those where one would expect force feedback to help, e.g. Peg Insertion. Second, we find that the positive effect of incorporating force feedback increases as visual acuity decreases. A limitation of this work is that although we expect that our framework is extensible to haptic feedback, this paper focuses on force feedback only.
Additionally, we constrain our problem to top-down manipulation and planar symmetries in \(SE(2)\), and therefore there is significant scope to extend this to \(SE(3)\) symmetries. Finally, this paper focuses primarily on RL, but the encoder architectures should be widely applicable to other learning techniques such as imitation learning.

Figure 6: **On-Robot Policy Learning. (Left) Robotic setup. (Right) Comparison of the full SVFL model (gray) versus SVFL with subsets of the data modalities in the real world on the Block Picking task. Results are averaged over \(3\) runs.**
2309.03025
The Scattering Map on Collapsing Charged Spherically Symmetric Spacetimes
In this paper we generalise our previous results [1] concerning scattering on the exterior of collapsing dust clouds to the charged case, including in particular the extremal case. We analyse the energy boundedness of solutions $\phi$ to the wave equation on the exterior of collapsing spherically symmetric charged matter clouds. We then proceed to define the scattering map on this spacetime, and look at the implications of our boundedness results on this map. More specifically, we first construct a class of spherically symmetric charged collapsing matter cloud exteriors, and then consider solutions to the wave equation with Dirichlet (reflective) boundary conditions on the surface of these clouds. We then show that the energy of $\phi$ remains uniformly bounded going forwards or backwards in time, and that the scattering map is bounded going forwards but not backwards. Therefore, the scattering map is not surjective onto the space of finite energy on $\mathcal{I}^+\cup\mathcal{H}^+$. Thus there does not exist a backwards scattering map from finite energy radiation fields on $\mathcal{I}^+\cup\mathcal{H}^+$ to finite energy radiation fields on $\mathcal{I}^-$ for these models. These results will be used to give a treatment of Hawking radiation in a companion paper [2].
Frederick Alford
2023-09-06T14:10:48Z
http://arxiv.org/abs/2309.03025v1
# The Scattering Map on Collapsing Charged Spherically Symmetric Spacetimes

###### Abstract

In this paper we generalise our previous results [1] concerning scattering on the exterior of collapsing dust clouds to the charged case, including in particular the extremal case. We analyse the energy boundedness of solutions \(\phi\) to the wave equation on the exterior of collapsing spherically symmetric charged matter clouds. We then proceed to define the scattering map on this spacetime, and look at the implications of our boundedness results on this map. More specifically, we first construct a class of spherically symmetric charged collapsing matter cloud exteriors, and then consider solutions to the wave equation with Dirichlet (reflective) boundary conditions on the surface of these clouds. We then show that the energy of \(\phi\) remains uniformly bounded going forwards or backwards in time, and that the scattering map is bounded going forwards but not backwards. Therefore, the scattering map is not surjective onto the space of finite energy on \(\mathcal{I}^{+}\cup\mathcal{H}^{+}\). Thus there does not exist a backwards scattering map from finite energy radiation fields on \(\mathcal{I}^{+}\cup\mathcal{H}^{+}\) to finite energy radiation fields on \(\mathcal{I}^{-}\) for these models. These results will be used to give a treatment of Hawking radiation in a companion paper [2].

## 1 Overview

In [1], we initiated the study of the classical scattering of waves on fully dynamical collapsing spacetimes, specifically the Oppenheimer-Snyder model. This plays a significant role in the mathematical study of Hawking radiation [3]. Because extremal black holes play a distinguished role within the study of Hawking radiation, it is important to include these in our models. To this end, this paper looks at the scattering map for massless scalar waves on a class of spherically symmetric, charged, collapsing spacetime models, which can be viewed as a generalisation of the Oppenheimer-Snyder model. This class includes models which collapse to form both sub-extremal and extremal Reissner-Nordstrom black holes [4]. This will allow us to study Hawking radiation in a mathematical context in a companion paper, [2]. For the convenience of the reader, we closely follow the structure of our previous paper [1]. We hope this will make similarities and differences between the charged and uncharged case more clear. In this paper we will be studying the energy boundedness of solutions to the linear wave equation \[\Box_{g}\phi:=\frac{1}{\sqrt{-g}}\partial_{a}\left(\sqrt{-g}g^{ab}\partial_{b}\phi\right)=0 \tag{1.1}\] on collapsing spherically symmetric spacetimes, i.e. solutions to the Einstein-Maxwell equations outside an evolving sphere. This sphere is given by \(\{(t^{*},r_{b}(t^{*}),\theta,\varphi)\}\) in the coordinates below with some restrictions placed on the function \(r_{b}\). As this exterior is an asymptotically flat solution of the Einstein-Maxwell equations, it has a Reissner-Nordstrom metric, given by \[g=-\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right){dt^{*}}^{2}+2\left(\frac{2M}{r}-\frac{q^{2}M^{2}}{r^{2}}\right)dt^{*}dr+\left(1+\frac{2M}{r}-\frac{q^{2}M^{2}}{r^{2}}\right)dr^{2}+r^{2}g_{S^{2}} \tag{1.2}\] \[t^{*}\in\mathbb{R}\qquad r\in[\tilde{r}_{b}(t^{*}),\infty),\] where \(g_{S^{2}}\) is the metric on the unit 2-sphere, \(\tilde{r}_{b}=\max\{r_{b},r_{+}\}\), and \(r=r_{+}\) is the horizon of the underlying Reissner-Nordstrom metric as given by (3.6).
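For completeness (this check is standard and not part of the paper's argument), one can verify that (1.2) is the Reissner-Nordström metric written in horizon-penetrating coordinates. Starting from the static form with \(D(r)=1-\tfrac{2M}{r}+\tfrac{q^{2}M^{2}}{r^{2}}\) and setting \(t^{*}=t+r_{*}-r\) with \(\tfrac{dr_{*}}{dr}=D^{-1}\), one finds
\[-D\,dt^{2}+D^{-1}dr^{2}=-D\,{dt^{*}}^{2}+2(1-D)\,dt^{*}dr+(2-D)\,dr^{2},\]
and since \(1-D=\tfrac{2M}{r}-\tfrac{q^{2}M^{2}}{r^{2}}\) and \(2-D=1+\tfrac{2M}{r}-\tfrac{q^{2}M^{2}}{r^{2}}\), this reproduces the coefficients in (1.2); the angular part \(r^{2}g_{S^{2}}\) is unchanged.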
The parameter \(M\) is positive, and the parameter \(q\) takes values in the range \([-1,1]\), with \(q^{2}=1\) corresponding to the extremal case. These collapsing matter cloud models will include the Oppenheimer-Snyder model [5], and we will refer to these more general models as Reissner-Nordstrom Oppenheimer-Snyder (RNOS) models. These will include both extremal and sub-extremal cases. We will be imposing Dirichlet (i.e. reflective) conditions on the boundary of the matter cloud, i.e. \(\phi=0\) on \(r=r_{b}\) (in a trace sense), and then proceed to define a scattering theory for these spacetimes. In [1], we considered both the permeating and the reflective cases. However, here we will not attempt to consider the interior of our matter cloud, as this will depend entirely on one's choice of matter model.

The main theorems of this paper are informally stated below:

**Theorem 1** (Uniform Non-degenerate Energy Boundedness): _For all RNOS models, including the extremal case \(|q|=1\), we define \(\mathcal{F}_{[t_{0}^{*},t_{1}^{*}]}\), \(t_{0}^{*}\leq t_{1}^{*}\), to be the map taking solutions of (1.1) on \(\Sigma_{t_{0}^{*}}\) forward to the same solution evaluated on \(\Sigma_{t_{1}^{*}}\). Then \(\mathcal{F}_{[t_{0}^{*},t_{1}^{*}]}\) is uniformly bounded with respect to the non-degenerate energy of \(\phi\). Furthermore, for \(t_{1}^{*}\leq t_{c}^{*}\), the inverse of \(\mathcal{F}_{[t_{0}^{*},t_{1}^{*}]}\) is also uniformly bounded with respect to the non-degenerate energy._

_This Theorem is stated more precisely across Theorems 6.2, 6.3, 6.4._

The hypersurface \(\Sigma_{t^{*}}\) is shown in Figure 1, as is the sphere \((t_{c}^{*},r_{+})\). We define \(t_{c}^{*}\) as the \(t^{*}\) coordinate for which \(r_{b}(t_{c}^{*})=r_{+}\). Here "non-degenerate energy" signifies energy flux through the surface with respect to an _everywhere timelike_ vector field (including at the boundary of the matter cloud and the horizon), which coincides with the timelike Killing field in a neighbourhood of null infinity. Non-degenerate energy through the surface \(\Sigma_{t^{*}}\) controls the \(L^{2}\) norm of \(\phi\)'s derivatives.

We next turn to defining a radiation field on future and past null infinities \(\mathcal{I}^{\pm}\) and at the future horizon \(\mathcal{H}^{+}\). Note that previous works on Reissner-Nordstrom black holes, see [6] for example, already give us the existence of the future radiation field at \(\mathcal{I}^{+}\) and \(\mathcal{H}^{+}\), so we only need to concern ourselves with the past radiation field on \(\mathcal{I}^{-}\).

**Theorem 2** (Existence and Non-degenerate Energy Boundedness of the Past Radiation Field): _For all RNOS models, including the extremal case \(|q|=1\), we define the map \(\mathcal{F}^{-}\), which takes solutions of (1.1) from \(\Sigma_{t_{c}^{*}}\) to their past radiation field on \(\mathcal{I}^{-}\). Then \(\mathcal{F}^{-}\) exists and is bounded with respect to the non-degenerate energy. The inverse of \(\mathcal{F}^{-}\), denoted \(\mathcal{F}^{+}\), is also bounded with respect to the non-degenerate energy. Finally, \(\mathcal{F}^{-}\) is a bijection between these finite energy spaces._

_This Theorem is stated more precisely in Theorem 7.6._

We finally define the map \(\mathcal{G}^{+}\), which takes solutions on \(\Sigma_{t_{c}^{*}}\) to their radiation field on \(\mathcal{H}^{+}\cup\mathcal{I}^{+}\).
We make use of previous results from [7, 8], from which we know \(\mathcal{G}^{+}\) is bounded, and that \(\mathcal{G}^{-}:=(\mathcal{G}^{+})^{-1}\) (only defined on the image of \(\mathcal{G}^{+}\)) is unbounded. This leads us to the final theorem on the scattering map:

**Theorem 3** (Boundedness and Non-surjectivity of the Scattering Map): _For all RNOS models, including the extremal case \(|q|=1\), we define the scattering map_ \[\mathcal{S}^{+}:=\mathcal{G}^{+}\circ\mathcal{F}^{+}. \tag{1.3}\] _This map takes the radiation field of a solution to (1.1) on \(\mathcal{I}^{-}\) to that solution's radiation fields on \(\mathcal{H}^{+}\cup\mathcal{I}^{+}\). Then \(\mathcal{S}^{+}\) is bounded with respect to the non-degenerate energy (\(L^{2}\) norms of \(\partial_{v}(r\phi)\) on \(\mathcal{I}^{-}\) and \(\mathcal{H}^{+}\), and of \(\partial_{u}(r\phi)\) on \(\mathcal{I}^{+}\)). However, if we define the inverse of \(\mathcal{S}^{+}\), denoted by \(\mathcal{S}^{-}\) (only defined on the image of \(\mathcal{S}^{+}\)), this is not bounded with respect to the non-degenerate energy. Therefore there does not exist a backwards scattering map going from finite energy spaces on \(\mathcal{H}^{+}\cup\mathcal{I}^{+}\) back to finite energy spaces on \(\mathcal{I}^{-}\)._

_This Theorem is stated more precisely as Theorem 7.9._

The non-invertibility of \(\mathcal{S}^{+}\) arises from the non-invertibility of \(\mathcal{G}^{+}\), exactly as in the uncharged case [1]. It is the existence of the map \(\mathcal{F}^{+}\) mapping into the space of non-degenerate energy that extends this non-invertibility to data on \(\mathcal{I}^{-}\), and thus causes the collapsing case to differ from the Reissner-Nordstrom case.

**Remark 1.1**: _It is particularly surprising that the non-surjectivity of \(\mathcal{S}^{+}\) includes the extremal case. This occurs despite the significant differences in the properties of the black hole horizon, most notably the absence of the usual red-shift effect [9] in the extremal case._

It remains an open problem to precisely characterise the image of the scattering map \(\mathcal{S}^{+}\), even in the uncharged case.

### Acknowledgements

We would like to thank Mihalis Dafermos for many insightful comments, and for proofreading the manuscript. We would also like to thank Owain Salter Fitz-Gibbon for many insightful discussions. Last but by no means least, we would like to thank Claude Warnick and Bernard Kay for their comments and suggestions. This work was partly funded by EPSRC DTP, [1936235]. This work was supported by the Additional Funding Programme for Mathematical Sciences, delivered by EPSRC (EP/V521917/1) and the Heilbronn Institute for Mathematical Research.

## 2 Previous Work

There have been several previous works studying black hole scattering on collapsing spacetimes, see [10, 11, 12, 13]. However, scattering in the collapsing charged case does not appear to have been considered previously. For a longer discussion of the uncharged case, we refer the reader to our previous work [1] and references therein. There have, however, been other works considering the underlying models of charged collapse, and other works considering scattering on Reissner-Nordstrom backgrounds. Several papers look at models of spherical collapse to Reissner-Nordstrom, such as [14, 15, 16]. Most papers considering collapsing models focus on the interior of the collapsing star. This paper, however, will not focus on the specifics of interior models such as these, unlike [1].
We note that there are many such models, which depend entirely on what equation of state is chosen for the interior of the matter cloud. Generally, study of the scattering map in the exterior sub-extremal Reissner-Nordstrom spacetime is paired together with that of Schwarzschild, as it has similar behaviour (see [7]). The extremal case has been studied in detail separately, see [8], as behaviour in this case differs from the sub-extremal case. Scattering in the black hole interior has also been studied independently, [17].

The exterior of the RNOS models (see Section 3) is a spherically symmetric, electrovacuum solution of the Einstein-Maxwell equations, and thus has the Reissner-Nordstrom metric, by uniqueness (see [18], for example). However, this paper will not be discussing the scattering map on Reissner-Nordstrom much beyond this, and instead will quote results from [7] (in the sub-extremal case) and [8] (in the extremal case). We refer the reader to these for a more complete discussion of scattering in Reissner-Nordstrom spacetimes.

## 3 RNOS Models

In this section we look at our background models of spherically symmetric charged matter cloud collapse. In section 3.1, we derive the metric of our spacetime in the exterior of our collapsing matter cloud. If the reader is not interested in this derivation, they may skip straight to section 3.2, where the background manifold is defined, with some interesting and/or useful properties stated.

Figure 1: Penrose Diagram of RNOS Model, with spacelike hypersurface \(\Sigma_{t^{*}}\).

### Example of a Physical RNOS model

In this section, we derive a physical example of an RNOS model under the following assumptions: We assume our manifold is a spherically symmetric solution of the Einstein-Maxwell equations \[R_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu} =8\pi\mathbf{T}_{\mu\nu} \tag{3.1}\] \[\mathbf{T}_{\mu\nu} =\frac{1}{4\pi}\left(F_{\mu\alpha}F_{\nu}{}^{\alpha}-\frac{1}{4}F^{\alpha\beta}F_{\alpha\beta}g_{\mu\nu}\right)\] (3.2) \[\nabla_{\mu}F^{\mu\nu} =0\] (3.3) \[\nabla_{\mu}F_{\nu\rho}+\nabla_{\nu}F_{\rho\mu}+\nabla_{\rho}F_{\mu\nu}=0, \tag{3.4}\] with coordinates \(t^{*}\in(-\infty,\infty)\), \((\theta,\varphi)\in S^{2}\), \(r\in[\tilde{r}_{\text{b}}(t^{*}),\infty)\). We define \[\tilde{r}_{\text{b}}(t^{*}) :=\begin{cases}r_{\text{b}}(t^{*})&t^{*}\leq t_{c}^{*}\\ r_{+}&t^{*}>t_{c}^{*}\end{cases} \tag{3.5}\] \[r_{+} :=M(1+\sqrt{1-q^{2}}). \tag{3.6}\]

Here \(r=r_{\text{b}}(t^{*})\) is a hypersurface generated by a family of timelike, ingoing radial curves such that, for any fixed \(\theta,\varphi\), the curve \(\{t^{*},r_{\text{b}}(t^{*}),\theta,\varphi\}\) describes the motion of a particle moving only under gravity and the electromagnetic force, with fixed charge to mass ratio. The value of \(t^{*}\) for which \(r_{\text{b}}(t^{*})=r_{+}\) is labelled \(t_{c}^{*}\). That is, we assume that the surface of the cloud is itself massive and charged, with uniform charge to mass ratio across the surface. For our results we will actually only require certain bounds on \(r_{\text{b}}\) and \(\tilde{r}_{\text{b}}\), but here we will consider one possible behaviour of \(r_{\text{b}}\) in full.

We also note that we are looking solely at the exterior of the black hole. Thus, we will not be considering the region \(r_{\text{b}}(t^{*})<r_{+}\). We will instead use \(\tilde{r}_{\text{b}}(t^{*})\) as the boundary of our manifold.
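As an aside (our own illustrative sketch, not part of the derivation), the cyclic identity (3.4) holds automatically once the field strength is written as \(F=dA\): the Christoffel terms cancel in the antisymmetrised sum, so partial derivatives suffice. The following SymPy check makes this explicit for a generic potential.

```python
# Sketch (illustration only): for any potential A_mu, the field strength
# F = dA automatically satisfies the cyclic identity (3.4).  In the cyclic sum
# the Christoffel terms cancel, so partial derivatives are enough here.
import sympy as sp
import itertools

x = sp.symbols("t_star r theta varphi")           # coordinates (t*, r, theta, phi)
A = [sp.Function(f"A{i}")(*x) for i in range(4)]  # generic potential components

F = [[sp.diff(A[n], x[m]) - sp.diff(A[m], x[n]) for n in range(4)]
     for m in range(4)]                           # F_{mn} = d_m A_n - d_n A_m

for m, n, p in itertools.product(range(4), repeat=3):
    cyclic = sp.diff(F[n][p], x[m]) + sp.diff(F[p][m], x[n]) + sp.diff(F[m][n], x[p])
    assert sp.simplify(cyclic) == 0
print("cyclic identity (3.4) holds identically for F = dA")
```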
The topology of our manifold \(\{t^{*},r\geq\tilde{r}_{\text{b}}(t^{*}),\theta,\varphi\}\) is that of the exterior of a cylinder in \(3+1\) dimensional Lorentzian space. As this is simply connected, equation (3.4) means we can choose an \(A\) such that \[F=dA. \tag{3.7}\]

Given an asymptotically flat, spherically symmetric solution of the Einstein-Maxwell equations, we know that our solution is a subset of a Reissner-Nordstrom spacetime (see, for example, [18]). This gives the first two parameters of our spacetime: \(M\), the mass of the Reissner-Nordstrom black hole spacetime our manifold is a subset of, and \(q=Q/M\), the charge density of our underlying Reissner-Nordstrom spacetime. We will assume \(q\) has modulus less than or equal to \(1\), as otherwise our matter cloud will either not collapse, or will form a naked singularity rather than a black hole. Exterior Reissner-Nordstrom spacetime has global coordinates: \[\mathcal{M}_{RN} =\mathbb{R}\times[r_{+},\infty)\times S^{2} \tag{3.8}\] \[g=-\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right)dt^{*2}+2\left(\frac{2M}{r}-\frac{q^{2}M^{2}}{r^{2}}\right)dt^{*}dr+\left(1+\frac{2M}{r}-\frac{q^{2}M^{2}}{r^{2}}\right)dr^{2}+r^{2}g_{S^{2}}\] (3.9) \[A=\frac{qM}{r}dt^{*}, \tag{3.10}\] where \(g_{S^{2}}\) is the metric on the unit 2-sphere.

**Remark 3.1** (Adding \(\mathcal{H}^{-}\) to the Manifold): _Normally, the Reissner-Nordstrom manifold includes \(\mathcal{H}^{-}\), and in the sub-extremal case the bifurcation sphere \(\mathcal{B}\). However, as we will be considering the exterior of a collapsing dust cloud, we will not need \(\mathcal{H}^{-}\). Thus, we will not concern ourselves with the intricacies of attaching \(\mathcal{H}^{-}\) and \(\mathcal{B}\) to \(\mathcal{M}_{RN}\)._

We now proceed to calculate the path of a radially moving charged test particle, with charge density \(\tilde{q}\). The motion of this particle extremises the following action: \[S =\int_{\tau_{0}}^{\tau_{1}}L\,d\tau=m\int_{\tau_{0}}^{\tau_{1}}\left(\frac{1}{2}g_{ab}v^{a}v^{b}-\tilde{q}v^{a}A_{a}\right)d\tau \tag{3.11}\] \[=m\int_{\tau_{0}}^{\tau_{1}}\left(\frac{1}{2}\left(-\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right)\left(\frac{dt^{*}}{d\tau}\right)^{2}+2\left(\frac{2M}{r}-\frac{q^{2}M^{2}}{r^{2}}\right)\frac{dt^{*}}{d\tau}\frac{dr}{d\tau}+\left(1+\frac{2M}{r}-\frac{q^{2}M^{2}}{r^{2}}\right)\left(\frac{dr}{d\tau}\right)^{2}\right)-\frac{q\tilde{q}M}{r}\frac{dt^{*}}{d\tau}\right)d\tau\] for \(v^{a}\) the velocity of the particle with respect to \(\tau\), \(A\) as given in (3.10), and \(\tau\) the proper time for the particle, i.e. normalised such that \(g_{ab}v^{a}v^{b}=-1\).

We can then use first integrals of the Euler-Lagrange equations to find constants of the motion. Firstly, \(L\) has no explicit \(\tau\) dependence, and so \(g_{ab}v^{a}v^{b}\) is constant. By rescaling \(\tau\), we choose \(g_{ab}v^{a}v^{b}\) to be \(-1\). The second constant we obtain is from \(L\) being independent of \(t^{*}\). Thus \[T^{*}=\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right)\frac{dt^{*}}{d\tau}-\left(\frac{2M}{r}-\frac{q^{2}M^{2}}{r^{2}}\right)\frac{dr}{d\tau}+\frac{q\tilde{q}M}{r} \tag{3.12}\] is constant.
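The conserved quantity (3.12) can be checked mechanically. The following sketch is our own illustration: `ut` and `ur` stand in for \(dt^{*}/d\tau\) and \(dr/d\tau\), the Lagrangian is taken per unit mass, and we verify that minus the momentum conjugate to \(t^{*}\) reproduces \(T^{*}\).

```python
# Hedged sketch: recover the conserved quantity (3.12) from the Lagrangian in
# (3.11).  Since L has no explicit t* dependence, -dL/d(dt*/dtau) is conserved.
import sympy as sp

M, q, qt, r = sp.symbols("M q qtilde r", positive=True)
ut, ur = sp.symbols("ut ur")                     # dt*/dtau and dr/dtau

X = 2*M/r - q**2*M**2/r**2
L = sp.Rational(1, 2)*(-(1 - X)*ut**2 + 2*X*ut*ur + (1 + X)*ur**2) - q*qt*M/r*ut

T_star = (1 - X)*ut - X*ur + q*qt*M/r            # right-hand side of (3.12)
assert sp.simplify(-sp.diff(L, ut) - T_star) == 0
print("the momentum conjugate to t* reproduces (3.12)")
```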
Figure 2: Penrose diagram of pure Reissner–Nordström spacetimes

Using (3.12) to remove the dependence of \(g_{ab}v^{a}v^{b}\) on \(\frac{dt^{*}}{d\tau}\), we obtain \[\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right)^{-1}\left(T^{*}-\frac{q\tilde{q}M}{r}\right)^{2} =\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right)\left(\frac{dt^{*}}{d\tau}\right)^{2}-2\left(\frac{2M}{r}-\frac{q^{2}M^{2}}{r^{2}}\right)\frac{dt^{*}}{d\tau}\frac{dr}{d\tau}+\frac{\left(\frac{2M}{r}-\frac{q^{2}M^{2}}{r^{2}}\right)^{2}}{1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}}\left(\frac{dr}{d\tau}\right)^{2}\] \[=-g_{ab}v^{a}v^{b}+\left(1+\frac{2M}{r}-\frac{q^{2}M^{2}}{r^{2}}+\frac{\left(\frac{2M}{r}-\frac{q^{2}M^{2}}{r^{2}}\right)^{2}}{1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}}\right)\left(\frac{dr}{d\tau}\right)^{2}\] \[=1+\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right)^{-1}\left(\frac{dr}{d\tau}\right)^{2}, \tag{3.13}\] which rearranges to \[\left(\frac{dr}{d\tau}\right)^{2} =\left(T^{*}-\frac{q\tilde{q}M}{r}\right)^{2}-\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right) \tag{3.14}\] \[=\left(T^{*}-\frac{q\tilde{q}M}{r}\right)^{2}-\left(1-\frac{M}{r}\right)^{2}+\frac{\left(1-q^{2}\right)M^{2}}{r^{2}}\] \[=\frac{Mr_{+}}{r^{2}}\left(T^{*}-\frac{Mq\tilde{q}}{r_{+}}+\frac{\left(T^{*}-1\right)\left(r-r_{+}\right)}{M}\right)\left(T^{*}-\frac{Mq\tilde{q}}{r_{+}}+\frac{\left(T^{*}+1\right)\left(r-r_{+}\right)}{M}\right).\]

From (3.12) we can see that if this particle's velocity is to be future directed, we require \(T^{*}>0\). In order for the surface of this dust cloud to cross the event horizon moving inwards, we require that \(\tilde{q}<\frac{T^{*}r_{+}}{Mq}\). We can then see that \(\frac{dr}{d\tau}\neq 0\) for any \(r\), provided \(T^{*}>1\). If \(T^{*}<1\), then \(\frac{dr}{d\tau}=0\) for some \(r\) (we will discuss whether this \(r\) is attained in finite time below). We can also see, from writing out the statement \(g_{ab}v^{a}v^{b}=-1\), that \[\left(-\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right)+2\left(\frac{2M}{r}-\frac{q^{2}M^{2}}{r^{2}}\right)\frac{dr}{dt^{*}}+\left(1+\frac{2M}{r}-\frac{q^{2}M^{2}}{r^{2}}\right)\left(\frac{dr}{dt^{*}}\right)^{2}\right)\left(\frac{dt^{*}}{d\tau}\right)^{2}=-1 \tag{3.15}\] which tells us that \(\frac{dr}{dt^{*}}>-1\).

#### 3.1.1 \(T^{*}<1\)

We now look at the behaviour of \(r_{\rm b}\) in the case where \(T^{*}<1\). If \(T^{*}<1\), then, since the right hand side of (3.14) is negative for large \(r\), \(\frac{dr}{d\tau}\) vanishes at a finite radius, so the matter cloud will tend to that radius, either reaching it at a finite time, or as \(t^{*}\rightarrow-\infty\). We therefore look at integrating equation (3.14) to obtain \(\tau(r)\), which gives us \[\tau=\frac{M}{(1-T^{*})^{3/2}}\left((1-q\tilde{q}T^{*})\sin^{-1}\left(\frac{1-T^{*2}}{A}\frac{r}{M}-\frac{1-q\tilde{q}T^{*}}{A}\right)-\left(A-\left((1-T^{*2})\frac{r}{M}-(1-q\tilde{q}T^{*})\right)^{2}\right)^{\frac{1}{2}}\right) \tag{3.16}\] where \(A=\sqrt{(1-q\tilde{q}T^{*})^{2}-q^{2}(1-\tilde{q}^{2})(1-T^{*2})}\) is a constant. Equation (3.16) tells us that in the case \(T^{*}<1\), the matter cloud's radius obtains its limit within a finite (and therefore compact) proper time interval. As \(t^{*}\) is a continuous increasing function of \(\tau\), \(r_{\rm b}\) obtains its limit in finite \(t^{*}\) time. We will call this time \(t^{*}_{-}\). At this point, the curve would collapse back into the black hole, hitting the past event horizon.
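As a numerical aside (our own illustration, with arbitrarily chosen sample parameters), the turning-point behaviour just described can be read off directly from (3.14): for \(T^{*}<1\) the right hand side is positive at \(r_{+}\), negative at large \(r\), and vanishes at a finite turning radius from which the collapse starts.

```python
# Numerical illustration of the T* < 1 turning point in (3.14); the parameter
# values below are arbitrary sample choices, not taken from the paper.
import numpy as np

M, q, qt, T = 1.0, 0.5, 0.5, 0.9            # M, q, q-tilde, T* (with T* < 1)
r_plus = M*(1 + np.sqrt(1 - q**2))

def drdtau_sq(r):                           # right-hand side of (3.14)
    return (T - q*qt*M/r)**2 - (1 - 2*M/r + q**2*M**2/r**2)

# r^2 * drdtau_sq(r) is quadratic in r; its larger root is the turning radius.
coeffs = [T**2 - 1, 2*M*(1 - q*qt*T), -q**2*M**2*(1 - qt**2)]
r_turn = max(np.roots(coeffs).real)

print(f"r_+    = {r_plus:.3f} M")
print(f"r_turn = {r_turn:.3f} M")
print("(dr/dtau)^2 at r_+      :", drdtau_sq(r_plus))    # > 0: surface crosses r_+
print("(dr/dtau)^2 at 2*r_turn :", drdtau_sq(2*r_turn))  # < 0: unreachable radius
```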
Therefore, in order to have a collapsing model in the \(T^{*}<1\) case, we will assume that the radius of the matter cloud, \(r_{\rm b}(t^{*})\), remains at \(r_{\rm b}(t^{*}_{-})\) for all \(t^{*}\leq t^{*}_{-}\).

#### 3.1.2 \(T^{*}\geq 1\)

Here we have that our matter cloud radius tends to \(\infty\) as \(\tau\rightarrow-\infty\). Thus the main part we will need to concern ourselves with is what happens to the surface of the matter cloud as \(\tau\rightarrow-\infty\), \(r_{\rm b}\rightarrow\infty\). Equations (3.14) and (3.12) give us that \[\dot{r}_{\rm b}:=\frac{dr_{\rm b}}{dt^{*}}=\frac{dr/d\tau}{dt^{*}/d\tau}\rightarrow-\frac{\sqrt{T^{*2}-1}}{T^{*}}=:-a\qquad\text{as}\quad t^{*}\rightarrow-\infty \tag{3.17}\] where we will refer to \(a\in[0,1)\) as the asymptotic speed of the surface of the matter cloud. Note that in the case \(T^{*}=1\), we obtain that \(\dot{r}_{\rm b}\to 0\) as \(t^{*}\rightarrow-\infty\); in this case, the asymptotic speed of the surface is \(a=0\).

### Definition of RNOS Manifold and Global Coordinates

The RNOS models are defined as a class of collapsing spacetimes, with parameters \(M\geq 0\), \(q\in[-1,1]\), and an \(H^{2}_{\infty}\) function \(r_{\rm b}:(-\infty,t^{*}_{\rm c}]\rightarrow[0,\infty)\) satisfying the constraints (3.20)-(3.22). The topologies of the underlying manifolds are all given by global coordinates: \[\mathcal{M}=\bigcup_{t^{*}\in\mathbb{R}}\{t^{*}\}\times[\tilde{r}_{\rm b}(t^{*}),\infty)\times S^{2}\subset\mathcal{M}_{RN}, \tag{3.18}\] where the range of the second coordinate, \(r\), depends on the first coordinate, \(t^{*}\). Then we have metric \[g=-\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right)dt^{*2}+2\left(\frac{2M}{r}-\frac{q^{2}M^{2}}{r^{2}}\right)dt^{*}dr+\left(1+\frac{2M}{r}-\frac{q^{2}M^{2}}{r^{2}}\right)dr^{2}+r^{2}g_{S^{2}} \tag{3.19}\] \[t^{*}\in\mathbb{R}\qquad r\in[\tilde{r}_{\rm b}(t^{*}),\infty)\] where \(g_{S^{2}}\) is the metric on the unit sphere. We impose the following conditions on \(r_{\text{b}}\): \[\dot{r}_{\text{b}}(t^{*}):=\frac{dr_{\text{b}}}{dt^{*}}\in(-1,0] \tag{3.20}\] \[\exists\,t^{*}_{c}\text{ s.t. }r_{\text{b}}(t^{*}_{c})=r_{+},\quad r_{\text{b}}(t^{*})>r_{+}\quad\forall t^{*}<t^{*}_{c}\] (3.21) \[(1,\dot{r}_{\text{b}}(t^{*}),0,0)\in T(\mathcal{M})\text{ is timelike for all }t^{*}\in(-\infty,t^{*}_{c}), \tag{3.22}\] where \(r_{+}\) is the black hole horizon for the Reissner-Nordstrom spacetime given by (3.6). We then define \(\tilde{r}_{\text{b}}\) by \[\tilde{r}_{\text{b}}:=\begin{cases}r_{\text{b}}(t^{*})&t^{*}\leq t^{*}_{c}\\ r_{+}&t^{*}>t^{*}_{c}\end{cases}. \tag{3.23}\] Note that \(\tilde{r}_{\text{b}}(t^{*})\) is not a differentiable function of \(t^{*}\), but \(r_{\text{b}}\) is.

We allow 2 possible past asymptotic behaviours for \(r_{\text{b}}\). Firstly, \[\int_{-\infty}^{t^{*}_{c}}|\dot{r}_{\text{b}}(t^{*})|+|\ddot{r}_{\text{b}}(t^{*})|\,dt^{*}<\infty. \tag{3.24}\] This is known as the 'fixed boundary' case, as it requires \(r_{\text{b}}(t^{*})\to r_{-}\) as \(t^{*}\to-\infty\) for some constant \(r_{-}\).
The second allowed past asymptotic behaviour will be referred to as the 'expanding boundary' case, and requires: \[\int_{-\infty}^{t^{*}_{\text{e}}}\frac{1}{r_{\text{b}}(t^{*})^{2 }}dt^{*}<\infty \tag{3.25}\] \[\dot{r}_{\text{b}}(t^{*})\in[-1+\epsilon,0]\text{ for some }\epsilon>0.\] This model includes any past boundary condition for which \(\dot{r}_{\text{b}}\to a\in(-1,0)\), and also includes the Oppenheimer-Snyder model, as this has \(r_{\text{b}}(t^{*})\sim(-t^{*})^{2/3}\). Note this case requires \(r_{\text{b}}\to\infty\) as \(t^{*}\to-\infty\). The formation of an extremal black hole in finite time is a much discussed topic, see for example [19], [20] and more recently [21], and is heavily related to the third law of black hole dynamics. However, this paper will not discuss the formation of these black holes in more detail, and instead just consider the fairly general RNOS models given above. The RNOS models have the same exterior Penrose diagram as the original Oppenheimer-Snyder model, see Figure 1, derived in [1], for example. We will also be using the double null coordinates given by: \[u=t^{*}-\int_{-\infty M}^{r}\frac{1+\frac{2M}{s}-\frac{x^{2}M^{ 2}}{s^{2}}}{1-\frac{2M}{s}+\frac{x^{2}M^{2}}{s^{2}}}ds \tag{3.26}\] \[v=t^{*}+r\] (3.27) \[\partial_{u}=\frac{1}{2}\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r ^{2}}\right)(\partial_{r}-\partial_{r})\] (3.28) \[\partial_{v}=\frac{1}{2}\left(\left(1+\frac{2M}{r}-\frac{q^{2}M^ {2}}{r^{2}}\right)\partial_{r}+\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}} \right)\partial_{r}\right)\] (3.29) \[g=-\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right)dudv+r(u, v)^{2}g_{S^{2}}. \tag{3.30}\] Finally, we have four linearly independent Killing vector fields in our space time. The timelike Killing field, \(\partial_{t^{*}}\) does not preserve the boundary \(\{r=r_{\text{b}}(t^{*})\}\). However, we have 3 angular Killing fields, \(\{\Omega_{i}\}_{i=1}^{3}\) which span all angular derivatives and are tangent to the boundary of the matter cloud. When given in \(\theta\), \(\varphi\) coordinates, these take the form: \[\Omega_{1}=\partial_{\varphi} \tag{3.31}\] \[\Omega_{2}=\cos\varphi\partial_{\theta}-\sin\varphi\cot\theta \partial_{\varphi}\] \[\Omega_{3}=-\sin\varphi\partial_{\theta}-\cos\varphi\cot\theta \partial_{\varphi}.\] ## 4 Notation In this paper, we will be using the same notation as [1]. We will be considering the following hypersurfaces in our manifold, equipped with the stated normals and volume forms. Note these normals will not necessarily be unit normals, but have been chosen such that divergence theorem can be applied without involving additional factors. \[\Sigma_{\text{q}_{\text{i}}}:=\{(t^{*},r,\theta,\varphi):r\geq \tilde{r}_{\text{b}}(t^{*}),t^{*}=t^{*}_{0}\} dV=r^{2}drd\omega \tag{4.1}\] \[\Sigma_{\text{no}}:=\{(t^{*},r,\theta,\varphi):r\geq\tilde{r}_{ \text{b}}(t^{*}),u(t^{*},r)=u_{0}\} dV=\frac{1}{2}\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}} \right)r^{2}drd\omega d nn=-du\] (4.2) \[\Sigma_{\text{r}_{\text{0}}}:=\{(t^{*},r,\theta,\varphi):r\geq \tilde{r}_{\text{b}}(t^{*}),v(t^{*},r)=v_{0}\} dV=\frac{1}{2}\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}} \right)r^{2}dud\omega d nn=-dv\] (4.3) \[S_{[q_{\text{i}},\text{e}_{\text{i}}]}=\{(t^{*},\tilde{r}_{\text{ b}}(t^{*}),\theta,\varphi)\text{ s.t. }t^{*}\in[t^{*}_{\text{o}},t^{*}_{\text{i}}]\} dV=r^{2}dt^{*}d\omega d dn=d\rho:=dr-\dot{r}_{\text{b}}dt^{*}, \tag{4.4}\] where \(d\omega\) is the Euclidean volume form on the unit sphere _i.e._ \[d\omega=\sin\theta d\theta d\varphi. 
\tag{4.5}\] Figure 3: Penrose Diagram of RNOS Model, with spacelike hyper surface \(\Sigma_{\text{r}}\). We define future/past null infinity by: \[\mathcal{I}^{+}:=\mathbb{R}\times S^{2}\qquad dV=dud\omega\qquad\mathcal{I}^{-}:= \mathbb{R}\times S^{2}\qquad dV=dvd\omega. \tag{4.6}\] Past null infinity is viewed as the limit of \(\Sigma_{u_{0}}\) as \(u_{0}\to\infty\). For an appropriate function \(f(u,v,\theta,\varphi)\), we will define the function "evaluated on \(\mathcal{I}^{+}\)" to be \[f(v,\theta,\varphi)|_{\mathcal{I}^{-}:=\lim_{v\to\infty}f(u,v,\theta,\varphi). \tag{4.7}\] Similarly, \(\mathcal{I}^{+}\) is considered to be the limit of \(\Sigma_{u_{0}}\) as \(v_{0}\to\infty\). For an appropriate function \(f(u,v,\theta,\varphi)\), we will define the function "evaluated on \(\mathcal{I}^{+}\)" to be \[f(u,\theta,\varphi):=\lim_{v\to\infty}f\left(u,v,\theta,\varphi\right). \tag{4.8}\] From here onwards, any surface integral that is left without a volume form will be assumed to have the relevant volume form listed above, and all space-time integrals will be assumed to have the usual volume form \(\sqrt{-det(g)}\). We will be considering solutions of (1.1) which vanish on the surface \(r=r_{b}(t^{*})\) (in a trace sense). We will generally be considering these solutions to arise from initial data on a spacelike surface. Initial data will consist of imposing the value of the solution and its normal derivative, with both smooth and compactly supported. We will then consider the following seminorms of a spacetime function \(f\) on any given submanifold \(\Sigma\subset\mathcal{M}\), given by: \[\|f\|_{\mathcal{I}^{(}\Sigma_{u})}^{2}=\int_{\Sigma}|f|^{2}dV. \tag{4.9}\] We will also define the \(\dot{H}^{1}\) norm as: \[\|f\|_{\dot{H}^{1}(\Sigma_{u_{0}}^{+})}^{2} :=\int_{\Sigma_{u_{0}}}|\partial_{t^{*}}f|^{2}+|\partial_{t}f|^{2 }+\frac{1}{r^{2}}\|\dot{\nabla}f\|^{2}dV \tag{4.10}\] \[\|f\|_{\dot{H}^{1}(\Sigma_{u_{0}})}^{2} :=\int_{\Sigma_{u_{0}}}\frac{|\partial_{t}f|^{2}}{\left(1-\frac{ 2M}{r}+\frac{4^{3}M^{2}}{r^{2}}\right)^{2}}+\frac{1}{r^{2}}\|\dot{\nabla}f\|^{2 }dV\] (4.11) \[\|f\|_{\dot{H}^{1}(\Sigma_{u_{0}})}^{2} :=\int_{\Sigma_{u_{0}}}\frac{|\partial_{t}f|^{2}}{\left(1-\frac{ 2M}{r^{2}}+\frac{4^{3}M^{2}}{r^{2}}\right)^{2}}+\frac{1}{r^{2}}\|\dot{\nabla}f \|^{2}dV, \tag{4.12}\] where \(\dot{\nabla}\) is the induced gradient on the unit sphere. This is a tensor on the unit sphere, and we define the norm of such a tensor by \[\|T\|^{2}=\sum_{a_{1},a_{2},\dots a_{n}=1}^{n}|T_{a_{1},a_{2},\dots a_{n}}|^{2} \tag{4.13}\] for \(T\) an \(m\) tensor on \(S^{n}\), in any orthonormal basis tangent to the sphere at that point. Note that we have not yet defined the spaces for which the \(\dot{H}^{1}\) norms will actually be norms. Let \(C^{\infty}_{0}(S)\) be the space of compactly supported functions on surface \(S\), which vanish on \(\{r=r_{b}(t^{*})\}\cap S\). We will define the \(\dot{H}^{1}(\Sigma_{t_{0}^{*}})\) norm on a pair of functions \(\phi_{0},\phi_{1}\in C^{\infty}_{0}(\Sigma_{t_{0}^{*}})\) as follows: \[\|(\phi_{0},\phi_{1})\|_{\dot{H}^{1}(\Sigma_{u_{0}^{*}})}:=\|\phi\|_{\dot{H}^{ 1}(\Sigma_{u_{0}^{*}})}\qquad\text{for any $\phi$ s.t. 
$(\phi|_{\Sigma_{u_{0}^{*}}},\partial_{t}\cdot\phi|_{\Sigma_{u_{0}^{*}}})=(\phi_{0},\phi_{1}).$} \tag{4.14}\] We similarly define the \(\dot{H}^{1}(\Sigma_{u_{0}})\) and \(\dot{H}^{1}(\Sigma_{v_{0}})\) on \(\phi_{0}\in C^{\infty}_{0}(\Sigma_{u_{0},v_{0}})\) as follows: \[\|\phi\|_{\dot{H}^{1}(\Sigma_{u_{0},v_{0}})}:=\|\phi\|_{\dot{H}^{1}(\Sigma_{u_ {0},v_{0}})}\qquad\text{for any $\phi$ s.t. $\phi|_{\Sigma_{u_{0},v_{0}}}=\phi_{0}$.} \tag{4.15}\] We will also need to consider what functions we will be working with. For this, we will be using the same notation as [22, 1]. We first need to look at the notions of energy momentum tensors and energy currents (note this energy momentum tensor will be expressed as \(T\), and is different from \(\mathbf{T}\) in (3.1)). \[T_{\mu\nu}(\phi) =\nabla_{\mu}\phi\nabla_{\nu}\phi-\frac{1}{2}g_{\mu\nu}\nabla^{ \rho}\phi\nabla_{\rho}\phi \tag{4.16}\] \[J^{\mu}_{S} =X^{\nu}T_{\mu\nu}\] (4.17) \[K^{X} =\nabla^{\mu}J^{X}_{\nu}\] (4.18) \[J^{X,w}_{\mu} =X^{\nu}T_{\mu\nu}+w\nabla_{\mu}(\phi^{2})-\phi^{2}\nabla_{\mu}w\] (4.19) \[K^{X,w} =\nabla^{\mu}J^{X,w}_{\nu}=K^{X}+2w\nabla_{\mu}\phi\nabla^{\mu} \phi-\phi^{2}\Box_{y}w\] (4.20) \[X\text{-energy}(\phi,S) =\int_{S}dn(J^{X}). \tag{4.21}\] Here, \(dn\) is the normal to \(S\). It should be noted that applications of divergence theorem do not introduce any additional factors with our choice of volume form and normal, i.e. \[\int_{t^{*}\in[t^{*}_{0},t^{*}_{1}]}K^{X,\omega}=-\int_{\Sigma_{t^{*}_{1}}}dn( J^{X,\omega})+\int_{\Sigma_{t^{*}_{2}}}dn(J^{X,\omega})-\int_{S_{[t^{*}_{2},t^{*}]}}dn(J^{X, \omega}), \tag{4.22}\] with similar equations holding for \(\Sigma_{u,v}\). For any \(T_{\mu\nu}\) obeying the dominant energy condition, \(X\) future pointing and causal, and \(S\) spacelike, then the \(X\)-energy is non-negative. For any pair of functions, \(\phi_{0},\phi_{1}\in C^{\infty}_{0}(\mathbb{L}_{\sharp_{\sharp}}^{*})\), and \(X\) a causal, future pointing vector, we define the \(X\) norm by \[\|(\phi_{0},\phi_{1})\|_{X,\Sigma_{\sharp_{\sharp}}}^{2}:=X\text{- energy}(\phi,\Sigma_{\sharp_{\sharp}}^{*})\qquad\text{for any $\phi$ s.t. }(\phi|_{\Sigma_{\sharp_{\sharp}}^{*}},\partial_{t}\cdot\phi|_{\Sigma_{\sharp_{ \sharp}}^{*}})=(\phi_{0},\phi_{1}). \tag{4.23}\] We similarly define for \(\phi_{0}\in C^{\infty}_{0}(\Sigma_{\infty_{0},\upsilon})\) \[\|\phi_{0}\|_{X,\Sigma_{\upsilon_{0},\upsilon}}^{2}:=X\text{- energy}(\phi,\Sigma_{\sharp_{\sharp}}^{*})\qquad\text{for any $\phi$ s.t. }\phi|_{\Sigma_{\sharp_{\sharp}}^{*}}=\phi_{0}. \tag{4.24}\] Note that for any causal, future pointing \(X\) which coincides with the timelike Killing vector field \(\partial_{t^{*}}\) in a neighbourhood of \(\mathcal{I}^{\pm}\), we have that the \(X\) norm is Lipschitz equivalent to the \(\dot{H}^{1}\) norm. For causal timelike vector \(X\), we define the following function spaces \[\mathcal{E}^{X}_{\Sigma_{\sharp_{\sharp}}^{*}}:=Cl_{X,\Sigma_{ \sharp_{\sharp}}^{*}}(C^{\infty}_{0}(\Sigma_{\sharp_{\sharp}}^{*})\times C^{ \infty}_{0}(\Sigma_{\sharp_{\sharp}}^{*})) \tag{4.25}\] \[\mathcal{E}^{X}_{\Sigma_{\upsilon_{0},\upsilon}}:=Cl_{X,\Sigma_{ \upsilon_{0},\upsilon}}(C^{\infty}_{0}(\Sigma_{\upsilon_{0},\upsilon_{0}})) \tag{4.26}\] where these closures are in \(H^{1}_{\text{loc}}\) with respect to the subsicepted norms. 
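The identities (4.16)-(4.22) are used throughout the energy estimates below. As a toy illustration (our own, in flat \(1+1\) dimensional space rather than on the RNOS background), the divergence-theorem bookkeeping for the Killing field \(X=\partial_{t}\) reduces to the familiar statement that the energy of a solution of the wave equation is conserved, since \(K^{X}=0\).

```python
# Flat 1+1-dimensional toy version of the energy identity behind (4.22):
# for X = d/dt the current has density e = (phi_t^2 + phi_x^2)/2 and momentum
# density p = phi_t * phi_x, and d_t e = d_x p on solutions of the wave equation.
import sympy as sp

t, x = sp.symbols("t x")
f, g = sp.Function("f"), sp.Function("g")
phi = f(t - x) + g(t + x)                  # general solution of phi_tt - phi_xx = 0

e = sp.Rational(1, 2)*(sp.diff(phi, t)**2 + sp.diff(phi, x)**2)   # energy density
p = sp.diff(phi, t)*sp.diff(phi, x)                               # momentum density

assert sp.simplify(sp.diff(e, t) - sp.diff(p, x)) == 0
print("the flat-space energy current is divergence free on solutions")
```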
For \(\psi_{0}\in C^{\infty}_{0}(\mathcal{I}^{\pm})\), we define \[\|\psi_{0}\|_{\partial_{t^{*}},\mathcal{I}^{\pm}}^{2}:=\int_{ \mathcal{I}^{\pm}}|\partial_{\epsilon}\psi_{0}|^{2}dvd\omega \tag{4.27}\] \[\|\psi_{0}\|_{\partial_{t^{*}},\mathcal{I}^{\pm}}^{2}:=\int_{ \mathcal{I}^{-}}|\partial_{\epsilon}\psi_{0}|^{2}dud\omega. \tag{4.28}\] Finally, we define the energy spaces \(\mathcal{E}^{\partial_{\star}}_{\mathcal{I}^{\pm}}\) by \[\mathcal{E}^{\partial_{\star}}_{\mathcal{I}^{\pm}}:=Cl_{\partial_{\star}, \mathcal{I}^{\pm}}(C^{\infty}_{0}(\mathcal{I}^{\pm})). \tag{4.29}\] ## 5 Existence and Uniqueness of Solutions Given smooth compactly supported initial data \(\phi=\phi_{0}\), \(\partial_{\tau^{*}}\phi=\phi_{1}\) on \(\Sigma_{\sharp_{\sharp}}\), \(t^{*}_{0}<t^{*}_{c}\), we have that there exists a unique smooth solution compactly supported on every \(\Sigma_{\tau^{*}}\). Thus when proving boundedness or decay results, we may assume that our solution has sufficiently many derivatives and that weighted integrals with weights growing in \(r\) converge. Then we can generalise results to all \(\dot{H}^{1}\) functions using that compactly supported smooth functions are dense within \(\dot{H}^{1}\) functions. **Theorem 5.1** (Existence of Smooth Solutions): _Let \(\phi_{0}\) and \(\phi_{1}\) smooth, compactly supported functions on \(\Sigma_{\sharp_{\sharp}}\), \(t^{*}_{0}\leq t^{*}_{c}\), such that \(\phi_{0}(r_{\text{f}}(t^{*}_{0}),\theta,\varphi)=0\) and \(\phi_{1}(r_{\text{f}}(t^{*}_{0}),\theta,\varphi)+\dot{r}_{\text{f}}(t^{*}_{0}) \partial_{\theta}\phi_{0}(r^{*}_{0}(t^{*}_{0}),\theta,\varphi)=0\). Then there exists a \(\phi\in C^{\infty}(\mathcal{M})\) with \(\phi|_{\Sigma_{\text{f}^{\prime}}}\in C^{\infty}_{0}(\Sigma_{\text{f}^{ \prime}})\) for all \(t^{*}\in\mathbb{R}\), such that_ \[\square_{\sharp}\phi =0 \tag{5.1}\] \[\phi(t^{*},r_{\text{f}}(t^{*}),\theta,\varphi) =0\quad\forall t^{*}\leq t^{*}_{c},\quad(\theta,\varphi)\in S^{2}\] (5.2) \[(\phi,\partial_{t^{*}}\phi)|_{\Sigma_{\sharp_{\sharp}}^{*}} =(\phi_{0},\phi_{1}). \tag{5.3}\] Proof.: For a proof, one can follow the proof of Theorem 5.1 in [1] almost exactly. ## 6 Energy Boundedness In this section we work towards proving Theorem 1, as stated in the overview. We will prove this in two sections. We will first prove that going forwards we have a uniform bound on the \(\dot{H}^{1}\) norm, i.e. there exists a constant \(C(\mathcal{M})\) such that \[\|\phi\|_{\dot{H}^{1}(\Sigma_{\sharp_{\sharp}}^{*})}\leq C\|\phi\|_{\dot{H}^{1 }(\Sigma_{\sharp_{\sharp}}^{*})}\quad\forall t^{*}_{1}\geq t^{*}_{0}. \tag{6.1}\] In the second section we will prove the analogous statement going backwards in time: \[\|\phi\|_{\dot{H}^{1}(\Sigma_{\sharp}^{*})}\leq C\|\phi\|_{\dot{H}^{1}(\Sigma_{ \sharp})}\quad\forall t^{*}_{0}\leq t^{*}_{1}\leq t^{*}_{c}. \tag{6.2}\] Note the backwards in time version includes a condition on \(t^{*}_{1}\leq t^{*}_{c}\), as, were \(t^{*}_{1}>t^{*}_{c}\), we can lose arbitrarily large amounts of energy across the event horizon. From here on in this paper, when we say _solution_, unless stated otherwise, we mean \(\phi\) which has finite \(\dot{H}^{1}(\Sigma_{\text{f}^{\prime}})\) norm for all \(t^{*}\), and is a solution of (1.1) in a distributional sense, i.e. \[\int_{\mathcal{M}}g^{ab}\partial_{\dot{H}}f\partial_{\dot{\phi}}\phi=0\quad \forall f\in C^{\infty}_{0}(\mathcal{M}). 
\tag{6.3}\] Again, note that smooth compactly supported solutions of (1.1) are dense within these functions with respect to the \(\dot{H}^{1}(\Sigma_{\text{f}^{\prime}})\) norm. The methods in this section will closely follow [1]. ### Finite in Time Boundedness We begin by proving a local in time bound on solutions of (1.1). **Theorem 6.1** (Finite in Time Energy Bound): _Given an RNOS model \(\mathcal{M}\), with associated \(M,q,r_{\mathrm{s}}\), \(\phi\) a solution of the wave equation (1.1) with boundary conditions (5.2), and a time interval \(t_{0}^{*}\leq t_{1}^{*}\leq t_{c}^{*}\), we have that there exists a constant \(C=C(\mathcal{M},t_{0}^{*})>0\) such that_ \[C^{-1}\|\phi\|_{\dot{H}^{1}(\Sigma_{t_{0}^{*}})}\leq\|\phi\|_{\dot{H}^{1}( \Sigma_{t_{1}^{*}})}\leq C\|\phi\|_{\dot{H}^{1}(\Sigma_{t_{0}^{*}})} \tag{6.4}\] Proof.: We start by proving the result for \(\phi\) compactly supported on each \(\Sigma_{t^{*}}\), as then the result can be extended to all \(\dot{H}^{1}\) functions by an easy density argument. We choose a vector field which is everywhere timelike, including on the surface of the matter cloud. We also choose this vector field to be tangent to the surface of the matter cloud. For example \[X=\partial_{t^{*}}+\dot{r}_{\mathrm{s}}(t^{*})\partial_{t}. \tag{6.5}\] Then we have that \[-dt^{*}(J^{X})=\frac{1}{2}\bigg{(} \left(1+\frac{2M}{r}-\frac{q^{2}M^{2}}{r^{2}}\right)(\partial_{t^ {*}}\phi)^{2}+2\left(1+\frac{2M}{r}-\frac{q^{2}M^{2}}{r^{2}}\right)\dot{r}_{ \mathrm{s}}(t^{*})\partial_{t^{*}}\phi\partial_{t^{*}}\phi\] \[+\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}+\frac{2M}{r}\left( 2-\frac{q^{2}M}{r}\right)|r_{\mathrm{s}}(t^{*})|\right)(\partial_{t}\phi)^{2} +\frac{1}{r^{2}}|\dot{\nabla}\phi|^{2}\bigg{)}. \tag{6.6}\] Note, in every RNOS model, when the matter cloud crosses \(r=r_{+}\), \(\dot{r}_{\mathrm{s}}(t^{*})\neq 0\), and as \(\dot{r}_{\mathrm{s}}(t^{*})\in(-1,0]\), we have that there exists a time independent constant \(A=A(\mathcal{M})\) such that \[A^{-1}\|\phi\|_{\dot{H}^{1}(\Sigma_{t^{*}})}^{2}\leq-\int_{\Sigma_{t^{*}}}dt^ {*}(J^{X})\leq A\|\phi\|_{\dot{H}^{1}(\Sigma_{t^{*}})}^{2} \tag{6.7}\] for all \(t^{*}\leq t_{c}^{*}\). Then we look at the energy current through the surface of the matter cloud \[d\rho(J^{X})=0 \tag{6.8}\] once we notice that on the surface of the matter cloud \(X^{*}\nabla_{*}\phi=0\) and \(d\rho(X)=0\) for \(d\rho\) the normal to the surface of the matter cloud. If we then calculate \(K^{X}\), we get: \[|K^{X}|= \left|\left(1+\frac{M}{r}\right)\frac{\dot{r}_{\mathrm{s}}(t^{* })}{r}(\partial_{t^{*}}\phi)^{2}-\left(\frac{2M\dot{r}_{\mathrm{s}}(t^{*})}{r ^{2}}+\left(1+\frac{2M}{r}-\frac{q^{2}M^{2}}{r^{2}}\right)\ddot{r}_{\mathrm{s }}(t^{*})\right)\partial_{t^{*}}\phi\partial_{t^{*}}\phi\right.\] \[\left.-\left(\left(1-\frac{M}{r}\right)\frac{\dot{r}_{\mathrm{s} }(t^{*})}{r}-\left(\frac{2M}{r}-\frac{q^{2}M^{2}}{r^{2}}\right)\ddot{r}_{ \mathrm{s}}(t^{*})\right)(\partial_{t^{*}}\phi)^{2}\right|\leq B\left(|\dot{r }_{\mathrm{s}}|+|\ddot{r}_{\mathrm{s}}|\right)\|\phi\|_{\dot{H}^{1}(\Sigma_{t ^{*}})}^{2}. \tag{6.9}\] Define \[f(t^{*}):=-\int_{\Sigma_{t^{*}}}dt^{*}(J^{X})=\|\phi\|_{X,\Sigma_{t^{*}}}^{2}. 
\tag{6.10}\] Now, if we integrate \(K^{X}\) in the region \(t^{*}\in[t_{0}^{*},t_{1}^{*}]\) and apply (4.22), we obtain \[f(t_{1}^{*})-B\int_{t_{0}^{*}}^{t_{1}^{*}}(|\dot{r}_{\mathrm{s}}|+|\ddot{r}_{ \mathrm{s}}|)\,f(t^{*})dt^{*}\leq f(t_{0}^{*})\leq f(t_{1}^{*})+B\int_{t_{0}^{* }}^{t_{1}^{*}}(|\dot{r}_{\mathrm{s}}|+|\ddot{r}_{\mathrm{s}}|)\,f(t^{*})dt^{ *}. \tag{6.11}\] Then an application of Gronwall's Inequality to each of the inequalities in (6.11), and applying (6.7) to the result gives us that \[\left(Ae^{B\int_{t_{0}^{*}}^{t_{1}^{*}}(|\dot{r}_{\mathrm{s}}|+|\dot{r}_{ \mathrm{s}}|)dt^{*}}\right)^{-1}\|\phi\|_{\dot{H}^{1}(\Sigma_{t_{0}^{*}})}^{2} \leq\|\phi\|_{\dot{H}^{1}(\Sigma_{t_{1}^{*}})}^{2}\leq\left(Ae^{B\int_{t_{0}^{* }}^{t_{1}^{*}}(|\dot{r}_{\mathrm{s}}|+|\dot{r}_{\mathrm{s}}|)dt^{*}}\right)\| \phi\|_{\dot{H}^{1}(\Sigma_{t_{0}^{*}})}^{2}. \tag{6.12}\] Letting \(C^{2}=Ae^{B\int_{t_{0}^{*}}^{t_{1}^{*}}(|\dot{r}_{\mathrm{s}}|+|\dot{r}_{ \mathrm{s}}|)dt^{*}}\) gives us the required result. ### Fixed Boundary Case We now prove a uniform boundedness result for the case \(r_{\mathrm{s}}\) is constant for \(t^{*}\leq t_{-}^{*}\)..: **Theorem 6.2** (Uniform in Time Energy Bound for the Fixed Boundary Case): _Given an RNOS model \(\mathcal{M}\) and associated \(M\), \(q\) and \(r_{\mathrm{s}}\), with \(r_{\mathrm{s}}(t^{*})\) constant for \(t^{*}\leq t_{-}^{*}\), and \(\phi\) a solution of the wave equation (1.1) with boundary conditions (5.2), we have that there exists a constant \(C=C(\mathcal{M})>0\) such that_ \[C^{-1}\|\phi\|_{\dot{H}^{1}(\Sigma_{t_{0}^{*}})}\leq\|\phi\|_{\dot{H}^{1}( \Sigma_{t_{1}^{*}})}\leq C\|\phi\|_{\dot{H}^{1}(\Sigma_{t_{0}^{*}})}\quad\forall t _{0}^{*}\leq t_{1}^{*}\leq t_{c}^{*}. \tag{6.13}\] Proof.: This proof is identical to the proof for Theorem 6.1. By the definition of the fixed boundary case, the integral \(\int_{-\infty}^{t_{-}^{*}}(|\dot{r}_{\mathrm{s}}|+|\ddot{r}_{\mathrm{s}}|)\) converges. Thus, the constant given by Theorem 6.1 is actually a uniform in time bound for all \(t_{0}^{*}<t_{-}^{*}\), i.e. take \[C^{2}=Ae^{B\int_{-\infty}^{t_{-}^{*}}(|\dot{r}_{\mathrm{s}}|+|\ddot{r}_{\mathrm{s }}|)dt^{*}}. \tag{6.14}\] ### Expanding Boundary Case The expanding boundary case is much more difficult than the fixed boundary case, so we will break this boundedness result into two different Theorems. We will start with the forward bound: **Theorem 6.3** (Uniform Forward in Time Energy Bound for the Expanding Boundary Case): _Given an RNOS model \(\mathcal{M}\) with associated \(M,q,r_{h}\) with \(r_{h}(t^{*})\to\infty\) as \(t^{*}\to-\infty\), and a solution \(\phi\) of the wave equation (1.1) with boundary conditions (5.2), we have that there exists a constant \(C=C(\mathcal{M})>0\) such that_ \[\|\phi\|_{\dot{H}^{1}(\Sigma_{q}^{*})}\leq C\|\phi\|_{\dot{H}^{1}(\Sigma_{q}^{ *})}\quad\forall t_{0}^{*}\leq t_{1}^{*}\leq t_{c}^{*}. \tag{6.15}\] Proof.: We will proceed similarly to Theorem 6.1, but we will take the vector field \[X=\partial_{t^{*}}. 
\tag{6.16}\] Then we obtain the following results: \[K^{X} =0 \tag{6.17}\] \[-dt^{*}(J^{X}) =\frac{1}{2}\Bigg{(}\left(1+\frac{2M}{r}-\frac{q^{2}M^{2}}{r^{2 }}\right)(\partial_{t^{*}}\phi)^{2}+\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^ {2}}\right)(\partial_{t}\phi)^{2}+\frac{1}{r^{2}}|\dot{\nabla}\phi|^{2}\Bigg{)}\] (6.18) \[d\rho(J^{X}) =-\frac{\dot{r}_{h}(t^{*})}{2}(1+\dot{r}_{h}(t^{*}))\left(\left( 1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right)-\left(1+\frac{2M}{r}-\frac{q^{ 2}M^{2}}{r^{2}}\right)\dot{r}_{h}(t^{*})\right)(\partial_{r}\phi)^{2}\geq 0, \tag{6.19}\] recalling that \(\dot{r}_{h}\in(-1,0]\). Now if we take an arbitrary \(t_{0}^{*}<t_{c}^{*}\), then in the region \(t^{*}\leq t_{0}^{*}\), \(r\geq r_{h}(t^{*})\geq r_{h}(t^{*}_{0})>r_{+}\). Thus there exists an \(\epsilon>0\) such that \[1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\geq\epsilon. \tag{6.20}\] Therefore in the region \(t^{*}\leq t_{0}^{*}\) \[\epsilon\|\phi\|_{\dot{H}^{1}(\Sigma_{q^{*}})}^{2}\leq-\int_{\Sigma_{q^{*}}} dt^{*}(J^{X})\leq\|\phi\|_{\dot{H}^{1}(\Sigma_{q^{*}})}^{2}. \tag{6.21}\] Then, as before, we integrate \(K^{X}\) in a region \(t^{*}\in[t_{1}^{*},t_{2}^{*}]\) for \(t_{1}^{*}\leq t_{2}^{*}\leq t_{0}^{*}\), and apply (4.22). Once we note that the boundary term has the correct sign, we have \[\|\phi\|_{\dot{H}^{1}(\Sigma_{q})}^{2}\leq\epsilon^{-1}\|\phi\|_{\dot{H}^{1}( \Sigma_{q^{*}})}^{2}. \tag{6.22}\] An application of Theorem 6.1 will then allow us to extend our bound over the remaining finite interval \([t_{0}^{*},t_{c}^{*}]\) to obtain the required result. Now we look at obtaining the backward in time bound: **Theorem 6.4** (Uniform Backward in Time Energy Bound for Expanding Boundary Case): _Given an RNOS model \(\mathcal{M}\) with associated \(M\), \(q\) and \(r_{h}\) with \(r_{h}\to\infty\) as \(t^{*}\to-\infty\), and a solution \(\phi\) of the wave equation (1.1) with boundary conditions (5.2), we have that there exists a constant \(C=C(\mathcal{M})>0\) such that_ \[\|\phi\|_{\dot{H}^{1}(\Sigma_{q})}\leq C\|\phi\|_{\dot{H}^{1}(\Sigma_{q})} \quad\forall t_{0}^{*}\leq t_{1}^{*}\leq t_{c}^{*}. \tag{6.23}\] Proof.: For this proof, we will need to use the modified currents, as defined in (4.19). Given \(\dot{r}_{h}\geq-1+\epsilon\), let \(b\in[1-\epsilon,1)\). Looking in the region \(t^{*}<0\), we will use the vector field and modifier \[X =\partial_{t^{*}}-b\partial_{r} \tag{6.24}\] \[w =-\frac{b}{2r}. 
\tag{6.25}\] We then calculate \[\left|\int_{\Sigma_{q^{*}}}K^{X,w}\right|\leq \int_{\Sigma_{q^{*}}}\left|\frac{bM}{r^{2}}\left(1-\frac{q^{2}M}{r }\right)(\partial_{t^{*}}\phi)^{2}+\frac{bM}{r^{2}}\left(1-\frac{q^{2}M}{r} \right)(\partial_{r}\phi)^{2}-\frac{2bM}{r^{2}}\left(1-\frac{q^{2}M}{r}\right) \partial_{t^{*}}\phi\partial_{r}\phi\right.\] \[\left.-\frac{b}{r^{3}}|\dot{\nabla}\phi|^{2}-\frac{b}{r^{4}} \left(1-\frac{q^{2}M}{r}\right)\phi^{2}\right|\leq\frac{1}{r_{h}(t^{*})}\left( \|\phi\|_{\dot{H}^{1}(\Sigma_{q^{*}})}^{2}+\int_{\Sigma_{q^{*}}}\frac{\phi^{2 }}{r^{2}}\right)\] \[d\rho(J^{X,w}) =-\frac{1}{2}\left(b+\dot{r}_{h}(t^{*})\right)(1+\dot{r}_{h}(t^{*} ))\left(\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right)-\left(1+\frac{2M }{r}-\frac{q^{2}M^{2}}{r^{2}}\right)\dot{r}_{h}(t^{*})\right)(\partial_{r} \phi)^{2} \tag{6.26}\] \[-dt^{*}(J^{X,w})= \frac{1}{2}\Bigg{(}\left(1+\frac{2M}{r}-\frac{q^{2}M^{2}}{r^{2} }\right)(\partial_{t^{*}}\phi)^{2}-2b\left(1+\frac{2M}{r}-\frac{q^{2}M^{2}}{ r^{2}}\right)\partial_{t^{*}}\phi\partial_{t^{*}}\phi\] (6.27) \[+\left(\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right)+\frac{ 2bM}{r}\left(2-\frac{q^{2}M}{r}\right)\right)(\partial_{r}\phi)^{2}+\frac{1}{r ^{2}}|\dot{\nabla}\phi|^{2}\] (6.28) \[+\frac{2bM}{r}\left(2-\frac{q^{2}M}{r}\right)\frac{\phi}{r} \partial_{r}\phi-2b\left(1+\frac{2M}{r}-\frac{q^{2}M^{2}}{r^{2}}\right)\frac{ \phi}{r}\partial_{t^{*}}\phi+\frac{bM}{r}\left(2-\frac{q^{2}M}{r}\right)\frac{ \phi^{2}}{r^{2}}\Bigg{)}.\] We can see that \(d\rho(J^{X},w)\geq 0\), for sufficiently negative \(t^{*}\), since \(b+\dot{r}_{b}(t^{*})\geq 0\). Next, let us consider \(-dt^{*}(J^{X,w})\). Integrating over \(\Sigma_{t^{*}}\), we obtain: \[-\int_{\Sigma_{t^{*}}}dt^{*}(J^{X,w}) =\frac{1}{2}\int_{\Sigma_{t^{*}}}\Bigg{(}\left(1+\frac{2M}{r}- \frac{q^{2}M^{2}}{r^{2}}\right)(\partial_{t^{*}}\phi)^{2}-2b\left(1+\frac{2M}{ r}-\frac{q^{2}M^{2}}{r^{2}}\right)\partial_{r^{*}}\phi\partial_{r^{*}}\phi\] \[\qquad\qquad+\left(\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}} \right)+\frac{2bM}{r}\left(2-\frac{q^{2}M}{r}\right)\right)(\partial_{r}\phi) ^{2}+\frac{1}{r^{2}}\|\dot{\nabla}\phi|^{2}\] \[\qquad\qquad+\frac{2bM}{r}\left(2-\frac{q^{2}M}{r}\right)\frac{ \partial}{r}\partial_{r^{*}}\phi-2b\left(1+\frac{2M}{r}-\frac{q^{2}M^{2}}{r^{2 }}\right)\frac{\phi}{r}\partial_{r^{*}}\phi+\frac{bM}{r}\left(2-\frac{q^{2}M }{r}\right)\frac{\phi^{2}}{r^{2}}\Bigg{)}\] \[=\frac{1}{2}\int_{\Sigma_{t^{*}}}\Bigg{(}\left(1+\frac{2M}{r}- \frac{q^{2}M^{2}}{r^{2}}\right)(1-b)(\partial_{t^{*}}\phi)^{2}+b\left(1+\frac{ 2M}{r}-\frac{q^{2}M^{2}}{r^{2}}\right)\left(\left(\partial_{r^{*}}\phi- \partial_{r^{*}}\phi-\frac{\phi}{r}\right)^{2}-2\frac{\phi}{r}\partial_{r^{*}} \phi\right)\] \[\qquad\qquad+\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right) (1-b)(\partial_{r}\phi)^{2}+\frac{1}{r^{2}}\|\dot{\nabla}\phi|^{2}+\frac{bM}{ r}\left(2-\frac{q^{2}M}{r}\right)\left(\frac{\phi}{r}+\partial_{r^{*}}\phi \right)^{2}\Bigg{)} \tag{6.29}\] \[=\frac{1}{2}\int_{\Sigma_{t^{*}}}\Bigg{(}\left(1+\frac{2M}{r}- \frac{q^{2}M^{2}}{r^{2}}\right)(1-b)(\partial_{t^{*}}\phi)^{2}+b\left(1+\frac {2M}{r}-\frac{q^{2}M^{2}}{r^{2}}\right)\left(\partial_{t^{*}}\phi-\partial_{r ^{*}}\phi-\frac{\phi}{r}\right)^{2}\] \[\qquad\qquad+b\left(1+\frac{q^{2}M^{2}}{r^{2}}\right)\frac{\phi^{ 2}}{r^{2}}+\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right)(1-b)(\partial _{r}\phi)^{2}+\frac{1}{r^{2}}\|\dot{\nabla}\phi|^{2}\] \[\qquad\qquad+\frac{bM}{r}\left(2-\frac{q^{2}M}{r}\right)\left( 
\frac{\phi}{r}+\partial_{r^{*}}\phi\right)^{2}\Bigg{)}.\] We then note a version of Hardy's inequality. If \(h\) is a differentiable function of one variable, with \(h(0)=0\), then \[\exists C>0\ s.t.\ \int_{\Sigma_{t^{*}}}\left(\frac{h(r)}{r}\right)^{2}\leq C \int_{\Sigma_{t^{*}}}\left(\partial_{r}h(r)\right)^{2}. \tag{6.30}\] Using (6.30), we have that there exists a \(t^{*}\) independent constant \(A\) such that \[0\leq\int_{\Sigma_{t^{*}}}\left(b\left(1+\frac{2M}{r}-\frac{q^{2} M^{2}}{r^{2}}\right)\left(\partial_{t^{*}}\phi-\partial_{r^{*}}\phi-\frac{ \phi}{r}\right)^{2}+b\left(1+\frac{q^{2}M^{2}}{r^{2}}\right)\frac{\phi^{2}}{r ^{2}}+\frac{1}{r^{2}}\|\dot{\nabla}\phi|^{2}\right. \tag{6.31}\] \[\left.+\frac{bM}{r}\left(2-\frac{q^{2}M}{r}\right)\left(\frac{ \phi}{r}+\partial_{r^{*}}\phi\right)^{2}\right)\leq A\|\phi\|_{H^{*}(\Sigma_{t ^{*}})}^{2}\] Note \(b\leq 1\). Thus, provided we are away from \(t^{*}_{e}\), there exists a \(t^{*}\) independent \(\epsilon\) such that \[\epsilon\|\phi\|_{H^{*}(\Sigma_{t^{*}})}^{2}\leq-\int_{\Sigma_{t^{*}}}dt^{*}(J ^{X,w})\leq\epsilon^{-1}\|\phi\|_{H^{*}(\Sigma_{t^{*}})}^{2}. \tag{6.32}\] If we again let \(f(t^{*}):=-\int_{\Sigma_{t^{*}}}dt^{*}(J^{X,w})\), we can apply (4.22) to see that \[f(t^{*}_{0})\leq f(t^{*}_{1})+A\int_{t^{*}_{0}}^{t^{*}_{1}}\frac{f(t^{*})}{r_{b }(t^{*})^{2}}dt^{*}. \tag{6.33}\] Thus we can apply Gronwall's Inequality to \(f\), along with the condition on the expanding boundary case that \(\int r_{b}^{-2}(t^{*})dt^{*}\) converges to obtain \[f(t^{*})\leq f(t^{*}_{0})e^{\int_{-\infty}^{t^{*}_{0}}\frac{1}{r_{b}(t^{*})^{2} }dt^{*}}, \tag{6.34}\] for all \(t^{*}<t^{*}_{0}\). Applying (6.32), we obtain \[\|\phi\|_{H^{*}(\Sigma_{t^{*}_{1}})}^{2}\leq A\|\phi\|_{H^{*}(\Sigma_{t^{*}})}^{2 }\quad\forall t^{*}_{1}\leq t^{*}_{0}\leq t^{*}_{2}. \tag{6.35}\] Combining (6.35) with Theorem 6.1 for the interval \([t^{*}_{0},t^{*}_{e}]\) gives us the final result. ## 7 The Scattering Map We now consider bounds on the radiation fields. We will be considering the maps \(\mathcal{G}^{+}\) and \(\mathcal{F}^{-}\), which take data from \(\Sigma_{t^{*}_{e}}\) to data on \(\mathcal{I}^{+}\) and \(\mathcal{I}^{-}\) respectively. We will also consider their inverses (where defined), \(\mathcal{G}^{-}\) and \(\mathcal{F}^{+}\), which take data from \(\mathcal{I}^{+}\) and \(\mathcal{I}^{-}\) respectively to \(\Sigma_{t^{*}_{e}}\). We will look at obtaining boundedness or non-boundedness for these. Finally, we will define the scattering map, \(\mathcal{S}^{+}:=\mathcal{G}^{+}\circ\mathcal{F}^{+}\), and consider boundedness results for this. ### Existence of Radiation Fields To look at these maps, we will first need a definition of radiation field. We will then need to show it exists for all finite energy solutions of the wave equation. **Proposition 7.1** (Existence of the Backwards Radiation Field): _Given \(\phi\) a solution to the wave equation (1.1) with boundary conditions (5.2), there exist \(\psi_{+,-}\) such that_ \[r(u,v)\phi(u,v,\theta,\varphi)\xrightarrow[u\to-\infty]{H^{*}_{\text{loc}}}\psi_{ -}(v,\theta,\varphi) \tag{7.1}\] \[r(u,v)\phi(u,v,\theta,\varphi)\xrightarrow[v\to\infty]{H^{*}_{\text{loc}}}\psi_{+}(u, \theta,\varphi). \tag{7.2}\] Proof.: This existence has been done many times before, see for example [6, 1]. 
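As a flat-space toy model of the radiation field in Proposition 7.1 (our own illustration, not needed for the proofs), the rescaled field \(r\phi\) of an outgoing \(\ell=1\) solution of the Minkowski wave equation converges along curves of fixed retarded time \(u=t-r\) as \(v=t+r\to\infty\), with error decaying like \(1/r\).

```python
# Toy radiation field: phi = (f'(t - r) + f(t - r)/r) * cos(theta) / r solves the
# flat wave equation, and r*phi -> f'(u) * cos(theta) as v -> infinity at fixed u.
import numpy as np

def f(s):  return np.exp(-s**2)          # smooth, rapidly decaying profile
def fp(s): return -2*s*np.exp(-s**2)     # its derivative f'

u, costheta = 0.5, 1.0                   # fixed retarded time and a fixed angle
for v in [1e1, 1e2, 1e3, 1e4]:
    t, r = (v + u)/2, (v - u)/2
    rphi = (fp(t - r) + f(t - r)/r) * costheta
    print(f"v = {v:8.0f}   r*phi - f'(u)cos(theta) = {rphi - fp(u)*costheta:.3e}")
# the error decays like 1/r, so the limit defining the radiation field exists
```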
### Backwards Scattering from \(\Sigma_{t_{2}^{*}}\) Now we have existence of the radiation field, we define the following map: \[\mathcal{F}^{-}: \mathcal{E}^{X}_{t_{2}^{*}} \longrightarrow \mathcal{F}^{-}\left(\mathcal{E}^{X}_{t_{2}^{*}}\right)\subset H^ {1}_{loc}(\mathcal{I}^{-})\] \[(\phi|_{\Sigma_{t_{2}^{*}}},\partial_{t^{*}}\phi|_{\Sigma_{t_{2 }^{*}}})\mapsto \psi_{-}\] where the \(\psi_{-}\) is as defined in Proposition 7.1, and the \(X\) is any everywhere timelike vector field (including on the event horizon) which coincides with the timelike Killing vector field \(\partial_{t^{*}}\) for sufficiently large \(r\). An \(X\) with these properties is chosen, so that the \(X\) norm is equivalent to the \(\dot{H}^{1}\) norm. We define the inverse of \(\mathcal{F}^{-}\) (once injectivity is established on the image of \(\mathcal{F}^{-}\)) as \(\mathcal{F}^{+}\). * Firstly, we will show \(\mathcal{F}^{-}\) is bounded. (Proposition 7.2) * Then we will show that \(\mathcal{F}^{+}\), if it can be defined, would be bounded, which gives us that \(\mathcal{F}^{-}\) is injective. (Proposition 7.3) * Finally, we show that \(Im(\mathcal{F}^{-})\) is dense in \(\mathcal{E}^{T}_{\mathcal{I}^{-}}\). (Proposition 7.5) We will then combine these results in Theorem 7.6 to obtain that \(\mathcal{F}^{-}\) is a linear, bounded bijection with bounded inverse between the spaces \(\mathcal{E}^{X}_{t_{2}^{*}}\) and \(\mathcal{E}^{\partial_{t^{*}}}_{\mathcal{I}^{-}}\). We will begin with the following: **Proposition 7.2** (Boundedness of \(\mathcal{F}^{-}\)): _There exists a constant \(A(\mathcal{M})\) such that_ \[\|\mathcal{F}^{-}(\phi)\|^{2}_{\mathbb{A}_{u},\mathcal{I}^{-}}=\int_{ \mathcal{I}^{-}}(\partial_{u}(\mathcal{F}^{-}(\phi)))^{2}dvd\omega\leq A\| \phi\|^{2}_{\dot{H}^{1}(\Sigma_{t_{2}^{*}})}. \tag{7.4}\] Proof.: We will first prove this for compactly supported smooth functions, and then extend to \(H^{1}\) functions using a density argument. Let \(X,w\) and \(t_{2}^{*}\) be as in the proof of Theorem 6.4. Let \(\phi\) be smooth and compactly supported on \(\Sigma_{t_{2}^{*}}\). Take \(v_{0}\) large enough such that on \(\Sigma_{t_{2}^{*}}\), \(\phi\) is only supported on \(v\leq v_{0}\). We integrate \(K^{X,w}\), in the region \(D_{v_{1}}=\{v\in[v_{1},v_{0}],t^{*}\leq t_{2}^{*}\}\), for any \(v_{1}\leq v_{0}\). We then apply generalised Stokes' Theorem in \(D_{v_{1}}\) to obtain the following boundary terms: \[-\int_{\Sigma_{t_{2}^{*}}^{*}}dt^{*}(J^{X,w}) =\int_{D_{v_{1}}}K^{X,w}-\int_{\{v=v_{1}\}}dv(J^{X,w})+\int_{S_{v _{1}^{*},v_{2}^{*}}}d\rho(J^{X,w})-\lim_{u_{0}\to-\infty}\int_{\{v=u_{0}\} \cap[v_{1},v_{0}]}du(J^{X},w)\] \[\geq-\int_{\{v=u_{0}\}}dv(J^{X,w})-\lim_{u_{0}\to-\infty}\int_{ \{u=u_{0}\}\cap[v_{1},v_{0}]}du(J^{X,w}) \tag{7.5}\] where \(t_{1}^{*}\) is the value of \(t^{*}\) at the sphere where \(\{v=v_{1}\}\) intersects \(\{r=v_{0}(t^{*})\}\). 
\[-\int_{\{v=v_{1}\}}dv(J^{X,w})=\int_{\{v=v_{1}\}}\frac{b}{2} \left(\partial_{t^{*}}\phi-\partial_{t}\phi-\frac{\phi}{r}\right)^{2}\] \[\qquad\qquad\qquad\qquad+\frac{1}{2}\left(\left(1-\frac{2M}{r}+ \frac{q^{2}M^{2}}{r^{2}}\right)f(t^{*})+\left(\frac{2M}{r}-\frac{q^{2}M^{2}}{ r^{2}}\right)b\right)(\partial_{t^{*}}\phi-\partial_{t}\phi)^{2}\geq 0 \tag{7.6}\] \[-\int_{\{v=u_{0}\}\cap[v_{1},v_{0}]}du(J^{X,w})= \int_{\{v=u_{0}\}\cap[v_{1},v_{0}]}\frac{f(t^{*})-b}{2}\left(1- \frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right)\left(\left(1+\frac{2M}{r}-\frac {q^{2}M^{2}}{r^{2}}\right)\partial_{t^{*}}\phi-\left(1-\frac{2M}{r}+\frac{q^{ 2}M^{2}}{r^{2}}\right)\partial_{t^{*}}\phi\right)^{2}\] (7.7) \[+\frac{b}{r}\phi\left(\frac{1+\frac{2M}{r}-\frac{q^{2}M^{2}}{r^{2 }}}{1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}}\partial_{t}\phi-\partial_{t^{*}} \phi\right)-\frac{b\phi^{2}}{2^{2}}+\frac{\left(1-\frac{2M}{r}+\frac{q^{2}M^{2 }}{r^{2}}\right)f(t^{*})-\left(1+\frac{2M}{r}-\frac{q^{2}M^{2}}{r^{2}}\right) b}{2r^{2}\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right)}| \widehat{\Psi}\phi|^{2}\] However, as we know that \(r\phi\) tends to an \(H^{1}_{loc}\) function, and the volume form on \(\{u=u_{0}\}\) is \(r^{2}\), we can see that the terms in (7.7) with a factor of \(\phi\) tend to \(0\) as \(u_{0}\to 0\). Similarly, by applying the rotational Killing fields \(\Omega_{i}\) (defined in (3.31)) to \(\phi\), we can see \(r\Omega_{1}\phi\) has an \(H^{1}_{loc}\) limit. Thus terms in (7.7) involving \(\widehat{\Psi}\phi\) will also tend to \(0\) in the limit \(u_{0}\to\infty\). Thus in the limit \(u_{0}\to\infty\) (and therefore \(r\to\infty\), \(t^{*}\to-\infty\)) we obtain: \[-\lim_{u_{0}\to\infty}\int_{\{v=u_{0}\}\cap[v_{1},v_{0}]}du(J^{X,w}) =\lim_{u_{0}\to\infty}\int_{\{u=u_{0}\}\cap[v_{1},v_{0}]}\frac{1-b}{4} \left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right)\left(\partial_{t^{*}}\phi- \partial_{r}\phi\right)^{2}r^{2}dvd\omega \tag{7.8}\] \[=\lim_{u_{0}\to\infty}\int_{\{u=u_{0}\}\cap[v_{1},v_{0}]}\frac{1-b}{ 4}\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right)\left(\partial_{t^{*}}(r \phi)-\partial_{r}(r\phi)\right)^{2}dvd\omega\] \[=\int_{\mathcal{I}^{-}\cap[v_{1},v_{0}]}\frac{1-b}{4}\left(\partial_ {t^{*}}\psi_{-}-\partial_{r}\psi_{-}\right)^{2}dvd\omega\geq\epsilon\int_{ \mathcal{I}^{-}\cap[v_{1},v_{0}]}\left(\partial_{v}\psi_{-}\right)^{2}dvd\omega\] where to get from the first line to the second, we have ignored terms of order \(\phi\), as these tend to \(0\). Substituting (7.8) and (7.6) into (7.5), and noting that \(-\int_{\Sigma_{t_{2}^{*}}}dt^{*}(J^{X,w})\) can be bounded by the \(\dot{H}^{1}\) norm, we have that: \[\int_{\mathcal{I}^{-}\cap[v_{0},v_{1}]}(\partial_{v}(\mathcal{F}^{-}(\phi)))^{2 }dvd\omega\leq A\|\phi\|^{2}_{\dot{H}^{1}(\Sigma_{t_{2}^{*}})}, \tag{7.9}\] where \(A\) is independent of \(v_{1}\). Thus taking a limit as \(v_{1}\to-\infty\) and imposing Theorem 6.1 in the region \([t_{2}^{*},t_{2}^{*}]\) gives us the result of the theorem. We then move on to showing \(\mathcal{F}^{+}\), if it exists, would be bounded: **Proposition 7.3** (Boundedness of \(\mathcal{F}^{+}\)): _There exists a constant \(A\) such that_ \[\|\phi\|_{H^{1}(\Sigma_{n})}^{2}\leq A\int_{\Sigma_{n}}(\partial_{v}(\mathcal{F }^{-}(\phi)))^{2}dvd\omega. \tag{7.10}\] To prove this, we will first need to show a decay result: **Lemma 7.4** (Decay of Solutions Along a Null Foliation): _Let \(\phi\) be a solution to (1.1) with boundary conditions (5.2). 
Then_ \[\lim_{v\to\infty}\int_{\Sigma_{n}}dv(J^{\partial_{v^{*}}})=0. \tag{7.11}\] Proof.: We first show the result for \(\phi\) compactly supported on some \(\Sigma_{n_{1}}\), and then extend the result by a density argument. Firstly, we calculate \(-dv(J^{\partial_{v^{*}}})\) and \(-du(J^{\partial_{v^{*}}})\). \[-\int_{\Sigma_{n_{0}}}dv(J^{\partial_{v^{*}}}) =\frac{1}{2}\int_{\Sigma_{n_{0}}}\left(1-\frac{2M}{r}+\frac{q^{2} M^{2}}{r^{2}}\right)(\partial_{v}\phi-d_{r}\phi)^{2}+\frac{1}{r^{2}}| \mathring{\nabla}\phi|^{2}\] \[=\int_{\Sigma_{n_{0}}}2\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^ {2}}\right)^{-1}(\partial_{v}\phi)^{2}+\frac{1}{2r^{2}}|\mathring{\nabla}\phi |^{2}=\int_{\Sigma_{n_{0}}}(\partial_{v}\phi)^{2}+\frac{1-\frac{2M}{r}+\frac{q ^{2}M^{2}}{r^{2}}}{4r^{2}}|\mathring{\nabla}\phi|^{2}r^{2}dud\omega \tag{7.12}\] \[-\int_{\Sigma_{n_{0}}\cap[v_{0},v_{1}]}du(J^{\partial_{v^{*}}}) =\frac{1}{2}\int_{\Sigma_{n_{0}}\cap[v_{0},v_{1}]}\left(1-\frac{2M }{r}+\frac{q^{2}M^{2}}{r^{2}}\right)\left(\left(\frac{1+\frac{2M}{r}-\frac{q ^{2}M^{2}}{r}}{1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}}\right)\partial_{v} \phi-\partial_{r}\phi\right)^{2}+\frac{1}{r^{2}}|\mathring{\nabla}\phi|^{2}\] (7.13) \[=\int_{\Sigma_{n_{0}}\cap[v_{0},v_{1}]}(\partial_{v}\phi)^{2}+ \frac{1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}}{4r^{2}}|\mathring{\nabla}\phi| ^{2}r^{2}dvd\omega\geq 0\] Integrating \(K^{\partial_{v^{*}}}\) in the area \(D_{v_{0}}:=\{v\in[v_{0},v_{1}]\}\cap\{u\geq u_{0}\}\), using (6.17), (6.19), and then letting \(u_{0}\to-\infty\) gives us that \[-\int_{\Sigma_{n_{1}}}dv(J^{\partial_{v^{*}}})\leq-\int_{\Sigma_{n_{0}}}dv(J^ {\partial_{v^{*}}}). \tag{7.14}\] We then proceed to use the \(r^{p}\) method [23]. We consider the wave operator applied to \(r\phi\): \[\frac{4\partial_{v}\partial_{v}(r\phi)}{1-\frac{2M}{r^{2}}+\frac{q^{2}M}{r^{2 }}}=\frac{1}{r^{2}}\mathring{\Delta}(r\phi)-\frac{2M}{r^{3}}\left(1-\frac{q^{2 }M}{r}\right)r\phi. 
\tag{7.15}\] We apply (7.15) to the following integral over \(D_{v_{0}}\): \[\int_{\Sigma_{n_{1}}}\left(\frac{2r}{1-\frac{2M}{r}+\frac{q^{2}M }{r^{2}}}(\partial_{u}(r\phi))^{2}\right)dud\omega\geq\left(\int_{\Sigma_{n_{1 }}}-\int_{\Sigma_{n_{0}}}-\int_{\mathring{\Sigma}_{n_{0}}\cap i_{1}}\right) \left(\frac{2r}{1-\frac{2M}{r}+\frac{q^{2}M}{r^{2}}}(\partial_{u}(r\phi))^{2 }\right)dud\omega\] \[= \int_{D_{v_{0}}}\frac{4r\partial_{u}(r\phi)\partial_{u}\partial_{ v}(r\phi)}{1-\frac{2M}{r^{2}}+\frac{q^{2}M}{r^{2}}}+2\left(\partial_{u}(r\phi) ^{2}\right)^{2}\partial_{v}\left(\frac{1}{r}-\frac{2M}{r^{2}}+\frac{q^{2}M^{2} }{r^{2}}\right)^{-1}dudvd\omega\] \[= \int_{D_{v_{0}}}\left(\frac{1}{r}\mathring{\Delta}(r\phi)-\frac{2 M}{r^{2}}\left(1-\frac{q^{2}M}{r}\right)r\phi\right)\partial_{u}(r\phi)+\frac{ 1}{\left(1-\frac{4M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right)}\left(\partial_{u}(r \phi)\right)^{2} \tag{7.16}\] \[= \int_{D_{v_{0}}}\left(\frac{1}{r}-\frac{4M}{r}+\frac{q^{2}M^{2} }{r^{2}}\right)\left(\partial_{u}(r\phi)\right)^{2}-\frac{1}{2r}\partial_{u} \left(\left|\mathring{\nabla}r\phi\right|^{2}\right)-\frac{M}{r^{2}}\left(1- \frac{q^{2}M}{r}\right)\partial_{u}(\left(r\phi\right)^{2})dudvd\omega\] \[\geq \int_{D_{v_{0}}}\frac{\left(1-\frac{4M}{r}+\frac{3q^{2}M^{2}}{r^{2 }}\right)\left(\partial_{u}(r\phi)\right)^{2}}{1-\frac{2M}{r}+\frac{q^{2}M^{2} }{r^{2}}}+\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right)\left(\frac{1}{4r ^{2}}|\mathring{\nabla}r\phi|^{2}+\left(1-\frac{3q^{2}M}{2r}\right)\frac{M}{r^ {3}}(r\phi)^{2}\right)dudvd\omega\] \[\geq \frac{1}{2}\int_{D_{v_{0}}}(\partial_{u}\phi)^{2}+\frac{1-\frac{2M }{r}+\frac{q^{2}M^{2}}{r^{2}}}{2r^{2}}|\mathring{\nabla}\phi|^{2}r^{2}dudvd \omega\geq\frac{1}{2}\int_{v_{0}}^{v_{1}}\left(-\int_{\Sigma_{n_{0}}}dv(J^{ \partial_{v^{*}}})\right)dv\] In order to obtain the last line, we have used that for \(r\) large enough, \[\int_{\Sigma_{n}}\frac{1-\frac{4M}{r}+\frac{q^{2}M^{2}}{r^{2}}}{1- \frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}} \tag{7.17}\] \[\geq\frac{1}{2}\int_{\Sigma_{n}}r^{2}(\partial_{u}\phi)^{2}-\frac{ M}{2}\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right)\left(1-\frac{q^{2}M}{r} \right)\frac{M}{r^{2}}\phi^{2}dud\omega\] \[\geq\frac{1}{2}\int_{\Sigma_{n}}r^{2}(\partial_{u}\phi)^{2}- \frac{M}{2r^{2}}\phi^{2}dud\omega.\] The left hand side of (7.16) is independent of \(v_{0}\), so if we choose \(\phi\) to be compactly supported on \(\Sigma_{n_{1}}\) (these functions are dense in the set of \(H^{1}\) functions), then we can let \(v_{0}\) tend to \(-\infty\) to obtain \[\int_{-\infty}^{v_{1}}\left(-\int_{\Sigma_{n_{0}}}dv(J^{\partial_{v^{*}}})\right) dv\leq\int_{\Sigma_{n_{1}}}\left(\frac{4r}{1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}} \left(\partial_{u}(r\phi)\right)^{2}\right)dud\omega. \tag{7.18}\] Thus there exists a sequence \(v_{i}\to-\infty\) such that \[-\int_{\Sigma_{v_{i}}}dv(J^{b_{\tau^{*}}})\to 0\quad\text{ as }i\to\infty. \tag{7.19}\] We then note that given any \(\epsilon>0\), and a solution \(\phi\) to (1.1) with finite \(\partial_{\tau}\)- energy on \(\Sigma_{v_{0}}\), there exists a smooth compactly supported function \(\phi_{\epsilon}\) such that \[\|\phi-\phi_{\epsilon}\|_{\partial_{\tau},\Sigma_{v_{0}}}\leq\epsilon/2. \tag{7.20}\] Furthermore, by (7.14), we know that for all \(v_{1}\leq v_{0}\), we have \[\|\phi-\phi_{\epsilon}\|_{\partial_{\tau},\Sigma_{v_{1}}}\leq\epsilon/2. \tag{7.21}\] By (7.19), there exists a \(v_{1}\leq v_{0}\) such that \[\|\phi_{\epsilon}\|_{\partial_{\tau},\Sigma_{v_{1}}}\leq\epsilon/2. 
\tag{7.22}\] By combining (7.21) (7.22), and (7.14) again, we obtain that \[\|\phi\|_{\partial_{\tau},\Sigma_{v}}\leq\|\phi\|_{\partial_{\tau},\Sigma_{v _{1}}}\leq\|\phi_{\epsilon}\|_{\partial_{\tau},\Sigma_{v_{1}}}+\|\phi-\phi_{ \epsilon}\|_{\partial_{\tau},\Sigma_{v_{1}}}\leq\epsilon, \tag{7.23}\] for all \(v\leq v_{1}\). Thus given any solution \(\phi\) to (1.1) with finite \(\partial_{\tau^{*}}\)- energy, and given any \(\epsilon>0\), there exists a \(v_{1}\) such that \[\|\phi\|_{\partial_{\tau^{*}},\Sigma_{v}}\leq\epsilon, \tag{7.24}\] for all \(v\leq v_{1}\). Proof of Proposition 7.3.: Fix \(t_{1}^{*}<t_{\epsilon}^{*}\), and let \(\phi\) be a solution of (1.1) with boundary conditions (5.2) such that \(\phi\) has finite \(\partial_{\tau^{*}}\)- energy on \(\Sigma_{v_{1}}\). Note as \(t_{1}^{*}<t_{\epsilon}^{*}\) fixed, finite \(\partial_{\tau^{*}}\)- energy is equivalent to having finite \(X\) energy. An explicit calculation gives \[\left(1-\frac{2M}{r_{b}(t_{1}^{*})}+\frac{q^{2}M^{2}}{r_{b}(t_{1}^{*})^{2}} \right)\|\phi\|_{H^{1}(\Sigma_{v_{1}}^{*})}^{2}\leq-\int_{\Sigma_{v_{1}}^{*}} dt^{*}(J^{b_{\tau^{*}}})\leq\|\phi\|_{H^{1}(\Sigma_{v_{1}}^{*})}^{2} \tag{7.25}\] We prove Proposition 7.3 by simply integrating \(K^{\partial_{\tau^{*}}}\) in the region \(D_{u_{0},v_{0}}=\{u>u_{0},v\geq v_{0},t^{*}<t_{1}^{*}\}\). We will then let \(u_{0}\to-\infty\) to get: \[-\int_{\Sigma_{v_{1}}^{*}}dt^{*}(J^{\partial_{\tau^{*}}}) =\lim_{u_{0}\to-\infty}\left(-\int_{\Sigma_{v_{1}}^{*}\cap\{u\geq u _{0}\}}dt^{*}(J^{\partial_{\tau^{*}}})\right) \tag{7.26}\] \[=-\lim_{u_{0}\to-\infty}\int_{\Sigma_{v_{0}}\cap\{v\geq v_{0}\}} du(J^{\partial_{\tau^{*}}})-\lim_{u_{0}\to-\infty}\int_{\Sigma_{v_{0}}\cap\{u\geq u _{0}\}}dv(J^{\partial_{\tau^{*}}})-\int_{S[v_{0},t_{1}^{*}]}d\rho(J^{\partial_ {\tau^{*}}})\] \[\leq\int_{\mathcal{I}-\cap\{v\geq v_{0}\}}(\partial_{\tau}\psi_{ \cdot})^{2}dvd\omega-\int_{\Sigma_{v_{0}}}dv(J^{\partial_{\tau^{*}}}).\] Here we have used (6.19) to ignore the \(S[v_{0},t_{1}^{*}]\) term. Letting \(v_{0}\to-\infty\), and using Lemma 7.4, we obtain \[\left(1-\frac{2M}{r_{b}(t_{1}^{*})}+\frac{q^{2}M^{2}}{r_{b}(t_{1}^{*})^{2}} \right)\|\phi\|_{H^{1}(\Sigma_{v_{1}}^{*})}^{2}\leq-\int_{\Sigma_{v_{1}}^{*}} dt^{*}(J^{\partial_{\tau^{*}}})\leq\int_{\mathcal{I}^{-}}(\partial_{\tau}\psi_{ \cdot})^{2}dvd\omega. \tag{7.27}\] Theorem 6.1 on the interval \(t^{*}\in[t_{1}^{*},t_{\epsilon}^{*}]\) then gives us our result. We now have that \(\mathcal{F}^{-}\) is bounded and injective, so the inverse is well defined. We also have that \(\mathcal{F}^{+}\) is bounded where it is defined. The final result needed to define the scattering map on the whole space \(\mathcal{E}^{0,*}_{\mathcal{I}^{-}}\) is that the image of \(\mathcal{F}^{-}\) is dense in \(\mathcal{E}^{0,*}_{\mathcal{I}^{-}}\): **Proposition 7.5** (Density of \(Im(\mathcal{F}^{-})\) in \(\mathcal{E}^{0,*}_{\mathcal{I}^{-}}\)): \(Im(\mathcal{F}^{-})\) _is dense in \(\mathcal{E}^{0,*}_{\mathcal{I}^{-}}\)._ Proof.: We prove this using existing results on the scattering map on the full exterior of Reissner-Nordstrom spacetime. We show that compactly supported smooth functions on \(\mathcal{I}^{-}\) are in the image of \(\mathcal{F}^{-}\). These are dense in \(\mathcal{E}^{0,*}_{\mathcal{I}^{-}}\). Given any smooth compactly supported function \(\psi\in\mathcal{E}^{0,*}_{\mathcal{I}^{-}}\), supported in \(v\geq v_{0}\), we can find a \(t_{0}^{*}\) such that the sphere \((t_{0}^{*},r_{b}(t_{0}^{*}))\) is in the region \(v\leq v_{0}\). 
Using previous results from [24], there exists a solution, \(\phi^{\prime}\) in Reissner-Nordstrom with radiation field \(\psi\) and vanishing on the past horizon. By finite speed of propagation, \(\phi^{\prime}\) will be supported in \(r\geq v_{0}\). Thus both \(\phi^{\prime}\) and its derivatives on \(\Sigma_{v_{1}}^{*}\) vanishes around \(r=r_{b}\). We then evolve \((\phi^{\prime}|_{\Sigma_{v_{2}}^{*}},\partial_{\tau}\phi^{\prime}|_{\Sigma_{v_{2}} ^{*}})\) from \(\Sigma_{v_{1}}^{*}\) in RNOS, call this solution \(\phi\). By finite speed of propagation and uniqueness of solutions, we must have \(\phi=\phi^{\prime}\) for \(t^{*}\leq t_{0}^{*}\). By boundedness of \(\mathcal{F}^{+}\) (Proposition 7.3) we have that \(\phi\) is in \(\mathcal{E}^{X}_{\Sigma_{v_{2}}^{*}}\), and so the radiation field, \(\psi_{-,}\) is in the image of \(\mathcal{F}^{-}\). **Theorem 7.6** (Bijectivity and Boundedness of \(\mathcal{F}^{-}\)): \(\mathcal{F}^{-}\) _is a linear, bounded bijection with bounded inverse between the spaces \(\mathcal{E}^{X}_{\Sigma_{v_{2}}^{*}}\) and \(\mathcal{E}^{0,*}_{\mathcal{I}^{-}}\)._ Proof.: \(\mathcal{F}^{+}\) is continuous (linear and bounded, by Proposition 7.3), and its image, \(\mathcal{E}^{X}_{\Sigma_{t^{\prime}}}\), is a closed set. Therefore \(\left(\mathcal{F}^{+}\right)^{-1}\left(\mathcal{E}^{X}_{\Sigma_{t^{\prime}}}\right)\) is closed. Thus \[\mathcal{F}^{-}(\mathcal{E}^{X}_{\Sigma_{t^{\prime}}})=\left(\mathcal{F}^{+} \right)^{-1}\left(\mathcal{E}^{X}_{\Sigma_{t^{\prime}}}\right)=Cl\left(\left( \mathcal{F}^{+}\right)^{-1}\left(\mathcal{E}^{X}_{\Sigma_{t^{\prime}}}\right) \right)\supset\mathcal{E}^{0_{\star}}_{\mathcal{I}^{-}}, \tag{7.28}\] as \(\mathcal{F}^{-}(\mathcal{E}^{X}_{\Sigma_{t^{\prime}}})\) is dense in \(\mathcal{E}^{0_{\star}}_{\mathcal{I}^{-}}\) (Proposition 7.5). However, as \(\mathcal{F}^{-}\) is also bounded, we have \[\mathcal{F}^{-}(\mathcal{E}^{X}_{\Sigma_{t^{\prime}}})\subset\mathcal{E}^{0_ {\star}}_{\mathcal{I}^{-}} \tag{7.29}\] Therefore \[\mathcal{F}^{-}(\mathcal{E}^{X}_{\Sigma_{t^{\prime}}})=\mathcal{E}^{0_{\star }}_{\mathcal{I}^{-}}. \tag{7.30}\] Thanks to Propositions 7.2 and 7.3, \(\mathcal{F}^{-}\) and \(\mathcal{F}^{+}\) are bounded, and thus we have the required result. ### Forward Scattering from \(\Sigma_{t^{\prime}_{2}}\) In a similar manner to Section 7.2, we define the map taking initial data on \(\Sigma_{t^{\prime}_{2}}\) to radiation fields on \(\mathcal{H}^{+}\cup\mathcal{I}^{+}\): \[\mathcal{G}^{+}:\quad\mathcal{E}^{X}_{\Sigma_{t^{\prime}_{2}}} \longrightarrow\quad\mathcal{G}^{+}\left(\mathcal{E}^{X}_{\Sigma_{t^{ \prime}_{2}}}\right)\subset H^{1}_{\mathrm{isc}}(\mathcal{H}^{+}\cup\mathcal{ I}^{+}) \tag{7.31}\] \[(\phi|_{\Sigma_{t^{\prime}_{2}}},\partial_{t^{\prime}}\phi|_{ \Sigma_{t^{\prime}_{2}}}) \mapsto\quad(\phi|_{\mathcal{H}^{+}},\psi_{+})\] where \(\psi_{+}\) is as in Proposition 7.1, and \(X\) is again any everywhere timelike vector field (including on the event horizon) which coincides with the timelike Killing vector field \(\partial_{t^{\prime}}\) for sufficiently large \(r\). 
We will define the inverse of \(\mathcal{G}^{+}\) (only defined on the image of \(\mathcal{G}^{+}\)) as \[\mathcal{G}^{-}\cdot\mathcal{G}^{+}\left(\mathcal{E}^{X}_{\Sigma _{t^{\prime}_{2}}}\right) \longrightarrow\quad\mathcal{E}^{X}_{\Sigma_{t^{\prime}_{2}}} \tag{7.32}\] \[\mathcal{G}^{+}(\phi|_{\Sigma_{t^{\prime}_{2}}},\partial_{t^{ \prime}}\phi|_{\Sigma_{t^{\prime}_{2}}}) \mapsto(\phi|_{\Sigma_{t^{\prime}_{2}}},\partial_{t^{\prime}}\phi|_{ \Sigma_{t^{\prime}_{2}}}).\] **Remark 7.7**: _Note that \(\mathcal{G}^{\pm}\) are defined using scattering in pure Reissner-Nortstrom. Thus they have been studied extensively already, see for example [7] for the sub-extremal case (\(|q|<1\)) and [8] for the extremal case (\(|q|=1\))._ We will be using the following facts about \(\mathcal{G}^{+}\): **Lemma 7.8**: * \(\mathcal{G}^{+}\) _is injective._ * _For the sub-extremal case (_\(|q|<1\)_),_ \(\mathcal{G}^{+}\) _is bounded with respect to the_ \(X\) _norm on_ \(\Sigma_{t^{\prime}_{2}}\) _and_ \(\mathcal{H}^{+}\) _and the_ \(\partial_{t^{\prime}}\) _norm on_ \(\mathcal{I}^{+}\)_. In the extremal case (_\(|q|=1\)_), we use the weaker result that_ \(\mathcal{G}^{+}\) _is bounded with respect to the_ \(X\) _norm on_ \(\Sigma_{t^{\prime}_{2}}\) _and the_ \(\partial_{t^{\prime}}\) _norm on_ \(\mathcal{I}^{+}\) _and_ \(\mathcal{H}^{+}\)_._ * \(\mathcal{G}^{+}\) _is not surjective into_ \(\mathcal{E}^{0_{\star}}_{\mathcal{I}^{+}}\)_, for both sub-extremal and extremal Reissner-Nordstrom._ * \(\mathcal{G}^{-}\) _is not bounded, again with respect to the_ \(X\) _norm on_ \(\Sigma_{t^{\prime}_{2}}\) _and_ \(\mathcal{H}^{+}\)_, and the_ \(\partial_{t^{\prime}}\) _norm on_ \(\mathcal{I}^{+}\)_._ Proof.: Thanks to \(\mathcal{T}\) energy conservation, we have that \(\mathcal{G}^{+}\) is injective. For \(\mathcal{G}^{+}\) bounded in the sub-extremal case, we apply the celebrated red-shift vector [9] in order to obtain boundedness of the \(X\) energy on \(\mathcal{H}^{+}\). In the extremal case, we do not have the red-shift effect. In this case, the best we can do is apply conservation of \(T\) energy which immediately gives the weaker extremal result. For \(\mathcal{G}^{+}\) not surjective, we can look at any solution with finite \(\partial_{t^{\prime}}\) energy on \(\Sigma_{t^{\prime}_{2}}\), but infinite \(X\) energy, such as \(\phi=\sqrt{r-r_{+}}\), \(\partial_{t^{\prime}}\phi=0\). Let \(\mathcal{G}^{+}(\phi)=(\phi_{+},\psi_{+})\), which has finite \(X\) and \(\partial_{t^{\prime}}\) energy respectively (the angular component vanishes by spherical symmetry). \(\mathcal{G}^{+}\) is injective from the \(\partial_{t^{\prime}}\) energy space, thus no other finite \(\partial_{t^{\prime}}\) energy data on \(\Sigma_{t^{\prime}_{2}}\) can map to \((\phi_{+},\psi_{+})\). Therefore no finite \(X\) energy solution can map to \((\phi_{+},\psi_{+})\), and thus \(\mathcal{G}^{+}\left(\mathcal{E}^{X}_{\Sigma_{t^{\prime}_{2}}}\right)\neq \mathcal{E}^{X}_{\mathcal{I}^{+}_{2}}\times\mathcal{E}^{0_{\star}}_{\mathcal{I}^ {+}}\). For a more detailed discussion of non-surjectivity in the sub-extremal case see [25] (note this proves non-surjectivity for Kerr, but the proof can be immediately applied to Reissner-Nordstrom). For the extremal case, again see [8]. By taking a series of smooth compactly supported functions approximating \((\phi_{+},\psi_{+})\) in the above paragraph, we can see that \(\mathcal{G}^{-}\) is not bounded. 
### The Scattering Map We are finally able to define the forwards Scattering Map: \[\mathcal{S}^{+}:\mathcal{E}^{0_{\star}}_{\mathcal{I}^{-}} \longrightarrow\mathcal{S}^{+}\left(\mathcal{E}^{0_{\star}}_{\mathcal{I}^{-}}\right) \tag{7.33}\] \[\mathcal{S}^{+}:=\mathcal{G}^{+}\circ\mathcal{F}^{+}\] and similarly with the backwards scattering map: \[\mathcal{S}^{-}:\mathcal{S}^{+}\left(\mathcal{E}^{0_{\star}}_{ \mathcal{I}^{-}}\right) \longrightarrow\mathcal{E}^{0_{\star}}_{\mathcal{I}^{-}} \tag{7.34}\] \[\mathcal{S}^{-}:=\mathcal{F}^{-}\circ\mathcal{G}^{-}.\] Note \(\mathcal{S}^{-}\) is defined only on the image of \(\mathcal{S}^{+}\). **Theorem 7.9** (The Scattering Map): _The sub-extremal (\(|q|<1\)) forward scattering map \(\mathcal{S}^{+}\) defined by (7.33) is an injective linear bounded map from \(\mathcal{E}^{0_{\star}}_{\mathcal{I}^{-}}\) to \(\mathcal{E}^{0_{\star}}_{\mathcal{I}^{-}}\cup\mathcal{E}^{0_{\star}}_{\mathcal{I}^ {-}}\). The extremal (\(|q|=1\)) forward scattering map \(\mathcal{S}^{+}\), again defined by (7.33), is an injective linear bounded map from \(\mathcal{E}^{0_{\star}}_{\mathcal{I}^{-}}\) to \(\mathcal{E}^{0_{\star}}_{\mathcal{I}^{+}}\cup\mathcal{E}^{0_{\star}}_{\mathcal{I}^ {-}}\). In both cases, \(\mathcal{S}^{+}\) is not surjective, and its image does not even contain \(0+\mathcal{E}^{0_{\star}}_{\mathcal{I}^{+}}\). When defined on the image of \(\mathcal{S}^{+}\), its inverse \(\mathcal{S}^{-}\) is injective but not bounded._ Proof.: This is an easy consequence of Theorem 7.6 and Lemma 7.8. This is in immediate contrast with Reissner-Nordstrom spacetime. The scattering map in Reissner-Nordstrom spacetime is an isometry with respect to the \(T\) energy, and this immediately follows from the fact that \(T\) is a global Killing vector field. Moreover, this imposes the canonical choice of energy on \(\mathcal{I}^{\pm}\). However, in the RNOS model, if one considers the \(T\) energy on \(\mathcal{I}^{-}\), then \(\mathcal{F}^{+}\) gives an isometry between \(\mathcal{E}^{T}_{\mathcal{I}^{-}}\) and \(\mathcal{E}^{X}_{\mathcal{I}^{-}}\). Thus, we are forced to consider the non-degenerate \(X\) energy, when considering the solution on \(\Sigma_{\mathcal{I}^{-}}\). This is the main contrast with Reissner-Nordstrom spacetime, where we consider \(T\) energy throughout the whole spacetime. In both Reissner-Nordstrom and RNOS spacetimes, we can consider the backwards reflection map, which takes finite energy solutions from \(\mathcal{I}^{+}\) to \(\mathcal{I}^{-}\). On both these surfaces, choice of energy is canonically given by the existence of Killing vector fields in the region around \(\mathcal{I}^{\pm}\). In Reissner-Nordstrom this map is bounded, however in RNOS, this map does not even exist as a map between finite energy spaces.
2304.03967
Quantum gate synthesis by small perturbation of a free particle in a box with electric field
A quantum unitary gate is realized in this paper by perturbing a free charged particle in a one-dimensional box with a time- and position-varying electric field. The perturbed Hamiltonian consists of the free-particle Hamiltonian plus a perturbing electric potential, chosen so that the Schr$\ddot{o}$dinger evolution over a time $T$, after truncation to a finite number of energy levels of the unperturbed system, approximates a given unitary gate such as the quantum Fourier transform gate. The idea is to truncate the half-wave Fourier sine series of the potential to $M$ terms in the spatial variable $\mathbf x$ and then expand the evolution as a Dyson series in the interaction picture, computing the evolution operator matrix elements up to the linear and quadratic integral functionals of the $\mathbf V_n(t)$'s. Using the Dyson series together with the Frobenius norm, we minimize the error energy between the derived gate and the given gate, and we assess the temporal performance of the design by plotting the noise-to-signal energy ratio (NSER). A mathematical explanation of magnetic-field control of a quantum gate is also provided.
Kumar Gautam
2023-04-08T09:32:52Z
http://arxiv.org/abs/2304.03967v3
# Quantum gate synthesis by small perturbation of a free particle in a box with electric field ###### Abstract A quantum unitary gate is realized in this paper by perturbing a free charged particle in a one-dimensional box with a time- and position-varying electric field. The perturbed Hamiltonian is composed of a free particle Hamiltonian plus a perturbing electric potential such that the Schr\(\ddot{o}\)dinger evolution in time \(T\), the unitary evolution operator of the unperturbed system after truncation to a finite number of energy levels, approximates a given unitary gate such as the quantum Fourier transform gate. The idea is to truncate the half-wave Fourier sine series to \(M\) terms in the spatial variable \(\mathbf{x}\) before extending the potential as a Dyson series in the interaction picture to compute the evolution operator matrix elements up to the linear and quadratic integral functionals of \(\mathbf{V}_{n}(t)\)\({}^{\prime}\)s. As a result, we used the Dyson series with the Frobenius norm to reduce the difference between the derived gate energy and the given gate energy, and we determined the temporal performance criterion by plotting the noise-to-signal energy ratio (NSER). A mathematical explanation for a quantum gate's magnetic control has also been provided. In addition, we provide a mathematical explanation for a quantum gate that uses magnetic control. Keywords:Quantum gate, perturbation theory, Dyson series, Schr\(\ddot{o}\)dinger's equation, Frobenius norm. ## 1 Introduction Quantum mechanics can be comprehended in a straightforward manner by making use of the atomic interaction picture [1] in the appropriate context. W. Heisenberg and E. Schr\(\ddot{o}\)dinger both contributed to the development of the theories that underpin quantum mechanics during the first half of the 20th century. This is a fundamental tenet of quantum mechanics and is referred to as a "cornerstone" in the field [1]. The Schr\(\ddot{o}\)dinger picture allows for a more intuitive understanding of the evolution of a quantum system. In this picture, the wavefunction is the fundamental object and the Hamiltonian is used to describe how the wavefunction evolves. This is in contrast to the Heisenberg picture, where the operators are fundamental objects and the wavefunction is used to describe the evolution of the system. The Schrodinger picture also has the benefit of being easier to understand than the Heisenberg picture. This is used in a variety of quantum computing applications, such as quantum teleportation, quantum error correction, and quantum search algorithms, which can be used to develop efficient algorithms for simulating the evolution of quantum systems and can provide insight into the behavior of quantum systems. Quantum physics is helping to facilitate the development of the next generation of computers, which will have performance capabilities that are superior to those of today's computer technology. Instead of bits, however, qubits are used to store the information. It is possible to express different states of qubits as complex linear superpositions of bits. Hadamard, Pauli, and controlled unitary gates are just a few of the many types of gates that can be utilized in the synthesis of the multi-level logic circuits that are referenced in [8]-[12]. It is possible to locate the quantum state in a linear vector space [5]-[7]. Due to the fact that all quantum transformations are unitary and capable of being reversed, the quantum gates themselves also exhibit this property in nature. 
As a consequence of this, developing quantum algorithms is extremely difficult because they have to be reversible in order to work properly [13]. This means that instead of the simple application of traditional logic circuits, a set of carefully designed quantum algorithms is required for the precise manipulation and transformation of qubits. The use of quantum gates, combined with quantum algorithms and precise qubit manipulation, is necessary for the development of a high-level logic circuit capable of performing any desired function. This concept is fundamental in quantum mechanics, as it allows for the prediction of measurement outcomes and the calculation of probabilities. Observables are also closely related to the concept of eigenvalues and eigenvectors, which play a crucial role in understanding the behavior of quantum systems. An observable can be defined as the linear superposition of a group of orthogonal projections using the spectral theorem for Hermitian operators. Because of this, each observable can be associated with a distinct color spectrum (PVM). In a more general sense, POVMs can be understood as the transformation of a full spectral family into a set of identity-conserving positive operators. Alternatively, the term "generalized measurements" can be used to describe these standards. In this paper, we demonstrate quantum gates using a small electric field perturbation of a free particle in a box [14]. The problem is discussed in order to be used in the design of various quantum gates, which aid in the generation of quantum logic circuits that are faster than conventional logic circuits. To realize the quantum gates, we perturbed the Hamiltonian with a small electric field, as previously discussed in [15]-[18]. When a quantum system is perturbed by electric, magnetic, or electromagnetic radiation, it becomes excited and changes its state. The Hamiltonian can be perturbed with an electric field by perturbing the free Hamiltonian \(H_{0}\) with a potential \(V\) in the physical theories. Quantum mechanical principles allow for practical application of the physical theories cited in [2]. and the eigenvalues of matrices of that theory can be used to represent various observables, including position, momentum, and energy, and the eigenvalues give the results of these observables [2]. Because Hermitian matrix eigenvalues are generally accepted to be real, Hermitian matrices are used as a representation for these quantities of interest. This is because, according to quantum mechanics, the eigenvalues of an observable represent the set of all possible outcomes of a measurement of that observable. This is due to the fact that there must be a real world existence of any measurable quantity. ### Advantage of perturbation theory The advantage of perturbation theory is that it can increase the dimensionality of the matrices in operator space to infinity and find an approximate solution to the corresponding Schr\(\ddot{o}\)dinger equation that best approximates the given unitary gate. The perturbing time-dependent potential is chosen in such a way that the unitary evolution operator at time \(T\) up to second-order becomes as close to the desired unitary gate on the same Hilbert space as possible. The perturbed Hamiltonian is made up of a free particle and a perturbed potential \(V(t,x)\), where \(-\frac{\partial V(t,x)}{\partial x}=E(t,x)\) is the applied electric field. 
We design the control potential \(V(t,x)\) so that the unitary evolution operator of the unperturbed system, after truncation to a finite number N of energy levels, approximates a given unitary gate \(Ug\) of size \(N\times N\), such as the quantum Fourier transform gate [20],[21]. The potential is chosen in such a way that the desired gate error energy is minimized, which is equal to the Frobenius norm square of the error between the given gate and the Dyson series-based evolved gate. The gate error energy is expanded in the potential up to the quadratic terms, so that the optimal error energy minimization equations are linear integral equations for the potential over \(t\epsilon[0,T]\). The discretization approach in MATLAB is used to solve these coupled linear integral equations, which tend to replace linear integral equations with linear matrix equations that are simple to invert [22]. ### Dyson series unitary preservation The Dyson series has proven to be a useful tool in the study of quantum systems, and it has also been applied successfully to the field of quantum computing. In particular, the Dyson series can be employed in the determination of energy levels and the investigation of quantum system dynamics. Also, entanglement and decoherence effects can be investigated with this method. Quantum computing researchers have used the Dyson series to create new algorithms and analyze the impact of noise on the field. In addition, researchers have used the Dyson series to learn more about quantum error correction and create cutting-edge quantum computing methods. Dyson series quantum unitary gates are novel in that they can be used to investigate the impact of background noise on quantum computing and to design improved algorithms for the field. Decoherence and entanglement in a system are two other topics that can be explored with the Dyson series. The Dyson series can also be used to investigate the consequences of quantum error correction and to create novel quantum computing methods. To investigate how quantum unitary gates react to disturbances, the Dyson series can be used. Both the external environment and the system's own dynamics can contribute to perturbations. New algorithms for quantum computing can be developed and the accuracy of existing algorithms improved by studying the effects of perturbations on quantum unitary gates [23]-[24]. The Dyson series can also be used to learn more about decoherence and create better quantum computing methods. As a conclusion, the Dyson series is an effective method for researching quantum systems, and it has significant applications in quantum computing. Quantum energy levels and system dynamics can be investigated using this method. As an added bonus, it can be used to investigate system entanglement and the consequences of decoherence. In addition, new quantum computing algorithms can be developed by analyzing the effects of perturbations on quantum unitary gates using the Dyson series. By utilizing the Dyson series, we can effectively study quantum systems and use it to develop new algorithms for quantum computing [25]. ### Our work's novelty In several ways, the work is novel: Initially, we construct a quantum unitary gate utilizing the most fundamental physical system in quantum mechanics with discrete energy levels, a free particle in a box. The Hamiltonian perturbation is the most general type of perturbation, consisting of a non-uniform electric field in space and time. 
As a result, we are optimizing the error energy over a wide range of perturbations. The novel's concluding characteristic involves the creation of massive gates. Increasing the number of base energy states allows us to create unitary gates of nearly infinite size. The primary reason is that there is no such thing as a 1-D or 2-D box; only 3-D boxes exist. We confine an ion in a three-dimensional box, excite it with an electric field that varies along the X-axis only, and then apply our time-dependent perturbation theory to it. Recent developments in quantum computing have enabled the realization of quantum gates using small perturbations of a free particle in an electric-field-enclosed box. This technique has been used to create a quantum gate, which is a device that can alter the state of a quantum system. This article explores the theoretical and experimental aspects of this technique, as well as its potential applications. Using a small perturbation of a free particle in a box containing electric fields to create a quantum gate is the concept underlying this technique. This is accomplished by applying a small electric field to the box, causing the particle to move in a particular direction. This motion can be used to manipulate the state of the particle, thereby enabling the creation of a quantum gate. Understanding the dynamics of the particle in the box and the effects of the electric field on the particle is required for the theoretical aspects of this technique. This requires a comprehensive evaluation of the particle's motion and the electric field's effect on the particle. When designing the quantum gate, it is also necessary to consider the effects of the electric field on the particle [26]. Quantum computing relies heavily on the Frobenius norm and the fidelity metric to measure the size and precision of quantum circuits. This is a crucial measurement technique. These metrics can be used to compare various quantum circuits and determine which are more efficient and precise. In the future, these metrics may be employed to enhance the performance of quantum computers and quantum circuits. In addition, these metrics could be utilized to compare quantum algorithms and determine which ones are the most effective and precise. This could facilitate the improvement of quantum computing algorithm development. Lastly, these metrics could be used to compare quantum technologies and determine which is the most dependable and effective. Creating the electric field and measuring the motion of the particle are the experimental aspects of this method. This requires specialized equipment, such as lasers and detectors, to measure the particle's motion and the electric field's effect on the particle. In addition, the electric field must be precisely regulated so that the particle's motion can be precisely measured. Among the potential applications of this method are the development of quantum gates for quantum computing, quantum cryptography, and quantum sensing. This technique could also be used to produce quantum gates, which could be utilized for quantum teleportation and quantum communication. The realization of quantum gates using small perturbations of a free particle in a box with electric fields is a promising technique with the potential to revolutionize quantum computing. This technique has the potential to facilitate the creation of quantum gates for applications such as quantum computing, quantum cryptography, quantum sensing, quantum teleportation, and quantum communications. 
Further investigation is required to fully comprehend the theoretical and experimental aspects of this technique and to investigate its potential applications. Theoretically, this technique would allow the production of quantum gates with small perturbations of a free particle in an electric field, creating a great potential for numerous applications [27]. The magnetic field-based gate design is more complicated here because a 1-D magnetic field will not act on a 1-D charged particle (\(q\vec{V}\times\vec{B}\)\(is\perp\vec{V}\)). So we used to excite a charged particle in a 3-D box with a 3-D control magnetic field and then design the gate. Consequently, the magnetic field must be expressed as \[B(t,X,Y,Z)=\sum_{nmr}B_{nmr}(t)\sin\left(\frac{n\pi X}{a}\right)\sin\left( \frac{n\pi Y}{b}\right)\sin\left(\frac{n\pi Z}{c}\right)\] and \(\{B_{nmr}(t)\}\) must be optimally determined in control with electric field exactly \(\vec{V}(t,X)=\sum_{n}\vec{V}_{n}(t)\sin\left(\frac{n\pi X}{a}\right)\). The magnetic field is expressed with both orbital and spin angular momentum. \[H_{I}(t)=\frac{e(\vec{B}(t,X,Y,Z),\vec{\sigma})}{2m}+\frac{e(\vec{B}(t,X,Y,Z), \vec{L})}{2m}\] Where, \(L=-\iota\vec{r}\times\vec{\bigtriangledown}\). \(B_{nmr}(t)\) depends on 3 space Fourier indices (\(nmr\)) in control with \(V_{n}(t)\) which depends on only one index. Therefore, magnetic field-based gate design is more difficult to compute. The paper is structured as follows: First, we compute the first-order perturbation for the Schr\(\ddot{o}\)dinger evolution operator when the Hamiltonian of a particle in a box is perturbed by a small time-varying potential. Then, in terms of minimizing the Frobenius norm, we define the minimum error energy between the perturbed generator and the given generator. We argue that using the time-dependent perturbation theory of independent quantum systems in conjunction with the matching generator technique is a natural way to realize non-separable gates, which are small perturbations of separable gates. The optimal design's obtained integral equation is then discretized to yield a recursive algorithm. In Section 3, we also evaluate the performance of our algorithm by using the error energy square in the generator as noise power and the given generator energy square as signal power, which is referred to as the noise-to-signal energy ratio (NSER). In Section 4, we justify the use of magnetic field control in the design of a gate. Section 5 concludes with a discussion of future scope. ## 2 Mathematical Model of Quantum Unitary Gate The Hamiltonian of a free particle in a 1-D box \([0,L]\) is \(H_{0}=-\frac{1}{2}\frac{d^{2}}{dx^{2}}\). The boundary conditions for the wave function are \(\Psi(0,t)=\Psi(L,t)=0\). The stationary wave functions are obtained by solving the stationary (time-independent) Schrodinger equation \(-\frac{1}{2}u^{\prime\prime}(x)=Eu(x)\) with the boundary conditions \(u(0)=u(L)=0\). The solutions are provided by [3]: \[u(x)=c_{1}\sin(\sqrt{2E}x)+c_{2}\cos(\sqrt{2E}x)\] Boundary conditions give \(c_{2}=0\), \[L\sqrt{2E}=n\pi,n=1,2,3....\] So \(u_{n}(x)=c_{n}\sin(\frac{n\pi x}{L})\). On normalization,we get: \(\int_{0}^{L}u_{n}^{2}(x)dx=1\) so that \(Lc_{n}^{2}=2\), or, \(c_{n}=\sqrt{\frac{2}{L}}\) and \(E_{n}=\frac{n^{2}\pi^{2}}{2L^{2}}\). 
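These eigenfunctions and energies can be checked numerically. The following is a minimal sketch (not part of the paper), in units with \(\hbar=m=1\); the grid size and function names are illustrative choices:

```python
import numpy as np

L = 1.0
Npts = 2000
x = (np.arange(Npts) + 0.5) * L / Npts   # midpoint grid on [0, L]
dx = L / Npts

def u(n, x=x, L=L):
    """Stationary state u_n(x) = sqrt(2/L) sin(n pi x / L)."""
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

def E(n, L=L):
    """Energy eigenvalue E_n = n^2 pi^2 / (2 L^2)."""
    return (n * np.pi / L) ** 2 / 2.0

# Orthonormality: the integral of u_m u_n over [0, L] should be delta_{mn}.
for m in (1, 2, 3):
    print([round(float(np.sum(u(m) * u(n)) * dx), 6) for n in (1, 2, 3)])

# -(1/2) u_n'' = E_n u_n, checked with a central difference at an interior point.
n, i = 2, Npts // 3
upp = (u(n)[i + 1] - 2 * u(n)[i] + u(n)[i - 1]) / dx ** 2
print(-0.5 * upp / u(n)[i], E(n))
```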
A linear superposition of the stationary states yields the general quantum state, which adheres to the normalization constraint: \[\Psi(x,t)=\sum_{n=1}^{\infty}c_{n}e^{-iE_{n}t}u_{n}(x)\] \[\sum_{n=1}^{\infty}|c_{n}|^{2}=1.\] We apply a small perturbation \(\epsilon V(t,x)\) so the perturbed Hamiltonian is \(H(t)=-\frac{1}{2}\frac{d^{2}}{dx^{2}}+\epsilon V(t,x)\). The Schr\(\ddot{o}\)dinger's equation for the perturbed hamiltonian becomes \[i\frac{d\Psi(x,t)}{dt}=(H_{0}+\epsilon V(t))\Psi(x,t).\] According to this notation, the multiplication operator \(V(t)\) multiplies the state by \(V(t,x)\). The following characteristics of the evolution operator \(U_{-}t\) can be used to describe it: \[\Psi(x,t)=\Psi_{t}(x)=U_{t}\Psi_{0}(x).\] \[i\frac{dU_{t}}{dt}=H(t)U_{t}=(H_{0}+\epsilon V(t))U_{t},\] We wish to describe this unitary evolution entirely in terms if the interaction potential \(\epsilon V\left(t\right)\) only, thereby removing the unperturbed Hamiltonian \(H_{0}\). This is achieved via the interaction picture representation of Dirac in which observables evolve according to the unperturbed Hamiltonian \(H_{0}\), while the states evolve according to the "rotated perturbation" \(\widetilde{V}\left(t\right)=e^{itH_{0}}V\left(t\right)e^{-itH_{0}}\). To see how this is done, we set \(U_{t}=e^{-itH_{0}}W_{t}\) and obtain \(i\frac{dW_{t}}{dt}=\epsilon\widetilde{V_{t}}W_{t}\). [1],[2]. Now \(\widetilde{V_{t}}=e^{itH_{0}}V_{t}e^{-itH_{0}}\). Suppose, \[\Psi_{0}(x)=u_{n}(x)=|n\rangle\] then, \[U_{t}(x)= e^{-iE_{n}t}u_{n}(x)-i\epsilon\int_{0}^{t}e^{-iH_{0}t}\widetilde{V _{t_{1}}}u_{n}(x)dt_{1}\] \[-\epsilon^{2}\int_{0<t_{2}<t_{1}<t}e^{-iH_{0}t}\widetilde{V_{t_{2 }}}\widetilde{V_{t_{1}}}u_{n}(x)dt_{2}dt_{1}+O(\epsilon^{3}). \tag{1}\] Consider the matrix elements of \(U_{t}\) in the basis of stationary states of the unperturbed Hamiltonian: \[\langle m|U_{t}|n\rangle=\langle m|\Psi_{t}\rangle=\int_{0}^{L}u_{m} (x)\Psi_{t}(x)dx=e^{-iE_{m}t}\delta_{mn}\\ -i\epsilon\int_{0}^{t}\langle m|V_{t_{1}}|n\rangle e^{-i(t-t_{1}) E_{m}}e^{-it_{1}E_{n}}dt_{1}\\ -\epsilon^{2}\int_{0<t_{2}<t_{1}<t}^{e-i(t-t_{1})E_{m}}e^{-i(t_{1 }-t_{2})E_{p}}e^{-iE_{n}t_{2}}\langle m|V_{t_{1}}|p\rangle\langle p|V_{t_{2}}| n\rangle dt_{2}dt_{1}+O(\epsilon^{3}). \tag{2}\] Or, \[e^{iE_{m}t}\langle m|U_{t}|n\rangle= \delta_{mn}-i\epsilon\int_{0}^{t}\langle m|V_{t_{1}}|n\rangle e^{ iw_{mn}n^{t_{1}}}dt_{1}\] \[-\epsilon^{2}\int_{0<t_{2}<t_{1}<t}^{\langle m|V_{t_{1}}|p\rangle \langle p|V_{t_{2}}|n\rangle e^{i(w_{mp}t_{1}+w_{pn}t_{2})}dt_{2}dt_{1}+O( \epsilon^{3}). \tag{3}\] Note that, where \(w_{mn}\) is a Bohr frequency. In quantum computing is a measure of the energy that is required to perform a quantum computation. This energy is used to manipulate qubits, the fundamental building blocks of quantum computers. In a quantum computer, qubits are manipulated using a variety of techniques, such as entanglement, superposition, and measurement. The Bohr frequency is a measure of the energy required to perform a single operation on a qubit, such as entangling two qubits or measuring the state of a single qubit. The frequency is expressed in terms of the energy gap between the two energy levels of the qubit, and is typically measured in GHz (gigahertz). The Bohr frequency is important in the development of quantum computers because it allows researchers to accurately measure the energy required to perform a single operation. 
This knowledge can be used to optimize the performance of the system, as well as to compare the performance of different quantum computers. Additionally, the Bohr frequency can be used to understand the behavior of quantum states, as well as to identify potential errors in the system. Overall, the Bohr frequency is an important concept in quantum computing, as it provides a measure of the energy required to perform a single operation on a qubit. It is an essential aspect of developing and optimizing quantum computers, as well as understanding the behavior of quantum states. \[\langle m|V_{t_{1}}|n\rangle=\frac{2}{L}\int_{0}^{L}\sin(\frac{m\pi x}{L})\sin (\frac{n\pi x}{L})V(t,x)dx. \tag{4}\] We expand, \[V(t_{1},x)=\sum_{n=1}^{\infty}V_{n}(t)\sin(\frac{n\pi x}{L}). \tag{5}\] If the free particle has charge \(Q\) and \(V(t)\) is obtained by applying an electric field \(E(t,x)\) Then,\(V(t,x)=-Q\int_{0}^{x}E(t,\xi)d\xi\). The electric field \(E(t,x)\) may be generated by inserting external probes at different point \(x\in[0,L]\). Each probe has a resistor and if \(R(x)\) is the resistance per unit length then the total power dissipated in these resistors at time \(t\) is proportional to, \[P(t)=\int_{0}^{L}R(x)E^{2}(t,x)dx\] \[\propto\int_{0}^{L}R(x)\left(\frac{\partial V(t,x)}{\partial x} \right)^{2}dx.\] Expanding \(R(x)\) as a half wave Fourier series as \(R(x)=\sum_{n=1}^{\infty}R(n)\sin\left(\frac{n\pi x}{L}\right)\), we get,[22]R \[P(t)\propto\sum_{n=1}^{\infty}\left(\frac{n\pi}{L}\right)^{2}R_{n}V_{n}^{2}(t).\] Now, \[\langle m|V_{t}|n\rangle\underset{k}{\sum}\frac{2}{L}\underset{0}{\sum}\left( \frac{m\pi x}{L}\right)\sin\left(\frac{n\pi x}{L}\right)V_{k}(t)\sin\left(\frac{ k\pi x}{L}\right)dx. \tag{6}\] Therefore, using \[\int_{0}^{L}\sin\left(\frac{n\pi x}{L}\right)dx=\begin{cases}0,\text{if }n=0\\ \frac{L}{n\pi}(1-(-1^{n})),\text{if }n\neq 0\end{cases}\] we get, \[\langle m|V_{t}|n\rangle= \sum_{k}\frac{1}{2\pi}\bigg{\{}\frac{1-(-1)^{k+m-n}}{k+m-n}+\frac {1-(-1)^{k+n-m}}{k-m+n}\] \[-\frac{1-(-1)^{k+m+n}}{k+m+n}-\frac{1-(-1)^{k-m-n}}{k-m-n}\bigg{\}} V_{k}(t).\] Finally, \[\langle m|V_{t}|n\rangle= \frac{1}{2\pi}\bigg{\{}\sum_{r\geq(\frac{m-n}{2})}\frac{2}{2r+1}V _{2r+1-m+n}(t)+\sum_{r\geq(\frac{n-m}{2})}\frac{2}{2r+1}V_{2r+1+m-n}(t)\] \[-\sum_{r\geq(\frac{m+n}{2})}\frac{2}{2r+1}V_{2r+1-m-n}(t)-\sum_{r \geq(-\frac{m+n}{2})}\frac{2}{2r+1}V_{2r+1+m+n}(t)\bigg{\}}.\] \[\langle m|V_{t}|n\rangle=\sum_{p=1}^{\infty}k[m,n,p]V_{p}(t). \tag{7}\] Thus, we are truncated by t = T and discretization method have to use for simulation; \[e^{iE_{m}T} \langle m|U_{T}|n\rangle=\delta[m-n]-i\epsilon\int_{0}^{T}k[m,n,p] V_{p}(t)e^{iw_{mn}t}dt\] \[-\epsilon^{2}\int_{0<t_{2}<t_{1}<T}\hskip-14.226378ptk[m,n,p]k[p, n,r]V_{q}(t_{1})V_{r}(t_{2})e^{i(w_{mp}t_{1}+w_{pn}t_{2})}dt_{2}dt_{1}+O( \epsilon^{3}), \tag{8}\] where, summation over the repeated indices \(p,q\) is implied. Let \[U_{d}[m,n]=\langle m|U_{d}|n\rangle=U_{d}[x,y].\] \[=\frac{2}{L}\int_{0}^{L}\int_{0}^{L}\sin(\frac{m\pi x}{L})\sin( \frac{n\pi x}{L})U_{d}(x,y)dxdy. \tag{9}\] Then, define \[W_{d}[m,n]=\delta[m-n]-e^{-iE_{n}T}U_{d}[m,n]. \tag{10}\] We have, \[H=-\frac{1}{2}\frac{d^{2}}{dx^{2}}+\epsilon.V(t,x),0\leq x\leq L.\] Energy levels of the unperturbed system are\(E_{n}=n^{2}\pi^{2}/2L^{2}\) Take \(L=1\). Note that \(m=1\). 
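The coefficients \(k[m,n,p]\) appearing in (7) are, with \(L=1\), just the overlap integrals \(2\int_{0}^{1}\sin(m\pi x)\sin(n\pi x)\sin(p\pi x)dx\) (the same quantities denoted \(\gamma[mnp]\) further below). A minimal numerical sketch, not taken from the paper, with illustrative function names and sample values of \(V_{p}(t)\):

```python
import numpy as np

# Midpoint grid on [0, 1]; the integrand vanishes at the endpoints,
# so a plain Riemann sum is an adequate quadrature here.
Ng = 4000
x = (np.arange(Ng) + 0.5) / Ng
dx = 1.0 / Ng

def k_coeff(m, n, p):
    """k[m, n, p] = 2 * integral_0^1 sin(m pi x) sin(n pi x) sin(p pi x) dx."""
    return 2.0 * np.sum(np.sin(m * np.pi * x) * np.sin(n * np.pi * x)
                        * np.sin(p * np.pi * x)) * dx

# Selection rule visible in the closed form above: k vanishes when m + n + p is even.
print(k_coeff(1, 2, 2))   # nonzero (m + n + p odd)
print(k_coeff(1, 2, 3))   # ~0      (m + n + p even)

def V_matrix_element(m, n, V_p):
    """<m|V_t|n> = sum_p k[m, n, p] V_p(t), truncated to len(V_p) spatial modes."""
    return sum(k_coeff(m, n, p + 1) * V_p[p] for p in range(len(V_p)))

print(V_matrix_element(1, 2, [0.1, 0.0, -0.2]))   # arbitrary sample values V_1, V_2, V_3
```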
Normalized energy eigenstate corresponding to \(E_{n}\): \[|n\rangle=\sqrt{2}.sin(n\pi x),n=1,2,...\] In quantum computing, the Schr\(\ddot{o}\)dinger interaction picture is an alternative to the Heisenberg picture and allows for the evolution of a quantum system to be studied in terms of a unitary operator. It is particularly useful for dealing with fast-changing external fields or when certain degrees of freedom in the system are changing rapidly. It is based on the wavefunction of the system and is useful for studying the evolution of quantum states in the presence of a Hamiltonian. In the Schr\(\ddot{o}\)dinger picture, the Hamiltonian is divided into two parts: one that is time-dependent and one that is fixed. The time-dependent part is used to describe the evolution of the system and the fixed part is used to describe the initial state of the system. The Schr\(\ddot{o}\)dinger picture is useful for analyzing the effects of a time-dependent Hamiltonian on the system, such as the effects of time-dependent forces and evolution operator satisfies [3] \[iU^{\prime}(t)=(H_{0}+\epsilon V(t))U(t),\mbox{where }H_{0}=-\frac{1}{2}\frac{d^{2 }}{dx^{2}},\] \[U(t)=exp(-itH_{0})W(t).\] Then, \[iW^{\prime}(t)=\epsilon.\tilde{V}(t)W(t).\] \[\tilde{V}(t)=exp(itH_{0})V(t)exp(-itH_{0}).\] We get, \[W(T)=I-i\epsilon\int_{0}^{T}\hskip-14.226378ptV(t)dt-\epsilon^{2}\int_{0<t_{2 }<t_{1}<T}\hskip-14.226378pt\tilde{V}(t_{1})\tilde{V}(t_{2})dt_{2}dt_{1}+O( \epsilon^{3}).\] Let, \(U_{d}\) be the desired unitary gate. We wish to choose \(V(t,x)\) over \((t,x)\in[0,T]\times[0,L]\) so that \(\parallel U_{d}-U(T)\parallel\) is minimized upto \(O(\epsilon^{2})\) terms and expending through Dyson series. Now, \[\parallel U_{d}-U(T)\parallel^{2}=\parallel U_{d}-exp(-itH_{0})W(T) \parallel^{2}\] \[=\parallel exp(itH_{0})U_{d}-I+i\epsilon\int_{0}^{T}\hskip-14.226378pt V(t)dt\] \[+\epsilon^{2}\int_{0<t_{2}<t_{1}<T}\hskip-14.226378pt\tilde{V}(t_{1})\tilde{V}(t_{2})dt_{2}dt_{1} \parallel^{2}+O(\epsilon^{3}). \tag{11}\] where \(\parallel(.)\parallel\) stands for the Frobenius norm, defined as: \[\parallel X\parallel^{2}=Tr\left(X^{*}X\right)\] Setting, \[W_{d}=exp(itH_{0})U_{d}-I.\] we have, \[\parallel U_{d}-U(T)\parallel^{2}=\parallel W_{d}\parallel^{2}+ \epsilon^{2}\int_{0<t_{1},t_{2}<T}Tr(\tilde{V}(t_{1})\tilde{V}(t_{2}))dt_{1}dt _{2}\] \[+2\epsilon\int_{0}^{T}Im(Tr(W_{d}\tilde{V}(t)))dt+2\epsilon^{2} \int_{0<t_{2}<t_{1}<T}\hskip-14.226378ptRe(Tr(W_{d}\tilde{V}(t_{2})\tilde{V}( t_{1})))dt_{2}dt_{1}+O(\epsilon^{3}). \tag{12}\] We expand the external potential, \[V(t,x)=\sum_{n}V_{n}(t)sin(n\pi x).\] The energy constraint is in quantum computing is a powerful tool, but it can be very energy intensive. To make the most of quantum computing, it is important to understand and manage the energy constraints it imposes. One of the most important energy constraints in quantum computing is the power budget. Quantum computers require large amounts of power to run, and they are subject to a "power wall" that limits the amount of power they can consume. This means that the total power consumed by a quantum computer must be kept within certain bounds in order to ensure that the device does not become unstable and fail. Another energy constraint in quantum computing is the cooling requirements. Quantum computers must be kept extremely cold in order to function properly. This means that large amounts of energy must be used to cool the computers and keep them at the required temperature. 
Finally, quantum computers must be carefully shielded from external sources of noise and interference. This requires the use of additional energy, both for the shielding itself and for the cooling required to keep the shielding at the optimum temperature. Overall, quantum computing imposes a number of energy constraints that must be taken into account in order to get the most out of the technology. By understanding these constraints and managing them effectively, quantum computing can become a powerful and efficient tool. \[\sum_{p=1}^{N}\alpha[p]\int_{0}^{T}V_{p}^{2}(t)dt=E.\] This is obtained from, \[\int_{0}^{T}\int_{0}^{L}R(x)V^{2}(t,x)dtdx=constt,\] where, \(R(x)\) is a known function (conductance per unit length). Take \(E=E_{0}/5\), \(\alpha[p]=1(constant)\) for all \(p\) and \(\epsilon=1\) where, \(E_{0}=\pi^{2}/2\) (\(\pi^{2}/2mL^{2}\)) is the ground state unperturbed energy. This value of \(E\) ensures that the perturbation is indeed small compared to the unperturbed energy. Further, choose \(T=10/E_{0}=20/\pi^{2}\). The ground state evolves as \(exp(-iE_{0}t)\), ie, with frequency \(E_{0}\) and hence \(1/E_{0}\) is the longest time scale involved in the unperturbed system. We have chosen \(T\) to be 10 times this longest time scale to allow sufficiently good gate approximation. Now, \[Tr(W_{d}\tilde{V}(t))=\sum_{m,n=1}^{N}W_{d}[m,n]\langle n|\tilde{V}(t)|m\rangle.\] \[=2\sum_{m,n,p}W_{d}[m,n]V_{p}(t)exp(i\omega[nm]t)\int_{0}^{1}sin(n\pi x)sin(m \pi x)sin(p\pi x)dx, \tag{13}\] where, \[w_{mn}=\omega[nm]=E_{n}-E_{m}=(n^{2}-m^{2})\pi^{2}/2.\] Let, \[\gamma[mnp]=2\int_{0}^{1}sin(n\pi x)sin(m\pi x)sin(p\pi x)dx.\] \[\approx(2/M)\sum_{r=0}^{M-1}sin(n\pi r/M)sin(m\pi r/M)sin(p\pi r/M),\] where, \(M=100\). Thus, \[2\epsilon\int_{0}^{T}Im(Tr(W_{d}\tilde{V}(t)))dt=\] \[4\int_{0}^{T}\sum_{m,n,p=1}^{N}\hskip-14.226378ptIm(W_{d}[m,n]exp(i\omega[nm] t))\gamma[mnp]V_{p}(t)dt\] \[=\sum_{p=1}^{N}\int_{0}^{T}\beta_{p}(t)V_{p}(t)dt,\] where, \[\beta_{p}(t)=4\sum_{m,n}\gamma[mnp]Im(W_{d}[m,n]exp(i\omega[nm]t)).\] Note that, \(((W_{d}[m,n]))\) is an \(N\times N\) matrix and so is \(\omega[n,m]=E_{n}-E_{m}\). These have to be defined before implementing the above program. To compute and store \(W_{d}[n,m]=\langle n|W_{d}|m\rangle\), we observe that, \[W_{d}=exp(iTH_{0})U_{d}-I,\] so that, \[W_{d}[m,n]=exp(iE_{m}T)\langle m|U_{d}|n\rangle-\delta[m-n].\] \(U_{d}\) will typically be given as a kernel \(U_{d}(x,y)\) from which \(U_{d}[m,n]\) will be calculated as, \[U_{d}[m,n]=2\int_{0}^{1}\int_{0}^{1}U_{d}(x,y)sin(m\pi x)sin(n\pi y)dxdy\] \[\approx(1/M^{2})\sum_{r,s=0}^{M-1}\hskip-10.0ptU_{d}(r/M,s/M)sin(m\pi r/M)sin(n \pi s/M).\] A program can be written to compute and store \(U_{d}[m,n]\) as an \(N\times N\) matrix. Or, we may take for the purpose of illustration, \(U_{d}[m,n]=exp(2\pi i(m-1)(n-1)/N)/\sqrt{N}\) with \(1\leq m,n\leq N\), i.e. the DFT matrix. We may define the vector \(E=((E(n)))_{n=1}^{N}\), of unperturbed energies also beforehand \(E(n)=n^{2}\pi^{2}/2\). Define the diagonal matrix, \[D=diag[exp(iE(m)T),m=1,2,...,N],\] and then compute, \[W_{d}=D*U_{d}-I.\] We get the approximation, \[2\epsilon\int_{0}^{T}Im(Tr(W_{d}\tilde{V}(t)))dt\] \[\approx\sum_{p,r}\beta[K(p-1)+r+1]V_{p}(r\tau)\tau.\] (We have not yet obtained \(V\), we are proceeding that \(V\) has been stored in this form as an unknown \(KN\times 1\) vector) We have, \[2\epsilon\int_{0}^{T}Im(Tr(W_{d}\tilde{V}(t)))dt\approx\sum_{p,r}\beta[K(p-1) +r+1]V[K(p-1)+r+1]\tau. 
\tag{14}\] \[=\tau.\sum_{m=1}^{KN}\beta[m]V[m].\] Likewise, we compute, \[Term(2)=\int_{0<t_{1},t_{2}<T}Tr(\tilde{V}(t_{1})\tilde{V}(t_{2}))dt_{1}dt_{2}.\] \[=\int_{0<t_{1},t_{2}<T}\sum_{m,n=1}^{N}exp(i\omega[m,n](t_{1}-t_{2}))\langle m |V(t_{1})|n\rangle\langle n|V(t_{2})|m\rangle dt_{1}dt_{2}. \tag{15}\] We now propose to replace all continuous-time integrals by discrete sums over integer time indices with \(\tau\) as the time discretization step size. Further, we club the discrete-time index and the potential integer index into a single index by the process of lexicographic ordering. This clubbing enables us to formulate the problem of determining the optimal potential as a matrix-vector combined quadratic optimization problem, which can be optimized using MATLAB [15],[16]. Again, we observe that, \[\langle m|V(t)|n\rangle=\sum_{p}\gamma[mnp]V_{p}(t),\] and hence, \[Term(2)=\sum_{mnpq=1}^{N}\gamma[mnp]\gamma[nmq]\int V_{p}(t_{1})V_{q}(t_{2})cos( \omega[m,n](t_{1}-t_{2}))dt_{1}dt_{2}. \tag{16}\] \[=\tau^{2}\hskip-14.226378pt\sum_{mnpq,r_{1}r_{2}}\hskip-14.226378pt\gamma[mnp] \gamma[mnq]V_{p}(r_{1}\tau)V_{q}(r_{2}\tau)cos(\omega[m,n](r_{1}-r_{2})\tau). \tag{17}\] \[= \tau^{2}\sum\gamma[mnp]\gamma[mnq]cos(\omega[m,n](r_{1}-r_{2})\tau)\] \[\times V[K(p-1)+r_{1}+1]V[K(q-1)+r_{2}+1]. \tag{18}\] Thus, defining the \(KN\times KN\) matrix \(C\) with entries, \[C[K(p-1)+r_{1}+1,K(q-1)+r_{2}+1]\] \[=\tau^{2}\sum_{m,n=1}^{N}\gamma[mnp]\gamma[mnq]cos(\omega[m,n](r_{1}-r_{2}) \tau).\] for \(1\leq p,q\leq N\), \(0\leq r_{1},r_{2}\leq K-1\), we have, \[Term(2)=\] \[=\sum_{m,n=1}^{KN}C[m,n]V[m]V[n].\] Likewise, \[Term(3)=\int_{0<t_{2}<t_{1}<T}\hskip-14.226378ptRe(Tr(W_{d}\tilde{V}(t_{2}) \tilde{V}(t_{1})))dt_{2}dt_{1}.\] \[=\int_{0<t_{2}<t_{1}<T}\sum_{mnp}Re(W_{d}[m,n]<n|V(t_{2})|p><p|V(t_{1})|m>)dt_ {2}dt_{1}. \tag{19}\] \[=\int_{t_{2}<t_{1}}\sum_{mnp}Re(W_{d}[m,n]\gamma[npa]\gamma[pmb]\] \[\times exp(i(\omega[p,m]t_{1}+\omega[n,p]t_{2}))V_{a}(t_{1})V_{b} (t_{2})dt_{2}dt_{1}. \tag{20}\] Define the \(KN\times KN\) matrix \(D\) by, \[D[K(a-1)+r_{1}+1,K(b-1)+r_{2}+1]=\] \[= \hskip-14.226378pt\sum_{mnp=1}^{N}\hskip-14.226378pt\gamma[npa] \gamma[pmb]Re(W_{d}[m,n]\times exp(i(\omega[p,m]r_{1}+\omega[n,p]r_{2})\tau)). \tag{21}\] Note that, \(\gamma[mnp]\) is accessed as \(\gamma[N^{2}(m-1)+N(n-1)+p]\) from the memory. Then, \[Term(3)=\hskip-14.226378pt\sum_{a,b,r_{1},r_{2},r_{2}<r_{1}}\hskip-14.226378ptD [K(a-1)+r_{1}+1,K(b-1)+r_{2}+1]\] \[\times V[K(a-1)+r_{1}+1]V[K(b-1)+r_{2}+1]. \tag{22}\] This, like \(Term(2)\) is a quadratic form in \(V\) and can be expressed as \(V^{T}HV\) where \(H\) is derived from \(D\) in an obvious way. Finally, combining all this, the optimization problem can be cast as, \[\mbox{minimize }F(\mathbf{V},\lambda)=\mathbf{V}^{T}\mathbf{Q}\mathbf{V}+\beta^{T} \mathbf{V}+\lambda(\mathbf{V}^{T}\mathbf{R}\mathbf{V}-E).\] where, \(\mathbf{Q},\mathbf{R}\) are positive definite matrices and \(E\) is a given real positive scalar, \(\lambda\) is a real scalar, \(\beta\) is a real vector.[17],[18] \[\nabla\mathbf{V}F=0,dF/d\lambda=0,\] give, \[\mathbf{Q}\mathbf{V}+\beta+\lambda\mathbf{R}\mathbf{V}=0,\mathbf{V}^{T} \mathbf{R}\mathbf{V}=E.\] The first gives, \[\mathbf{V}=-(\mathbf{Q}+\lambda\mathbf{R})^{-1}\beta,\] and substituting this into the second gives, \[\beta^{T}(\mathbf{Q}+\lambda\mathbf{R})^{-T}\mathbf{R}(\mathbf{Q}+\lambda \mathbf{R})^{-1}\beta=E.\] This equation is very hard to solve for \(\lambda\). 
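Before writing out the recursive scheme, note that one simple workaround is to solve this scalar equation for \(\lambda\) numerically: the left-hand side is monotonically non-increasing in \(\lambda\) for \(\lambda\geq 0\), so a bisection succeeds whenever the budget \(E\) is feasible. The sketch below is our illustration (not the procedure used in the paper) and substitutes small random positive definite stand-ins for the \(KN\times KN\) matrices \(\mathbf{Q}\), \(\mathbf{R}\) and the vector \(\beta\) assembled above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Small synthetic stand-ins for Q, R (positive definite) and beta.
d = 12
A = rng.standard_normal((d, d)); Q = A @ A.T + d * np.eye(d)
B = rng.standard_normal((d, d)); R = B @ B.T + d * np.eye(d)
beta = rng.standard_normal(d)

def V_of(lam):
    """Stationarity condition: V = -(Q + lam R)^{-1} beta."""
    return -np.linalg.solve(Q + lam * R, beta)

def constraint(lam):
    """g(lam) = V(lam)^T R V(lam); the energy constraint asks g(lam) = E."""
    V = V_of(lam)
    return V @ (R @ V)

E_budget = 0.5 * constraint(0.0)   # pick a feasible energy budget for the demo

# g(lam) is non-increasing for lam >= 0, so bracket the root and bisect.
lo, hi = 0.0, 1.0
while constraint(hi) > E_budget:
    hi *= 2.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if constraint(mid) > E_budget else (lo, mid)

lam_star = 0.5 * (lo + hi)
V_star = V_of(lam_star)
print("lambda* =", lam_star, "  V^T R V =", V_star @ (R @ V_star), "  E =", E_budget)
```

The same stationarity relation \(\mathbf{V}=-(\mathbf{Q}+\lambda\mathbf{R})^{-1}\beta\) underlies the recursive update written out next.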
Alternatively, a recursive algorithm for calculating \(\mathbf{V}\) is as follows: start with an initial guess \(\lambda^{(0)}\) for \(\lambda\). For \(n=0,1,2,...\) compute,

\[\mathbf{V}^{(n+1)}=-(\mathbf{Q}+\lambda^{(n)}\mathbf{R})^{-1}\beta,\]

\[\lambda^{(n+1)}=-[\mathbf{V}^{(n+1)T}(\mathbf{Q}\mathbf{V}^{(n+1)}+\beta)]/[\mathbf{V}^{(n+1)T}\mathbf{R}\mathbf{V}^{(n+1)}].\]

## 3 Simulation Results

The noise-to-signal energy ratio (NSER) versus time plot shows that the NSER decreases with increasing evolution time; thus, very low NSERs can be achieved by increasing the duration of the evolution [23]. The NSER is defined as

\[NSER=\frac{\|U_{d}-U(T)\|^{2}}{\|U_{d}\|^{2}}\]

where \(U_{d}\) is the desired gate and \(U(T)\) is the simulated gate. The NSER decreases with time and settles at a non-zero steady value: it cannot reach zero, because the truncated Dyson series cannot reproduce an exactly unitary gate. The quantum gate can also be designed in the presence of noise, and the potential can be extended up to \(\mathcal{O}(\epsilon^{k})\).

## 4 Design a Gate using Magnetic Field Control

If a particle in a 3-D box is subjected to an electric field applied along only the X-direction, then the total Hamiltonian is

\[H(t)=-\frac{1}{2m}\bigg(\frac{\partial^{2}}{\partial X^{2}}+\frac{\partial^{2}}{\partial Y^{2}}+\frac{\partial^{2}}{\partial Z^{2}}\bigg)+\epsilon\sum_{n\geq 1}V_{n}(t)\sin\bigg(\frac{n\pi X}{L}\bigg)\]

The eigenstates of the unperturbed Hamiltonian obtained after applying the boundary conditions are

\[\left|nmp\right>=\left(\frac{2}{L}\right)^{\frac{3}{2}}\sin\bigg(\frac{n\pi X}{L}\bigg)\sin\bigg(\frac{m\pi Y}{L}\bigg)\sin\bigg(\frac{p\pi Z}{L}\bigg)\]

and hence, in the interaction picture up to \(O(\epsilon^{2})\), the matrix elements of the evolution operator are

\[\left<n^{\prime}m^{\prime}p^{\prime}\right|W(T)\left|nmp\right>=\delta[n^{\prime}-n]\delta[m^{\prime}-m]\delta[p^{\prime}-p]\]
\[-\iota\epsilon\int_{0}^{T}\left<n^{\prime}m^{\prime}p^{\prime}\right|V(t,X)\left|nmp\right>\exp(\iota(E_{n^{\prime}m^{\prime}p^{\prime}}-E_{nmp})t)dt\]
\[-\epsilon^{2}\sum_{n^{\prime\prime}m^{\prime\prime}p^{\prime\prime}}\int_{0<t_{2}<t_{1}<T}\left<n^{\prime}m^{\prime}p^{\prime}\right|V(t_{1},X)\left|n^{\prime\prime}m^{\prime\prime}p^{\prime\prime}\right>\left<n^{\prime\prime}m^{\prime\prime}p^{\prime\prime}\right|V(t_{2},X)\left|nmp\right>\]
\[\times\exp(\iota((E_{n^{\prime}m^{\prime}p^{\prime}}-E_{n^{\prime\prime}m^{\prime\prime}p^{\prime\prime}})t_{1}+(E_{n^{\prime\prime}m^{\prime\prime}p^{\prime\prime}}-E_{nmp})t_{2}))dt_{1}dt_{2}\]

where \(E_{nmp}=\frac{\pi^{2}}{2m_{0}L^{2}}(n^{2}+m^{2}+p^{2})\).
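Since the potential depends only on \(X\), the Y and Z integrals in these matrix elements reduce to orthonormality integrals. A quick numerical illustration (not from the paper; the linear test potential and the function names are ours) of the separability that leads to the tensor-product form discussed next:

```python
import numpy as np

L = 1.0
Ng = 64
s = (np.arange(Ng) + 0.5) * L / Ng   # midpoint grid used in each direction
ds = L / Ng

def u(n, z):
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * z / L)

V_profile = s / L                    # uniform-field (linear) test potential, V(X) ~ X

def elem_3d(n2, m2, p2, n1, m1, p1):
    """Full 3-D quadrature of <n'm'p'| V(X) |nmp> over the box."""
    fx = u(n2, s) * V_profile * u(n1, s)
    fy = u(m2, s) * u(m1, s)
    fz = u(p2, s) * u(p1, s)
    integrand = fx[:, None, None] * fy[None, :, None] * fz[None, None, :]
    return integrand.sum() * ds ** 3

def elem_1d(n2, n1):
    return np.sum(u(n2, s) * V_profile * u(n1, s)) * ds

print(elem_3d(2, 1, 1, 1, 1, 1), elem_1d(2, 1))   # agree: the 1-D element carries over
print(elem_3d(2, 2, 1, 1, 1, 1))                  # ~0, since m' != m
```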
It is easy to see that this matrix factorizes into tensor product form:

\[\left\langle n^{\prime}m^{\prime}p^{\prime}\right|W(T)\left|nmp\right\rangle=\left\langle n^{\prime}\right|W_{X}(T)\left|n\right\rangle\delta[m^{\prime}-m]\delta[p^{\prime}-p]\]

where

\[\left\langle n^{\prime}\right|W_{X}(T)\left|n\right\rangle=\delta[n^{\prime}-n]-\iota\epsilon\int_{0}^{T}\left\langle n^{\prime}\right|V(t,X)\left|n\right\rangle\exp(\iota(E_{n^{\prime}}-E_{n})t)dt\]
\[-\epsilon^{2}\sum_{n^{*}}\int_{0<t_{2}<t_{1}<T}\left\langle n^{\prime}\right|V(t_{1},X)\left|n^{*}\right\rangle\left\langle n^{*}\right|V(t_{2},X)\left|n\right\rangle\exp(\iota((E_{n^{\prime}}-E_{n^{*}})t_{1}+(E_{n^{*}}-E_{n})t_{2}))dt_{1}dt_{2}\]

Thus our designed gate has the form

\[W_{X}(T)\otimes I_{N}\otimes I_{N}\]

and hence, if \(W_{g}\) is the given gate, we have to choose \(\{V_{n}(t)\}\) so that \(\|W_{X}(T)\otimes I_{N}\otimes I_{N}-W_{g}(T)\|^{2}\) is a minimum, that is, minimize (up to the additive constant \(\|W_{g}(T)\|^{2}\))

\[N^{2}\|W_{X}(T)\|^{2}-2Re\left(Tr\left((W_{X}(T)\otimes I_{N}\otimes I_{N})W_{g}(T)^{*}\right)\right)=N^{2}\|W_{X}(T)\|^{2}-2Re\left(Tr\,W_{X}(T)\left(Tr_{23}W_{g}(T)^{*}\right)\right)\]

So the design amounts to the same optimization algorithm as before, but with the given gate \(W_{g}(T)\) replaced by its partial trace over the Y and Z dimensions, \(W_{g}(T)\to Tr_{23}(W_{g}(T))\). For a constant magnetic field \(B_{0}\) coupled to the spin, on the other hand, the evolution operator is diagonal:

\[\left\langle nmps\right|U(T)\left|n^{\prime}m^{\prime}p^{\prime}s^{\prime}\right\rangle=\delta_{ss^{\prime}}\delta[n-n^{\prime}]\delta[m-m^{\prime}]\delta[p-p^{\prime}]\exp(-\iota E_{nmps}T)\]

where \(E_{nmps}=\frac{\pi^{2}}{2m_{0}L^{2}}(n^{2}+m^{2}+p^{2})+\frac{\epsilon sB_{0}}{2m}\), and if \(B_{0}\) is a constant then \(\left\langle s^{\prime}\right|\sigma\left|s\right\rangle=\delta_{s^{\prime}s}\). In this case, after a constant magnetic field perturbation, the evolution operator remains diagonal and hence is of little use for gate design. Even when the magnetic field depends only on time and not on space, the exact perturbed unitary evolution operator remains diagonal.

## 5 Conclusion

We have minimized the discrepancy between a given unitary gate and the gate designed for a free charged particle in a 1-D box, bounded by two walls of infinite potential, in the presence of a weak electric field. We used the perturbation approach to design the gate, which is appropriate for physical systems such as atoms and molecules, and we used the Dyson series in the interaction picture to calculate the evolution operator matrix elements up to the linear and quadratic integral functionals of the potential. In this paper, we have made an attempt to synthesize the quantum Fourier transform unitary gate by perturbing a particle in a box with a non-uniform time-varying electric field. The particle is assumed to carry a charge, and the total Hamiltonian of the particle is given by

\[H=-\frac{1}{2}\frac{d^{2}}{dx^{2}}-eV(t,x),\quad 0\leq x\leq L\]

where \(V(t,x)\) is the perturbing potential corresponding to the non-uniform electric field.
The boundary conditions are that the wave function satisfying the Schrödinger equation vanishes at \(x=0\) and \(x=L\). The resulting unitary evolution operator is developed into a Dyson series, retaining only terms up to \(\mathcal{O}(e)\), and this evolution operator is expressed relative to the eigenbasis of the unperturbed system, i.e., of a free particle in a box, which has eigenfunctions \[u_{n}(x)=(2/L)^{1/2}\sin(n\pi x/L),\quad n=1,2,...\] with corresponding energy eigenvalues \(n^{2}\pi^{2}/2mL^{2},\;n=1,2,...\). After truncating this basis, we calculate the \(O(e)\) term in the Dyson series expansion by representing \(V(t,x)=\sum_{n}V_{n}(t)u_{n}(x)\), with the control functions \(V_{n}(t)\) chosen so that the squared Frobenius-norm error between the given QFT gate and the realized gate is as small as possible. The optimality equations are coupled linear integral equations for the unknown functions \(V_{n}(t),0\leq t\leq T,n\geq 1\), and are solved in MATLAB by discretizing the linear integral equations into linear matrix equations. ## 6 Future Work Suppose that we have designed our quantum unitary gate and obtained a given set of potential coefficients \(V_{n}(t)\). Now suppose that noise enters our system, i.e., instead of \(V_{n}(t)\), we have \(V_{n}(t)+w_{n}(t)\), or equivalently, \(V(t,x)\) is replaced by the randomly perturbed potential \[\sum_{n}(V_{n}(t)+w_{n}(t))u_{n}(x)\] Then, we can calculate the change in the evolution operator up to linear order in the noise processes \(w_{n}(t)\). Thus, the perturbed unitary gate will have the form \[U(T)=U_{0}(T)+\sum_{n}\int_{0}^{T}w_{n}(t)W_{n}(t)\,dt\] where the \(W_{n}(t)\) are non-random matrices. We can then calculate the increase in the mean-square gate error energy due to (zero-mean) noise: \[\mathbb{E}(\parallel U_{g}-U(T)\parallel^{2})\] \[= \parallel U_{g}-U_{0}(T)\parallel^{2}+\sum_{n,m}\int_{0}^{T}\int_{0}^{T}\mathbb{E}(w_{n}(t)w_{m}(s))\] \[\times \mathrm{Tr}(W_{n}(t)W_{m}(s)^{*})\,dt\,ds\] Another possible extension of this problem is to consider a particle in a 3-D box perturbed by a time-varying electromagnetic field with vector potential \(A(t,r)\) and scalar potential \(V(t,r)\). The Hamiltonian is then: \[H(t) = \frac{(-i\hbar\nabla+eA(t,r))^{2}}{2m}-eV(t,r)\] \[= H_{0}+eV_{1}+e^{2}V_{2}\] where, \[H_{0}=-\frac{\hbar^{2}\nabla^{2}}{2m}\] \[V_{1}(t)=-\frac{i\hbar}{2m}\left(A(t,r)\cdot\nabla+\nabla\cdot A(t,r)\right)-V(t,r)\] \[V_{2}(t)=\frac{A^{2}(t,r)}{2m}\] Due to this perturbation, the unitary evolution \(U(t)\) must be expanded up to \(\mathcal{O}(e^{2})\), and then matching with the desired unitary gate \(U_{g}\) has to be carried out. Specifically, \[U(t)=U_{0}(t)\left(I+eW_{1}(t)+e^{2}W_{2}(t)\right)\] where \(U_{0}(t)=\exp\left(\frac{-itH_{0}}{\hbar}\right)\). So, equating each power of \(e\) gives \[i\hbar\frac{dW_{1}(t)}{dt}=\widetilde{V_{1}}(t)\] \[i\hbar\frac{dW_{2}(t)}{dt}=\widetilde{V_{1}}(t)W_{1}(t)+\widetilde{V_{2}}(t)\] where \[\widetilde{V_{k}}(t)=U_{0}(-t)V_{k}(t)U_{0}(t)\] Optimization with respect to \(A(t,r)\) and \(V(t,r)\) must then be carried out so that the gate error energy \[\left\|U_{g}-I-eW_{1}(T)-e^{2}W_{2}(T)\right\|^{2}\] is minimized; this is left as a subject for future work. In addition, we can investigate more complex quantum Fourier gates and their applications in quantum computing. We can also explore the use of quantum Fourier gates in other areas such as quantum cryptography and quantum communication.
Investigating these complex quantum Fourier gates and their applications has the potential to revolutionize our approach to problem-solving, allowing us to create more efficient algorithms. ## Data availability statement There are no data associated with this manuscript. ## Conflicts of interest On behalf of all authors, the corresponding author states that there is no conflict of interest.
2302.01226
Factor Fields: A Unified Framework for Neural Fields and Beyond
We present Factor Fields, a novel framework for modeling and representing signals. Factor Fields decomposes a signal into a product of factors, each represented by a classical or neural field representation which operates on transformed input coordinates. This decomposition results in a unified framework that accommodates several recent signal representations including NeRF, Plenoxels, EG3D, Instant-NGP, and TensoRF. Additionally, our framework allows for the creation of powerful new signal representations, such as the "Dictionary Field" (DiF) which is a second contribution of this paper. Our experiments show that DiF leads to improvements in approximation quality, compactness, and training time when compared to previous fast reconstruction methods. Experimentally, our representation achieves better image approximation quality on 2D image regression tasks, higher geometric quality when reconstructing 3D signed distance fields, and higher compactness for radiance field reconstruction tasks. Furthermore, DiF enables generalization to unseen images/3D scenes by sharing bases across signals during training which greatly benefits use cases such as image regression from sparse observations and few-shot radiance field reconstruction.
Anpei Chen, Zexiang Xu, Xinyue Wei, Siyu Tang, Hao Su, Andreas Geiger
2023-02-02T17:06:50Z
http://arxiv.org/abs/2302.01226v3
# Factor Fields: A Unified Framework for Neural Fields and Beyond ###### Abstract We present Factor Fields, a novel framework for modeling and representing signals. Factor Fields decomposes a signal into a product of factors, each represented by a classical or neural field representation which operates on transformed input coordinates. This decomposition results in a unified framework that accommodates several recent signal representations including NeRF, Plenoxels, EG3D, Instant-NGP, and TensoRF. Additionally, our framework allows for the creation of powerful new signal representations, such as the "Coefficient-Basis Factorization" (CoBaFa) which is a second contribution of this paper. Our experiments show that CoBaFa leads to improvements in approximation quality, compactness, and training time when compared to previous fast reconstruction methods. Experimentally, our representation achieves better image approximation quality on 2D image regression tasks, higher geometric quality when reconstructing 3D signed distance fields, and higher compactness for radiance field reconstruction tasks. Furthermore, CoBaFa enables generalization to unseen images/3D scenes by sharing bases across signals during training which greatly benefits use cases such as image regression from sparse observations and few-shot radiance field reconstruction. ## 1 Introduction Effectively representing multi-dimensional digital content - like 2D images or 3D geometry and appearance - is critical for computer graphics and vision applications. These digital signals are traditionally represented discretely as pixels, voxels, textures, or polygons. Recently, significant headway has been made in developing advanced neural representations [39, 56, 40, 13, 54], which demonstrated superiority in modeling accuracy and efficiency over traditional representations for different image synthesis and scene reconstruction applications. In order to gain a better understanding of existing representations, make comparisons across their design principles, and create powerful new representations, we propose _Factor Fields_, a novel mathematical framework that unifies many previous neural representations for multi-dimensional signals. This framework offers a simple formulation for modeling and representing signals. Our framework decomposes a signal by factorizing it into multiple factor fields (\(\mathbf{f}_{1},\dots,\mathbf{f}_{N}\)) operating on suitably chosen coordinate transformations (\(\boldsymbol{\gamma}_{1},\dots,\boldsymbol{\gamma}_{N}\)) as illustrated in Fig. 1. More specifically, each factor field decodes multi-channel features at any spatial location of a coordinate-transformed signal domain. The target signal is then regressed from the factor product via a learned projection function (e.g., MLP). Our framework accommodates most previous neural representations. Many of them can be represented in our framework as a single factor with a domain transformation - for example, the MLP network as a factor with a positional encoding transformation in NeRF [39], the tabular encoding as a factor with a hash transformation in Instant-NGP [40], and the feature grid as a factor with identity transformation in DVGO [56] and Plenoxels [18]. Recently, TensoRF [13] introduced a tensor factorization-based representation, which can be seen as a representation of two (vector-matrix decomposition) or three (CP decomposition) factors with axis-aligned orthogonal 2D and 1D projections as transformations.
The potential of a multi-factor representation has been demonstrated by TensoRF, which has led to superior quality and efficiency on radiance field reconstruction and rendering, while being limited to orthogonal transformations. This motivates us to generalize previous classic neural representations via a single unified framework which enables easy and flexible combinations of previous neural fields and transformation functions, yielding novel representation designs. As an example, we present _Coefficient-Basis Factorization (CoBaFa)_, a two-factor representation that is composed of (1) a basis function factor with periodic transformation to model the commonalities of patterns that are shared across the entire signal domain and (2) a coefficient field factor with identity transformation to express localized spatially-varying features in the signal. The combination of both factors allows for an efficient representation of the global and local properties of the signal. Note that, most previous single-factor representations can be seen as using only one of such functions - either a basis function, like NeRF and Instant-NGP, or a coefficient function, like DVGO and Plenoxels. In CoBaFa, jointly modeling two factors (basis and coefficients) leads to superior quality over previous methods like Instant-NGP and enables compact and fast reconstruction, as we demonstrate on various downstream tasks. As CoBaFa is a member of our general Factor Fields family, we conduct a rich set of ablation experiments over the choice of basis/coefficient functions and basis transformations. We evaluate CoBaFa against various variants and baselines on three classical signal representation tasks: 2D image regression, 3D SDF geometry reconstruction, and radiance field reconstruction for novel view synthesis. We demonstrate that our factorized CoBaFa representation is able to achieve state-of-the-art reconstruction results that are better or on par with previous methods, while achieving superior modeling efficiency. For instance, compared to Instant-NGP our method leads to better reconstruction and rendering quality, while effectively _halving_ the total number of model parameters (capacity) for SDF and radiance field reconstruction, demonstrating its superior accuracy and efficiency. Moreover, in contrast to recent neural representations that are designed for purely per-scene optimization, our factorized representation framework is able to learn basis functions across different scenes. As shown in preliminary experiments, this enables learning across-scene bases from multiple 2D images or 3D radiance fields, leading to signal representations that generalize and hence improve reconstruction results from sparse observations such as in the few-shot radiance reconstruction setting. In summary, * We introduce a common framework _Factor Fields_ that encompasses many recent radiance field / signal representations and enables new models from the Factor Fields family. * We propose _CoBaFa_, a new member of the Factor Fields family representation that factorizes a signal into coefficient and basis factors which allows for exploiting similar signatures spatially and across scales. * Our model can be trained jointly on multiple signals, recovering general basis functions that allow for reconstructing parts of a signal from sparse or weak observations. 
* We present thorough experiments and ablation studies that demonstrate improved performance (accuracy, runtime, memory), and shed light on the performance improvements of CoBaFa vs. other models in the Factor Fields family. ## 2 Factor Fields We seek to compactly represent a continuous \(Q\)-dimensional signal \(\mathbf{s}:\mathbb{R}^{D}\rightarrow\mathbb{R}^{Q}\) on a \(D\)-dimensional domain. We assume that signals are not random, but structured and hence share similar signatures within the same signal (spatially and across different scales) as well as between different signals. In the following, we develop our factor fields model step-by-step, starting from a standard basis expansion. Coefficient-Basis Factorization (CoBaFa):Let us first consider a 1D signal \(s(\mathbf{x}):\mathbb{R}^{D}\rightarrow\mathbb{R}\). Using basis expansion, we decompose \(s(\mathbf{x})\) into a set of coefficients \(\mathbf{c}=(c_{1},\dots,c_{K})^{\top}\) with \(c_{k}\in\mathbb{R}\) and basis functions \(\mathbf{b}(\mathbf{x})=(b_{1}(\mathbf{x}),\dots,b_{K}(\mathbf{x}))^{\top}\) with \(b_{k}:\mathbb{R}^{D}\rightarrow\mathbb{R}\): \[\hat{s}(\mathbf{x})=\mathbf{c}^{\top}\mathbf{b}(\mathbf{x}) \tag{1}\] Note that we denote \(s(\mathbf{x})\) as the true signal and \(\hat{s}(\mathbf{x})\) as its approximation. Representing the signal \(s(\mathbf{x})\) using a global set of basis functions is inefficient as information cannot be shared spatially. We hence generalize the above formulation by (i) exploiting a spatially varying coefficient field \(\mathbf{c}(\mathbf{x})=(c_{1}(\mathbf{x}),\dots,c_{K}(\mathbf{x}))^{\top}\) with \(c_{k}:\mathbb{R}^{D}\rightarrow\mathbb{R}\) and (ii) transforming the coordinates of the basis functions via a coordinate transformation function \(\boldsymbol{\gamma}:\mathbb{R}^{D}\rightarrow\mathbb{R}^{B}\): \[\hat{s}(\mathbf{x})=\mathbf{c}(\mathbf{x})^{\top}\mathbf{b}\left(\boldsymbol {\gamma}(\mathbf{x})\right) \tag{2}\] When choosing \(\boldsymbol{\gamma}\) to be a periodic function, this formulation allows us to apply the same basis at multiple spatial locations and optionally also at multiple different scales while varying the coefficients \(\mathbf{c}\), as illustrated in Fig. 2. Note that in general \(B\) does not need to match \(D\), and hence the domain of the basis functions also changes accordingly: \(b_{k}:\mathbb{R}^{B}\rightarrow\mathbb{R}\). Further, we obtain Eq. (1) as a special case when setting \(\mathbf{c}(\mathbf{x})=\mathbf{c}\) and \(\boldsymbol{\gamma}(\mathbf{x})=\mathbf{x}\). So far, we have considered a 1D signal \(s(\mathbf{x})\). However, many signals have more than a single dimension (e.g., 3 in the case of RGB images or 4 in the case of radiance fields). We generalize our model to \(Q\)-dimensional signals \(\mathbf{f}(\mathbf{x})\) by introducing a projection function \(\mathcal{P}:\mathbb{R}^{K}\rightarrow\mathbb{R}^{Q}\) and replacing the inner product with the element-wise/Hadamard product (denoted by \(\circ\) in the following): \[\hat{\mathbf{s}}(\mathbf{x})=\mathcal{P}\left(\mathbf{c}(\mathbf{x})\circ \mathbf{b}\left(\boldsymbol{\gamma}(\mathbf{x})\right)\right) \tag{3}\] We refer to Eq. (3) as **Coefficient-Basis Factorization (CoBaFa)**. Note that in contrast to the scalar product \(\mathbf{c}^{\top}\mathbf{b}\) in Eq. 
(2), the output of \(\mathbf{c}\circ\mathbf{b}\) is a \(K\)-dimensional vector which comprises the individual coefficient-basis products as input to the projection function \(\mathcal{P}\) which itself can be either linear or non-linear. In the linear case, we have \(\mathcal{P}(\mathbf{x})=\mathbf{A}\mathbf{x}\) with \(\mathbf{A}\in\mathbb{R}^{Q\times K}\). Moreover, note that for \(Q=1\) and \(\mathbf{A}=(1,\dots,1)\) we recover Eq. (2) as a special case. The projection operator \(\mathcal{P}\) can also be utilized to model the volumetric rendering operation when reconstructing a 3D radiance field from 2D image observations as discussed in Section 2.3. Factor Fields:To allow for more than 2 factors, we generalize Eq. (3) to our full Factor Fields framework by replacing the coefficients \(\mathbf{c}(\mathbf{x})\) and basis \(\mathbf{b}(\mathbf{x})\) with a set of factor fields \(\{\mathbf{f}_{i}(\mathbf{x})\}_{i=1}^{N}\): \[\tilde{\mathbf{s}}(\mathbf{x})=\mathcal{P}\left(\prod_{i=1}^{N}\mathbf{f}_{i }\left(\boldsymbol{\gamma}_{i}(\mathbf{x})\right)\right) \tag{4}\] Here, \(\prod\) denotes the element-wise product of a sequence of factors. Note that in this general form, each factor \(\mathbf{f}_{i}:\mathbb{R}^{F_{i}}\rightarrow\mathbb{R}^{K}\) may be equipped with its own coordinate transformation \(\boldsymbol{\gamma}_{i}:\mathbb{R}^{D}\rightarrow\mathbb{R}^{F_{i}}\). We obtain CoBaFa in Eq. (3) as a special case of our **Factor Fields** framework in Eq. (4) by setting \(\mathbf{f}_{1}(\mathbf{x})=\mathbf{c}(\mathbf{x})\), \(\boldsymbol{\gamma}_{1}(\mathbf{x})=\mathbf{x}\), \(\mathbf{f}_{2}(\mathbf{x})=\mathbf{b}(\mathbf{x})\) and \(\boldsymbol{\gamma}_{2}(\mathbf{x})=\boldsymbol{\gamma}(\mathbf{x})\) with \(N=2\). Besides CoBaFa, the Factor Fields framework generalizes many recently proposed radiance field representations in one unified model as we will discuss in Section 3. In our formulation, \(\{\boldsymbol{\gamma}_{i}\}\) are considered deterministic functions while \(\mathcal{P}\) and \(\{\mathbf{f}_{i}\}\) are parametric mappings (e.g., polynomials, multi-layer perceptrons or 3D feature grids) whose parameters (collectively named \(\theta\) below) are optimized. The parameters \(\theta\) can be optimized either for a single signal or jointly for multiple signals. When optimizing for multiple signals jointly, we share the parameters of the projection function and basis factors (but not the parameters of the coefficient factors) across signals. Figure 2: **Local Basis.** (a) Choosing a (periodic) coordinate transformation \(\boldsymbol{\gamma}(x)\) allows for applying the same basis function \(b(x)\) at multiple spatial locations and scales. For clarity, we have chosen constant coefficients \(\mathbf{c}=\mathbf{1}\). (b) Composing multiple bases at different spatial resolutions with their respective coefficients yields a powerful representation for signal \(s(\mathbf{x})\). In practice, we use multiple bases and coefficient fields at each resolution. ### Factor Fields \(\mathbf{f}_{i}\) For modeling the factor fields \(\mathbf{f}_{i}:\mathbb{R}^{F_{i}}\rightarrow\mathbb{R}^{K}\), we consider various different representations in our Factor Fields framework as illustrated in Fig. 1 (bottom-left). In particular, we consider polynomials, MLPs, 2D and 3D feature grids and 1D feature vectors. MLPs have been proposed as signal representations in Occupancy Networks [38], DeepSDF [44] and NeRF [39]. 
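Independent of the concrete choice for each factor, Eq. (4) prescribes the same evaluation pattern. A minimal PyTorch sketch of this generic forward pass is given below; the placeholder factor fields, coordinate transformations, and projection are illustrative assumptions rather than the specific representations benchmarked in this paper.

```python
import torch

# Generic evaluation of Eq. (4): s_hat(x) = P(prod_i f_i(gamma_i(x))).
def factor_fields(x, factors, transforms, projection):
    """x: (B, D) coordinates; factors/transforms: lists of callables of equal length."""
    prod = None
    for f, gamma in zip(factors, transforms):
        feat = f(gamma(x))                                  # (B, K) features per factor
        prod = feat if prod is None else prod * feat        # Hadamard product
    return projection(prod)                                 # (B, Q) signal estimate

# toy instantiation with N = 2 factors and K = 8 feature channels (all placeholders)
K = 8
coeff_mlp = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.ReLU(), torch.nn.Linear(32, K))
basis_mlp = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.ReLU(), torch.nn.Linear(32, K))
proj = torch.nn.Linear(K, 3)                                # linear projection to an RGB signal

x = torch.rand(16, 2)                                       # 16 query points in [0, 1]^2
s_hat = factor_fields(
    x,
    factors=[coeff_mlp, basis_mlp],
    transforms=[lambda x: x, lambda x: (4.0 * x) % 1.0],    # identity / sawtooth-like
    projection=proj)
print(s_hat.shape)                                          # torch.Size([16, 3])
```

Any of the representations discussed below (polynomials, MLPs, feature grids, or feature vectors) can be substituted for the placeholder factors.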
While MLPs excel in compactness and induce a useful smoothness bias, they are slow to evaluate and hence increase training and inference time. To address this, DVGO [56] proposes a 3D voxel grid representation for radiance fields. While voxel grids are fast to optimize, they increase memory significantly and do not easily scale to higher dimensions. To better capture the sparsity in the signal, Instant-NGP [40] proposes a hash function in combination with 1D feature vectors instead of a dense voxel grid, and TensoRF [13] decomposes the signal into matrix and vector products. Our _Factor Fields_ framework allows any of the above representations to model any factor \(\mathbf{f}_{i}\). As we will see in Section 3, many existing models are hence special cases of our framework. ### Coordinate Transformations \(\boldsymbol{\gamma}_{i}\) The coordinates input to each factor field \(\mathbf{f}_{i}\) are transformed by a coordinate transformation function \(\boldsymbol{\gamma}_{i}:\mathbb{R}^{D}\rightarrow\mathbb{R}^{F_{i}}\). Coefficients: When the factor field \(\mathbf{f}_{i}\) represents coefficients, we use the identity \(\boldsymbol{\gamma}_{i}(\mathbf{x})=\mathbf{x}\) for the corresponding coordinate transformation since coefficients vary freely over the signal domain. Local Basis: The coordinate transformation \(\boldsymbol{\gamma}_{i}\) enables the application of the same basis function \(\mathbf{f}_{i}\) at _multiple locations_ as illustrated in Fig. 2. In this paper, we consider sawtooth, triangular, sinusoidal (as in NeRF [39]), hashing (as in Instant-NGP [40]) and orthogonal (as in TensoRF [13]) transformations in our framework, see Fig. 3. Pyramid Basis: The coordinate transformation \(\boldsymbol{\gamma}_{i}\) also allows for applying the same basis \(\mathbf{f}_{i}\) at _multiple spatial resolutions_ of the signal by transforming the coordinates \(\mathbf{x}\) with (periodic) transformations of different frequencies as illustrated in Fig. 2. This is crucial as signals typically carry both high and low frequencies, and we seek to exploit our basis representation across the full spectrum to model fine details of the signal as well as smooth signal components. Specifically, we model the target signal with a set of multi-FOV (field of view) basis functions. We arrange the basis into \(L\) levels where each level covers a different FOV. Let \([u,v]\) denote the bounding box of the signal along one dimension. The corresponding FOV is given by \((v-u)/f_{l}\) where \(f_{l}\) is the frequency at level \(l\). A large FOV basis (e.g., level \(1\)) has a low frequency and covers a large region of the target signal while a small FOV basis (e.g., level \(L\)) has a large frequency \(f_{L}\) covering a small region of the target signal. We implement our pyramid representation (PR) by multiplying the scene coordinate \(\mathbf{x}\) with the level frequency \(f_{l}\) before feeding it to the coordinate transformation function \(\boldsymbol{\gamma}_{i}\) and then concatenating the results across the different pyramid levels \(l=1,\ldots,L\): \[\boldsymbol{\gamma}_{i}^{\text{PR}}(\mathbf{x})=(\boldsymbol{\gamma}_{i}(\mathbf{x}\,f_{1}),\ldots,\boldsymbol{\gamma}_{i}(\mathbf{x}\,f_{L})) \tag{5}\] Here, \(\boldsymbol{\gamma}_{i}\) is any of the coordinate transformations in Fig. 3, and \(\boldsymbol{\gamma}_{\text{PR}}\) is the final coordinate transform of our pyramid representation. As illustrated in Fig.
2 (b), considering one coefficient factor \(\mathbf{f}_{1}(\mathbf{x})=\mathbf{c}(\mathbf{x})\) and one basis factor \(\mathbf{f}_{2}(\mathbf{x})=\mathbf{b}(\mathbf{x})\) with coordinate transformation \(\boldsymbol{\gamma}_{2}^{\text{PR}}(\mathbf{x})\) results in the target signal \(\mathbf{s}(\mathbf{x})\) being decomposed as the product of spatially varying coefficient maps and multi-level basis maps which comprise repeated local basis functions. ### Projection \(\mathcal{P}\) To represent multi-dimensional signals, we introduced a projection function \(\mathcal{P}:\mathbb{R}^{K}\rightarrow\mathbb{R}^{Q}\) that maps from the \(K\)-dimensional Hadamard product \(\prod_{i}\mathbf{f}_{i}\) to the \(Q\)-dimensional target signal. We distinguish two cases in our framework: The case where direct observations from the target signal are available (e.g., pixels of an RGB image) and the indirect case where observations are projections of the target signal (e.g., pixels rendered from a radiance field). Direct Observations: In the simplest case, the projection function realizes a learnable linear mapping \(\mathcal{P}(\mathbf{x})=\mathbf{A}\mathbf{x}\) with parameters \(\mathbf{A}\in\mathbb{R}^{Q\times K}\) to map the \(K\)-dimensional Hadamard product \(\prod_{i}\mathbf{f}_{i}\) to the \(Q\)-dimensional signal. However, a more flexible model is attained if \(\mathcal{P}\) is represented by a shallow non-linear multi-layer perceptron (MLP) which is the default setting in all of our experiments. Figure 3: **Coordinate Transformations.** We show various periodic (top) and non-periodic (bottom) coordinate transformations \(\boldsymbol{\gamma}\) used in our framework. Indirect Observations: In some cases, we only have access to _indirect_ observations of the signal. For example, when optimizing neural radiance fields, we typically only observe 2D images instead of the 4D signal (density and radiance). In this case, we extend \(\mathcal{P}\) to also include the differentiable volumetric rendering process. More concretely, we first apply a multi-layer perceptron to map the view direction \(\mathbf{d}\in\mathbb{R}^{3}\) and the product features \(\prod_{i}\mathbf{f}_{i}\) at a particular location \(\mathbf{x}\in\mathbb{R}^{3}\) to a color value \(\mathbf{c}\in\mathbb{R}^{3}\) and a volume density \(\sigma\in\mathbb{R}\). Next, we follow Mildenhall et al. [39] and approximate the intractable volumetric projection integral using numerical integration. More formally, let \(\{(\mathbf{c}_{i},\sigma_{i})\}_{i=1}^{N}\) denote the color and volume density values of \(N\) random samples along a camera ray. The RGB color value \(\mathbf{c}_{r}\) at the corresponding pixel is obtained using alpha composition \[\mathbf{c}_{r}=\sum_{i=1}^{N}\,T_{i}\,\alpha_{i}\,\mathbf{c}_{i}\ \ \ T_{i}=\prod_{j=1}^{i-1}\,(1-\alpha_{j})\ \ \ \alpha_{i}=1-\exp\,(-\sigma_{i}\delta_{i}) \tag{6}\] where \(T_{i}\) and \(\alpha_{i}\) denote the transmittance and alpha value of sample \(i\) and \(\delta_{i}=\left\lVert\mathbf{x}_{i+1}-\mathbf{x}_{i}\right\rVert_{2}\) is the distance between neighboring samples. The _composition_ of the learned MLP and the volume rendering function in Eq. (6) constitutes the projection function \(\mathcal{P}\). ### Space Contraction We normalize the input coordinates \(\mathbf{x}\in\mathbb{R}^{D}\) to \([0,1]\) before passing them to the coordinate transformations \(\boldsymbol{\gamma}_{i}(\mathbf{x})\) by applying a simple space contraction function to \(\mathbf{x}\).
We distinguish two settings: For **bounded signals** with \(D\)-dimensional bounding box \([\mathbf{u},\mathbf{v}]\) (where \(\mathbf{u},\mathbf{v}\in\mathbb{R}^{D}\)), we utilize a simple linear mapping to normalize all coordinates to the range \([0,1]\): \[\text{contract}(\mathbf{x})=\frac{\mathbf{x}-\mathbf{u}}{\mathbf{v}-\mathbf{u}} \tag{7}\] For **unbounded signals** (e.g., an outdoor radiance field), we adopt Mip-NeRF 360's [3] space contraction function: \[\text{contract}(\mathbf{x})=\begin{cases}\mathbf{x}&\left\lVert\mathbf{x}\right\rVert_{2}\leq 1\\ \left(2-\frac{1}{\left\lVert\mathbf{x}\right\rVert_{2}}\right)\left(\frac{\mathbf{x}}{\left\lVert\mathbf{x}\right\rVert_{2}}\right)&\left\lVert\mathbf{x}\right\rVert_{2}>1\end{cases} \tag{8}\] ### Optimization Given samples \(\{(\mathbf{x},\mathbf{s}(\mathbf{x}))\}\) from the signal, we minimize \[\operatorname*{argmin}_{\theta}\ \operatorname{\mathbb{E}}_{\mathbf{x}}\left[\left\lVert\mathbf{s}(\mathbf{x})-\hat{\mathbf{s}}_{\theta}(\mathbf{x})\right\rVert_{2}+\Psi(\theta)\right] \tag{9}\] where \(\Psi(\theta)\) is a regularizer on the model parameters. We optimize this objective using stochastic gradient descent. Sparsity Regularization: While using the \(\ell_{0}\) norm for sparse coefficients is desirable, this leads to a difficult optimization problem. Instead, we use a simpler strategy which we found to work surprisingly well. We regularize our objective by randomly dropping a subset of the \(K\) features of our model by setting them to zero with probability \(\mu\). This forces the signal to be represented with random combinations of features at every iteration, encouraging sparsity and preventing co-adaptation of features. We implement this dropout regularizer using a random binary vector \(\mathbf{m}\) which we multiply element-wise with the factor product: \(\mathbf{m}\circ\prod_{i}\mathbf{f}_{i}\). Initialization: During all our experiments, we initialize the basis factors using the discrete cosine transform (DCT), while initializing the parameters of the coefficient factors and projection MLP randomly. We experimentally found this to improve the quality of the solution as illustrated in our ablation study in Table 3a to Table 3e. Multiple Signals: When optimizing for multiple signals jointly, we share the parameters of the projection function and basis factors (but not the parameters of the coefficient factors) across signals. As evidenced by our experiments in Section 4.3, sharing bases across different signals while encouraging sparse coefficients improves generalization and enables reconstruction from sparse observations. ## 3 Factor Fields As A Common Framework Advanced neural representations have emerged as a promising replacement for traditional representations and have been applied to improve the reconstruction quality and efficiency in various graphics and vision applications, such as novel view synthesis [12, 72, 35, 60, 2, 39, 34, 49, 45, 69, 13, 64, 61], 3D surface reconstruction [42, 66, 62, 28], generative models [52, 10, 11, 9, 19, 20, 43], image processing [15], graphics asset modeling [50, 30, 73], inverse rendering [5, 4, 70, 6, 7, 71], dynamic scene modeling [49, 33, 46, 32], and scene understanding [47, 37], amongst others [59]. Inspired by classical factorization and learning techniques, like sparse coding [63, 65, 21] and principal component analysis (PCA) [51, 36], we propose a novel neural factorization-based framework for neural representations.
Our Factor Fields framework unifies many recently published neural representations and enables the instantiation of new models in the Factor Fields family, which, as we will see, exhibit desirable properties in terms of approximation quality, model compactness, optimization time and generalization capabilities. In this section, we will discuss the relationship to prior work in more detail. A systematic performance comparison of the various Factor Field model instantiations is provided in Section 4.4. Occupancy Networks and DeepSDF:[38, 44] represent the surface implicitly as the continuous decision boundary of an MLP classifier or by regressing a signed distance value. The vanilla MLP representation provides a continuous implicit 3D mapping, allowing for the extraction of 3D meshes at any resolution. This setting corresponds to our Factor Fields model when using a single factor (i.e., \(N=1\)) with \(\mathbf{\gamma}_{1}(\mathbf{x})=\mathbf{x}\), \(\mathbf{f}_{1}(\mathbf{x})=\mathbf{x}\), \(\mathcal{P}(\mathbf{x})=\text{MLP}(\mathbf{x})\), thus \(\hat{\mathbf{s}}(\mathbf{x})=\text{MLP}(\mathbf{x})\). While this representation is able to generate high-quality meshes, it fails to model high-frequency signals, such as images due to the implicit smoothness bias of MLPs. **NeRF:**[39] proposes to represent a radiance field via an MLP in Fourier space by encoding the spatial coordinates with a set of sinusoidal functions. This corresponds to our Factor Fields setting when using a single factor with \(\mathbf{\gamma}_{1}(\mathbf{x})=(\sin(\mathbf{x}f_{1}),\cos(\mathbf{x}f_{1}), \ldots,\sin(\mathbf{x}f_{L}),\cos(\mathbf{x}f_{L}))\), \(\mathbf{f}_{1}(\mathbf{x})=\mathbf{x}\) and \(\mathcal{P}(\mathbf{x})=\text{MLP}(\mathbf{x})\). Here, the coordinate transformation \(\mathbf{\gamma}_{1}(\mathbf{x})\) is a sinusoidal mapping as shown in Fig. 3 (3), which enables high frequencies. **Plenoxels:**[18] use sparse voxel grids to represent 3D scenes, allowing for direct optimization without neural networks, resulting in fast training. This corresponds to our Factor Fields framework when setting \(N=1\), \(\mathbf{\gamma}_{1}(\mathbf{x})=\mathbf{x}\), \(\mathbf{f}_{1}(\mathbf{x})=\mathbf{3D}\text{-Grid}(\mathbf{x})\), and \(\mathcal{P}(\mathbf{x})=\mathbf{x}\) for the density field while \(\mathcal{P}(\mathbf{x})=\text{SH}(\mathbf{x})\) (Spherical Harmonics) for the radiance field. In related work, DVGO [56] proposes a similar design, but replaces the sparse 3D grids with dense grids and uses a tiny MLP as the projection function \(\mathcal{P}\). While dense grid modeling is simple and leads to fast feature queries, it requires high spatial resolution (and hence large memory) to represent fine details. Moreover, optimization is more easily affected by local minima compared to MLP representations that benefit from their inductive smoothness bias. **ConvONet and EG3D:**[48, 9] use a tri-plane representation to model 3D scenes by applying an orthogonal coordinate transformation to spatial points within a bounded scene, and then representing each point as the concatenation of features queried from a set of 2D feature maps. This representation allows for aggregating 3D features using only 2D convolution, which significantly reduces memory footprint compared to standard 3D grids. 
The setting can be viewed as an instance of our Factor Fields framework, with \(N=1\), \(\mathbf{\gamma}_{1}(\mathbf{x})=\text{Orthogonal-2D}(\mathbf{x})\), \(\mathbf{f}_{1}(\mathbf{x})=\text{2D-Maps}(\mathbf{x})\) and \(\mathcal{P}(\mathbf{x})=\text{MLP}(\mathbf{x})\). However, while the axis-aligned transformation allows for dimension reduction and feature sharing along the axis, it can be challenging to handle complicated structures due to the axis-aligned bias of this representation. **Instant-NGP:**[40] exploits a multi-level hash grid to efficiently model internal features of target signals by hashing spatial locations to 1D feature vectors. This approach corresponds to our Factor Fields framework when using \(N=1\), \(\mathbf{\gamma}_{1}(\mathbf{x})=\text{Hashing}(\mathbf{x})\), \(\mathbf{f}_{1}(\mathbf{x})=\text{Vectors}(\mathbf{x})\) and \(\mathcal{P}(\mathbf{x})=\text{MLP}(\mathbf{x})\) with \(L=16\) pyramid levels. However, the multi-level hash mappings can result in dense collisions at fine levels, and the one-to-many mapping forces the model to distribute its capacity which leads to a bias towards densely observed regions and noise in areas with fewer observations. The concurrent work VQAD [57] introduces a hierarchical vector-quantized auto-decoder (VQ-AD) representation that learns an index table as the coordinate transformation function which allows for higher compression rates. **TensoRF:**[13] factorizes the radiance fields into the products of vectors and matrices (TensoRF-VM) or multiple vectors (TensoRF-CP), achieving efficient feature queries at low memory footprint. This setting instantiates our Factor Fields framework for \(N=2\), \(\mathbf{\gamma}_{1}(\mathbf{x})=\text{Orthogonal-1D}(\mathbf{x})\), \(\mathbf{f}_{1}(\mathbf{x})=\text{Vectors}(\mathbf{x})\), \(\mathbf{\gamma}_{2}(\mathbf{x})=\text{Orthogonal-2D}(\mathbf{x})\), \(\mathbf{f}_{2}(\mathbf{x})=\text{2D-Maps}(\mathbf{x})\) for VM decomposition and \(N=3\), \(\mathbf{\gamma}_{i}(\mathbf{x})=\text{Orthogonal-1D}(\mathbf{x})\), \(\mathbf{f}_{i}(\mathbf{x})=\text{Vectors}(\mathbf{x})\) for CP decomposition. Moreover, TensoRF uses both SH and MLP models for the projection \(\mathcal{P}\). Similar to ConvONet and EG3D, TensoRF is sensitive to the orientation of the coordinate system due to the use of an orthogonal coordinate transformation function. Note that, with the exception of TensoRF [13], all of the above representations factorize the signal using a single factor field, that is \(N=1\). As we will show in Table 3a to Table 3d, using multiple factor fields (i.e., \(N>1\)) provides stronger model capacity. **ArXiv Preprints:** The field of neural representation learning is advancing fast and many novel representations have been published as preprints on ArXiv recently. We now briefly discuss the most related ones and their relationship to our work: _Phasorial Embedding Fields (PREF)_[22] proposes to represent a target signal with a set of phasor volumes and then transforms them into the spatial domain with an inverse fast Fourier Transform (iFFT) for compact representation and efficient scene editing. This method shares a similar idea with DVGO and extends the projection function \(\mathcal{P}\) in our formulation with an iFFT function.
_Tensor4D_[53] extends the triplane representation to 4D human reconstruction by using \(3\) triplane Factor Fields and indexing plane features with \(3\) orthogonal coordinate transformations (i.e., Orthogonal-2D\((\mathbf{x})=(xy,xt,yt)\), Orthogonal-2D\((\mathbf{x})=(xz,xt,zt)\), Orthogonal-2D\((\mathbf{x})=(yz,yt,zt)\)). Similarly, _HexPlane, K-Planes_[17, 8] represent dynamic 3D scenes by decomposing a 4D spacetime grid into 6 feature planes spanning each pair of coordinate axes (\(x,y,z,t\)), and connect the plane features either with concatenation or element-wise products. _NeRFPlayer_[55] represents dynamic scenes via deformation, newness and decomposition fields, using multiple factors similar to TensoRF. It further extends the features by a time dimension. _D-TensoRF_[23] reconstructs dynamic scenes using matrix-matrix factorization, similar to the VM factorization of TensoRF, but replacing \(\mathbf{\gamma}_{1}(\mathbf{x})=\text{Orthogonal-1D}(\mathbf{x})\) and \(\mathbf{f}_{1}(\mathbf{x})=\text{Vectors}(\mathbf{x})\) with \(\mathbf{\gamma}_{1}(\mathbf{x})=\text{Orthogonal-2D}(\mathbf{x})\) and \(\mathbf{f}_{1}(\mathbf{x})=\text{2D-Maps}(\mathbf{x})\). _Quantized Fourier Features (QFF)_[31] factorizes internal features into bins of Fourier features, corresponding to our Factor Fields framework when using \(N=1\), \(\mathbf{\gamma}_{1}(\mathbf{x})=\text{sinusoidal}(\mathbf{x})\), \(\mathbf{f}_{1}(\mathbf{x})=\text{2D-Maps}(\mathbf{x})\), and \(\mathcal{P}(\mathbf{x})=\text{MLP}(\mathbf{x})\) for the \(2\)D signal representation. **Coefficient-Basis Factorization (CoBaFa):** Besides existing representations, our Factor Fields framework enables the design of novel representations with desirable properties. As an example, we now discuss the CoBaFa representation which we have already introduced in Eq. (3). CoBaFa offers implicit regularization, compactness and fast optimization while also generalizing across multiple signals. The central idea behind the CoBaFa representation is to decompose the target signals into two fields: a global field (i.e., the basis) and a local field (i.e., the coefficient). The global field promotes structured signal signatures shared across spatial locations and scales as well as between signals, while the local field allows for spatially varying content. More formally, CoBaFa factorizes the target signal into coefficient fields \(\mathbf{f}_{1}(\mathbf{x})=\mathbf{c}(\mathbf{x})\) and basis functions \(\mathbf{f}_{2}(\mathbf{x})=\mathbf{b}(\mathbf{x})\) which differ primarily by their respective coordinate transformation: We choose the identity mapping \(\mathbf{\gamma}_{1}(\mathbf{x})=\mathbf{x}\) for \(\mathbf{c}(\mathbf{x})\) and a periodic coordinate transformation \(\mathbf{\gamma}_{2}(\mathbf{x})\) for \(\mathbf{b}(\mathbf{x})\), see Fig. 3 (top). As representation of the two factor fields \(\mathbf{f}_{1}\) and \(\mathbf{f}_{2}\) we may choose any of the ones illustrated in Fig. 1 (bottom-left). To facilitate comparison with previous representations, we use the sawtooth function as the basis coordinate transformation \(\mathbf{\gamma}_{2}\) and uniform grids (i.e., 2D Maps for 2D signals and 3D Grids for 3D signals) as the representation of the coefficient fields \(\mathbf{f}_{1}\) and basis functions \(\mathbf{f}_{2}\) for most of our experiments. Besides, we also systematically ablate the number of factors, the number of pyramid levels, the coordinate transformation and the field representation in Section 4.4.
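As a concrete illustration of this default configuration, the following is a minimal PyTorch-style sketch of a CoBaFa-style forward pass for a 2D signal, using a coefficient grid with identity transformation and multi-level basis grids addressed through a sawtooth transformation. The resolutions, channel counts, and the bilinear grid_sample lookup are illustrative assumptions rather than our exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoBaFaGrid2D(nn.Module):
    """Illustrative two-factor model: coefficient grid x sawtooth-addressed basis grids."""
    def __init__(self, K_per_level=4, levels=(2.0, 4.0, 8.0),
                 coef_res=64, basis_res=32, out_dim=3):
        super().__init__()
        self.levels = levels
        K = K_per_level * len(levels)
        # coefficient field c(x): one dense 2D feature grid, identity transform
        self.coef = nn.Parameter(0.1 * torch.randn(1, K, coef_res, coef_res))
        # basis field b(.): one 2D feature grid per pyramid level, reused periodically
        self.basis = nn.ParameterList(
            [nn.Parameter(0.1 * torch.randn(1, K_per_level, basis_res, basis_res))
             for _ in levels])
        # non-linear projection P
        self.proj = nn.Sequential(nn.Linear(K, 64), nn.ReLU(), nn.Linear(64, out_dim))

    @staticmethod
    def _sample(grid, uv):
        # bilinear lookup of a (1, C, H, W) grid at uv in [0, 1]^2, uv of shape (B, 2)
        g = uv.view(1, -1, 1, 2) * 2.0 - 1.0
        out = F.grid_sample(grid, g, align_corners=True, padding_mode="border")
        return out.view(grid.shape[1], -1).t()               # (B, C)

    def forward(self, x):                                     # x in [0, 1]^2, shape (B, 2)
        c = self._sample(self.coef, x)                        # (B, K) coefficients
        feats = [self._sample(b, (x * f) % 1.0)               # sawtooth: reuse basis across tiles
                 for b, f in zip(self.basis, self.levels)]
        b = torch.cat(feats, dim=-1)                          # (B, K) basis features
        return self.proj(c * b)                               # Hadamard product + projection

model = CoBaFaGrid2D()
rgb = model(torch.rand(1024, 2))                              # e.g. regress 1024 pixel colors
print(rgb.shape)                                              # torch.Size([1024, 3])
```

Training such a model then amounts to minimizing Eq. (9) with stochastic gradient descent over the grid entries and MLP weights.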
## 4 Experiments We now present extensive evaluations of our Factor Fields framework and CoBaFa representation. We first briefly discuss our implementation and hyperparameter configuration. We then compare the performance of CoBaFa with previously proposed representations on both per-signal reconstruction (optimization) and across-signal generalization tasks. At the end of this section, we examine the properties of our Factor Fields framework by varying the number of factors \(N\), level number \(L\), different types of transformation functions \(\mathbf{\gamma}_{i}\), field representation \(\mathbf{f}_{i}\), and field connector \(\circ\). ### Implementation We implement our Factor Fields framework using vanilla PyTorch without customized CUDA kernels. Performance is evaluated on a single RTX 6000 GPU using the Adam optimizer [26] with a learning rate of \(0.02\). We instantiate CoBaFa using \(L=6\) pyramid levels with frequencies (linearly increasing) \(f_{l}\in[2.,3.2,4.4,5.6,6.8,8.]\), and channels \(K=[4,4,4,2,2,2]\cdot 2^{\kappa}\), where \(\kappa\) controls the number of channels per level. We use \(\kappa=3\) for our 2D experiments and \(\kappa=1\) for our 3D experiments. The model parameters \(\theta\) are distributed across 3 model components: coefficients \(\theta_{\mathbf{c}}\), basis \(\theta_{\mathbf{b}}\), and projection function \(\theta_{\mathcal{P}}\). The size of each component can vary greatly depending on the chosen representation. In the following experiments, we refer to the default model setting as "CoBaFa-Grid", which implements the coefficients \(\mathbf{c}\) and bases \(\mathbf{b}\) with learnable tensor grids, \(\mathcal{P}(\mathbf{x})=\text{MLP}(\mathbf{x})\), and \(\mathbf{\gamma}(\mathbf{x})=\text{Sawtooth}(\mathbf{x})\), where \(\text{Sawtooth}(\mathbf{x})=\mathbf{x}\bmod 1.0\). In the CoBaFa-Grid setting, the total number of optimizable parameters is mainly determined by the resolution of the coefficient and basis grids \(M_{\mathbf{c}}^{l}\), \(M_{\mathbf{b}}^{l}\): \[|\theta|=|\theta_{\mathcal{P}}|+|\theta_{\mathbf{c}}|+|\theta_{\mathbf{b}}|=|\theta_{\mathcal{P}}|+\sum_{l=1}^{L}\left({M_{\mathbf{c}}^{l}}^{D}+K_{l}\cdot{M_{\mathbf{b}}^{l}}^{D}\right) \tag{10}\] We implement the basis grid using linearly increasing resolutions \(M_{\mathbf{b}}^{l}\in[32,128]\cdot\frac{(v-u)}{1024}\) with interval \([32,128]\) and scene bounding box \([u,v]\). This leads to increased resolution for modeling higher resolution signals in our experiments. We use the same coefficient grid resolution \(M_{\mathbf{c}}^{l}\) across all \(L\) levels for query efficiency and to lower per-signal memory footprint. ### Single Signals We first evaluate the accuracy and efficiency of our CoBaFa-Grid representation on various multi-dimensional signals, comparing it to several recent neural signal representations. Towards this goal, we consider three popular benchmark tasks for evaluating neural representations: 2D image regression, 3D Signed Distance Field (SDF) reconstruction and Radiance Field Reconstruction / Novel View Synthesis. We evaluate each method's ability to approximate high-frequency patterns, interpolation quality, compactness, and robustness to ambiguities and sparse observations. 2D Image Regression: In this task, we directly regress RGB pixel colors from pixel coordinates. We evaluate our CoBaFa-Grid on fitting four complex high-resolution images, where the total number of pixels ranges from \(4\,\mathrm{M}\) to \(213\,\mathrm{M}\). In Fig.
4, we show the reconstructed images with the corresponding model size, optimization time, and image PSNRs, and compare them to Instant-NGP [40], a state-of-the-art neural representation that supports image regression and has shown superior quality over prior art including Fourier Feature Networks [58] and SIREN [54]. Compared to Instant-NGP, our model consistently achieves higher PSNR on all images when using the same model size, demonstrating the superior accuracy and efficiency of our model. On the other hand, while Instant-NGP achieves faster optimization owing to its highly optimized CUDA-based framework, our model, implemented in pure PyTorch without custom CUDA kernels, still trains comparably fast, which simplifies future extensions. **Signed-Distance Field Reconstruction:** The Signed Distance Function (SDF), as a classic geometry representation, describes a set of continuous iso-surfaces, where a 3D surface is represented as the zero level-set of the function. We evaluate our CoBaFa-Grid on modeling several challenging object SDFs that contain rich geometric details and compare with previous state-of-the-art neural representations, including Fourier Feature Networks [58], SIREN [54], and Instant-NGP [40]. To allow for fair comparisons in terms of the training set and convergence, we use the same training points for all methods by pre-sampling \(8\,\mathrm{M}\) SDF points from the target meshes for training, with \(80\%\) points near the surface and the remaining \(20\%\) points uniformly distributed inside the unit volume. Following the evaluation setting of Instant-NGP, we randomly sample \(16\,\mathrm{M}\) points for evaluation and calculate the geometric IoU (gIoU) metric based on the SDF sign \[gIoU=\frac{\sum(s(\mathbf{X})>0)\cap(\hat{s}(\mathbf{X})>0)}{\sum(s(\mathbf{X})>0)\cup(\hat{s}(\mathbf{X})>0)} \tag{11}\] where \(\mathbf{X}\) is the evaluation point set, \(s(\mathbf{X})\) are the ground truth SDF values, and \(\hat{s}(\mathbf{X})\) are the predicted SDF values. Fig. 5 shows a quantitative and qualitative comparison of all methods. Our method leads to visually better results: it recovers high-frequency geometric details and contains less noise on smooth surfaces (e.g., the elephant face). The high visual quality is also reflected by the highest gIoU value of all methods. Meanwhile, our method also achieves the fastest reconstruction speed, while using less than half of the number of parameters used by CUDA-kernel enabled Instant-NGP, demonstrating the high accuracy, efficiency, and compactness of our factorized representation. **Radiance Field Reconstruction:** Radiance field reconstruction aims to recover the 3D density and radiance of each volume point from multi-view RGB images. The geometry and appearance properties are updated via inverse volume rendering, as proposed in NeRF [39]. Recently, many encoding functions and advanced representations have been proposed that significantly improve reconstruction speed and quality, such as sparse voxel grids [18], hash tables [40] and tensor decomposition [13]. In Table 1, we quantitatively compare CoBaFa-Grid with several state-of-the-art fast radiance field reconstruction methods (Plenoxel [18], DVGO [56], Instant-NGP [40] and TensoRF-VM [13]) on both synthetic [39] as well as real scenes (Tanks and Temple objects) [27].
Our method achieves high reconstruction quality, significantly outperforming NeRF, Plenoxels, and DVGO on both datasets, while being significantly more compact than Plenoxels and DVGO. We also outperform Instant-NGP and are on par with TensoRF regarding reconstruction quality, while being highly compact with only \(5.1\,\mathrm{M}\) parameters, less than one-third of TensoRF-VM and one-half of Instant-NGP. Our CoBaFa-Grid also optimizes faster than TensoRF, at slightly over \(10\) minutes, in addition to our superior compactness. Additionally, unlike Plenoxels and Instant-NGP which rely on their own CUDA framework for fast reconstruction, our implementation uses the standard PyTorch framework, making it easily extendable to other tasks. Figure 4: **2D Image Regression. This figure shows images represented using our CoBaFa-Grid model. The respective image resolutions and numbers of model parameters are shown below each image. Moreover, we also report a comparison to Instant-NGP (first number) in terms of optimization time and PSNR metrics (Instant-NGP vs Ours) at the bottom using the same number of model parameters. Note that our method achieves better reconstruction quality on all images when using the same model size. While optimization is slower than Instant-NGP, we use a vanilla PyTorch implementation without customized CUDA kernels. “Girl With a Pearl Earring” renovation ©Koorosh Orooj (CC BY-SA 4.0).** In general, our model leads to state-of-the-art results on all three challenging benchmark tasks with both high accuracy and efficiency. Note that the baselines are mostly single-factor, utilizing either a local field (such as DVGO and Plenoxels) or a global field (such as Instant-NGP). In contrast, our CoBaFa model is a two-factor method, incorporating both local coefficient and global basis fields, hence resulting in better reconstruction quality and memory efficiency. ### Generalization Recent advanced neural representations such as NeRF, SIREN, ACORN, Plenoxels, Instant-NGP and TensoRF optimize each signal separately, lacking the ability to model multiple signals jointly or to learn useful priors from multiple signals. In contrast, our CoBaFa representation not only enables accurate and efficient per-signal reconstruction (as demonstrated in Section 4.2) but it can also be applied to generalize across signals by simply sharing the basis field across signal instances. We evaluate the benefits of basis sharing by conducting experiments on image regression from partial pixel observations and few-shot radiance field reconstruction. For these experiments, instead of CoBaFa-Grid, we adopt CoBaFa-MLP-B (i.e., variant (5) in Table 3d) as our CoBaFa representation, where we utilize a tensor grid to model the coefficient and \(6\) tiny MLPs (two layers with \(32\) neurons each) to model the basis. We find that CoBaFa-MLP-B performs better than CoBaFa-Grid in the generalization setting, owing to the strong inductive smoothness bias of MLPs. Image Regression from Sparse Observations: Unlike the image regression experiments conducted in Sec. 4.2 which use all image pixels as observations during optimization, this experiment focuses on the scenario where only part of the pixels are used during optimization. Without additional priors, a single-signal optimization easily overfits in this setting due to the sparse observations and the limited inductive bias, hence failing to recover the unseen pixels.
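Basis sharing across signals is what allows such priors to be learned. As a rough PyTorch-style sketch of the idea (module names, sizes, and the toy data below are illustrative assumptions, not our released code), joint training keeps one coefficient grid per signal while the basis and projection are shared:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedBasisRegressor(nn.Module):
    """Illustrative 2D model: per-image coefficient grids, shared basis MLP + projection."""
    def __init__(self, num_images, K=16, coef_res=64, freqs=(2.0, 4.0, 8.0)):
        super().__init__()
        # one coefficient grid per training image (K channels, coef_res x coef_res)
        self.coef = nn.ParameterList(
            [nn.Parameter(0.1 * torch.randn(1, K, coef_res, coef_res))
             for _ in range(num_images)])
        self.freqs = freqs
        # shared basis: maps periodically transformed coordinates to K features
        self.basis = nn.Sequential(nn.Linear(2 * len(freqs), 64), nn.ReLU(),
                                   nn.Linear(64, K))
        # shared projection: maps the factor product to RGB
        self.proj = nn.Sequential(nn.Linear(K, 64), nn.ReLU(), nn.Linear(64, 3))

    def forward(self, img_idx, xy):            # xy in [0, 1]^2, shape (B, 2)
        grid = xy.view(1, -1, 1, 2) * 2.0 - 1.0
        c = F.grid_sample(self.coef[img_idx], grid, align_corners=True)
        c = c.view(c.shape[1], -1).t()         # (B, K) per-image coefficients
        gamma = torch.cat([(xy * f) % 1.0 for f in self.freqs], dim=-1)  # sawtooth pyramid
        b = self.basis(gamma)                  # (B, K) shared basis features
        return self.proj(c * b)                # Hadamard product, then projection

# toy joint training over several random "images" (placeholders for real data)
images = [torch.rand(3, 64, 64) for _ in range(4)]
model = SharedBasisRegressor(num_images=len(images))
opt = torch.optim.Adam(model.parameters(), lr=2e-2)
for step in range(100):
    i = step % len(images)
    xy = torch.rand(1024, 2)
    ix = (xy[:, 1] * 63).long()
    iy = (xy[:, 0] * 63).long()
    target = images[i][:, ix, iy].t()          # (B, 3) ground-truth colors
    loss = F.mse_loss(model(i, xy), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

At test time, the shared basis and projection can be kept frozen while a fresh coefficient grid is optimized for the unseen signal.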
We use our CoBaFa-MLP-B model to learn data priors by pre-training it on \(800\) facial images from the FFHQ dataset [24] while sharing the MLP basis and projection function parameters. The final image reconstruction task is conducted by optimizing the coefficient grids for each new test image. Figure 5: **Signed-Distance Field Reconstruction. We reconstruct SDFs from \(8.0\,\mathrm{M}\) training points. We show qualitative visual comparisons on the top and quantitative comparisons on the bottom including the number of parameters, reconstruction time and gIoU. CoBaFa-Grid and iNGP [40] are trained for \(10k\) iterations, while SIREN [54] and NeRF with Frequency Encodings [58] are trained for \(200k\) iterations.** In Fig. 6, we show the image regression results on three different facial images with various masks and compare them to baseline methods that do not use any data priors, including Instant-NGP and our CoBaFa-MLP-B without pre-training. As expected, Instant-NGP can accurately approximate the training pixels but results in random noise in the untrained mask regions. Interestingly, even without pre-training and priors from other images, our CoBaFa-MLP-B is able to capture structural information to some extent within the same image being optimized; as shown in the eye region, the model can learn the pupil shape from the right eye and regress the left eye (masked during training) by reusing the learned structures in the shared basis functions. As shown on the right of Fig. 6, our CoBaFa-MLP-B with pre-trained prior clearly achieves the best reconstruction quality with better structures and boundary smoothness compared to the baselines, demonstrating that our factorized CoBaFa model allows for learning and transferring useful prior information from the training set. **Few-Shot Radiance Field Reconstruction:** Reconstructing radiance fields from few-shot input images with sparse viewpoints is highly challenging. Previous works address this by imposing sparsity assumptions [41, 25] in per-scene optimization or training feed-forward networks [67, 14, 29] from datasets. Here we consider \(3\) and \(5\) input views per scene and seek a novel solution that leverages data priors in pre-trained basis fields of our CoBaFa model during the optimization task, similar to addressing the problem of image regression with partial pixels above. It is worth noting that the views are chosen in a quarter sphere, thus the overlapping region between views is quite limited. Specifically, we first train CoBaFa models on \(100\) Google Scanned Object scenes [16], with \(250\) views per scene. During cross-scene training, we maintain \(100\) per-scene coefficients and share the basis \(\mathbf{b}\) and projection function \(\mathcal{P}\). After cross-scene training, we use the mean coefficient values of the pre-trained coefficient fields as the initialization, while fixing the pre-trained functions (\(\mathbf{b}\) and \(\mathcal{P}\)) and fine-tuning the coefficient field for new scenes with few-shot observations. In this experiment, we compare results from both CoBaFa-MLP-B and CoBaFa-Grid with and without the pre-training. We also compare with Instant-NGP and previous few-shot reconstruction methods, including PixelNeRF [67] and MVSNeRF [14], re-trained with the same training set and tested using the same 3 or 5 views. As shown in Table 2 and Fig.
7, our pre-trained CoBaFa representation with MLP basis provides strong regularization for few-shot reconstruction, resulting in fewer artifacts and better reconstruction quality than the single-scene optimization methods without data priors and previous few-shot reconstruction methods that also use pre-trained networks. In particular, without any data priors, single-scene optimization methods (Instant-NGP and ours w/o prior) lead to a lot of outliers due to overfitting to the few-shot input images. Previous methods like MVSNeRF and PixelNeRF achieve plausible reconstructions due to their learned feed-forward prediction which avoids per-scene optimization. However, they suffer from blurry artifacts. Additionally, the strategy taken by PixelNeRF and MVSNeRF assumes a narrow baseline and learns correspondences across views for generalization via feature averaging or cost volume modeling which does not work as effectively in a wide baseline setup. On the other hand, by pre-training shared basis fields on multiple signals, our CoBaFa model can learn useful data priors, enabling the reconstruction of novel signals from sparse observations via optimization. \begin{table} \begin{tabular}{l c c c c c c c c} & & & \multicolumn{4}{c}{Synthetic-NeRF} & \multicolumn{2}{c}{NSVF} \\ \cline{3-8} Method & BatchSize & Steps & Time \(\downarrow\) & Size(M)\(\downarrow\) & PSNR\(\uparrow\) & SSIM\(\uparrow\) & PSNR\(\uparrow\) & SSIM\(\uparrow\) \\ \hline NeRF [39] & 4096 & 300k & \(\sim\)35h & 01.25 & 31.01 & 0.947 & 25.78 & 0.864 \\ Plenoxels [18] & 5000 & 128k & 11.4m & 194.5 & 31.71 & 0.958 & 27.43 & 0.906 \\ DVGO [56] & 5000 & 30k & 15.0m & 153.0 & 31.95 & 0.957 & 28.41 & 0.911 \\ Instant-NGP [40] & 10k-85k & 30k & 03.9m & 11.64 & 32.59 & 0.960 & 27.09 & 0.905 \\ TensoRF-VM [13] & 4096 & 30k & 17.4m & 17.95 & 33.14 & 0.963 & 28.56 & 0.920 \\ \hline CoBaFa-Grid (Ours) & 4096 & 30k & 12.2m & 05.10 & 33.14 & 0.961 & 28.73 & 0.922 \\ \end{tabular} \end{table} Table 1: **Novel View Synthesis with Radiance Fields.** We compare our method to previous radiance field reconstruction methods on the Synthetic-NeRF [39] and NSVF [34] datasets. We report the scores reported in the original papers whenever available. We also show average reconstruction time and model size for the Synthetic-NeRF dataset to compare the efficiency of the methods. Figure 6: **Image Regression from Sparse Observations.** Results obtained by fitting each model to all unmasked pixels. We use randomly placed black squares as masks for the bottom two rows and an image of text and small icons as mask for the top row. The symbol \({}^{*}\) denotes pre-training of the basis factors using the FFHQ facial image set. Our pre-trained model (CoBaFa-MLP-B\({}^{*}\)) learns robust basis fields which lead to better reconstruction compared to the per-scene baselines Instant-NGP and CoBaFa-MLP-B. ### Influence of Design Choices in Factor Fields Factor Fields is a general framework that unifies many previous representations with different settings. In this section, we aim to analyze the properties of these variations and offer a comprehensive understanding of the components of the proposed representation framework.
We conduct extensive evaluations on the main components of our Factor Fields framework: factor number \(N\), level number \(L\), coordinate transformation function \(\mathbf{\gamma}_{i}\), field representation \(\mathbf{f}_{i}\), and field connector \(\circ\). We present a comprehensive assessment of the representations' capabilities in terms of efficiency, compactness, reconstruction quality, as well as generalizability, with a range of tasks including 2D image regression (with all pixels), and per-scene and across-scene 3D radiance field reconstruction. Note that the settings in per-scene and across-scene radiance field reconstruction are the same as introduced in Section 4.2 and Section 4.3, while for the 2D image regression task, we use the same model setting as in Section 4.2 and test on \(256\) high fidelity images at a resolution of \(1024\times 1024\) from the DIV2K dataset [1]. To enable meaningful comparisons, we evaluate the variations within the same code base and report their performance using the same number of iterations, batch size, training point sampling strategy and pyramid frequencies. Correspondingly, the results for Instant-NGP, EG3D, OccNet, NeRF, DVGO, TensoRF-VM/-CP in Tab. 3 are based on our reimplementation of the original methods in our unified Factor Fields framework with the corresponding design parameters shown in the tables. \begin{table} \begin{tabular}{l c c c c c} & & \multicolumn{2}{c}{3 views} & \multicolumn{2}{c}{5 views} \\ \hline Method & Time\(\downarrow\) & PSNR\(\uparrow\) & SSIM\(\uparrow\) & PSNR\(\uparrow\) & SSIM\(\uparrow\) \\ \hline iNGP & 03:38 & 14.74 & 0.776 & 20.79 & 0.860 \\ CoBaFa-Grid & 13:39 & 18.13 & 0.805 & 20.83 & 0.847 \\ CoBaFa-MLP-B & 18:24 & 16.31 & 0.804 & 22.26 & 0.900 \\ \hline PixelNeRF & 00:00 & 21.37 & 0.878 & 22.73 & 0.896 \\ MVSNeRF & 00:00 & 20.50 & 0.868 & 22.76 & 0.891 \\ CoBaFa-Grid\({}^{*}\) & 13:18 & 20.77 & 0.871 & 25.41 & 0.915 \\ CoBaFa-MLP-B\({}^{*}\) & 18:44 & 21.96 & 0.891 & 26.91 & 0.927 \\ \end{tabular} \end{table} Table 2: **Few-shot Radiance Field Reconstruction.** We show quantitative comparisons of few-shot radiance field reconstruction from 3 or 5 viewpoints regarding optimization time and novel view synthesis quality (PSNRs and SSIMs). Results are averaged across 8 test scenes. The results of Instant-NGP and our CoBaFa models are generated based on per-scene optimization, while CoBaFa models with \({}^{*}\) use pre-trained basis factors across scenes. We train the feed-forward networks of PixelNeRF and MVSNeRF using the same dataset on which we learn our shared basis factors, and the results of PixelNeRF and MVSNeRF are generated from the networks via direct feed-forward inference. Our CoBaFa-MLP-B\({}^{*}\) with pre-trained MLP basis factors leads to the best reconstruction quality. Figure 7: **Radiance Fields from 5 Views.** We visualize novel view synthesis results of six test scenes, corresponding to the quantitative results in Tab. 2. We show our CoBaFa-MLP-B model w/ and w/o pre-trained data priors (bottom two rows) and compare it to Instant-NGP, PixelNeRF and MVSNeRF (top three rows).
Our model with pre-trained basis factors can effectively utilize the learned data priors, resulting in superior qualitative results with fewer outliers compared to single-scene models (iNGP and ours w/o priors), as well as sharper details compared to feed-forward models (PixelNeRF and MVSNeRF). Factor Number \(N\):As illustrated in Eq. (4), the factor number \(N\) refers to the number of factors used to represent the target signal. We show comparisons between single-factor models (1,3,5) and two-factor models (2,4,6) in Table 3a. Compared to the models (1) iNGP, (3) EG3D and (5) CoBaFa-no-C which only use a single factor, the models (2) CoBaFa-Hash-B, (4) TensoRF-VM and (6) CoBaFa-Grid use the same factor as (1), (3), (5) respectively, but are extended to two-factor models with an additional factor, leading to 3dB and 0.35 dB PSNR improvement in image regression and 3D radiance field reconstruction tasks, while also increasing training time (\(\sim 10\%\)) and model size (\(\sim 5\%\)) as expected. Despite the marginal computational overhead, introducing additional factors in the framework clearly leads to better reconstruction quality and represents the signal more effectively. The multiplication between factors allows the two factor fields to modulate each other's feature encoding and represent the entire signal more flexibly in a joint manner, alleviating the problem of feature collision in Instant-NGP and related problems of other single-factor models. Additionally, as shown in Table 1, multi-factor modeling (e.g., \(N>=2\)) is able to provide more compact modeling while maintaining a similar level of reconstruction quality. Moreover, it allows for generalization by partially sharing the fields across instances, such as the across-scene radiance field modeling in the few-shot radiance reconstruction task. Level Number \(L\):Our CoBaFa model adopts multiple levels of transformations to achieve pyramid basis fields, similar to the usage of a set of sinusoidal positional encoding functions in NeRF [39]. We compare multi-level models (including CoBaFa and NeRF) with their reduced single-level versions that only use a single transformation level in Table 3b. Note that Occupancy Networks (OccNet, row (1)) do not leverage positional encodings and can be seen as a single-level version of NeRF (row (2)) while the model with multi-level sinusoidal encoding functions (NeRF) leads to about 10dB PSNR performance boost for both 2D image and 3D reconstruction tasks. On the other hand, the single-level CoBaFa models are also consistently worse than the corresponding multi-level models in terms of speed and reconstruction quality, despite the performance drops being not as severe as those in purely MLP-based representations. Coordinate Transformation \(\gamma_{i}\):In Table 3c, we evaluate four coordinate transformation functions using our CoBaFa representation. These transformation functions include sinusoidal, triangular, hashing and sawtooth. Their transformation curves are shown in Fig. 3. In general, in contrast to the random hashing function, the periodic transformation functions (2, 3, 4) allow for spatially coherent information sharing through repeated patterns, where neighboring points can share spatially adjacent features in the basis fields, hence preserving local connectivity. We observe that the periodic basis achieves clearly better performance in modeling dense signals (e.g., 2D images). 
For sparse signals such as 3D radiance fields, all four transformation functions achieve high reconstruction quality on par with previous state-of-the-art fast radiance field reconstruction approaches [56, 40, 13]. Field Representation \(\mathbf{f}_{i}\): In Table 3d, we compare various functions for representing the factors in our framework (especially our CoBaFa model) including MLPs, Vectors, 2D Maps and 3D Grids, encompassing most previous representations. Note that discrete feature grid functions (3D Grids, 2D Maps, and Vectors) generally lead to faster reconstruction than MLP functions (e.g., CoBaFa-Grid is faster than CoBaFa-MLP-B and CoBaFa-MLP-C). While all variants can lead to reasonable reconstruction quality for single-signal optimization, our CoBaFa-Grid representation that uses grids for both factors achieves the best performance on the image regression and single-scene radiance field reconstruction tasks. On the other hand, the task of few-shot radiance field reconstruction benefits from basis functions that impose stronger regularization. Therefore, representations with stronger inductive biases (e.g., the Vectors in TensoRF-VM and MLPs in CoBaFa-MLP-B) lead to better reconstruction quality compared to other variants. Field Connector \(\circ\): Another key design choice of our Factor Fields framework and CoBaFa model is to adopt the element-wise product to connect multiple factors. Directly concatenating features from different components is an alternative choice and is exercised in several previous works [39, 9, 40]. In Table 3e, we compare the performance of the element-wise product against the direct concatenation in three model variants. Note that the element-wise product consistently outperforms the concatenation operation in terms of reconstruction quality for all models on all applications, demonstrating the effectiveness of using the proposed product-based factorization framework. ## 5 Conclusion and Future Work In this work, we present a novel unified framework for (neural) signal representations which factorizes a signal as a product of multiple factor fields. We demonstrate that Factor Fields generalizes many previous neural field representations (like NeRF, Instant-NGP, DVGO, TensoRF) and enables new representation designs. In particular, we propose a novel representation - CoBaFa - with Coefficient-Basis factorization, as a new model in the Factor Fields family, which factorizes a signal into a localized coefficient field and a global basis field with periodic transformations. We extensively evaluate our CoBaFa model on three signal reconstruction tasks including 2D image regression, 3D SDF reconstruction, and radiance field reconstruction. We demonstrate that our CoBaFa model leads to state-of-the-art reconstruction quality, better or on par with previous methods on all three tasks, while achieving faster reconstruction and more compact model sizes than most methods. Our CoBaFa model is able to generalize across scenes by learning shared basis field factors from multiple signals, allowing us to reconstruct new signals from sparse observations. We show that, using such pre-trained basis factors, our method enables high-quality few-shot radiance field reconstruction from only 3 or 5 views, outperforming previous methods like PixelNeRF and MVSNeRF in the sparse view / wide baseline setting. In general, our framework takes a step towards a generic neural representation with high accuracy and efficiency.
We believe that the flexibility of our framework will help to inspire future research on efficient signal representations, exploring the potential of multi-factor representations or novel coordinate transformations. ## 6 Acknowledgements Many thanks to Bozidar Antic, Zehao Yu, Hansheng Chen, Shaofei Wang for helpful discussion and suggestions. This work was supported by the SNF grant 200021, 204840.
2301.00468
Non-volatile electrically programmable integrated photonics with a 5-bit operation
Scalable programmable photonic integrated circuits (PICs) can potentially transform the current state of classical and quantum optical information processing. However, traditional means of programming, including thermo-optic, free carrier dispersion, and Pockels effect result in either large device footprints or high static energy consumptions, significantly limiting their scalability. While chalcogenide-based non-volatile phase-change materials (PCMs) could mitigate these problems thanks to their strong index modulation and zero static power consumption, they often suffer from large absorptive loss, low cyclability, and lack of multilevel operation. Here, we report a wide-bandgap PCM antimony sulfide (Sb2S3)-clad silicon photonic platform simultaneously achieving low loss, high cyclability, and 5-bit operation. We switch Sb2S3 via an on-chip silicon PIN diode heater and demonstrate components with low insertion loss (<1.0 dB), high extinction ratio (>10 dB), and high endurance (>1,600 switching events). Remarkably, we find that Sb2S3 can be programmed into fine intermediate states by applying identical and thermally isolated pulses, providing a unique approach to controllable multilevel operation. Through dynamic pulse control, we achieve on-demand and accurate 5-bit (32 levels) operations, rendering 0.50 +- 0.16 dB contrast per step. Using this multilevel behavior, we further trim random phase error in a balanced Mach-Zehnder interferometer. Our work opens an attractive pathway toward non-volatile large-scale programmable PICs with low-loss and on-demand multi-bit operations.
Rui Chen, Zhuoran Fang, Christopher Perez, Forrest Miller, Khushboo Kumari, Abhi Saxena, Jiajiu Zheng, Sarah J. Geiger, Kenneth E. Goodson, Arka Majumdar
2023-01-01T20:43:21Z
http://arxiv.org/abs/2301.00468v1
###### Non-volatile electrically programmable integrated photonics with a 5-bit operation ## Abstract Scalable programmable photonic integrated circuits (PICs) can potentially transform the current state of classical and quantum optical information processing. However, traditional means of programming, including thermo-optic, free carrier dispersion, and Pockels effect result in either large device footprints or high static energy consumptions, significantly limiting their scalability. While chalcogenide-based non-volatile phase-change materials (PCMs) could mitigate these problems thanks to their strong index modulation and zero static power consumption, they often suffer from large absorptive loss, low cyclability, and lack of multilevel operation. Here, we report a wide-bandgap PCM antimony sulfide (Sb\({}_{2}\)S\({}_{3}\))-clad silicon photonic platform simultaneously achieving low loss, high cyclability, and 5-bit operation. We switch Sb\({}_{2}\)S\({}_{3}\) via an on-chip silicon PIN diode heater and demonstrate components with low insertion loss (\(<\)1.0 dB), high extinction ratio (\(>\)10 dB), and high endurance (\(>\)1,600 switching events). Remarkably, we find that Sb\({}_{2}\)S\({}_{3}\) can be programmed into fine intermediate states by applying identical and thermally isolated pulses, providing a unique approach to controllable multilevel operation. Through dynamic pulse control, we achieve on-demand and accurate 5-bit (32 levels) operations, rendering \(0.50\pm 0.16\) dB contrast per step. Using this multilevel behavior, we further trim random phase error in a balanced Mach-Zehnder interferometer. Our work opens an attractive pathway toward non-volatile large-scale programmable PICs with low-loss and on-demand multi-bit operations. ## Main Programmable photonic integrated circuits (PICs), usually composed of arrays of tunable beam splitters and phase shifters, can change their functionalities on demand. This flexibility has recently extended their traditional applications from optical interconnects to optical computing[1], optical programmable gate arrays[2], and quantum information processing[3]. Further scaling of the programmable PICs requires the constituent devices to have a smaller footprint and lower power consumption. However, current programmable PICs are primarily based on weak volatile tuning mechanisms, such as thermo-optic effects[4; 5], free-carrier effects[6], and Pockels effects[7], which provide a small change in refractive index (\(\Delta\)n \(<\) 0.01). Therefore, the resulting devices usually suffer from large footprints (\(>\) 100 \(\upmu\)m), and require a constant power supply (\(\sim\)10 mW[1]). Moreover, thermo-optic effects incur severe thermal crosstalk, necessitating additional heaters and control circuits for crosstalk compensation. These requirements severely limit the current PIC's integration density and energy efficiency. Chalcogenide-based non-volatile phase-change materials (PCMs) can potentially alleviate these issues[8; 9; 10]. PCMs have two stable, reversibly switchable micro-structural phases (amorphous and crystalline, termed here as a- and c-phase), with drastically different optical refractive indices (\(\Delta\)_n_ \(\sim\) 1). Thanks to the non-volatile phase transition, no static power is needed to hold the state upon switching PCMs' phase. 
The substantial refractive index change \(\Delta\)_n_ and nonvolatility enable compact reconfigurable devices (\(\sim\)10 \(\upmu\)m[10; 11]) with zero static energy consumption[12; 13; 14; 15; 16; 17]. Moreover, PCMs are compatible with large-scale integration since they can be easily deposited by sputtering[12; 15; 17; 18; 19] or thermal evaporation[11] onto almost any PIC materials, including silicon and silicon nitride. Despite these advantages, archetypal PCMs such as Ge\({}_{2}\)Sb\({}_{2}\)Te\({}_{5}\) (GST) and GeTe exhibit strong absorption in both phases at relevant optical communication wavelengths. This hinders their utility in phase shifters - an essential building block for programmable PICs. Emerging wide-bandgap PCMs, such as GeSbSeTe (GSST)[20], antimony selenide (Sb\({}_{2}\)Se\({}_{3}\))[21], and antimony sulfide (Sb\({}_{2}\)S\({}_{3}\))[22] can circumvent this loss and have recently generated strong interest in the community[11; 16; 23; 24; 25]. In particular, Sb\({}_{2}\)S\({}_{3}\) shows the widest bandgap among all these PCMs, allowing transparency down to \(\sim\) 600 nm in the amorphous phase[22]. Moreover, the lack of selenium in Sb\({}_{2}\)S\({}_{3}\) makes it less toxic[26] and less prone to cause chamber contamination during sputtering or evaporation processes. Thus, Sb\({}_{2}\)S\({}_{3}\) is much more amenable to be adapted in a commercial foundry. Despite these promises, high endurance electrical control of Sb\({}_{2}\)S\({}_{3}\) remained unsolved. Here, we demonstrate electrically controlled Sb\({}_{2}\)S\({}_{3}\)-clad silicon photonic devices with low insertion loss (\(<\)1dB), high extinction ratio (\(>\)10dB), and high endurance (\(>\)1,600 switching events). The phase transition is actuated by silicon PIN (p-doped-intrinsic-n-doped) diode heaters. We established the versatility of this hybrid platform with three different integrated photonic devices, including microring resonators, Mach-Zehnder interferometers (MZIs), and asymmetric directional couplers. We also observed multilevel (32 levels) operation, highest among all reported electrically controlled PCM-silicon platforms, with a resolution of 0.50 \(\pm\) 0.16 dB per step. This multilevel operation is achieved by sending multiple thermally separated (\(\sim\)1 second) near-identical electrical pulses for both amorphization and crystallization. We leveraged this multilevel behavior to trim a balanced MZI to correct the random phase error caused by fabrication imperfections. The multilevel operation is crucial to avoid under or over-correction during trimming. Our work opens a new avenue for non-volatile electrically programmable PICs with low loss and on-demand multilevel operation. ## Results We characterized the sputtered Sb\({}_{2}\)S\({}_{3}\) thin films to obtain the refractive indices and verify the crystallization capability (annealed under 325\({}^{\circ}\) for 10 mins) of the as-deposited a-Sb\({}_{2}\)S\({}_{3}\) (Supplementary Section S1). The silicon photonic devices (Fig. 1-Fig. 3) were designed to operate at the telecommunication O-band (1260 - 1360 nm) and fabricated on a standard silicon-on-insulator (SOI) wafer with 220 nm silicon and 2 \(\upmu\)m buried oxide. The 500-nm-wide waveguides are fabricated by partially etching 120-nm silicon. We then deposited 450-nm-wide Sb\({}_{2}\)S\({}_{3}\) onto the SOI chip via sputtering. The slightly smaller width than the waveguide is to compensate for electron beam lithography (E-beam) overlay tolerance. 
Our simulation results show a change in effective index \(\Delta n_{eff}\)\(\approx\) 0.018 between a- and c-Sb\({}_{2}\)S\({}_{3}\) (Supplementary Section S2). The Sb\({}_{2}\)S\({}_{3}\) films are electrically controlled via on-chip silicon PIN micro-heaters[15, 17]. The p\({}^{++}\) and n\({}^{++}\) doping regions were designed 200 nm away from the waveguide to avoid free-carrier absorption loss[17], indicated in the scanning electron microscope (SEM) images with false colors in Fig. 1c, Fig. 2c, and Fig. 3c. The Sb\({}_{2}\)S\({}_{3}\) stripes are encapsulated with 40 nm of Al\({}_{2}\)O\({}_{3}\) grown by atomic layer deposition (ALD) at 150 °C. This conformal encapsulation is critical to prevent the Sb\({}_{2}\)S\({}_{3}\) from oxidation and thermal reflow, and thus is essential to attain high endurance. To show that our Sb\({}_{2}\)S\({}_{3}\)-clad silicon photonic platform is versatile and compatible with most PIC components, we demonstrate three widely used PIC components: (1) a microring resonator to show low-loss tuning of cavities, (2) a balanced MZI to demonstrate a full \(\pi\) phase shift and a broadband operation, and (3) an asymmetric directional coupler to create a compact programmable unit (see simulation results in Supplementary Section S2). We deposited 10-\(\upmu\)m-long, 20-nm-thick Sb\({}_{2}\)S\({}_{3}\) on a micro-ring resonator with 30 \(\upmu\)m radius (Figs. 1a-c). The free spectral range (FSR) is \(\sim\)2.42 nm (Fig. 1d), and the bus-ring gap is 280 nm to achieve a near-critically coupled device. We switched the as-deposited a-Sb\({}_{2}\)S\({}_{3}\) to c-Sb\({}_{2}\)S\({}_{3}\) on the microring resonator by applying three 1.6 V, 200-ms-long pulses (or SET pulses) separated by 1 second and then re-amorphized the material via three 7.5 V, 150-ns-short pulses also separated by 1 second (RESET pulses). We note that the 200-ms-long SET pulse is indeed significantly longer than for other reported PCMs, such as GST (50 \(\upmu\)s[17] or 100 \(\upmu\)s[16]) and Sb\({}_{2}\)Se\({}_{3}\) (5 \(\upmu\)s[11] or 100 \(\upmu\)s[16]). But we found that the SET pulse duration could be further reduced to around 100 \(\upmu\)s after the first few cycles (see Methods and Supplementary Section S12). Moreover, the slow crystallization allows a large volume of Sb\({}_{2}\)S\({}_{3}\) to amorphize (Supplementary Section S13). Three pulses instead of a single pulse were used to ensure complete amorphization and crystallization (see Methods). Fig. 1d shows a resonance shift of \(\sim\)0.394 nm upon switching the 10 \(\upmu\)m Sb\({}_{2}\)S\({}_{3}\) from the a- (blue) to the c-phase (orange), corresponding to a \(\pi\)-phase shift length \(L_{\pi}\) of \(\sim\)30.7 \(\upmu\)m, significantly shorter than the 1-millimeter \(L_{\pi}\) of a ferroelectric non-volatile phase shifter[27]. The SET and RESET processes were repeated for 10 cycles. The shift in resonance is highly repeatable, as suggested by the small standard deviation (Fig. 1d). The excess loss from a-Sb\({}_{2}\)S\({}_{3}\) is negligible[19], and in c-Sb\({}_{2}\)S\({}_{3}\) hybrid waveguides the loss is estimated to be 0.024 dB/\(\upmu\)m (0.72 dB/\(\uppi\)), which is about three times larger than our simulation result (0.26 dB/\(\uppi\)) (Supplementary Section S2). We attribute this excess loss to the scattering from the Sb\({}_{2}\)S\({}_{3}\) thin film due to non-uniform deposition/liftoff and local crystal grains in c-Sb\({}_{2}\)S\({}_{3}\) (Supplementary Section S3).
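The headline phase-shifter numbers above can be cross-checked with a few lines of arithmetic. The snippet below is an illustrative back-of-the-envelope check, not the authors' analysis script: the measured resonance shift and FSR yield the \(\sim\)30.7 \(\upmu\)m \(L_{\pi}\) quoted for the ring, while the simulated \(\Delta n_{eff}\approx 0.018\) implies roughly 36 \(\upmu\)m, close to the \(\sim\)38 \(\upmu\)m value given in Supplementary Section S2.

```python
# Back-of-the-envelope check of the phase-shifter numbers quoted above
# (illustrative only; not the authors' analysis script).
import math

wavelength_nm = 1310.0   # O-band operating wavelength
dn_eff        = 0.018    # simulated effective-index change between a- and c-Sb2S3
fsr_nm        = 2.42     # measured free spectral range of the ring
shift_nm      = 0.394    # measured resonance shift upon crystallization
pcm_len_um    = 10.0     # length of Sb2S3 loaded on the ring

# One full FSR corresponds to 2*pi of accumulated phase, so the measured shift
# gives the phase picked up along the 10-um Sb2S3 segment:
dphi = 2 * math.pi * shift_nm / fsr_nm                 # ~1.02 rad
L_pi_measured = pcm_len_um * math.pi / dphi            # ~30.7 um

# Independent estimate from the simulated effective-index contrast:
L_pi_from_dn = wavelength_nm / (2 * dn_eff) * 1e-3     # nm -> um, ~36 um

print(f"L_pi from ring measurement : {L_pi_measured:.1f} um")
print(f"L_pi from simulated dn_eff : {L_pi_from_dn:.1f} um")
```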
We verified that the loss due to mode mismatch at the transition between the bare silicon waveguide and the Sb\({}_{2}\)S\({}_{3}\)-loaded waveguide is small (\(\sim\)0.013 dB/facet), consistent with the fact that the thin Sb\({}_{2}\)S\({}_{3}\) film should not significantly change the mode shape (Supplementary Section S2). Fig. 1: **A high-Q ring resonator loaded with 10 \(\upmu\)m of 20-nm-thick Sb\({}_{2}\)S\({}_{3}\).** (a) Schematic of the device (the encapsulation ALD Al\({}_{2}\)O\({}_{3}\) layer is not shown), (b) optical, and (c) Scanning-Electron Microscope (SEM) images of the micro-ring resonator. The Sb\({}_{2}\)S\({}_{3}\) thin film, n\({}^{++}\), and p\({}^{++}\) doped silicon regions are represented by false colors (blue, orange, and green, respectively). (d) Measured micro-ring spectra in two phases (SET: three 1.6 V, 200-ms-long pulses to change Sb\({}_{2}\)S\({}_{3}\) to the crystalline state; RESET: three 7.5 V, 150-ns-short pulses to change Sb\({}_{2}\)S\({}_{3}\) to the amorphous state). The spectra are averaged over ten cycles of reversible electrical switching, and the shaded area shows the standard deviation. Norm. Trans. Stands for normalized transmission, normalized to a reference waveguide on the same chip. ### Nonvolatile Mach-Zehnder switch integrated with Sb\({}_{2}\)S\({}_{3}\) phase shifter Figs. 2a-c show a balanced MZI operating at wavelengths between 1,320 nm and 1,360 nm with both arms covered with 30-\(\upmu\)m-long 20-nm-thick Sb\({}_{2}\)S\({}_{3}\). A multimode interferometer with a 50:50 splitting ratio was designed and fabricated, as shown in Fig. 2d (simulation results in Supplementary Section S2). Initially, the light comes out mainly from the bar port with an extinction ratio of \(\sim\)13 dB (Fig. 2e). The light coming out from the bar port instead of the cross port in this balanced MZI can be explained by the random phase errors in two arms due to fabrication imperfection, especially that in the S-bend. One can overcome such imperfection by exploiting a wider waveguide to improve fabrication robustness[28]. Alternatively, this random phase error can be corrected using Sb\({}_{2}\)S\({}_{3}\) for post-fabrication trimming, as we show later in this paper. The Sb\({}_{2}\)S\({}_{3}\) on one arm was switched by two 1.7 V, 200-ms SET electric pulses to provide a full \(\uppi\) phase shift. An 8.1 V, 150-ns short RESET electric pulse switched the device back to the initial state. Figs. 2e and 2f show the transmission spectra normalized to a reference waveguide when the Sb\({}_{2}\)S\({}_{3}\) film is in the amorphous and crystalline phases, respectively. The c-Sb\({}_{2}\)S\({}_{3}\) displays a complete spectrum flip, showing a bar state with an extinction ratio of 15 dB. We then recorded the bar port transmission at 1,330 nm for 100 switching events without device degradation (Supplementary Section S4). Figure 2: **A Mach-Zehnder interferometer with both arms covered with 20-nm-thick Sb\({}_{2}\)S\({}_{3}\).** (a) Sb\({}_{2}\)S\({}_{3}\) phase shifter schematic (the encapsulation ALD Al\({}_{2}\)O\({}_{3}\) layer is not shown). (b) Optical and (c) SEM images of the Sb\({}_{2}\)S\({}_{3}\) phase shifter. (d) Optical micrograph of the 50:50 splitting ratio multi-mode interferometer. (e, f) Transmission spectra at both bar and cross ports for (e) a-Sb\({}_{2}\)S\({}_{3}\) (RESET: an 8.1 V, 150 ns pulse) and (f) c-Sb\({}_{2}\)S\({}_{3}\) (SET: two 1.7 V, 200 ms pulses). 
The green and red lines represent the measured transmission at cross and bar ports. The shaded region indicates the standard deviation of transmission for 5 cycles of measurements. ### Compact asymmetric directional coupler switch We also designed and fabricated a compact asymmetric directional coupler (coupling length \(L_{c}\simeq 79\)\(\upmu\)m) (simulation in Supplementary Section S2), as shown in Figs. 3a-c. The coupler consists of two waveguides with different widths. The narrower 409-nm-wide waveguide (hybrid waveguide) was capped with 20-nm-thick Sb\({}_{2}\)S\({}_{3}\) and designed to allow phase match with the wider 450-nm-wide waveguide (bare waveguide) for c-Sb\({}_{2}\)S\({}_{3}\)[29]. As such, the input light could completely couple to the cross port in one coupling length. Once Sb\({}_{2}\)S\({}_{3}\) is switched to the amorphous state, the effective index of the hybrid waveguide changes while the bare waveguide remains the same. The resulting phase mismatch changes the coupling strength and coupling length. Then, co-optimizing the gap and waveguide length permits a complete bar transmission. The unique c-Sb\({}_{2}\)S\({}_{3}\) phase matching approach, instead of a-Sb\({}_{2}\)S\({}_{3}\)[13, 15, 29, 30], allows a more symmetric performance regardless of the input port (Supplementary Section S2), crucial for a \(2\times 2\) device. If phase mismatch happens in the c-Sb\({}_{2}\)S\({}_{3}\) state, the slight loss of c-Sb\({}_{2}\)S\({}_{3}\) on one of the waveguides will result in different bar state insertion loss when the light goes from different input ports. We note that the 79-\(\upmu\)m coupling length can be potentially reduced (\(\sim 34\)\(\upmu\)m) by depositing a thicker (50 nm) Sb\({}_{2}\)S\({}_{3}\) to provide stronger refractive index modulation. Figs. 3d and 3e show the transmission spectra for a- and c-Sb\({}_{2}\)S\({}_{3}\), switched with three 9.6 V, 500 ns RESET, and 2.7 V, 200 ms SET pulses, respectively. The insertion losses are 2 dB (0.5 dB), and the extinction ratios are around 10 dB (11 dB) for a (c)-Sb\({}_{2}\)S\({}_{3}\)). The unexpected high insertion loss when the Sb\({}_{2}\)S\({}_{3}\) is in the amorphous state can be attributed to several factors, Figure 3: **An asymmetric directional coupler with Sb\({}_{2}\)S\({}_{3}\)-Si hybrid waveguide.** (a) Schematic (the encapsulation ALD Al\({}_{2}\)O\({}_{3}\) layer is not shown), (b) optical, and (c) SEM images of the asymmetric directional coupler. (d, e) Transmission spectra at bar and cross ports for (d) a-Sb\({}_{2}\)S\({}_{3}\) (RESET: three 9.6 V, 500 ns pulses) and (e) c-Sb\({}_{2}\)S\({}_{3}\) (SET: three 2.7 V, 200 ms pulses). The result was averaged over five measurements, and the shaded region indicates the standard deviation. including gap discrepancy, Sb\({}_{2}\)S\({}_{3}\) overlay deviation, or the cross-port grating coupler fabrication imperfection. To estimate the actual loss of the device, we apply the c-Sb\({}_{2}\)S\({}_{3}\) loss extracted from the ring resonator to the simulation and calculate this device's insertion loss to be \(\sim\)0.1 dB (0.9 dB) for a(c)-Sb\({}_{2}\)S\({}_{3}\) (Supplementary Section S2). Fig. 4 shows 1,600 switching events for the asymmetric directional coupler. Limited by our measurement setup, we separately measured the cross (Fig. 4a) and bar ports (Fig. 4b). The higher insertion loss (\(\sim\)1 dB) at around event 500 was due to optical fiber misalignment. 
We note that almost no performance degradation occurred at the end (Supplementary Section S5); hence, 1,600 switching events are not the limit of this device. The cross-port transmission shows a relatively large variation for a-Sb\({}_{2}\)S\({}_{3}\) (Fig. 4a blue scatterers, from \(\sim\) -15 dB to \(\sim\) -35 dB), which was caused by incomplete amorphization or thermal reflow of the Sb\({}_{2}\)S\({}_{3}\) film. Since the plot is on a logarithmic scale, such a large variation in a-Sb\({}_{2}\)S\({}_{3}\) (due to higher transmission) is not visible in Fig. 4b. ### Multilevel 5-bit operation with dynamic electrical control Our Sb\({}_{2}\)S\({}_{3}\)-Si integrated structures further show a stepwise multilevel operation up to 32 levels with dynamic pulse control. In Fig. 5a, we show the multilevel transmission at both cross and bar ports of an asymmetric directional coupler while sending in a 10 V, 550-ns RESET pulse every other second to amorphize the Sb\({}_{2}\)S\({}_{3}\). We started the experiment with a "coarse" tuning, Figure 4: **Cyclability test for the Sb\({}_{2}\)S\({}_{3}\)-based asymmetric directional coupler.** Measured transmission at the (a) cross and (b) bar ports. The blue and orange scatterers represent the normalized transmission when Sb\({}_{2}\)S\({}_{3}\) is in the amorphous and crystalline phases. The phase change condition is the same as in Fig. 3. The device was switched for over 1,600 events with no significant insertion loss and performance degradation. where unoptimized, identical pulses were sent to partially amorphize the c-Sb\({}_{2}\)S\({}_{3}\) device to demonstrate multiple levels. The asymmetric directional coupler was originally in the "cross state" (red region). After one partial amorphization pulse, it was reconfigured into an intermediate state (orange region), where light comes out from both cross and bar ports. After six pulses, a complete "bar state" (green region) was achieved. We repeated this experiment five times for each port and plotted the average transmission levels and the standard deviation. The variation is attributed to the stochastic phase change process using electrical controls[31]. We also experimentally tested the partial crystallization of the device and multilevel operation (Supplementary Section S5), which is based on the growth-dominant nature of the Sb\({}_{2}\)S\({}_{3}\) crystallization process[29]. In the following experiments, we mainly focused on partial amorphization because of lower energy consumption and more operation levels with finer resolution. Such stepwise multilevel operation by applying identical pulses is distinctly different from previously reported multilevel operations in GST[15, 17] and Sb\({}_{2}\)Se\({}_{3}\)[11, 16], where different voltage amplitudes or pulse durations were used to access multiple levels during amorphization. The pulse-number-dependent behavior is quite counterintuitive: one expects that after the first amorphization pulse, the thin Sb\({}_{2}\)S\({}_{3}\) film would have reached its new equilibrium phase. Moreover, since the thermal processes relax within 10 \(\upmu\)s (Supplementary Section S6), each pulse is independent due to the relatively long one-second interval. As a result, the subsequent identical and separated pulses should not further change the material phase. To understand the origin of the multi-level operation, we closely inspected four partially amorphized Sb\({}_{2}\)S\({}_{3}\) devices under the microscope.
We observed a few separate patches (Supplement Section S7) and a region that grew with more voltage pulses. As reported in some literature, one possibility could have been that a- and c-Sb\({}_{2}\)S\({}_{3}\) have significantly different thermal conductivities and specific heat capacities. But we measured the thermal conductivities to be similar (a-Sb\({}_{2}\)S\({}_{3}\): 0.2 W/m/K; c-Sb\({}_{2}\)S\({}_{3}\): 0.4 W/m/K, see Methods), and hence, we ruled out this as a possible explanation. We hypothesize that this unique behavior comes from Sb\({}_{2}\)S\({}_{3}\)'s multiple crystalline phases. Sb\({}_{2}\)S\({}_{3}\) has at least two distinct crystalline phases[32], which may differ in the amorphization conditions. The partial amorphization pulse can cause amorphization in the hottest region, but at the lower temperature region, it may cause phase transition to the other crystalline phase. These regions get amorphized in subsequent pulses, resulting in multilevel operation. An even finer multilevel operation was realized by monitoring the transmission level and dynamically changing the pulse conditions slightly. Here, we demonstrate on-demand 5-bit operation in a quasi-continuously tunable directional coupler, as shown in Fig. 5b. We dynamically controlled the partial amorphization pulses to have slightly lower, near-identical voltages (ranging from 9.65 V to 9.85 V) and obtained up to 32 levels. Fig. 5b demonstrates 5-bit operation (32 distinct levels) with a target resolution of 0.5 dB per level step at 1,340 nm (see the detailed pulse conditions in Supplementary Section S8). We emphasize that dynamic control is necessary to mitigate the stochastic nature of electrically controlled PCMs[31], hence essential for a reliable many-level operation. In Fig. 5b, a linear fit shows a slope of -0.50 dB per step and a standard deviation of 0.16 dB among five experiments, indicating a repeatable operation. While the 5-bit operation of GST was shown using laser pulses[33], our demonstration is thus far the highest number of operating levels reported using electrical control in PCMs-based photonics. Moreover, our multilevel operation does not require sophisticated heater Fig. 5: **A quasi-continuously tunable directional coupler based on multilevel Sb2S.** (a) (Coarse tuning applying identical electrical pulses) Time trace measurement of the directional coupler when sending in an amorphization pulse (10V, 550ns) each second. Green (red) curves represent cross (bar) transmission. The result is averaged over five experiments for each port, and the error bar shows the standard deviation. Depending on the pulses, a “bar”, “intermediate”, and “cross” state can be achieved as indicated by the red, orange, and green regions. (b) (Fine tuning using dynamically controlled near-identical electrical pulses) Normalized transmission at 1,340 nm at the bar port shows 5-bit operation (32 distinct levels), achieved by dynamically controlling the number, amplitude, and duration of pulses sent in (near-identical, 9.65 V \(\sim\) 9.85 V, 550 ns). A precise transmission level of 0.50 \(\pm\) 0.16 dB per step and 32 levels were simultaneously achieved. The only slight difference between the target and achieved transmission demonstrates an on-demand operation. The error bars represent the standard deviation over five experiments. geometry engineering, such as the segmented doped silicon heater design[11], and solely relies on the unique phase-change dynamics of Sb\({}_{2}\)S\({}_{3}\). 
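The dynamic pulse control behind the 5-bit operation can be summarized as a simple closed-loop procedure: read the transmission, compare it with the next target level, and fire another near-identical partial-amorphization pulse if the target has not yet been reached, nudging the amplitude slightly when needed. The sketch below only illustrates this logic: `send_pulse` and `read_transmission_dB` are simulated placeholders rather than real instrument-driver calls, and the 0.01 V adjustment rule is an assumption made for the example; the pulse amplitudes and width follow the ranges quoted in the text.

```python
# Illustrative closed-loop sketch of the dynamic multilevel programming described
# above. The hardware hooks are simulated placeholders, not real driver calls.
import random

def send_pulse(device, voltage_V, width_ns):
    """Placeholder for the pulse generator: in this toy model each partial
    amorphization pulse lowers the bar transmission by roughly 0.1-0.3 dB."""
    device["T_dB"] -= random.uniform(0.1, 0.3)

def read_transmission_dB(device):
    """Placeholder for the power-meter readout."""
    return device["T_dB"]

def program_levels(n_levels=32, step_dB=0.50, tol_dB=0.1,
                   v_min=9.65, v_max=9.85, width_ns=550):
    device = {"T_dB": 0.0}                 # start from the crystalline "cross" state
    voltage, achieved = v_min, [0.0]
    for level in range(1, n_levels):
        target = -step_dB * level          # next level, 0.5 dB below the previous one
        while read_transmission_dB(device) > target + tol_dB:
            send_pulse(device, voltage, width_ns)      # near-identical RESET pulse
            voltage = min(v_max, voltage + 0.01)       # gentle dynamic adjustment
        achieved.append(read_transmission_dB(device))
    return achieved

levels = program_levels()
print(f"{len(levels)} levels, spanning {levels[-1]:.1f} dB")   # ~32 levels over ~-16 dB
```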
### Random phase error correction in balanced MZIs exploiting multilevel operation Finally, exploiting the multilevel operation, we corrected random phase errors in a balanced MZI. A perfectly balanced MZI should initially be in an all-cross state. However, random phase errors due to fabrication imperfections can easily build up to a phase error of \(\pi\), making the initial state unpredictable. Therefore, balanced MZIs usually require extra calibration[28]. For example, Fig. 6a shows the transmission spectra of a phase error corrupted balanced MZI, indicating a high bar transmission. Both arms of the MZI are loaded with 40-\(\upmu\)m-long Sb\({}_{2}\)S\({}_{3}\) film to guarantee a phase tuning range of more than \(\pm\pi\). The corrected MZI spectra by multilevel tuning of Sb\({}_{2}\)S\({}_{3}\) are demonstrated in Fig. 6b, showing a pure "cross state" with a high extinction ratio of 24 dB. The trimming process is shown in Fig. 6c. We sent in a partial amorphization pulse (8.8 V, 150 ns) every other second, which gradually increased the portion of amorphous Sb\({}_{2}\)S\({}_{3}\), resulting in quasi-continuous changes of the bar (blue) and cross (orange) transmission (at 1,340 nm). The correction finishes once a bar transmission minimum is reached, indicated by the red arrows in Fig. 6c, and the spectra reported in Fig. 6b were then measured. Further pulses increase bar transmission because of the over-compensated phase. We performed the same experiment three times, indicated by different colored regions in Fig. 6c. Complete phase error correction was observed in all three instances, exhibiting excellent repeatability of our trimming process. Note that a binary tuning cannot accomplish this task due to the random initial phase error. Even multilevel operations with limited discrete-level resolution can cause over- or under-corrected phase error, ultimately determining the trimming resolution. We highlight that this method requires zero static energy supply once the phase error is corrected, as the phase transition in Sb\({}_{2}\)S\({}_{3}\) is non-volatile (\(>\) 77 days, Supplementary Section S9). Thanks to the relatively fine operation levels, slight over-tuning does not significantly affect the performance. The trimming resolution can be further improved using dynamically changed, near-identical electrical pulses, as shown in the previous Section. Moreover, if the phase error is over compensated, the device could be tuned back with partial crystallization pulses (Supplementary Section S5). In the future, our trimming process can potentially be fully automated by real-time adjusting the pulse numbers according to the measured transmission. Figure 6: **Random phase error correction in a balanced MZI based on the multilevel operation.** (a) Transmission spectrum of a balanced MZI with phase error due to fabrication imperfections. The red (green) line represents bar (cross) transmission. Light transmits mostly from the bar port instead of the ideal cross port. (b) Transmission spectrum after the correction with 8.8 V, 150 ns pulses. The device is in a “cross state” with a high extinction ratio of \(\sim\)24 dB, indicating a phase-error-free device. (c) Time trace measurement for three experiments shows the phase error correction step. The RESET pulses are sent at a one-second interval. As more pulses were sent in, the bar transmission continuously decreased until it reached a minimum of \(\sim\)-24 dB (red arrow), and then it went up. 
In all the experiments, the optimal error correction was obtained. The slight volatile increase (only visible when transmission is smaller than -12 dB) after each amorphization pulse could be attributed to the relatively long thermal relaxation time, but the transmission stabilizes before the next pulse. The blue (orange) curve represents the bar (cross) transmission. ## Discussion We demonstrated a multi-bit integrated electro-photonic platform using wide bandgap PCM Sb\({}_{2}\)S\({}_{3}\) and doped silicon PIN heaters. Operating in the telecommunication O-band, the Sb\({}_{2}\)S\({}_{3}\)-Si hybrid ring resonators, MZIs, and directional couplers exhibit a low insertion loss (\(<1.0\) dB) and a high extinction ratio (\(>10\) dB). We report record-high electrical cyclability of Sb\({}_{2}\)S\({}_{3}\) (\(>\)1,600 switching events). Notably, the Sb\({}_{2}\)S\({}_{3}\)-based photonic devices could be tuned into distinct intermediate levels in a stepwise fashion by consecutively sending identical, thermally isolated partial amorphization pulses. Such pulse-number-dependent multilevel tuning allows finer-resolution operation levels than schemes that vary the pulse voltage or width. We demonstrate precise on-demand 5-bit operation with \(0.50\pm 0.16\) dB per level step in a tunable beam splitter. We further exploited this behavior to demonstrate random phase error correction in a balanced MZI, a capability of direct practical relevance. Our work opens a new pathway toward various integrated photonic applications, such as post-fabrication trimming, optical information processing, and optical quantum simulations, where the nonvolatile on-demand multi-bit operation is of paramount interest. ## Methods ### SOI Device Fabrication The silicon photonic devices were fabricated on a commercial SOI wafer with 220-nm-thick silicon on 2-\(\upmu\)m-thick SiO\({}_{2}\) (WaferPro). All devices were defined using electron-beam lithography (EBL, JEOL JBX-6300FS) with a positive-tone E-beam resist (ZEP-520A) and partially etched by \(\sim\)120 nm in a fluorine-based inductively coupled plasma etcher (ICP, Oxford PlasmaLab 100 ICP-18) with a mixed SF\({}_{6}\)/C\({}_{4}\)F\({}_{8}\) chemistry. The etching rate was around 2.8 nm/sec. The doping regions were defined by two additional EBL rounds with 600-nm-thick poly(methyl methacrylate) (PMMA) resist and implanted by boron (phosphorus) ions for p\({}^{++}\) (n\({}^{++}\)) doping regions with a dosage of \(2\times 10^{15}\) ions per cm\({}^{2}\) and ion energy of 14 keV (40 keV). The chips were annealed at 950 \({}^{\circ}\)C for 10 min (Expertech CRT200 Anneal Furnace) for dopant activation. Ideal Ohmic contact was formed after removal of the surface native oxide by immersing the chips in 10:1 buffered oxide etchant (BOE) for 10 seconds. The metal contacts were then immediately patterned by a fourth EBL step using PMMA. Metallization was done by electron-beam evaporation (CHA SEC-600) and lift-off of Ti/Pd (5 nm/180 nm). After a fifth EBL defining the Sb\({}_{2}\)S\({}_{3}\) window, a 40-nm Sb\({}_{2}\)S\({}_{3}\) thin film was deposited using an Sb\({}_{2}\)S\({}_{3}\) target (AJA International) in a magnetron sputtering system (Lesker Lab 18), followed by a lift-off process. We note that the actual Sb\({}_{2}\)S\({}_{3}\) thickness on the waveguide was reduced to around 20 nm because of the narrow PMMA trench (Supplementary Section S10). The Sb\({}_{2}\)S\({}_{3}\) was then encapsulated by 40-nm-thick Al\({}_{2}\)O\({}_{3}\) through thermal ALD (Oxford Plasmalab 80PLUS OpAL ALD) at 150 \({}^{\circ}\)C.
To ensure good contact between the electric probe and metal pads while applying electrical pulses, the Al\({}_{2}\)O\({}_{3}\) on the metal contacts was removed by defining a window using a sixth EBL with 600 nm PMMA, then etching in a chlorine-based inductively coupled plasma etcher (ICP-RIE, Oxford PlasmaLab 100 ICP-18). _Optical simulation_ The refractive index data for Sb\({}_{2}\)S\({}_{3}\) were measured by an ellipsometer (Woollam M-2000)[34]. The phase shifters and asymmetric directional couplers were designed (verified) by a commercial photonic simulation software package Lumerical MODE (FDTD). _Heat transfer simulation_ The doped silicon PIN heater was simulated with a commercial Multiphysics simulation software COMSOL Multiphysics[35, 36]. In the simulation, a heat transfer in a solid model is coupled with a semiconductor model to simulate the transient time performance. _Optical transmission measurement setup_ The programmable units were measured with a 25\({}^{\circ}\)-angled vertical fiber-coupling setup. The stage temperature was controlled at 26\({}^{\circ}\)C by a thermoelectric controller (TEC, TE Technology TC-720). A tunable continuous-wave laser (Santec TSL-510) sent in the input light, the polarization of which was controlled by a manual fiber polarization controller (Thorlabs FPC526) to achieve a maximum fiber-to-chip coupling efficiency. A low-noise power meter (Keysight 81634B) measured the static optical transmission. The transmission spectra of all Sb\({}_{2}\)S\({}_{3}\) devices were normalized to the spectra of the nearest reference waveguide. For the on-chip electrical switching, electrical pulses were applied to the on-chip metal contacts via a pair of electrical probes on two probe positioners (Cascade Microtech DPP105-M-AI-S). The crystallization and amorphization pulses were generated from a pulse function arbitrary generator (Keysight 81160A). The tunable laser, power meter, thermal controller, source meter, and pulse function arbitrary generator were controlled by a LabView program[35]. When electrically switching Sb\({}_{2}\)S\({}_{3}\), we found that a single amorphization or crystallization voltage pulse switches the device to transmission levels with large state-to-state variations (Supplementary Section S11). This could be attributed to the switching of Sb\({}_{2}\)S\({}_{3}\) into random intermediate structural phases since it has two (or more) crystalline phases[32]. This issue was tackled by using three identical pulses to switch the Sb\({}_{2}\)S\({}_{3}\) phase completely. We also note that sub-millisecond pulses failed to trigger the crystallization for the first time. We increased the voltage of a 500-\(\upmu\)s pulse up to the material ablation voltage level without observing any crystallization. This suggests that the thermally induced Sb\({}_{2}\)S\({}_{3}\) crystallization process is relatively slow, different from laser-induced crystallization[22], where a crystallization speed of tens of nano-second was demonstrated. We attribute this behavior to the difficulty in the initial nucleation. After the first crystallization process, consecutive crystallization process could happen at a scale of several hundreds of \(\upmu\)s (still slow enough to avoid re-crystallization during the amorphization process, Supplementary Section S12). We hypothesize that crystallization becomes easier after the initial nuclei are formed by the first long thermal process. 
We note that we used 200-ms pulses in the reported experiments because the fact that shorter pulses suffice after the first crystallization was only discovered later. Therefore, the pulse condition in this work could be further optimized. However, this limited crystallization speed is generally acceptable for many PIC applications, such as nonvolatile switching fabrics, optical programmable gate arrays, and optical neural networks, where high speed is not essential and zero static power is more important[37, 38]. Moreover, the slow crystallization speed could prevent unintentional recrystallization during long-pulse amorphization and permit a thicker layer of Sb\({}_{2}\)S\({}_{3}\) to switch completely. We verified that a pulse with a 10 \(\upmu\)s duration was able to trigger a large degree of amorphization (Supplementary Section S13). This capability of actuating the phase transition in thicker films is useful in photonics, since a much larger volume of PCM is switched than in electronic applications. _Thermal parameter measurement_ The thermal conductivity of our films was measured with time-domain thermoreflectance (TDTR)[39, 40]. TDTR uses ultrafast modulated laser heating through the absorption of a thin metallic transducer layer (70 nm Pt). An unmodulated probe laser then measures the surface temperature through a proportional change in the transducer reflectivity. Measurements were taken at a pump beam modulation frequency of 10 MHz to ensure the thermal penetration depth exceeded the thickness of the films. Knife-edge measurements provided 1/e\({}^{2}\) beam radii of 5.5 and 3.1 \(\pm\) 0.05 \(\upmu\)m for the pump and probe, respectively. The resulting thermoreflectance data were then fit to the solution of a 3D heat diffusion model for a multi-layer stack of materials (Pt-film-substrate) and the effective thermal resistance was determined as a function of film thickness. A linear regression of the foregoing data provided the thermal boundary resistance of the material stack, which was then used to determine the intrinsic thermal conductivities for the films. The properties of the Pt layer, the films, and the Si substrate were determined from independent measurements or adopted from the literature [41, 42]. ## Acknowledgments The authors thank Asir Intisar Khan, Kathryn M. Neilson, Prof. Eric Pop at Stanford University, and Prof. Juejun Hu at MIT for the insightful discussions. **Funding:** The research is funded by the National Science Foundation (NSF-1640986, NSF-2003509), ONR-YIP Award, DARPA-YFA Award, NASA-STTR Award 80NSSC22PA980, and Intel. F.M. is supported by a Draper Scholars Program. Part of this work was conducted at the Washington Nanofabrication Facility/ Molecular Analysis Facility, a National Nanotechnology Coordinated Infrastructure (NNCI) site at the University of Washington, with partial support from the National Science Foundation via awards NNCI-1542101 and NNCI-2025489. Part of this work was performed at the Stanford Nano Shared Facilities (SNSF), supported by the National Science Foundation under award ECCS-1542152. **Author contributions:** R.C. and A.M. conceived the project. R.C. simulated and fabricated the Sb\({}_{2}\)S\({}_{3}\) silicon photonic devices, performed optical characterizations and data analysis. Z.F. and J.Z. helped with the device simulation, fabrication, characterization, and data analysis. F.M., S.G., K.K. and A.S. helped with the optical measurements. C.P. and K.E.G. measured the thermal conductivity of Sb\({}_{2}\)S\({}_{3}\) thin films. A.M.
supervised and planned the project. R.C. wrote the manuscript with input from all the authors. **Competing interests:** Authors declare that they have no competing interests. **Data availability:** The data that support the findings of this study are available from the corresponding author upon reasonable request. ## References * [1] Shen, Y. _et al._ Deep learning with coherent nanophotonic circuits. _Nature Photonics_**11**, 441-446 (2017). * [ * [2] Capmany, J., Gasulla, I. & Perez, D. Microwave photonics: The programmable processor. _Nature Photonics_**10**, 6-8 (2016). * [3] Harris, N. C. _et al._ Quantum transport simulations in a programmable nanophotonic processor. _Nature Photonics_**11**, 447-452 (2017). * [4] Perez, D. _et al._ Multipurpose silicon photonics signal processor core. _Nature Communications_**8**, 1-9 (2017). * [5] Zhang, W. & Yao, J. Photonic integrated field-programmable disk array signal processor. _Nature Communications_**11**, 1-9 (2020). * [6] Reed, G. T., Mashanovich, G., Gardes, F. Y. & Thomson, D. J. Silicon optical modulators. _Nature Photon_**4**, 518-526 (2010). * [7] Wang, C. _et al._ Integrated lithium niobate electro-optic modulators operating at CMOS-compatible voltages. _Nature_**562**, 101-104 (2018). * [8] Wuttig, M., Bhaskaran, H. & Taubner, T. Phase-change materials for non-volatile photonic applications. _Nature Photonics_**11**, 465-476 (2017). * [9] Abdollahramezani, S. _et al._ Tunable nanophotonics enabled by chalcogenide phase-change materials. _Nanophotonics_**9**, 1189-1241 (2020). * [10] Fang, Z., Chen, R., Zheng, J. & Majumdar, A. Non-Volatile Reconfigurable Silicon Photonics Based on Phase-Change Materials. _IEEE Journal of Selected Topics in Quantum Electronics_**28**, 1-17 (2022). * [11] Rios, C. _et al._ Ultra-compact nonvolatile phase shifter based on electrically reprogrammable transparent phase change materials. _PhotoniX_**3**, 26 (2022). * [12] Zheng, J. _et al._ GST-on-silicon hybrid nanophotonic integrated circuits: a non-volatile quasi-continuously reprogrammable platform. _Optical Materials Express_**8**, 1551 (2018). * [13] Xu, P., Zheng, J., Doylend, J. K. & Majumdar, A. Low-Loss and Broadband Nonvolatile Phase-Change Directional Coupler Switches. _ACS Photonics_**6**, 553-557 (2019). * [14] Fang, Z. _et al._ Non-Volatile Reconfigurable Integrated Photonics Enabled by Broadband Low-Loss Phase Change Material. _Advanced Optical Materials_**9**, (2021). * [15] Chen, R. _et al._ Broadband Nonvolatile Electrically Controlled Programmable Units in Silicon Photonics. _ACS Photonics_ (2022) doi:10.1021/acsphotonics.2c00452. * [16] Fang, Z. _et al._ Ultra-low-energy programmable non-volatile silicon photonics based on phase-change materials with graphene heaters. _Nat. Nanotechnol._ 842-848 (2022) doi:10.1038/s41565-022-01153-w. * [17] Zheng, J. _et al._ Nonvolatile Electrically Reconfigurable Integrated Photonic Switch Enabled by a Silicon PIN Diode Heater. _Advanced Materials_**32**, 2001218 (2020). * [18] Rios, C. _et al._ Integrated all-photonic non-volatile multi-level memory. _Nature Photonics_**9**, 725-732 (2015). * [19] Fang, Z. _et al._ Non-Volatile Reconfigurable Integrated Photonics Enabled by Broadband Low-Loss Phase Change Material. _Advanced Optical Materials_**9**, 2002049 (2021). * [20] Zhang, Y. _et al._ Broadband transparent optical phase change materials for high-performance nonvolatile photonics. _Nature Communications_**10**, 1-9 (2019). * [21] Delaney, M., Zeimpekis, I., Lawson, D., Hewak, D. W. 
& Muskens, O. L. A New Family of Ultralow Loss Reversible Phase-Change Materials for Photonic Integrated Circuits: Sb2S3 and Sb2Se3. _Advanced Functional Materials_**30**, 2002447 (2020). * [22] Dong, W. _et al._ Wide Bandgap Phase Change Material Tuned Visible Photonics. _Advanced Functional Materials_**29**, 1806181 (2019). * [23] Shalaginov, M. Y. _et al._ Reconfigurable all-dielectric metalens with diffraction-limited performance. _Nature Communications_**12**, 1-8 (2021). * [24] Lu, L. _et al._ Reversible Tuning of Mie Resonances in the Visible Spectrum. _ACS Nano_**15**, 19722-19732 (2021). * [25] Moitra, P. _et al._ Programmable Wavefront Control in the Visible Spectrum Using Low-Loss Chalcogenide Phase-Change Metasurfaces. _Advanced Materials_**n/a**, 2205367. * [26] Spallholz, J. E. On the nature of selenium toxicity and carcinostatic activity. _Free Radical Biology and Medicine_**17**, 45-64 (1994). * [27] Geler-Kremer, J. _et al._ A ferroelectric multilevel non-volatile photonic phase shifter. _Nat. Photon._**16**, 491-497 (2022). * [28] Song, L., Li, H., Dai, D. & Dai, D. Mach-Zehnder silicon-photonic switch with low random phase errors. _Opt. Lett., OL_**46**, 78-81 (2021). * [29] Teo, T. Y. _et al._ Comparison and analysis of phase change materials-based reconfigurable silicon photonic directional couplers. _Opt. Mater. Express, OME_**12**, 606-621 (2022). * [30] Zhang, Q. _et al._ Broadband nonvolatile photonic switching based on optical phase change materials: beyond the classical figure-of-merit. _Optics Letters_**43**, 94 (2018). * [31] Tuma, T., Pantazi, A., Le Gallo, M., Sebastian, A. & Eleftheriou, E. Stochastic phase-change neurons. _Nature Nanotechnology_**11**, 693-699 (2016). * [32] Gutierrez, Y. _et al._ Interlaboratory Study on Sb2S3 Interplay between Structure, Dielectric Function and Amorphous-to-Crystalline Phase Change for Photonics. _iScience_ 104377 (2022) doi:10.1016/j.isci.2022.104377. * [33] Li, X. _et al._ Fast and reliable storage using a 5 bit, nonvolatile photonic memory cell. _Optica_**6**, 1 (2019). * [34] Xu, P., Zheng, J., Doylend, J. K. & Majumdar, A. Low-Loss and Broadband Nonvolatile Phase-Change Directional Coupler Switches. _ACS Photonics_**6**, 553-557 (2019). * [35] Zheng, J. _et al._ Nonvolatile Electrically Reconfigurable Integrated Photonic Switch Enabled by a Silicon PIN Diode Heater. _Advanced Materials_**32**, 2001218 (2020). * [36] Zheng, J., Zhu, S., Xu, P., Dunham, S. & Majumdar, A. Modeling Electrical Switching of Nonvolatile Phase-Change Integrated Nanophotonic Structures with Graphene Heaters. _ACS Appl. Mater. Interfaces_**12**, 21827-21836 (2020). * [37] Chen, R. _et al._ Opportunities and Challenges for Large-Scale Phase-Change Material Integrated Electro-Photonics. _ACS Photonics_ (2022) doi:10.1021/acsphotonics.2c00976. * [38] Zhang, Y. _et al._ Myths and truths about optical phase change materials: A perspective. _Applied Physics Letters_**118**, 210501 (2021). * [39] Kwon, H., Perez, C., Park, W., Asheghi, M. & Goodson, K. E. Thermal Characterization of Metal-Oxide Interfaces Using Time-Domain Thermoreflectance with Nanograting Transducers. _ACS Appl. Mater. Interfaces_**13**, 58059-58065 (2021). * [40] Perez, C. _et al._ Dominant Energy Carrier Transitions and Thermal Anisotropy in Epitaxial Iridium Thin Films. _Advanced Functional Materials_**n/a**, 2207781. * [41] Cernoskova, E., Todorov, R., Cernosek, Z., Holubova, J. & Benes, L. Thermal properties and the structure of amorphous Sb2Se3 thin film. 
_J Therm Anal Calorim_**118**, 105-110 (2014). * [42] Ben Nasr, T., Maghraoui-Meherzi, H. & Kamoun-Turki, N. First-principles study of electronic, thermoelectric and thermal properties of Sb2S3. _Journal of Alloys and Compounds_**663**, 123-127 (2016). **Supplementary information** **Non-volatile electrically programmable integrated** **photonics with a 5-bit operation** Rui Chen\({}^{1,\ast}\), Zhuoran Fang\({}^{1}\), Christopher Perez\({}^{3}\), Forrest Miller\({}^{1}\), Khushboo Kumari\({}^{1}\), Abhi Saxena\({}^{1}\), Jiajiu Zheng\({}^{1}\), Sarah J. Geiger\({}^{4}\), Kenneth E. Goodson\({}^{3}\), Arka Majumdar\({}^{1,2,\ast}\) \({}^{1}\)_Department of Electrical and Computer Engineering, University of Washington, Seattle, WA 98195, USA_ \({}^{2}\)_Department of Physics, University of Washington, Seattle, WA 98195, USA_ \({}^{3}\)_Department of Mechanical and Aerospace Engineering, Stanford University, Stanford, CA 94305, United States_ \({}^{4}\)_The Charles Stark Draper Laboratory, Cambridge, MA 02139, USA_ \(\ast\)_Email: [email protected] and [email protected]_ **This supplementary information includes:** S1. Sb\({}_{2}\)S\({}_{3}\) thin film characterization S2. Lumerical simulation for Sb\({}_{2}\)S\({}_{3}\)-SOI hybrid phase shifters and directional couplers S3. Local crystal grains - micrograph image S4. Extra measurement cyclability test results S5. Multilevel performance based on Sb\({}_{2}\)S\({}_{3}\) partial crystallization S6. Heat transfer simulation S7. Micrograph images for different intermediate levels S8. The Table shows pulse conditions for 5-bit operation S9. Performance retention after 77 days S10. Sb\({}_{2}\)S\({}_{3}\) stripe thickness after liftoff depends on the pattern width S11. Single-pulse vs. multi-pulse switching S12. Nucleation-speed limited initial crystallization and successive faster crystallization S13. Long-pulse amorphization - evidence for large volume amorphization ### Section S1. Sb\({}_{2}\)S\({}_{3}\) thin film characterization We started with characterizing the thin film Sb\({}_{2}\)S\({}_{3}\) using ellipsometry, X-Ray diffraction, and Raman spectroscopy (_Fig. S1_). A layer of 60-nm Sb\({}_{2}\)S\({}_{3}\) was sputtered on a silicon wafer. The as-deposited a-Sb\({}_{2}\)S\({}_{3}\) was characterized first and then switched to the crystalline phase by annealing at 325C for 10 minutes under nitrogen flow. The Sb\({}_{2}\)S\({}_{3}\) was characterized by ellipsometry, and the data is fitted with Cody-Lorentz models with low mean square error (\(\sim\)5). The fitted complex refractive indices in _Fig. S1(a)_ show a drastic change in the real part \(n\) (\(\sim\)0.7 at 1310 nm), while almost zero \(\kappa\) (0.0003 for amorphous and 0.03 for crystalline phases at 1310 nm) across the entire telecommunication O- and C-band. Therefore, switching the Sb\({}_{2}\)S\({}_{3}\) produces a phase-only modulation. The microstructural phase transition after annealing and the stoichiometry were verified under X-ray diffraction (XRD) and Raman spectroscopy[1], as shown in _Fig. S1(b)_ and _Fig. S1(c)_, by the characteristic lattice constants in the XRD and wavenumber shifts in the Raman spectrum. These characteristics show good agreement with existing literature[2]. **Section S2.** **Lumerical simulation for Sb\({}_{2}\)S\({}_{3}\)-SOI hybrid phase shifters and directional couplers** ## S2.1. Sb\({}_{2}\)S\({}_{3}\)-based silicon phase shifters _Fig. 
S2_ shows the simulated \(\pi\) phase shift length \(L_{\pi}\) and insertion loss for the Sb\({}_{2}\)S\({}_{3}\)-based phase shifters. The refractive indices of Sb\({}_{2}\)S\({}_{3}\) used here were obtained experimentally in the previous section. For a generic 500-nm-wide, 220-nm-high SOI waveguide with 20-nm-thick, 450-nm-wide Sb\({}_{2}\)S\({}_{3}\) on top, an effective index contrast of 0.018 is obtained at 1310 nm in _Figs. S2(a, b)_, indicating a \(\pi\) phase shift length \(L_{\pi}\approx 38\ \mu m\). _Figs. S2(c, d)_ show that a wider and thicker Sb\({}_{2}\)S\({}_{3}\) film reduces \(L_{\pi}\) and does not impact the insertion loss much (-0.23 dB/\(\pi\)). We adopted a generic waveguide width of 500 nm to avoid any extra taper structure, which renders \(L_{\pi}\approx 38\ \mu m\). The calculated mode coupling between the bare silicon waveguide mode and the c-Sb\({}_{2}\)S\({}_{3}\)-Si hybrid waveguide mode is 99.7% (or -0.013 dB), which is negligible. We note that \(L_{\pi}\) could be further reduced by exploiting a wider or thicker Sb\({}_{2}\)S\({}_{3}\) layer. Although a thicker Sb\({}_{2}\)S\({}_{3}\) layer gives a more compact device footprint, a complete phase transition becomes more difficult to achieve. The vertical temperature gradient across a thick Sb\({}_{2}\)S\({}_{3}\) film could lead to Sb\({}_{2}\)S\({}_{3}\) ablation at the bottom while the temperature at the top remains too low for amorphization. Unintentional re-amorphization could also happen with thick Sb\({}_{2}\)S\({}_{3}\) layers. Moreover, recrystallization could also occur because of the crystallization kinetics and finite heat diffusion time[3]. To accommodate this tradeoff, we picked 20 nm as the experimental thickness, providing both a complete, repeatable Sb\({}_{2}\)S\({}_{3}\) phase transition and a relatively compact \(L_{\pi}\) of 38 \(\upmu\)m. Since this is the first experiment to switch Sb\({}_{2}\)S\({}_{3}\) electrically, we chose a conservative thickness of 20 nm. Considering the good switching behavior observed in our experiment, a thicker Sb\({}_{2}\)S\({}_{3}\) could be used in future experiments to provide even more compact devices. The largest Sb\({}_{2}\)S\({}_{3}\) thickness that can be switched completely requires further experimental exploration. ### S2.2. Sb\({}_{2}\)S\({}_{3}\)-based tunable asymmetric directional coupler The asymmetric directional coupler is designed using the Lumerical MODE simulator and coupled-mode theory. The idea is depicted in the main text and the literature[4, 5, 6]. _Figs. S3 (a, b)_ show two main optimization steps: (i) find the Sb\({}_{2}\)S\({}_{3}\)-loaded waveguide width, for a fixed Sb\({}_{2}\)S\({}_{3}\) thickness, such that it is phase matched with a bare SOI waveguide; (ii) obtain a high transmission at the bar port when switching the Sb\({}_{2}\)S\({}_{3}\) by optimizing the gap between the two waveguides so that the coupling length in the bar state is an even multiple of the coupling length in the cross state[4]. Unlike the \(1\times 2\) coupler in ref[4], here we consider a \(2\times 2\) directional coupler, where light comes in from both input ports. To achieve input-port-independent performance, the Sb\({}_{2}\)S\({}_{3}\)-SOI hybrid waveguide and the bare SOI waveguide are phase matched when Sb\({}_{2}\)S\({}_{3}\) is in the crystalline phase. Intuitively, in this configuration the light interacts with the slightly lossy Sb\({}_{2}\)S\({}_{3}\)-loaded waveguide for half of the coupling length \(L_{c}\), regardless of the input port.
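As a back-of-envelope cross-check of the phase-shifter length quoted in Section S2.1 (an illustration only, not part of the original Lumerical workflow), the textbook relation \(L_{\pi}=\lambda/(2\Delta n_{\mathrm{eff}})\) applied to the simulated effective-index contrast gives a value of the same order as the quoted \(\approx\)38 \(\upmu\)m; the short Python sketch below assumes only the wavelength and index contrast stated above.

```python
# Back-of-envelope estimate (illustrative assumption, not the Lumerical simulation itself)
wavelength_nm = 1310.0   # operating wavelength
delta_n_eff = 0.018      # simulated effective-index contrast between a- and c-Sb2S3 loading
L_pi_um = wavelength_nm / (2.0 * delta_n_eff) / 1000.0
print(f"L_pi ~ {L_pi_um:.1f} um")   # ~36 um, of the same order as the simulated ~38 um
```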
After accomplishing the optimization in the Lumerical MODE simulator, we ran a Finite Difference Time Domain (FDTD) simulation to verify the design, and the results are shown in _Figs. S3 (c-f)_. The transmission spectra in _Figs. S3 (c, e)_ show that a bar (cross) state is achieved by switching the Sb\({}_{2}\)S\({}_{3}\) to the amorphous (crystalline) phase. At 1310 nm, the insertion loss is around 0.1 dB and 0.3 dB for the bar and cross state, respectively, and the extinction ratio is larger than 15 dB in both states. The higher insertion loss for the cross state is mainly due to the slight material absorption of c-Sb\({}_{2}\)S\({}_{3}\). The inset in _Fig. S3 (e)_ shows that our device performance is symmetric and independent of the input port. _Figs. S3 (d, f)_ show the field propagation profiles corresponding to _Figs. S3 (c, e)_. Again, we can verify input-port-independent performance. On the contrary, if the waveguides are phase matched for a-Sb\({}_{2}\)S\({}_{3}\), the loss would be much higher when light is input from the Sb\({}_{2}\)S\({}_{3}\)-loaded waveguide. _Fig. S4_ shows the simulated spectrum of an optimized design in which 20-nm a-Sb\({}_{2}\)S\({}_{3}\) is phase-matched with a bare SOI waveguide. As shown in the zoomed-in bar transmission plot in _Fig. S4 (b)_, a difference of 0.3 dB between the two input ports is observed when Sb\({}_{2}\)S\({}_{3}\) is in its crystalline phase. This seemingly slight asymmetry could lead to unintentionally unbalanced optical paths in a large-scale PIC system. We adopted the relatively thin, 20-nm-thick Sb\({}_{2}\)S\({}_{3}\) to ensure a complete phase transition. However, such a thin PCM layer provides limited tunability, resulting in a large device footprint. Fortunately, we note that the slow crystallization speed of Sb\({}_{2}\)S\({}_{3}\) could potentially enable a much thicker PCM layer to be crystallized (see Section S2.1), which would shrink the device size. Using the same design methodology, we numerically designed asymmetric directional couplers with Sb\({}_{2}\)S\({}_{3}\) thicknesses ranging from 30 nm to 50 nm (_Table S1_). Remarkably, the designed directional coupler with 50-nm-thick Sb\({}_{2}\)S\({}_{3}\) shows a significantly reduced coupling length (\(\sim\)34 \(\mu\)m). ### S2.3. Estimating the loss of the asymmetric directional coupler using the experimentally extracted Sb\({}_{2}\)S\({}_{3}\) loss We extract the actual loss of c-Sb\({}_{2}\)S\({}_{3}\), including the crystal grain scattering loss, by matching the loss measured in the high-Q ring resonator (0.76 dB/\(\mu\)m) with MODE simulations. We obtained an extinction coefficient \(\kappa^{\prime}\approx\) 0.015, compared to the ellipsometry measurement result of \(\kappa\) = 0.005 at a wavelength of 1310 nm. We then estimate the actual loss of the asymmetric directional coupler using the new complex refractive index. _Fig. S5_ shows the simulation results, where the insertion loss is around 0.85 dB and the extinction ratio is 25 dB. We note that the amorphous Sb\({}_{2}\)S\({}_{3}\) loss is very small in the experiment, so the initial simulation result still holds (0.1 dB insertion loss and 15 dB extinction ratio). ### S2.4. A multimode interferometer (MMI) A 120-nm partially etched MMI was designed using the Lumerical Eigen-Mode Expansion (EME) method. The design was further tweaked and verified with 3D FDTD simulations. The optimized multimode device parameters are annotated in orange in _Fig. S6 (a)_. _Figs.
S6 (a, b)_ show the simulated electric field propagation profile and the transmission spectra at both output ports. The optimized MMI exhibits an insertion loss \(<\) 0.1 dB at 1310 nm and a 1-dB bandwidth \(>\) 60 nm, as shown in _Fig. S6 (b)_. The measured transmission spectra are presented in _Fig. S6 (c)_. The slight spectral fluctuation (\(\sim\)0.1 dB) is attributed to interference with light back-reflected at the interfaces between the single-mode tapers and the multimode region. Fig. S6: Low-loss 50:50 multimode interferometer (MMI) at 1310 nm. (a) Field profile and (b) transmission spectrum using FDTD simulation. The white dotted line outlines the device shape. (c) Measured transmission spectrum of the fabricated device. ### Section S3. Local crystal grains - micrograph image _Fig. S7_ shows a micrograph image of 60-nm-thick Sb\({}_{2}\)S\({}_{3}\) on a blanket silicon chip, where the Sb\({}_{2}\)S\({}_{3}\) film was annealed at 325\({}^{\circ}\)C into the crystalline phase. The micrograph image shows a non-uniform surface with many local crystal grains and even flower-shaped patterns (circled in red), which could lead to unintended scattering loss for in-plane light propagation. We hypothesize that the crystal grains form because crystalline Sb\({}_{2}\)S\({}_{3}\) has a density reduction of around 24% compared to its amorphous form[7]. Hence, a significant strain is induced during the phase transition, which results in a non-uniform surface. We also speculate that the flower-like patterns occur near the nuclei. It is unclear what the c-Sb\({}_{2}\)S\({}_{3}\) surface looks like on our devices because the 20-nm-thick, 450-nm-wide Sb\({}_{2}\)S\({}_{3}\) films are too small to image optically. The 40-nm-thick alumina encapsulation further prevents accurate SEM imaging. Further material study is required to quantitatively understand the crystalline patterns formed. ### Section S4. Extra cyclability test results #### MZI cyclability result - 100 cycles The Sb\({}_{2}\)S\({}_{3}\) phase shifter in an MZI was switched reversibly by alternately applying SET and RESET pulses. We note that this cyclability test was performed with a single-pulse scheme, so a significant variation was observed compared to _Fig. 4_ in the main text, where three pulses were used for each amorphization or crystallization. Yet, we found 100 consecutive switching events (_Fig. S8_) in which the device was switched with a high extinction ratio of \(\sim\)15 dB. #### Asymmetric directional coupler transmission before and after 2,300 switching events We switched the asymmetric directional coupler for 2,300 switching events. However, some drift in the optical measurement setup occurred, so only 1,600 events were reported in the main text. In this section, we compare the SEM image and the transmission spectrum before and after the 2,300 switching events, and offer an explanation for the performance degradation, i.e., the transmission drift. _Figs. S9 (a, b)_ show the scanning electron-beam micrograph images of Sb\({}_{2}\)S\({}_{3}\) directional couplers before and after the cyclability test. We note that the black dots on the lower waveguide (_Fig. S9 (a)_) are likely crystalline Sb\({}_{2}\)S\({}_{3}\) nuclei formed during the rapid thermal annealing. The rough Sb\({}_{2}\)S\({}_{3}\) edge in _Fig. S9 (b)_ indicates that switching the relatively large volume of the Sb\({}_{2}\)S\({}_{3}\) stripe many times results in a distorted Sb\({}_{2}\)S\({}_{3}\) stripe.
This is because the thin film always tends to minimize its surface energy during the melt-quench process, which leads to a sinusoidal edge. This phenomenon is known as the Rayleigh instability of fluid motion[8]. Such surface-energy-induced instability can result in discrete circular patterns, an effect that has been exploited for advanced nano-fabrication[9]. Therefore, to further improve the endurance of our devices, one possible approach is to pattern the long Sb\({}_{2}\)S\({}_{3}\) stripes into subwavelength gratings to ensure a lower initial surface energy and a low excess loss. We note that this subwavelength grating idea has been experimentally demonstrated in other works using GST[10, 11, 12], and is hence a possible future direction for improving the presented work. The transmission spectra before and after the cyclability test are shown in _Fig. S9 (c)_. A minor drift in the spectra could be attributed to thermal reflowing, which changes the Sb\({}_{2}\)S\({}_{3}\) shape and hence breaks the phase matching condition. Nevertheless, the device still shows a low insertion loss (\(<\) 1 dB degradation) and a high extinction ratio (\(\sim\) 15 dB). ### Section S5. Multilevel performance based on Sb2S3 partial crystallization By sending in relatively low-voltage crystallization pulses (voltage 3.1 V, duration 200 ms, leading edge 10 ms, falling edge 100 ms), we obtained the partial crystallization performance shown in _Fig. S10_. Such behavior could be explained by the growth-dominant nature of Sb\({}_{2}\)S\({}_{3}\)[4]. The growth rate can be controlled by the temperature (pulse voltage); the total crystallized volume can be controlled by the duration of the crystallization pulses. Further engineering of the pulse parameters could potentially yield more operation levels. ### Section S6. Heat transfer simulation results A heat transfer simulation in COMSOL Multiphysics is presented in this section [6, 13, 14, 15, 16]. The large thermal gradient between Sb\({}_{2}\)S\({}_{3}\) and its surroundings leads to a fast cooling rate. Such a fast cooling rate (\(>10^{9}\) K/s [17]) is necessary for complete amorphization to take place. The simulation shows that the temperature returns to near room temperature within a few \(\mu\)s (_Fig. S11_). This \(\mu\)s-level thermal relaxation time rules out any residual heat accumulation when we apply the electrical pulses at one-second intervals for stepwise multilevel amorphization. ### Section S7. Micrograph images for different intermediate levels We inspected devices at different intermediate levels under the microscope in _Fig. S12_. One can see a gradual increase of amorphous Sb\({}_{2}\)S\({}_{3}\) as the degree of amorphization increases. The amorphous Sb\({}_{2}\)S\({}_{3}\) is not located precisely in the center of the long strip, where the temperature is the highest. We attribute this to a possible non-uniform doping profile and to recrystallization of Sb\({}_{2}\)S\({}_{3}\) near the center. ### Section S8. Table shows pulse conditions for 5-bit operation We engineered the pulse conditions dynamically to achieve the high 0.5-dB precision. We started with a low voltage (9.65 V) that could barely trigger amorphization and gradually increased the amplitude. In the meantime, we monitored the transmission after sending each pulse. If the transmission does not change, we increase the voltage slightly (by 0.05 V). If the transmission changes, but by less than (0.5 \(\pm\) 0.1) dB (our target step size), we continue to send more pulses until the targeted transmission is obtained.
If the transmission changes by more than (0.5 + 0.1) dB, we partially crystallize the device and repeat the previous steps. We note that it is necessary to monitor the transmission and dynamically change the pulse conditions to mitigate the stochastic nature of PCMs[18]. The following Table shows the pulse conditions in the 5-bit operation experiments. \begin{tabular}{c c c c c c} **23** & 9.8\(\times\)2 & 9.8 & 9.8 & 9.73 & 9.75\(\times\)2 \\ \hline **24** & 9.85 & 9.8 \(\times\)2 & 9.8 & 9.73 & 9.78 \\ \hline **25** & 9.85 & 9.85 & 9.85 & 9.75 & 9.8\(\times\)2 \\ \hline **26** & 9.85 & 9.85 & 9.8 & 9.75 & 9.8 \\ \hline **27** & 9.85 & 9.85 & 9.85 & 9.75 \(\times\)2 & 9.85 \\ \hline **28** & 9.85 & 9.85 & 9.85 & 9.77 & 9.8 \\ \hline **29** & 9.85 & 9.85 & 9.85 & 9.77 \(\times\)3 & 9.85 \\ \hline **30** & 9.85 & 9.85 & 9.85 & 9.77 \(\times\)3 & 9.85 \\ \hline **31** & 9.85 & 9.85 & 9.85 & 9.77 \(\times\)3 & 9.85 \\ \hline **32** & 9.85 & 9.85 & 9.85 \(\times\)2 & 9.77 \(\times\)3 & 9.85 \\ \hline \end{tabular} ### Section S9. Performance retention after 77 days Our device is non-volatile under ambient conditions. We measured a device after leaving it in ambient air (uncovered) for 77 days and compared its performance in _Fig. S13_. The device retains its performance excellently, with only a slight variation, which might be caused by a small difference in the grating coupler coupling condition (coupling angle). ### Section S10. Sb\({}_{2}\)S\({}_{3}\) stripe thickness after liftoff depends on the pattern width We find that a narrow resist trench for Sb\({}_{2}\)S\({}_{3}\) deposition and liftoff leads to a reduced Sb\({}_{2}\)S\({}_{3}\) thickness compared to blanket Sb\({}_{2}\)S\({}_{3}\) deposition. This could be attributed to part of the Sb\({}_{2}\)S\({}_{3}\) being blocked by the resist because of the deep trench. We fabricated a chip with different patch widths (300, 400, and 500 nm) and measured the thickness after Sb\({}_{2}\)S\({}_{3}\) liftoff. The measured results (_Table S3_) show a reduction factor of around 0.5 compared to blanket deposition. We also measured the widths of the Sb\({}_{2}\)S\({}_{3}\) stripes, which agree reasonably well with the e-beam-defined widths. Since this is not the main interest of this measurement, we only measured part of the Sb\({}_{2}\)S\({}_{3}\) stripes, and the ones not measured are denoted by '-'. \begin{table} \begin{tabular}{l l l l} \hline \hline **Deposited** & **Width after** & **Deposited** & **Thickness** \\ **width** & **Liftoff** & **thickness** & **after liftoff** \\ \hline **300** & 373 & 30 & 13.8 \\ \hline **300** & 354 & 30 & 14 \\ \hline **400** & 471 & 30 & 16.9 \\ \hline **400** & 471 & 30 & 17 \\ \hline **300** & - & 40 & 16.4 \\ \hline **400** & 413 & 40 & 19.5 \\ \hline **300** & - & 50 & 21 \\ \hline **400** & - & 50 & 26 \\ \hline **400** & 471 & 50 & 25.9 \\ \hline \hline \end{tabular} \end{table} Table S3: Sb\({}_{2}\)S\({}_{3}\) thickness reduction after liftoff, measured by AFM (unit: nm) ### Section S11. Single-pulse vs. multi-pulse switching In our experiment, single pulses could not trigger a reliable, complete phase transition. When we sent a single pulse for SET/RESET, the final state showed significant variation (_Fig. S14 (a)_). This could be attributed to the multiple crystalline phases and an incomplete thermal process in the relatively thick Sb\({}_{2}\)S\({}_{3}\). We tackled this issue by sending two or three pulses to trigger a complete phase transition.
The resulting transmission spectra of another ring resonator are shown in _Fig. S14 (b)_. This multi-pulse switching scheme achieved a much more consistent performance across ten cycles. We emphasize that this behavior is distinctly different from that of GST or Sb\({}_{2}\)Se\({}_{3}\), for which we were able to actuate the phase transition with a single shot in a very similar PIC. While we have provided a hypothesis for this behavior, more material studies are warranted to explain it. ### Section S12. Nucleation-speed limited initial crystallization and successive faster crystallization Experimentally, we observed that the initial crystallization of Sb\({}_{2}\)S\({}_{3}\) is very slow: even 500-\(\upmu\)s pulses could not trigger the first crystallization. The devices were then crystallized with a DC voltage or with pulses of very long duration, such as 200 ms. This could be explained by the small enthalpy of fusion of Sb\({}_{2}\)S\({}_{3}\) (\(\sim\)40.64 kJ/mol [19], compared to the fusion enthalpy [20] of GST of 625 J/cm\({}^{3}\), or \(\frac{625}{\rho}M=\frac{625\ J/cm^{3}}{5.87\ g/cm^{3}}\cdot 1026.8\ g/mol=109.3\ kJ/mol\), where \(\rho\) is the density [21] and \(M\) is the molar mass of GST), which necessitates a large critical nucleus size to overcome the crystallization energy barrier. After the first crystallization, however, the successive crystal growth process happens without any requirement for overcoming the energy barrier. We were then able to see partial crystallization using much shorter 100-\(\upmu\)s pulses, as shown in Fig. S15. We note that this slow crystallization allows larger volumes of Sb\({}_{2}\)S\({}_{3}\) to be amorphized again without recrystallization, and could thus be beneficial for photonic applications. In the next section, we show experimentally that amorphization can indeed be triggered with very long pulses. ### Section S13. Long-pulse amorphization - evidence for large volume amorphization One appealing aspect of using "slow" PCMs, such as Sb\({}_{2}\)S\({}_{3}\), is that the slow crystallization process could avoid any unintentional recrystallization during the melt-quench process, and hence could ensure amorphization of a large volume of PCM. This capability of completely switching large volumes of PCM is crucial in photonics, where the switched volume (typically on the order of \(\upmu\)m\({}^{3}\)) is much larger than in electronic applications (typically on the order of (10 nm)\({}^{3}\)) and takes precedence over the phase transition speed [22]. Here, we verify Sb\({}_{2}\)S\({}_{3}\)'s resistance to unintentional recrystallization by comparing the longest amorphization time of an asymmetric directional coupler to other reported PCM devices. _Fig. S16_ shows that the Sb\({}_{2}\)S\({}_{3}\) on the directional coupler could be amorphized under different pulse durations, ranging from 500 ns to 10 \(\upmu\)s. A large degree of amorphization was observed even when we increased the pulse duration to 10 \(\upmu\)s. This 10-\(\upmu\)s pulse duration is much longer than in the reported GST [6, 14, 23] (\(<200\) ns) or Sb\({}_{2}\)Se\({}_{3}\) [24] (\(<1\)\(\upmu\)s, but with a 30-nm thickness) switching experiments. This successful amorphization with long pulses supports the claim that Sb\({}_{2}\)S\({}_{3}\) is inherently more suitable for large-volume amorphization because recrystallization is suppressed.
2307.05846
Assessing the calibration of multivariate probabilistic forecasts
Rank and PIT histograms are established tools to assess the calibration of probabilistic forecasts. They not only check whether an ensemble forecast is calibrated, but they also reveal what systematic biases (if any) are present in the forecasts. Several extensions of rank histograms have been proposed to evaluate the calibration of probabilistic forecasts for multivariate outcomes. These extensions introduce a so-called pre-rank function that condenses the multivariate forecasts and observations into univariate objects, from which a standard rank histogram can be produced. Existing pre-rank functions typically aim to preserve as much information as possible when condensing the multivariate forecasts and observations into univariate objects. Although this is sensible when conducting statistical tests for multivariate calibration, it can hinder the interpretation of the resulting histograms. In this paper, we demonstrate that there are few restrictions on the choice of pre-rank function, meaning forecasters can choose a pre-rank function depending on what information they want to extract from their forecasts. We introduce the concept of simple pre-rank functions, and provide examples that can be used to assess the location, scale, and dependence structure of multivariate probabilistic forecasts, as well as pre-rank functions tailored to the evaluation of probabilistic spatial field forecasts. The simple pre-rank functions that we introduce are easy to interpret, easy to implement, and they deliberately provide complementary information, meaning several pre-rank functions can be employed to achieve a more complete understanding of multivariate forecast performance. We then discuss how e-values can be employed to formally test for multivariate calibration over time. This is demonstrated in an application to wind speed forecasting using the EUPPBench post-processing benchmark data set.
Sam Allen, Johanna Ziegel, David Ginsbourger
2023-07-11T23:37:48Z
http://arxiv.org/abs/2307.05846v1
# Assessing the calibration of multivariate probabilistic forecasts ###### Abstract Rank and PIT histograms are established tools to assess the calibration of probabilistic forecasts. They not only check whether an ensemble forecast is calibrated, but they also reveal what systematic biases (if any) are present in the forecasts. Several extensions of rank histograms have been proposed to evaluate the calibration of probabilistic forecasts for multivariate outcomes. These extensions introduce a so-called pre-rank function that condenses the multivariate forecasts and observations into univariate objects, from which a standard rank histogram can be produced. Existing pre-rank functions typically aim to preserve as much information as possible when condensing the multivariate forecasts and observations into univariate objects. Although this is sensible when conducting statistical tests for multivariate calibration, it can hinder the interpretation of the resulting histograms. In this paper, we demonstrate that there are few restrictions on the choice of pre-rank function, meaning forecasters can choose a pre-rank function depending on what information they want to extract concerning forecast performance. We introduce the concept of simple pre-rank functions, and provide examples that can be used to assess the location, scale, and dependence structure of multivariate probabilistic forecasts, as well as pre-rank functions that could be useful when evaluating probabilistic spatial field forecasts. The simple pre-rank functions that we introduce are easy to interpret, easy to implement, and they deliberately provide complementary information, meaning several pre-rank functions can be employed to achieve a more complete understanding of multivariate forecast performance. We then discuss how e-values can be employed to formally test for multivariate calibration over time. This is demonstrated in an application to wind speed forecasting using the EUPPBench post-processing benchmark data set. ## 1 Introduction It is standard practice for operational weather centres to issue forecasts that are probabilistic. Such forecasts typically take the form of an ensemble of possible weather scenarios, allowing weather centres to quantify the uncertainty inherent in their predictions. However, despite the unequivocal utility of ensemble forecasting, there is no guarantee that the issued ensemble forecasts are reliable, or calibrated, in the sense that they align statistically with the corresponding observations. To determine whether or not forecasts can be trusted, methods are required to analyse forecast calibration. Although several notions of forecast calibration exist for real-valued outcomes (see e.g. Gneiting et al., 2007; Gneiting and Resin, 2021), it is common in practice to assess whether forecasts are _probabilistically calibrated_. Probabilistic calibration of ensemble predictions can be visualised using rank histograms (also called Talagrand diagrams; Anderson, 1996; Talagrand, 1997; Hamill and Colucci, 1997). Given a set of ensemble forecasts and observations, rank histograms display the ranks of the observations when pooled among the corresponding ensemble members, thereby assessing whether the observations and the ensemble members are exchangeable. These graphical diagnostic tools are useful in practice since they not only assess calibration, but they also reveal what deficiencies (if any) are present in the forecasts. 
As such, rank histograms have become an integral component of probabilistic weather forecast evaluation. Several extensions of rank histograms have been proposed to evaluate the calibration of probabilistic forecasts for multivariate outcomes. These multivariate rank histograms introduce a so-called _pre-rank function_ that condenses the multivariate forecasts and observations into univariate objects, from which a standard rank histogram can be constructed (Gneiting et al., 2008). Proposed extensions differ in the choice of pre-rank function: Smith and Hansen (2004) and Wilks (2004) suggested using the minimum spanning tree of the multivariate ensemble members and observation as a pre-rank function; Gneiting et al. (2008) introduced a pre-rank function based on a multivariate ranking of the ensemble members and observation; Thorarinsdottir et al. (2016) proposed two alternative approaches that leverage the average rank of the observation across the individual dimensions; while Knuppel et al. (2022) recently argued that proper scoring rules are designed to summarise the information contained in the multivariate forecast and observation into a single value, and they therefore provide a canonical choice for a pre-rank function when testing for multivariate calibration. Several of these choices have been reviewed and compared using simulated forecasts and observations (Thorarinsdottir and Schuhen, 2018; Wilks, 2017). These comparisons illustrate how the interpretation of the multivariate rank histogram depends on the choice of pre-rank function, and that different pre-rank functions are more adept at identifying different types of mis-calibration in the forecasts. The authors of these studies therefore recommend that multiple pre-rank functions are used to construct multivariate rank histograms, to obtain a more complete understanding of how the multivariate forecasts behave. However, despite the attention they have received in the literature, multivariate rank histograms are relatively rarely employed in practice. For example, weather centres regularly issue ensemble forecast fields over relevant spatial domains, and, although these forecast fields are inherently multivariate, their calibration is rarely assessed beyond evaluating univariate calibration at the individual locations or grid points. In the univariate case, the rank histogram shows the relative position of the observation among the ensemble members. In the multivariate case, existing pre-rank functions generate multivariate rank histograms with different interpretations, and practitioners may be uncertain as to which pre-rank function(s) they should apply when evaluating their forecasts. The pre-rank function is typically designed to preserve as much information as possible when condensing the multivariate forecasts and observations into univariate objects, leading to a multivariate rank histogram with a less intuitive interpretation. In particular, forecast systems with contrasting biases can often result in a similarly-shaped histogram. In this paper, we argue that the main purpose of rank histograms is to identify the deficiencies in probabilistic forecasts by providing a graphical visualisation of forecast calibration. In order to achieve this effectively in a multivariate context, it is imperative that the pre-rank function is straightforward to interpret. We generalise and apply the arguments of Gneiting et al. 
(2008, Section 2.3) and Ziegel (2017) that any function that transforms multivariate vectors to univariate values can be used as a pre-rank function, which facilitates flexible assessments of forecast calibration. The forecaster can choose the pre-rank functions depending on what information they want to extract from their forecasts. For example, if interest is on the dependence structure of the multivariate forecast distribution, then a pre-rank function could be chosen that quantifies this dependence structure; if interest is on extreme events, then the pre-rank function could quantify how extreme the forecast or observation is, and so on. We can formally test whether or not a prediction system is calibrated by checking whether its (multivariate) rank histogram is flat. Wilks (2019) recently compared approaches to achieve this using popular measures of histogram flatness. Here, we advocate an alternative approach based on e-values (Arnold et al., 2021), which provide a dynamic alternative to p-values when conducting statistical hypothesis tests. While classical tests require that the evaluation period is fixed in advance, e-values generate statistical tests that are valid sequentially. This makes them particularly relevant in sequential forecasting settings (see also Henzi and Ziegel, 2022), allowing us to test forecast calibration sequentially over time without compromising type-I-error guarantees. We additionally discuss appropriate methods to address the problem of multiple testing that arises when several pre-rank functions are used to assess multivariate calibration. In the following section, we introduce rank histograms, both in a univariate and multivariate setting. Section 3 discusses existing pre-rank functions that have been proposed in the literature, and introduces possible alternatives that assess particular features of multivariate forecasts. These pre-rank functions are employed in a simulation study in Section 4. Section 5 outlines how e-values can be employed to sequentially test for forecast calibration, and Section 6 presents a case study in which multivariate rank histograms and e-values are used to assess the calibration of gridded wind speed ensemble forecasts over Europe. Section 7 concludes. R code to implement the proposed multivariate histograms in practice is available at [https://github.com/sallen12/MultiVCalibration](https://github.com/sallen12/MultiVCalibration). ## 2 Rank histograms In this paper, we restrict attention to ensemble forecasts, since multivariate weather forecasts are almost exclusively in this form. However, the proposed framework readily applies to continuous forecast distributions, and this is treated in detail in the Appendix. An ensemble forecast with \(M\) members is a collection of possible scenarios \(\mathbf{x}_{1},\ldots,\mathbf{x}_{M}\in\mathbb{R}^{\,d}\) for the future outcome \(\mathbf{y}=\mathbf{x}_{0}\in\mathbb{R}^{\,d}\). An ensemble forecast can be interpreted as a multivariate probabilistic forecast by considering the empirical distribution of \(\mathbf{x}_{1},\ldots,\mathbf{x}_{M}\) as the predictive distribution. To assess the calibration of ensemble forecasts for real-valued outcomes (\(d=1\)), we can record the rank of each observation \(\mathbf{y}\in\mathbb{R}\) among the corresponding ensemble members \(\mathbf{x}_{1},\ldots,\mathbf{x}_{M}\in\mathbb{R}\) for a large number of forecast cases, and check whether all ranks occur with the same frequency (up to sampling variation). 
This is typically achieved by displaying the distribution of the ranks in a histogram (Anderson, 1996; Talagrand, 1997; Hamill and Colucci, 1997). Formally, the (randomised) rank of a real number \(z_{0}\in\mathbb{R}\) amongst \(z_{0},\ldots,z_{M}\in\mathbb{R}\) is defined as \[\text{rank}(z_{0};z_{1},\ldots,z_{M})=1+\sum_{i=1}^{M}\mathbb{1}\{z_{i}<z_{0}\}+W\in\{1,\ldots,M+1\},\] where \(\mathbb{1}\) denotes the indicator function, and \(W\) is zero if \(N=\#\{i=1,\ldots,M\mid z_{0}=z_{i}\}\) is zero, and is uniformly distributed on \(\{0,1,\ldots,N\}\) otherwise. In the univariate case, an ensemble forecast is probabilistically calibrated if its rank histogram is flat. If the rank histogram is not flat, then its shape often provides additional information regarding how the forecasts are mis-calibrated: a \(\cup\)-shaped histogram suggests the observations are frequently either above or below all ensemble members, implying the forecasts are under-dispersed; a \(\cap\)-shaped histogram implies that the forecasts are over-dispersed; and a triangular histogram suggests that the forecasts tend to either over- or under-predict the outcome, indicative of a systematic forecast bias. Due to this straightforward interpretation of their shape, rank histograms are now well-established in the evaluation of operational weather forecasts. Several attempts have been made to emulate this behaviour when assessing the calibration of multivariate ensemble forecasts. Since the notion of a rank is not well-defined on \(\mathbb{R}^{\,d}\), it is customary to introduce a so-called pre-rank function \[\rho:\mathbb{R}^{\,d}\times\underbrace{\mathbb{R}^{\,d}\times\cdots\times \mathbb{R}^{\,d}}_{M\text{ times}}\to\mathbb{R}\] that is invariant under permutations of the last \(M\) arguments. The pre-rank function converts the multivariate observations and ensemble members to univariate objects, from which a standard rank histogram can be constructed (Gneiting et al., 2008). That is, the calibration of the multivariate ensemble forecast \(\mathbf{x}_{1},\ldots,\mathbf{x}_{M}\) can be assessed by considering the univariate rank of the transformed observation \(\rho(\mathbf{y},\mathbf{x}_{1},\ldots,\mathbf{x}_{M})\) within the transformed ensemble members \(\rho(\mathbf{x}_{1},\mathbf{y},\mathbf{x}_{2},\ldots,\mathbf{x}_{M}),\ldots, \rho(\mathbf{x}_{M},\mathbf{y},\mathbf{x}_{1},\ldots,\mathbf{x}_{M-1})\). A multivariate rank histogram is obtained by repeating this for a large number of forecast cases, and displaying the relative frequency with which \(\rho(\mathbf{y},\mathbf{x}_{1},\ldots,\mathbf{x}_{M})\) assumes each possible rank. If the ensemble members and the observation are exchangeable, the resulting rank histogram is uniform, and we say that the multivariate ensemble forecasts are calibrated with respect to the pre-rank function \(\rho\). The simplest way to construct an admissible pre-rank function is to choose \(\rho\) such that it does not depend on the last \(M\) arguments. We term such pre-rank functions _simple_ pre-rank functions, and omit the last \(M\) arguments in their definition, \[\rho:\mathbb{R}^{\,d}\to\mathbb{R}.\] Simple pre-rank functions therefore transform the multivariate forecasts and observations to univariate summary statistics.
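For concreteness, a minimal sketch of this randomised rank is given below, written in Python with numpy purely for illustration (the authors provide an R implementation at the repository linked above); the function name and interface are our own.

```python
import numpy as np

def randomised_rank(z0, z, rng=None):
    """Randomised rank of the scalar z0 among the M ensemble values in the array z."""
    if rng is None:
        rng = np.random.default_rng()
    z = np.asarray(z)
    below = int(np.sum(z < z0))                         # ensemble values strictly below z0
    ties = int(np.sum(z == z0))                         # ensemble values equal to z0
    w = rng.integers(0, ties + 1) if ties > 0 else 0    # W uniform on {0, ..., N}
    return 1 + below + int(w)                           # rank takes values in {1, ..., M + 1}

# Example: rank of an observation among M = 5 ensemble members
print(randomised_rank(1.2, [0.3, 1.2, 1.8, 0.9, 2.1]))
```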
The calibration of multivariate ensemble forecasts can analogously be assessed by ranking the transformed observation \(\rho(\mathbf{y})\) within the transformed ensemble members \(\rho(\mathbf{x}_{1}),\ldots,\rho(\mathbf{x}_{M})\), and displaying the resulting ranks within a histogram. This approach can also readily be adapted to assess the calibration of forecasts on an arbitrary outcome space \(\Omega\) by using a pre-rank function \(\rho:\Omega\rightarrow\mathbb{R}\). While most pre-rank functions introduced in the literature are not simple, we argue that simple pre-rank functions are often more intuitive in practice, since they can easily be designed to focus evaluation on specific aspects of the multivariate forecasts. ## 3 Pre-rank functions ### Existing pre-rank functions To construct a canonical extension of the univariate rank histogram, Gneiting et al. (2008) introduced a multivariate rank as a pre-rank function: \[\rho_{mv}(\mathbf{x}_{0},\mathbf{x}_{1},\ldots,\mathbf{x}_{M})=\sum_{m=0}^{M} \mathbb{1}\{\mathbf{x}_{m}\preceq\mathbf{x}_{0}\},\] where \(\mathbf{x}_{m}\preceq\mathbf{x}_{0}\) signifies that \(x_{m,j}\leq x_{0,j}\) for all \(j=1,\ldots,d\) with \(\mathbf{x}_{m}=(x_{m,1},\ldots,x_{m,d})\in\mathbb{R}^{d}\) for \(m=0,\ldots,M\). For example, a pre-rank of \(M+1\) for \(\mathbf{y}=\mathbf{x}_{0}\) corresponds to a vector that exceeds the elements of the ensemble \(\mathbf{x}_{1},\ldots,\mathbf{x}_{M}\) in all dimensions, whereas a low pre-rank suggests that \(\mathbf{y}=\mathbf{x}_{0}\) is not larger than the ensemble members in all dimensions. In high dimensions, it is often the case that few elements of \(\mathbf{x}_{0},\ldots,\mathbf{x}_{M}\) are comparable with respect to \(\preceq\), resulting in most elements receiving the same pre-rank. Randomisation of these pre-ranks then trivially yields a flat rank histogram. Recognising this, Thorarinsdottir et al. (2016) introduced two alternative pre-rank functions that are more robust to the dimensionality. The average rank pre-rank function is defined as \[\rho_{av}(\mathbf{x}_{0},\mathbf{x}_{1},\ldots,\mathbf{x}_{M})=\frac{1}{d} \sum_{j=1}^{d}\operatorname{rank}(x_{0,j};x_{1,j},\ldots,x_{M,j}).\] The interpretation of the average rank is similar to the multivariate rank, but it typically leads to fewer ties between the pre-ranks of \(\mathbf{x}_{0},\mathbf{x}_{1},\ldots,\mathbf{x}_{M}\) and therefore requires less randomisation, making it more applicable in higher dimensions. Thorarinsdottir et al. (2016) also proposed the band-depth pre-rank function: \[\rho_{bd}(\mathbf{x}_{0},\mathbf{x}_{1},\ldots,\mathbf{x}_{M})=\frac{1}{d} \sum_{j=1}^{d}\left[M+1-\operatorname{rank}(x_{0,j};x_{1,j},\ldots,x_{M,j}) \right]\left[\operatorname{rank}(x_{0,j};x_{1,j},\ldots,x_{M,j})-1\right].\] This representation assumes that there are no ties between \(x_{0,j},x_{1,j},\ldots,x_{M,j}\) for \(j=1,\ldots,d\), though a more general formula exists for when this assumption does not hold (Thorarinsdottir et al., 2016). The band-depth pre-rank function measures the centrality of \(\mathbf{x}_{0}\) among \(\mathbf{x}_{0},\mathbf{x}_{1},\ldots,\mathbf{x}_{M}\), with a large pre-rank suggesting that \(\mathbf{x}_{0}\) is close to the centre, and a small pre-rank indicating that \(\mathbf{x}_{0}\) is an outlying scenario. Smith and Hansen (2004) and Wilks (2004) alternatively proposed using the inverse length of the minimum spanning tree of the set \(\mathbf{x}_{1},\ldots,\mathbf{x}_{M}\) as a pre-rank function for \(\mathbf{x}_{0}\).
This pre-rank function also measures the centrality of \(\mathbf{x}_{0}\), resulting in multivariate rank histograms with a similar interpretation to those generated using the band-depth pre-rank function. For concision, this minimum spanning tree approach is omitted from the applications in the following sections. Finally, Knuppel et al. (2022) recently introduced pre-rank functions based on proper scoring rules. Proper scoring rules are functions that take a probabilistic forecast and an observation as inputs, and output a single numerical value that quantifies the forecast accuracy, thereby allowing competing forecast systems to be ranked and compared (e.g. Gneiting and Raftery, 2007). Knuppel et al. (2022) argue that, by design, proper scoring rules condense the information contained in the forecasts and observations into a single value, and they therefore provide an appealing choice of pre-rank function when assessing multivariate forecast calibration. For example, a pre-rank function can be derived using the energy score, arguably the most popular scoring rule when evaluating probabilistic forecasts for multivariate outcomes (Gneiting and Raftery, 2007): \[\rho_{es}(\mathbf{x}_{0},\mathbf{x}_{1},\ldots,\mathbf{x}_{M})=\frac{1}{M} \sum_{m=1}^{M}\|\mathbf{x}_{m}-\mathbf{x}_{0}\|-\frac{1}{2M^{2}}\sum_{m=1}^{M }\sum_{k=1}^{M}\|\mathbf{x}_{m}-\mathbf{x}_{k}\|, \tag{1}\] where \(\|\cdot\|\) denotes the Euclidean distance in \(\mathbb{R}^{d}\). This pre-rank function measures the distance between \(\mathbf{x}_{0}\) and the ensemble members \(\mathbf{x}_{1},\ldots,\mathbf{x}_{M}\). A low pre-rank therefore indicates that \(\mathbf{x}_{0}\) is similar to the ensemble members, whereas outlying values will receive higher pre-ranks. As Knuppel et al. (2022) remark, the latter term in this pre-rank function does not depend on \(\mathbf{x}_{0}\), and could therefore be removed without changing the resulting ranks. Alternative multivariate scoring rules could also readily be used in place of the energy score. ### Generic pre-rank functions The pre-rank functions listed above all depend non-trivially on both \(\mathbf{x}_{0}\) and \(\mathbf{x}_{1},\ldots,\mathbf{x}_{M}\). That is, the function that condenses the multivariate observations and ensemble members into univariate objects depends itself on these observations and forecasts (hence the term "pre-rank"). This is not problematic if the pre-rank functions are invariant to permutations of \(\mathbf{x}_{0},\mathbf{x}_{1},\ldots,\mathbf{x}_{M}\). In this section, we argue that it is often more intuitive to employ simple pre-rank functions that depend only on \(\mathbf{x}_{0}\). We demonstrate how this approach allows us to target particular aspects of the multivariate forecasts when assessing calibration. As is standard when evaluating multivariate forecasts, many of the pre-rank functions discussed herein require that the different dimensions are on the same scale. If this cannot be assumed, then the forecasts and observations should be standardised prior to evaluation. This standardisation becomes part of the pre-rank function, so it should either depend entirely on past data or be permutation invariant. For example, one can use the past climatological mean and standard deviation along each dimension, or the mean and standard deviation of the observation and ensemble members. 
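For reference, rough numpy sketches of the pre-rank functions reviewed in Section 3.1 are given below (an illustration only, not the authors' R implementation); by assumption, `x0` is the \(d\)-vector being pre-ranked and `X` is an \((M,d)\) array containing the remaining vectors, and ties within dimensions are ignored for simplicity.

```python
import numpy as np

def prerank_mv(x0, X):
    """Multivariate rank: count of vectors in {x0} U X that are <= x0 in every dimension."""
    Z = np.vstack([x0, X])
    return int(np.sum(np.all(Z <= x0, axis=1)))

def prerank_avg(x0, X):
    """Average rank: mean over dimensions of the (non-randomised) rank of x0."""
    ranks = 1 + np.sum(X < x0, axis=0)
    return float(np.mean(ranks))

def prerank_bd(x0, X):
    """Band depth of x0 among {x0} U X, assuming no ties within any dimension."""
    M = X.shape[0]
    ranks = 1 + np.sum(X < x0, axis=0)
    return float(np.mean((M + 1 - ranks) * (ranks - 1)))

def prerank_es(x0, X):
    """Energy score (Equation 1) of the ensemble X when x0 plays the role of the outcome."""
    M = X.shape[0]
    term1 = np.mean(np.linalg.norm(X - x0, axis=1))
    term2 = sum(np.linalg.norm(X - xm, axis=1).sum() for xm in X) / (2 * M ** 2)
    return float(term1 - term2)
```

Following the definitions above, the pre-rank of an ensemble member would be obtained by passing that member as `x0`, with the observation and the remaining members stacked in `X`.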
The interpretation of the multivariate rank histograms will generally change depending on whether one has standardised or not, since dimensions on a larger scale will typically have more influence on the results; we are then checking calibration with respect to a different pre-rank function. If the goal is to visualise the (mis-)calibration of the multivariate forecasts, then the most important aspect of the pre-rank function is that it leads to multivariate rank histograms that are interpretable. Forecasters should therefore employ pre-rank functions that are capable of extracting the most relevant information from their forecasts. This does not rule out the pre-rank functions introduced above, but it highlights that practitioners need not limit themselves to these approaches when assessing multivariate calibration in practice. On the other hand, if the goal is to conduct a formal statistical test for multivariate calibration, in the sense that the ensemble members and the observation are exchangeable, interpretability of pre-rank functions may not be the most important aspect. We do not consider tests for a global null hypothesis of multivariate calibration in this paper, but we comment on how such tests can be constructed in principle in Section 5 based on several pre-rank functions and e-values. When evaluating multivariate forecasts, it is common to focus on the location, the scale, and the dependence structure of the forecasts. Hence, rather than choosing one pre-rank function that simultaneously tries to assimilate these different elements, we propose separate pre-rank functions that assess each aspect individually; it has repeatedly been acknowledged that several pre-rank functions should be used to obtain a more complete understanding of multivariate forecast performance, so it makes sense that these pre-rank functions focus on distinct features of the multivariate forecasts. For example, a simple pre-rank function to assess the forecast's location might take the mean of the \(d\) elements in the multivariate vector: \[\rho_{loc}(\mathbf{x}_{0})=\bar{\mathbf{x}}_{0}=\frac{1}{d}\sum_{j=1}^{d}x_{0,j}.\] This pre-rank function can readily be applied to the observed outcome and each ensemble member, and the rank of the transformed observation can then be calculated. This pre-rank is easier to calculate than the existing approaches listed above. The interpretation of the resulting rank histogram is simple: if \(\rho_{loc}(\mathbf{y})\) frequently attains a high (low) rank among the pre-ranks of the corresponding ensemble members, then the multivariate histogram will appear triangular, suggesting that the multivariate forecasts are negatively (positively) biased when predicting the mean, or location, of the observed vector. As in the univariate case, a \(\cup\)-shaped (\(\cap\)-shaped) histogram suggests that the ensemble forecasts are under-dispersed (over-dispersed) when predicting the observed mean. Similarly, if we want to assess how well our probabilistic forecasts can predict the scale, or dispersion of the multivariate observations, we could take the variance or standard deviation of the \(d\) elements as a simple pre-rank function: \[\rho_{sc}(\mathbf{x}_{0})=s_{\mathbf{x}_{0}}^{2}=\frac{1}{d}\sum_{j=1}^{d} \left(x_{0,j}-\bar{\mathbf{x}}_{0}\right)^{2}.\] This is valid even if the number of dimensions is small: we are not using this to estimate some unknown population variance, but rather as a simple measure of dispersion in the multivariate vector \(\mathbf{x}_{0}\). 
The interpretation of the resulting multivariate rank histogram is analogous to above, but with the spread of the forecasts over the multivariate domain as the target variable, rather than the mean. Assessing the dependence structure is less trivial, since most measures of dependence, such as correlation coefficients, are estimated from a time series of multivariate observations. The pre-rank function, on the other hand, should take only a single realisation \(\mathbf{x}_{0}\in\mathbb{R}^{d}\) as an input. However, tools do exist to quantify the dependence in multivariate vectors. For example, the variogram at a chosen lag \(h\in\{1,\ldots,d-1\}\) measures the variation between dimensions separated by lag \(h\), with a small value suggesting strong dependence between these dimensions. A variogram-based pre-rank function can therefore be derived to measure the dependence between different dimensions, such as \[\rho_{dep}(\mathbf{x}_{0};h)=-\frac{\gamma_{\mathbf{x}_{0}}(h)}{s_{\mathbf{x }_{0}}^{2}}, \tag{2}\] where \[\gamma_{\mathbf{x}_{0}}(h)=\frac{1}{2(d-h)}\sum_{j=1}^{d-h}|x_{0,j}-x_{0,j+h} |^{2}\] is an empirical variogram at lag \(h\) that only requires knowledge of the multivariate vector \(\mathbf{x}_{0}\). The negative sign ensures that a larger value of \(\rho_{dep}(\mathbf{x}_{0};h)\) indicates a larger dependence, in keeping with the interpretation of \(\rho_{loc}\) and \(\rho_{sc}\) above. Hence, if \(\rho_{dep}(\mathbf{y};h)\) is consistently large compared to \(\rho_{dep}(\mathbf{x}_{1};h),\ldots,\rho_{dep}(\mathbf{x}_{M};h)\), then the ensemble members under-estimate the dependencies between dimensions separated by lag \(h\), whereas the opposite suggests that the ensemble forecasts over-estimate the dependence at lag \(h\). By measuring dependence using the empirical variogram, we implicitly assume that the lag between dimensions is meaningful, such as in a time series setting. This can be generalised further using adapted notions of spatial distances and bins (e.g. Gaetan and Guyon, 2010). This pre-rank function depends on the choice of a lag \(h\) at which to calculate the empirical variogram. Scheuerer and Hamill (2015) construct a variogram-based proper scoring rule based on the quantity \[\sum_{i=1}^{d}\sum_{j=1}^{d}w_{i,j}|x_{0,i}-x_{0,j}|^{2},\] for nonnegative weights \(w_{i,j}\), arguing that it provides an effective means to evaluate a multivariate forecast's dependence structure. This quantity could also be used as a dependence pre-rank function, circumventing the choice of a specific lag \(h\). However, it would require selecting a matrix of nonnegative weights. If these weights are all the same, then this weighted combination will be proportional to the variance used to define \(\rho_{sc}\) above. This reflects the intrinsic difficulty in distinguishing between errors in the forecast spread and in the forecast dependence structure from only a single multivariate observation: if variables are highly dependent on each other, then the spread of the vector should be small. Dividing the empirical variogram by \(s_{\mathbf{x}_{0}}^{2}\) means this dependence pre-rank function focuses more on the dependence structure and is less sensitive to errors in the scale. ### Pre-rank functions for spatial fields The three pre-rank functions in the previous section are simple examples that condense the multivariate forecasts and observations into univariate summary statistics. 
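A minimal sketch of these three simple pre-rank functions, assuming Python with numpy purely for illustration, is given below; since each acts on a single vector, it applies unchanged to the observation and to every ensemble member.

```python
import numpy as np

def prerank_loc(x):
    """rho_loc: mean over the d dimensions."""
    return float(np.mean(x))

def prerank_sc(x):
    """rho_sc: variance over the d dimensions (denominator d)."""
    x = np.asarray(x, dtype=float)
    return float(np.mean((x - x.mean()) ** 2))

def prerank_dep(x, h=1):
    """rho_dep (Equation 2): minus the lag-h empirical variogram, divided by the variance."""
    x = np.asarray(x, dtype=float)
    gamma_h = 0.5 * np.mean((x[:-h] - x[h:]) ** 2)
    return float(-gamma_h / prerank_sc(x))
```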
Alternative pre-rank functions can also be employed in practice, and we illustrate this by considering spatial field forecasts. Here, a spatial field is an element \(\mathbf{x}\) in a domain \(\mathbb{R}^{p\times q}\), where each point on the \(p\times q\) grid represents a separate dimension; for ease of notation, we write \(\mathbf{x}\) in place of \(\mathbf{x}_{0}\) in this section. The location and scale pre-rank functions can readily be applied to spatial fields by 'unravelling' \(\mathbf{x}\) to obtain a vector of length \(d=p\times q\). To employ the dependence pre-rank function, however, we must use an empirical variogram that is a function of spatial lags (see below). We discuss additional pre-rank functions that are tailored to the evaluation of spatial field forecasts. One such example was introduced recently by Scheuerer and Hamill (2018), who used the fraction of threshold exceedances (FTE) as a pre-rank function when assessing probabilistic precipitation fields. The FTE measures the proportion of values in the spatial field that exceed some threshold of interest, and the corresponding pre-rank function is \[\rho_{FTE}(\mathbf{x};t)=\frac{1}{pq}\sum_{i=1}^{p}\sum_{j=1}^{q}\mathbb{1}\, \{x_{i,j}>t\}, \tag{3}\] for some \(t\in\mathbb{R}\). The FTE is well-known in the context of spatial forecast evaluation, since it forms the basis of the fraction skill score (Roberts and Lean, 2008). By employing it as a pre-rank function, we can assess to what extent the forecasts reliably predict in how many dimensions the threshold \(t\) will be exceeded, giving us an idea of forecast calibration when predicting more extreme outcomes. If all components of all ensemble members and the outcome are below \(t\), which will often be the case for very large thresholds, then the corresponding pre-ranks will all be zero and the resulting rank of the observation will be determined at random. As discussed by Scheuerer and Hamill (2018), these pre-ranks contain no information, meaning these instances can be omitted from the rank histogram without changing its interpretation. The FTE pre-rank function can also be applied to multivariate vectors \(\mathbf{x}\in\mathbb{R}^{d}\), by replacing the double summation in Equation 3 with a summation over \(j\) from \(1\) to \(d\). Alternative, well-established geostatistical principles could similarly be leveraged within this framework to assess other characteristics of multivariate forecast distributions. For example, while the empirical variogram captures the variation between elements of the multivariate vector that are separated by a given lag, another aspect to consider is whether this variation changes depending on the direction between the elements. That is, whether or not the variogram is isotropic. A variogram is said to be isotropic if it depends only on the distance between elements of the multivariate vector, and not on the direction between them. By introducing a pre-rank function that measures the isotropy of an empirical variogram, we can assess to what extent the multivariate ensemble forecasts reproduce the degree of (an)isotropy present in the observed outcomes. Several statistical tests have been proposed to assess whether or not the assumption of isotropy is valid when modelling spatial fields (see Weller and Hoeting, 2016, for a review). Here, we propose a pre-rank function inspired by the test statistic discussed in Guan et al. (2004).
Let \(\mathcal{I}=\{1,\ldots,p\}\times\{1,\ldots,q\}\) represent a set of grid points, and define the empirical variogram of a field \(\mathbf{x}\in\mathbb{R}^{p\times q}\) at multivariate lag \(\mathbf{h}\in\{0,\ldots,p-1\}\times\{0,\ldots,q-1\}\) as \[\gamma_{\mathbf{x}}(\mathbf{h})=\frac{1}{2|\mathcal{I}(\mathbf{h})|}\sum_{ \mathbf{j}\in\mathcal{I}(\mathbf{h})}|x_{\mathbf{j}}-x_{\mathbf{j+h}}|^{2}, \tag{4}\] where \(\mathcal{I}(\mathbf{h})=\{\mathbf{j}\in\mathcal{I}:\mathbf{j}+\mathbf{h}\in \mathcal{I}\}\), meaning the sum is over all dimensions (i.e. grid points) that are separated by the multivariate vector \(\mathbf{h}\). Equation 2 can readily be adapted to use this spatial variogram. The isotropy pre-rank function considered here is then \[\rho_{iso}(\mathbf{x};h)=-\left\{\left[\frac{\gamma_{\mathbf{x}}((h,0))- \gamma_{\mathbf{x}}((0,h))}{\gamma_{\mathbf{x}}((h,0))+\gamma_{\mathbf{x}}((0, h))}\right]^{2}+\left[\frac{\gamma_{\mathbf{x}}((h,h))-\gamma_{\mathbf{x}}((-h,h))}{ \gamma_{\mathbf{x}}((h,h))+\gamma_{\mathbf{x}}((-h,h))}\right]^{2}\right\}. \tag{5}\] This pre-rank function quantifies the squared distance between the variogram in the horizontal direction \(\mathbf{h}=(h,0)\) and the vertical direction \(\mathbf{h}=(0,h)\), plus the squared distance between the variogram in the two diagonal directions \(\mathbf{h}=(h,h)\) and \(\mathbf{h}=(-h,h)\). This thus considers two separate orthogonal directions, and determines how much the variogram changes between these directions. Again, the minus sign just ensures that a higher pre-rank indicates a higher measure of isotropy. A pre-rank close to zero therefore suggests that the variogram does not change between the pairs of directions, whereas a lower value of this pre-rank function suggests that the variogram depends on the direction between elements of the field, indicative of anisotropy. The denominators in Equation 5 help to standardise the variogram differences, recognising that the variogram will typically be larger as the lag increases. In statistical tests, this standardisation is often performed using estimates of the covariance matrix of the variogram at different lags, typically obtained using expensive resampling methods. This standardisation ensures that the squared difference between variograms at different lags asymptotically follows a multivariate normal distribution, facilitating the introduction of a chi-squared test statistic with which to test for isotropy (Guan et al., 2004). However, this is not required when defining a pre-rank function, and hence we choose an alternative standardisation that is interpretable and straightforward to implement in practice. It is most common to use unit lags (i.e. \(h=1\)) within tests for isotropy (Weller and Hoeting, 2016), though alternative pairs of lags could also be employed in Equation 5. It is straightforward to extend this pre-rank function so that it considers multiple lags simultaneously, for example by summing \(\rho_{iso}(\mathbf{x};h)\) over different values of \(h\). As with the variogram-based pre-rank function in Equation 2, this avoids the selection of a single lag, but typically introduces additional hyperparameters. Several alternative summary measures also exist to describe the isotropy, such as the anisotropy ratio (Chiles and Delfiner, 2012, Section 2.5.2). 
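A hedged numpy sketch of the two spatial-field pre-rank functions above (Equations 3 and 5) is given below, with a field represented as a \((p,q)\) array and the directional variogram following Equation 4; the function names and the unit default lag are our own choices.

```python
import numpy as np

def prerank_fte(x, t):
    """rho_FTE (Equation 3): fraction of grid points whose value exceeds the threshold t."""
    return float(np.mean(np.asarray(x) > t))

def variogram2d(x, a, b):
    """Empirical variogram (Equation 4) of the field x at grid lag (a, b)."""
    x = np.asarray(x, dtype=float)
    p, q = x.shape
    i0, i1 = max(0, -a), min(p, p - a)   # rows i for which both i and i + a lie in the grid
    j0, j1 = max(0, -b), min(q, q - b)   # columns j for which both j and j + b lie in the grid
    diff = x[i0:i1, j0:j1] - x[i0 + a:i1 + a, j0 + b:j1 + b]
    return 0.5 * float(np.mean(diff ** 2))

def prerank_iso(x, h=1):
    """rho_iso (Equation 5): minus the squared variogram contrasts between orthogonal directions."""
    g_h0, g_0h = variogram2d(x, h, 0), variogram2d(x, 0, h)
    g_hh, g_mh = variogram2d(x, h, h), variogram2d(x, -h, h)
    return -(((g_h0 - g_0h) / (g_h0 + g_0h)) ** 2 + ((g_hh - g_mh) / (g_hh + g_mh)) ** 2)
```

Summary measures other than Equation 5, such as the anisotropy ratio mentioned above, could be substituted as the pre-rank in the same way.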
These measures are typically more complicated to estimate than Equation 5, often requiring some assumptions about the underlying data-generating process, though they may also provide informative pre-rank functions.

## 4 Simulation study

### Multivariate Gaussian

We revisit the simulation study of Thorarinsdottir and Schuhen (2018) to illustrate how the multivariate rank histograms behave when there are errors in the location, scale, and correlation structure of the forecast distributions. Suppose the observations are drawn from a multivariate normal distribution with mean vector \(\mu=\mathbf{0}\), and covariance matrix \(\Sigma\) for which \[\Sigma_{i,j}=\sigma^{2}\,\exp\left(-\frac{|i-j|}{\tau}\right),\quad i,j=1,\ldots,d.\] The parameter \(\sigma^{2}>0\) controls the variance of the observations along each dimension, while \(\tau>0\) determines how quickly the correlation decays as the distance between the dimensions increases. In this sense, there is assumed to be an ordering of the variables, as is typically the case in a time series or spatial setting. We set \(d=10\), \(\sigma^{2}=1\), and \(\tau=1\). Analogous conclusions are also drawn from other configurations.

For each observation, \(M=20\) ensemble members are drawn at random from a mis-specified multivariate normal distribution. We consider six possible mis-specifications, corresponding to under- and over-estimation of the mean vector \(\mu\), scale parameter \(\sigma^{2}\), and correlation parameter \(\tau\). The parameter configurations corresponding to these six scenarios are listed in Table 1. Corresponding multivariate rank histograms are displayed in Figure 1.

The multivariate and average pre-rank functions are insensitive to changes in the scale, whereas the band-depth pre-rank function, in measuring the centrality of the observation among the ensemble members, distinguishes well between under- and over-dispersed forecast distributions. On the other hand, when errors are present in the mean of the forecast distributions, the band-depth pre-rank function is unable to differentiate between under-prediction and over-prediction, in contrast to the multivariate and average ranks. As Knüppel et al. (2022) remark, the energy score pre-rank function also quantifies the centrality of the observation among the ensemble members. However, in almost all cases, the energy score pre-rank function results in a multivariate rank histogram of the same shape, with the observation receiving a higher pre-rank than the ensemble members. In this case, we are comparing the energy score obtained by the ensemble forecast when the observation \(\mathbf{y}\) occurs, to the score obtained when each of the ensemble members \(\mathbf{x}_{m}\) (\(m=1,\ldots,M\)) occurs as the observation; it is not too surprising that the score is generally largest when the observation is not a member of the ensemble, resulting in these negatively skewed histograms. Hence, although this pre-rank function leads to powerful tests for multivariate calibration, the corresponding rank histograms are unable to distinguish between different types of forecast errors. This could perhaps be circumvented by treating the observation as an additional ensemble member when calculating Equation 1, in which case this pre-rank function should behave similarly to the band-depth pre-rank function.
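As a concrete illustration of this set-up, the following sketch draws observations and mis-specified ensembles from the multivariate normal distributions described above and builds a rank histogram, using the mean of the components as a simple location-type pre-rank; the paper's location pre-rank function is defined earlier in the text, and the parameter values below are only examples.

```python
import numpy as np

def exp_cov(d, sigma2, tau):
    """Covariance matrix with entries sigma^2 * exp(-|i - j| / tau)."""
    idx = np.arange(d)
    return sigma2 * np.exp(-np.abs(idx[:, None] - idx[None, :]) / tau)

def rank_histogram(n_cases=10_000, d=10, M=20, mu_err=0.5, seed=1):
    rng = np.random.default_rng(seed)
    cov = exp_cov(d, sigma2=1.0, tau=1.0)
    counts = np.zeros(M + 1, dtype=int)
    for _ in range(n_cases):
        obs = rng.multivariate_normal(np.zeros(d), cov)
        ens = rng.multivariate_normal(np.full(d, mu_err), cov, size=M)
        pre = np.append(ens.mean(axis=1), obs.mean())   # pre-ranks (component means)
        jitter = rng.random(M + 1)                      # random tie-breaking
        below = (pre < pre[-1]) | ((pre == pre[-1]) & (jitter < jitter[-1]))
        counts[below.sum()] += 1                        # rank - 1 = number below the obs
    return counts  # frequencies of ranks 1, ..., M+1

# With mu_err = 0.5 the ensembles over-estimate the mean, so the observation's
# pre-rank tends to be small and the histogram piles up at low ranks.
```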
Since none of the existing pre-rank functions can distinguish between all three types of forecast error, Wilks (2017) and Thorarinsdottir and Schuhen (2018) recommend that several pre-rank functions are implemented. This motivates the use of specific pre-rank functions that are designed to focus on each aspect of forecast performance individually. As desired, the location pre-rank function clearly discriminates between errors in the mean of the forecast distribution, the scale pre-rank function between errors in the scale, and the dependence pre-rank function between errors in the correlation of the multivariate forecasts. The dependence pre-rank function is implemented here with lag \(h=1\). The scale and dependence pre-rank functions are insensitive to biases in the mean, though the location pre-rank function also slightly detects errors in the scale and dependence structure. As discussed, while the dependence pre-rank function is insensitive to forecast dispersion errors, it is difficult to construct a pre-rank function that can identify errors in the scale of the forecast distribution without it also being sensitive to errors in the correlation structure.

\begin{table} \begin{tabular}{l|c c c} Description & \(\mu\) & \(\sigma^{2}\) & \(\tau\) \\ \hline Under-estimation of mean & (-0.5, -0.5, -0.5) & 1 & 1 \\ Over-estimation of mean & (0.5, 0.5, 0.5) & 1 & 1 \\ Under-estimation of variance & **0** & 0.85 & 1 \\ Over-estimation of variance & **0** & 1.25 & 1 \\ Under-estimation of correlation & **0** & 1 & 0.5 \\ Over-estimation of correlation & **0** & 1 & 2 \\ \end{tabular} \end{table} Table 1: Mis-specifications considered in the simulated ensemble forecasts.

Figure 1: Results from the Multivariate Gaussian simulation study. Multivariate rank histograms constructed using the seven pre-rank functions discussed in the text when the mean is (a) under-estimated: \(\mu=(-0.5,-0.5,-0.5)\), (b) over-estimated: \(\mu=(0.5,0.5,0.5)\); when the variance is (c) under-estimated: \(\sigma^{2}=0.85\), (d) over-estimated: \(\sigma^{2}=1.25\); when the correlations are (e) under-estimated: \(\tau=0.5\), (f) over-estimated: \(\tau=2\). The histograms were constructed from 10,000 observations and ensemble forecasts.

### Gaussian random fields

Consider now the case where the observations are spatial fields. In particular, suppose they are realisations of a Gaussian random field on a regular \(30\times 30\) grid. We assume the grid is standardised such that the distance between adjacent grid points is one unit. This extends the previous example to a higher-dimensional setting in which there is additionally spatial structure present in the data. As before, the observations are drawn from a zero-mean random field with an exponential covariance function, such that the covariance between two locations \(\mathbf{i}\) and \(\mathbf{j}\) on the grid is \[\sigma^{2}\,\exp\left(-\frac{||\mathbf{i}-\mathbf{j}||}{\tau}\right),\quad\mathbf{i},\mathbf{j}\in\{1,\ldots,30\}\times\{1,\ldots,30\}, \tag{6}\] which is governed by a scale parameter \(\sigma^{2}=1\) and correlation parameter \(\tau=1\). Ensembles of size \(M=20\) are similarly drawn from a mis-specified Gaussian random field, and the calibration of these multivariate ensemble forecasts is again assessed under several possible mis-specifications. In this case, results are presented for errors in the scale, correlation, and isotropy of the forecast distributions.
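A sketch of this spatial set-up is given below (our own illustration). It samples zero-mean Gaussian random fields on the 30 × 30 grid with the exponential covariance in Equation 6, by factorising the 900 × 900 covariance matrix once and reusing the Cholesky factor; the small jitter on the diagonal is a standard numerical safeguard.

```python
import numpy as np

def grf_sampler(n_grid=30, sigma2=1.0, tau=1.0, seed=1):
    """Returns a function that samples fields of shape (n_grid, n_grid)."""
    xx, yy = np.meshgrid(np.arange(n_grid), np.arange(n_grid), indexing="ij")
    coords = np.column_stack([xx.ravel(), yy.ravel()])
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    cov = sigma2 * np.exp(-dist / tau)                           # Equation 6
    chol = np.linalg.cholesky(cov + 1e-8 * np.eye(cov.shape[0]))
    rng = np.random.default_rng(seed)

    def sample(size=1):
        z = rng.standard_normal((size, cov.shape[0]))
        return (z @ chol.T).reshape(size, n_grid, n_grid)

    return sample

# Draw one "observed" field and an ensemble of M = 20 forecast fields; a
# mis-specified forecast distribution is obtained by changing sigma2 or tau.
sample = grf_sampler()
obs, ensemble = sample(1)[0], sample(20)
```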
The multivariate rank histograms corresponding to biases in the forecast mean are similar to those in Figure 1 (not shown). The errors in the scale and correlation structure are analogous to those listed in Table 1. The exponential covariance function used to generate the observations depends only on the distance between the different dimensions, and not the direction between them; this observation-generating process is therefore isotropic. To generate forecasts that misrepresent the isotropy in this process, geometric anisotropy is introduced by rescaling the grid in the vertical direction by a factor of 1.25. A second scenario is also considered whereby the observation fields are generated using this approach, with the forecasts obtained using the (isotropic) covariance function in Equation 6.

For concision, the multivariate rank and energy score pre-rank functions have been omitted from this higher-dimensional simulation study: as discussed, in high dimensions, the multivariate rank trivially leads to a flat rank histogram, since most pre-ranks are determined at random; the energy score pre-rank function behaves as in the previous simulation study, and fails to distinguish between the different types of forecast errors. Results are also presented for the FTE and isotropy pre-rank functions introduced in Section 3.3. The FTE pre-rank function is employed with a threshold of \(t=1\), roughly equal to the 85th percentile of a standard normal distribution. These multivariate rank histograms are displayed in Figure 2.

The average pre-rank function fails to identify errors in the scale and isotropy, whereas the band-depth rank differentiates well between under- and over-estimation of the scale and dependence in the forecast distributions. Unsurprisingly, the location pre-rank function is largely insensitive to any of the forecast errors, since it is designed to focus on the mean of the predictive distributions. The scale pre-rank function is highly sensitive to errors in the variance and dependence structure, while the dependence pre-rank function (at unit multivariate lag) is able to detect errors in the correlation structure. The FTE pre-rank function is insensitive to the scale of the forecast distributions, though it does distinguish between errors in the correlation structure. Finally, none of these pre-rank functions can identify errors in the isotropy of the forecast distributions. The measure of isotropy given in Equation 5, in contrast, appears to yield a pre-rank function that is very sensitive to these errors, and is therefore capable of informing the forecaster when the forecast fields misrepresent the isotropy of the true variogram. This is true both when the observed fields are generated from an isotropic process but the forecast fields are not, and also vice versa. This pre-rank function is insensitive to all other types of forecast errors, meaning it focuses uniquely on the isotropy.

Figure 2: Results from the Gaussian random field simulation study. Multivariate rank histograms constructed using the seven pre-rank functions discussed in the text when the variance is (a) under-estimated: \(\sigma^{2}=0.85\), (b) over-estimated: \(\sigma^{2}=1.25\); when the correlations are (c) under-estimated: \(\tau=0.5\), (d) over-estimated: \(\tau=2\); when the degree of isotropy is (e) under-estimated, (f) over-estimated. The histograms were constructed from 10,000 observations and ensemble forecasts.

## 5 Monitoring calibration with e-values

The simulation studies in the previous section, designed to demonstrate the behaviour of the different pre-rank functions, consist of forecasts that are independent and identically distributed. The performance of weather forecasts, however, generally varies over time: for example, in different seasons, in different weather regimes, or due to updates in numerical weather prediction models. Since these conditional biases may cancel each other out such that they are not visible in the rank histograms, it is additionally useful to monitor how forecast calibration evolves over time.
While several metrics have been proposed to quantify the flatness of a rank histogram (see Wilks, 2019), we advocate an approach based on e-values, which simultaneously provide a sequentially valid test for forecast calibration (Arnold et al., 2021). E-values provide a dynamic alternative to p-values when conducting statistical hypothesis tests: while classical tests require that the evaluation period is fixed in advance, e-values generate statistical tests that are valid sequentially, i.e. under optional stopping. An e-value for a given null hypothesis is a nonnegative random variable \(E\) such that \(\mathbb{E}[E]\leq 1\) when the null hypothesis is true. Realisations of \(E\) that are less than one therefore support the null hypothesis, whereas larger e-values provide evidence against the null hypothesis.

When testing for the flatness of a rank histogram, a suitable null hypothesis is that the rank of the observation is uniformly distributed on the set of possible ranks \(\{1,\ldots,M+1\}\). Arnold et al. (2021) propose monitoring calibration using the e-values \[E_{t}=(M+1)p_{A}(R_{t}),\] where \(R_{t}\) is a random variable denoting the rank of the observation among the ensemble members at time \(t\geq 1\), and \(p_{A}(r)\) is the probability of rank \(r\in\{1,\ldots,M+1\}\) occurring under the alternative hypothesis. The alternative distribution \(p_{A}\) can be estimated from the ranks observed prior to the current time. This can be done empirically, but if the number of ensemble members is even moderately large, Arnold et al. (2021) suggest modelling \(p_{A}\) using a beta-binomial distribution, whose parameters are estimated sequentially using maximum likelihood estimation based on the previously observed ranks. Essentially, if a rank \(r\) has already been observed more often than the expected frequency under the null hypothesis, then \(p_{A}(r)\) should be larger than \(1/(M+1)\), resulting in a value \(E_{t}>1\). This recognises that the rank histogram is becoming less similar to a flat histogram, giving evidence against the null hypothesis. Conversely, if \(r\) has previously been observed less often than expected, then \(E_{t}\) will be smaller than one, recognising that by observing \(r\) the rank histogram becomes closer to a flat histogram.

In a sequential setting, if \(E_{t}\) is an e-value conditional on the information available at time \(t-1\), for each \(t\geq 1\), then the product \(e_{t}=\prod_{i=1}^{t}E_{i}\) represents the cumulative evidence for or against the null hypothesis up until time \(t\), and is itself an e-value. Theoretical arguments then justify rejecting the null hypothesis at time \(t\) and significance level \(\alpha\) if \(e_{t}\geq 1/\alpha\), and this can be monitored over time without having to fix the sample size in advance. This is equivalent to rejecting the null hypothesis if \(1/e_{t}\leq\alpha\); that is, the reciprocal of an e-value constitutes a conservative p-value.
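The following sketch (our own, not code from Arnold et al., 2021) implements this sequential construction with a simple empirical estimate of \(p_{A}\) that uses add-one smoothing; the beta-binomial model mentioned above could be substituted for the empirical estimate.

```python
import numpy as np

def sequential_evalues(ranks, M):
    """Running product of the e-values E_t = (M+1) p_A(R_t) for lead time one.

    `ranks` contains the observation ranks in {1, ..., M+1}; p_A is estimated
    from the ranks observed strictly before time t, so each E_t is a valid e-value.
    """
    counts = np.ones(M + 1)          # add-one smoothing: p_A starts out uniform
    running, cumulative = 1.0, []
    for r in ranks:
        p_A = counts / counts.sum()
        running *= (M + 1) * p_A[r - 1]
        cumulative.append(running)
        counts[r - 1] += 1           # update the alternative with the new rank
    return np.array(cumulative)

# The null hypothesis of a flat rank histogram is rejected at level alpha as
# soon as the running e-value exceeds 1 / alpha.
```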
The calibration of forecasts with a lead time of one time unit (\(k=1\)) can readily be assessed using this approach. For lead times \(k>1\), the situation is more complicated and we refer to Arnold et al. (2021) for theoretical details. In summary, we can aggregate \(k\) sequences of e-values that are separated by lag \(k\) using the average product \[e_{t}=\frac{1}{k}\sum_{j=1}^{k}\prod_{i\in\mathcal{K}_{j}(t)}E_{i},\] where \(\mathcal{K}_{j}(t)=\{j+sk:s=0,1,\ldots,t-1,\ j+sk\leq t\}\) is the set of indices between \(j\) and \(t\) that are separated by lag \(k\). Arnold et al. (2021) demonstrate that if this aggregated sequence is scaled by the constant \((e\log(k))^{-1}\) (with \(e=\exp(1)\)), then \(e_{t}\) will exceed \(1/\alpha\) with probability at most \(\alpha\). Hence, when testing whether or not a forecast system is calibrated at lead time \(k>1\), the null hypothesis of calibration can be rejected at time \(t\) and significance level \(\alpha\) if \(e_{t}\geq e\log(k)/\alpha\).

By visualising \(e_{t}\) as a function of time, it becomes easy to identify periods where evidence against calibration is accumulating. Figure 6 presents examples for the forecasts considered in the following section. An increasing e-value indicates a period of mis-calibration, whereas a declining e-value suggests that either the forecasts are calibrated, or the type of mis-calibration has changed. While the e-value does not specify what type of mis-calibration is present in the forecasts, a time series of e-values can be used to identify periods worth analysing in further detail (see Arnold et al., 2021, for further details).
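A sketch of this aggregation is given below (our own illustration). The per-step e-values are split into \(k\) interleaved chains, the running products of the chains are averaged as in the displayed formula, and the resulting sequence is compared against the inflated threshold \(e\log(k)/\alpha\).

```python
import numpy as np

def aggregate_evalues(step_evalues, k):
    """step_evalues[t-1] is E_t; returns the aggregated sequence e_t for lead time k."""
    step_evalues = np.asarray(step_evalues, dtype=float)
    products = np.ones(k)                 # running product of each of the k chains
    aggregated = np.empty(len(step_evalues))
    for t, e in enumerate(step_evalues):
        products[t % k] *= e              # time t+1 belongs to chain (t mod k) + 1
        aggregated[t] = products.mean()
    return aggregated

def rejection_threshold(k, alpha=0.05):
    """Threshold above which the null hypothesis of calibration is rejected."""
    return 1.0 / alpha if k == 1 else np.e * np.log(k) / alpha
```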
E-values allow us to test whether multivariate forecasts are probabilistically calibrated with respect to a chosen pre-rank function. When several pre-rank functions are used to assess different facets of calibration, one can compute e-values for each pre-rank function, but care must be taken when hypotheses of calibration are rejected, since this introduces a multiple-testing problem. The simplest approach to account for multiple testing is to apply a Bonferroni correction; that is, instead of rejecting the null hypothesis when the e-value surpasses the threshold \(1/\alpha\) or \(e\log(k)/\alpha\), for forecasts at lag 1 and lag \(k>1\) respectively, a hypothesis is only rejected if the e-value surpasses the threshold \(\ell/\alpha\) or \(\ell e\log(k)/\alpha\), respectively, where \(\ell\) is the number of pre-rank functions considered. As with p-values, there are more powerful methods to correct for multiple testing than the Bonferroni correction, and an alternative procedure for e-values is proposed in Vovk and Wang (2021). If one aims to construct powerful tests for the global null hypothesis of exchangeability of the multivariate ensemble members and the observation, it appears sensible to use a limited number of pre-rank functions that provide complementary information, and to combine the resulting e-values into one e-value by predictable mixing; we refer readers to Waudby-Smith and Ramdas (2023) and Casgrain et al. (2023), where predictable mixing of test martingales is discussed under the name of betting strategies. The same reasoning applies in the case of general multivariate predictive distributions; here, the null hypothesis would be that the forecasts are auto-calibrated (see Appendix). We do not explore this proposal in this paper, since we want to emphasise understanding deficiencies in the calibration of multivariate ensemble forecasts, and this cannot be achieved using one global test for calibration.

## 6 Case study

### Data

Section 4 demonstrates how the different pre-rank functions behave when evaluating forecasts in an idealised framework. In this section, we apply the same pre-rank functions to evaluate the calibration of gridded ensemble forecasts for European wind speeds. We consider 10m wind speed forecasts and observations taken from the European Meteorological Network's (EUMETNET) post-processing benchmark dataset (EUPPBench; Demaeyer et al., 2023), which has recently been introduced to provide a "common platform based on which different post-processing techniques of weather forecasts can be compared." This makes the dataset a natural choice when illustrating forecast evaluation methods.

The EUPPBench dataset contains daily forecasts issued by the European Centre for Medium-Range Weather Forecasts' (ECMWF) Integrated Forecasting System (IFS) during 2017 and 2018. Twenty years of reforecasts are also available for every Monday and Thursday in this two-year period, which are generated by the same forecast model but with a smaller number of ensemble members (11 instead of 51). We restrict attention here to the 20 years of reforecasts at a lead time of five days, meaning we work with ensembles of size \(M=11\). These forecasts are compared to ERA5 reanalyses (Hersbach et al., 2020), which provide a best guess for the observed wind speed fields. The forecasts and observations are on a regular longitude-latitude grid that covers a small domain in central Europe (2.5-10.5E, 45.75-53.5N). The grid has a horizontal resolution of \(0.25^{\circ}\), roughly corresponding to 25 kilometres, and therefore comprises 33 distinct longitudes and 32 latitudes. This domain is displayed in Figure 3.

When forecasting surface weather variables such as wind speed, the ensemble forecasts issued by numerical weather models are typically subject to systematic biases. To remove these biases, it is common to statistically post-process the numerical model output. Statistical post-processing methods use a historical archive of forecasts and observations to learn and then remove systematic errors in the raw ensemble forecasts. Since the goal of this paper is to illustrate the utility afforded by multivariate rank histograms, we employ a simple, well-known univariate post-processing scheme to re-calibrate the wind speed ensemble forecasts at each grid point separately, and then compare competing approaches to convert these univariately post-processed forecasts into spatially-coherent probabilistic forecast fields. Readers are referred to Vannitsem et al. (2018) for a thorough overview of statistical post-processing methods.

To post-process the IFS ensemble forecasts at each grid point, we employ a standard ensemble model output statistics (EMOS) approach (Gneiting et al., 2005), in which the future wind speed is assumed to follow a truncated logistic distribution (Scheuerer and Möller, 2015). The predictive distributions are truncated below at zero, meaning positive probability is assigned only to positive wind speeds. The location parameter of the truncated logistic distribution is a linear function of the ensemble mean forecast at the same time and location, and the scale parameter is similarly a linear function of the ensemble standard deviation.
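As an illustration of this predictive model, the sketch below evaluates quantiles of a logistic distribution truncated below at zero, with location and scale given by linear functions of the ensemble mean and standard deviation. The coefficient values are purely illustrative placeholders of our own; in the case study they would instead be estimated from the training reforecasts at each grid point (for EMOS this is typically done by minimising the CRPS).

```python
import numpy as np
from scipy.stats import logistic

def emos_quantiles(ens, probs, a=0.1, b=1.0, c=0.2, d=0.5):
    """Quantiles of a zero-truncated logistic EMOS distribution for one ensemble.

    The coefficients a, b, c, d are illustrative placeholders, not fitted values.
    """
    loc = a + b * np.mean(ens)
    scale = c + d * np.std(ens, ddof=1)
    # quantile function of the logistic distribution truncated below at zero:
    # solve F(x) = F(0) + p * (1 - F(0)) for x
    f0 = logistic.cdf(0.0, loc=loc, scale=scale)
    return logistic.ppf(f0 + np.asarray(probs) * (1.0 - f0), loc=loc, scale=scale)

# Evenly spaced quantiles of the post-processed distribution, to be reordered
# by ECC or the Schaake Shuffle as described below.
M = 11
probs = (np.arange(1, M + 1) - 0.5) / M
raw_ensemble = np.array([3.2, 4.1, 2.8, 3.9, 4.5, 3.1, 3.6, 4.0, 2.9, 3.3, 3.8])
post_processed = emos_quantiles(raw_ensemble, probs)
```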
The parameters of this post-processing model are estimated using the first 15 years of reforecasts, and the resulting predictions are then assessed using the remaining 5 years. This results in 3135 forecast and observation fields for model training, and 1045 pairs for verification.

This univariate post-processing method re-calibrates the wind speed forecasts at each individual grid point. However, in doing so, the multivariate dependencies present in the raw ensemble forecast are lost. To obtain forecast distributions that have a realistic multivariate dependence structure, it is common to extract evenly-spaced quantiles from the univariate post-processed distributions, and then reorder these quantiles according to a relevant dependence template. The most popular approach to achieve this is ensemble copula coupling (ECC; Schefzik et al., 2013). ECC is an empirical copula approach that uses the raw ensemble forecasts issued by the numerical weather models as a template to reorder the post-processed forecast distributions. ECC is known to be a straightforward and effective approach to restore the dependencies present in the numerical ensemble forecasts, and is frequently implemented in operational post-processing suites.

We additionally compare the resulting forecast fields to those obtained using an alternative copula-based reordering scheme, namely the Schaake Shuffle (Clark et al., 2004). Like ECC, the Schaake Shuffle reorders evenly-spaced quantiles from the post-processed forecast distributions according to some dependence template. In contrast to ECC, the Schaake Shuffle uses a random selection of past multivariate observations to construct the dependence template, rather than the raw ensemble forecast. Although the Schaake Shuffle can leverage an arbitrary number of previous observations in this dependence template, only eleven observations are used here, so that the resulting ensemble forecasts have the same number of members as those generated using ECC. Further details regarding these two approaches can be found in Lakatos et al. (2022) and references therein. Several variants of both ECC and the Schaake Shuffle have also recently been proposed, but we restrict attention here to the two most widely-used implementations; Lakatos et al. (2022) find that these extensions do not provide significant benefits.
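The two reordering schemes can be sketched as follows (our own illustration, with assumed array shapes): evenly spaced quantiles from the post-processed marginal distributions are reordered at each grid point according to the ranks of a dependence template, which is the raw ensemble for ECC and a set of past observation fields for the Schaake Shuffle.

```python
import numpy as np

def reorder(quantiles, template):
    """`quantiles` and `template` both have shape (M, n_points).

    At each point, the m-th smallest quantile is assigned to the member that
    holds the m-th smallest value in the dependence template.
    """
    out = np.empty_like(quantiles)
    for j in range(template.shape[1]):
        ranks = np.argsort(np.argsort(template[:, j]))   # 0-based rank of each member
        out[:, j] = np.sort(quantiles[:, j])[ranks]
    return out

# ECC: the template is the raw ensemble forecast (same members, same grid points).
# Schaake Shuffle: the template is a set of M past observation fields.
```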
The calibration of the forecast fields obtained from these two post-processing methods is compared to the calibration of the raw numerical model output. An example of the forecast fields generated using the three methods is presented in Figure 3.

Figure 3: Example observation and forecast fields on a randomly selected day.

### Results

Firstly, consider the univariate calibration of the competing forecasting methods. Figure 4 displays the univariate rank histograms corresponding to the raw ensemble forecasts (i.e., the IFS reforecasts) before and after undergoing post-processing. The ranks are aggregated across all 1056 grid points. Although we describe two different multivariate post-processing methods, these differ only in how they reorder the univariate post-processed forecasts, and thus result in the same univariate rank histograms. Figure 4 illustrates that the raw ensemble forecasts are slightly under-dispersed, with the observed wind speed falling outside the range of the ensemble members more often than would be expected of a calibrated forecast. The simple post-processing approach corrects these dispersion errors.

Figure 5 displays the corresponding multivariate rank histograms. The same pre-rank functions are employed as in Section 4.2. The dependence and isotropy pre-rank functions are implemented with unit lag, and the FTE pre-rank function uses a threshold of \(t=6\) m s\({}^{-1}\), roughly corresponding to the \(90^{th}\) percentile of the wind speeds in the training data across all grid points. The average and location pre-rank functions suggest that the raw ensemble forecasts are relatively well-calibrated when predicting the mean wind speed over the domain. However, the corresponding band-depth histogram indicates that these forecasts do not reliably capture the centrality of the observation among the ensemble members. The IFS forecast fields also appear to reliably predict the variance of the observed wind speed fields, as well as the number of threshold exceedances, but they severely under-estimate the dependence between adjacent grid points, and over-estimate the measure of isotropy in the ERA5 reanalyses.

Consider now the post-processed forecasts that are reordered according to ECC. The multivariate rank histogram corresponding to the band-depth pre-rank function is much closer to uniform. While this may suggest that these post-processed forecasts improve upon the multivariate calibration of the IFS output, we can use targeted pre-rank functions to identify remaining sources of mis-calibration in these forecasts. In particular, post-processing using EMOS and ECC does not correct the errors in the dependence structure that manifest in the IFS forecast fields, and the resulting forecasts also over-estimate the measure of isotropy at unit lag. Aside from the band-depth pre-rank function, the multivariate rank histograms for the ECC forecasts exhibit the same patterns as those for the raw IFS forecasts. The Schaake Shuffle performs similarly, though while these forecasts also under-estimate the dependence between adjacent grid points, they do so to a lesser degree than IFS and ECC.

These patterns are reinforced by e-values. Figure 6 displays the cumulative e-values when assessing the calibration of the forecast systems with respect to three of the chosen pre-rank functions: the band-depth, scale, and dependence pre-rank functions. In all cases, a "burn-in" period of 100 forecast-observation pairs is used to obtain an initial estimate of \(p_{A}\) when calculating the e-values. The three forecasting methods issue calibrated predictions of the scale of the ERA5 reanalysis fields. For the band-depth pre-rank function, there is quickly sufficient evidence to conclude that the multivariate rank histogram corresponding to the raw ensemble forecasts is not uniform. The post-processing methods, on the other hand, appear probabilistically calibrated with respect to the band-depth pre-rank function. For the dependence pre-rank function, as suggested by the multivariate rank histograms, there is sufficient evidence at the 5% level to conclude that none of the multivariate forecasts are calibrated. Hence, although post-processing offers improvements upon the raw IFS forecasts, there is still considerable potential to improve upon these baseline post-processing methods. Moreover, we expect that the mis-calibration of all forecasting methods would be more pronounced if the forecasts were compared to station observations rather than reanalysis fields.
Figure 4: Univariate rank histograms corresponding to the raw ensemble forecasts before and after post-processing. The dashed red lines indicate a uniform histogram.

Figure 5: Multivariate rank histograms constructed using seven pre-rank functions for (a) the raw ensemble forecasts; the post-processed forecasts reordered using (b) ECC, (c) the Schaake Shuffle. The dashed red lines indicate a uniform histogram.

Since multivariate forecast calibration is assessed using multiple pre-rank functions, one might ask what constitutes a good set of pre-rank functions. A collection of pre-rank functions will be most useful when the individual pre-rank functions provide complementary information. To assess this, it is useful to have a measure of dependence between the rank of \(\rho(\mathbf{y})\) for different choices of the pre-rank function \(\rho\). If the dependence is strong, then it suggests that one of the pre-rank functions provides redundant information.

\begin{table} \begin{tabular}{c|c c c c c c c} & Average & Band-depth & Location & Scale & Dependence & FTE & Isotropy \\ \hline Average & 1.00 & -0.02 & 0.95 & 0.58 & 0.35 & 0.67 & -0.22 \\ Band-depth & -0.02 & 1.00 & -0.03 & -0.09 & 0.08 & -0.08 & -0.05 \\ Location & 0.95 & -0.03 & 1.00 & 0.70 & 0.47 & 0.76 & -0.27 \\ Scale & 0.58 & -0.09 & 0.70 & 1.00 & 0.73 & 0.83 & -0.35 \\ Dependence & 0.35 & 0.08 & 0.47 & 0.73 & 1.00 & 0.63 & -0.28 \\ FTE & 0.67 & -0.08 & 0.76 & 0.83 & 0.63 & 1.00 & -0.33 \\ Isotropy & -0.22 & -0.05 & -0.27 & -0.35 & -0.28 & -0.33 & 1.00 \\ \end{tabular} \end{table} Table 2: Correlation between the ranks of \(\rho(\mathbf{y})\) among the transformed raw ensemble members for the various pre-rank functions.

Table 2 contains the correlations between the ranks of \(\rho(\mathbf{y})\) among \(\rho(\mathbf{x}_{1}),\ldots,\rho(\mathbf{x}_{M})\) for the various choices of \(\rho\). Results are shown for the raw ensemble members, though very similar correlations are obtained for the two post-processing methods. There is a strong positive correlation between the average rank and location pre-rank functions, which both assess the mean behaviour of the spatial fields, and also between these and the FTE pre-rank function. There is also strong positive correlation between the scale, dependence, and FTE pre-rank functions. The location pre-rank function is strongly correlated with the scale pre-rank function in this example, suggesting that errors when predicting the average wind speed over the spatial domain are linked to errors when predicting the variation in wind speeds over the domain. The band-depth and isotropy pre-rank functions, on the other hand, exhibit relatively low correlations with the other pre-rank functions, making them particularly useful in this application.
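A table such as Table 2 can be produced along the following lines (our own sketch of the computation, with assumed array shapes): for every forecast case, the observation's rank is computed under each pre-rank function, and the correlations between the resulting rank series are then tabulated.

```python
import numpy as np

def observation_ranks(pre_obs, pre_ens, rng):
    """pre_obs: (n_cases,), pre_ens: (n_cases, M); ties are broken at random."""
    jitter_o = rng.random(pre_obs.shape[0])
    jitter_e = rng.random(pre_ens.shape)
    below = (pre_ens < pre_obs[:, None]) | (
        (pre_ens == pre_obs[:, None]) & (jitter_e < jitter_o[:, None]))
    return 1 + below.sum(axis=1)

def prerank_rank_correlations(preranks, seed=1):
    """`preranks` maps each pre-rank name to a pair (pre_obs, pre_ens)."""
    rng = np.random.default_rng(seed)
    ranks = {name: observation_ranks(po, pe, rng) for name, (po, pe) in preranks.items()}
    names = list(ranks)
    return names, np.corrcoef(np.vstack([ranks[n] for n in names]))
```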
Figure 6: Cumulative e-values for the raw ensemble forecast and the post-processed forecasts when sequentially testing for calibration. Results are shown for the multivariate rank histograms constructed using the band-depth, scale, and dependence pre-rank functions. The null hypothesis of calibration with respect to a pre-rank function is rejected at the 5% level if the cumulative e-value exceeds the dotted horizontal line at \(3e\log(5)/0.05\). The vertical grey lines divide the test data into five year periods.

## 7 Conclusion

For probabilistic forecasting systems to be as useful as possible, the forecasts they issue must be calibrated, in the sense that they are statistically consistent with what actually materialises. In practice, the calibration of univariate ensemble forecasts is typically assessed using rank histograms. Ensemble forecasts for multivariate outcomes can be evaluated using multivariate rank histograms, which use a so-called pre-rank function to transform the observations and ensemble members into univariate objects, prior to constructing a standard rank histogram. In this paper, we highlight that there is considerable flexibility in the choice of pre-rank function, meaning practitioners can choose pre-rank functions on a case-by-case basis, depending on what information they wish to extract from their forecasts. In particular, while previously proposed pre-rank functions depend on the forecast and observed outcome, we argue that this need not be the case. Instead, simple pre-rank functions can be employed that more directly target specific aspects of the multivariate forecasts.

We introduce generic pre-rank functions that can focus attention on the location, scale, and dependence structure of the multivariate forecast distributions, allowing each of these components to be assessed individually. The resulting histograms are straightforward to interpret, making it easier to assess what systematic deficiencies exist in the forecasts. We also propose suitable pre-rank functions to assess the calibration of probabilistic spatial field forecasts, which are regularly issued by operational weather forecasting centres. Although we focus here on spatial forecast fields as an example, the arguments presented herein apply to other multivariate forecasts, and future work could consider relevant pre-rank functions in other multivariate forecasting settings, such as time series forecasting. A long-standing challenge when evaluating spatial field forecasts is to design verification tools that value realistic-looking forecast fields, i.e. fields that do not violate any physical laws. If we could quantify how realistic a weather field is, then these measures of realism could be used as pre-rank functions in the framework discussed herein. Doing so would not only reveal whether our forecast fields are realistic, but, if not, the corresponding multivariate rank histograms should additionally allow us to identify how our forecasts are unrealistic.

The pre-rank functions we introduced are used to compare ensemble fields obtained from a near-operational ensemble prediction system before and after having undergone statistical post-processing. These results help to understand not only how the post-processed forecasts improve upon the raw model output, but also what deficiencies these forecasts still exhibit. In recognising the limitations of the predictions, it becomes easier to remove them, resulting in more accurate and reliable forecasts in the future. While the multivariate post-processing frameworks employed in this study are commonly applied in operational post-processing suites, the multivariate rank histograms in Figure 5 suggest that these forecasts still exhibit significant biases, particularly related to the dependence between the wind speed at nearby grid points. An interesting avenue for future work would therefore be to compare these results to those obtained using state-of-the-art machine learning models that have recently been introduced for multivariate post-processing (e.g. Dai and Hemri, 2021; Chen et al., 2022).
The general framework outlined herein involves identifying univariate characteristics of multivariate objects, and evaluating the multivariate forecasts via their ability to predict these characteristics. This approach has also been proposed when constructing multivariate scoring rules (see e.g. Scheuerer and Hamill, 2015; Weniger et al., 2017; Heinrich-Mertsching et al., 2021). Allen et al. (2022) illustrate that this framework can essentially be interpreted as a weighting of the scoring rule, where the transformation determines which outcomes are to be emphasised when calculating forecast accuracy. Weighted versions of these multivariate rank histograms could also be employed to target particular multivariate outcomes when assessing forecast calibration, as outlined by Allen et al. (2023). Note, however, that the transformations used within scoring rules need not be interpretable, unlike the pre-rank functions used to construct multivariate rank histograms, and hence some transformations that are suitable when calculating forecast accuracy may not be suitable when interest lies in forecast calibration.

## Acknowledgements

The authors are grateful to the Swiss Federal Office of Meteorology and Climatology (MeteoSwiss) and the Oeschger Centre for Climate Change Research for financially supporting this work. Sebastian Arnold, Jonas Bhend, Alexander Henzi, and Marc-Oliver Pohle are also thanked for fruitful discussions during the preparation of this manuscript.
2310.06758
slash: A Technique for Static Configuration-Logic Identification
Researchers have recently devised tools for debloating software and detecting configuration errors. Several of these tools rely on the observation that programs are composed of an initialization phase followed by a main-computation phase. Users of these tools are required to manually annotate the boundary that separates these phases, a task that can be time-consuming and error-prone (typically, the user has to read and understand the source code or trace executions with a debugger). Because errors can impair the tool's accuracy and functionality, the manual-annotation requirement hinders the ability to apply the tools on a large scale. In this paper, we present a field study of 24 widely-used C/C++ programs, identifying common boundary properties in 96\% of them. We then introduce \textit{slash}, an automated tool that locates the boundary based on the identified properties. \textit{slash} successfully identifies the boundary in 87.5\% of the studied programs within 8.5\ minutes, using up to 4.4\ GB memory. In an independent test, carried out after \textit{slash} was developed, \textit{slash} identified the boundary in 85.7\% of a dataset of 21 popular C/C++ GitHub repositories. Finally, we demonstrate \textit{slash}'s potential to streamline the boundary-identification process of software-debloating and error-detection tools.
Mohannad Alhanahnah, Philipp Schubert, Thomas Reps, Somesh Jha, Eric Bodden
2023-10-10T16:32:21Z
http://arxiv.org/abs/2310.06758v2
# slash: A Technique for Static Configuration-Logic Identification

###### Abstract

Researchers have recently devised tools for debloating software and detecting configuration errors. Several of these tools rely on the observation that programs are composed of an _initialization phase_ followed by a _main-computation phase_. Users of these tools are required to _manually_ annotate the boundary that separates these phases, a task that can be time-consuming and error-prone. (Typically, the user has to read and understand the source code or trace executions with a debugger.) Because errors can impair the tools' accuracy and functionality, the manual-annotation requirement hinders the ability to apply the tools on a large scale. In this paper, we present a field study of 24 widely-used C/C++ programs, identifying common boundary properties in 96% of them. We then introduce slash, an automated tool that locates the boundary based on the identified properties. slash successfully identifies the boundary in 87.5% of the studied programs within 8.5 minutes, using up to 4.4 GB memory. In an independent test, carried out after slash was developed, slash identified the boundary in 85.7% of a dataset of 21 popular C/C++ GitHub repositories. Finally, we demonstrate slash's potential to streamline the boundary-identification process of software-debloating and error-detection tools.

## 1 Introduction

Software configurability has emerged as a significant focus in contemporary research [12, 18, 11, 28, 25, 16, 27, 17, 29]. Concurrently, several initiatives proposed to elevate configurability as a first-class programming element [5] and aimed to forge consensus and promote best practices [14, 20]. One best practice is to organize programs to operate in two phases: (i) a phase for _initialization_, where configuration logic checks parameters and initializes corresponding values to control the program's activities, and (ii) a _main-computation phase_ that performs actions in accordance with the specified configuration. One would hope that this structure is reflected in the code--i.e., there is a boundary between the configuration logic and the main computation. A number of recent papers [2, 6, 26, 7] also describe the advantages that this separation provides for the sake of configuration traceability [14], forensic analysis [7], optimizing programs [2, 6], and detecting configuration errors [26].

PCHECK [26] automatically generates a checker that detects configuration errors early to minimize damage from failures. It adds a call to the checker at the end of the initialization phase, as illustrated in Figure 1, which depicts the boundary location in the program Squid, a widely used open-source Web proxy server that supports 237 configurations. However, a user of PCHECK needs to _manually_ annotate the Squid source code with the boundary location. The situation is similar for program-debloating tools. Temporal-specialization [6] disables system APIs after the completion of the initialization phase, but also requires the tool user to annotate the boundary. LMCAS [2] specializes programs by executing them up to the boundary to capture the program's state according to the supplied inputs, where again the LMCAS user must annotate the location of the boundary. In the absence of a method to assist developers in identifying the boundary, the _manual_ annotation is a time-consuming and error-prone task: the user has to read and understand the source code or trace executions with a debugger.
Because boundary-identification errors can impair the tools' accuracy and functionality, the manual-annotation requirement hinders the ability to apply the tools on a large scale. We thus first conduct a manual field study to comprehend how the boundary is implemented and to discern its distinguishing characteristics. Our corpus contains 24 widely used C/C++ programs (Table 1) that employ configuration files or command-line parameters. Our study identifies various categories of boundaries: single-element boundaries, multi-element boundaries, or "blended" (i.e., no boundary). The study further indicates that 23 (96%) of the programs possess a single-element boundary. Accordingly, we developed slash, a tool to identify a boundary automatically. slash focuses on the common case of identifying a boundary in applications that contain one or more single-element boundaries. slash analyzes LLVM IR, and targets programs written in an imperative programming language, such as C or C++.

Figure 1: Location to invoke the configuration-error checker generated by PCHECK [26] in Squid—i.e., the boundary at the end of the initialization phase. PCHECK users need to annotate this location _manually_.

Our work makes the following contributions:

1. We conducted a manual field study to determine (i) to what extent real-world programs contain a boundary that separates configuration logic from the main-computation logic, and (ii) for programs that do contain a boundary, what structural patterns can be used to identify the end of the initialization/configuration phase (§3).
2. We present an algorithm that either identifies a boundary that separates the initialization and main-computation phases, or reports that it was unable to do so (§5).
3. We implemented the boundary-identification algorithm in a tool, called slash, and evaluated it on (a) the 24 programs used in the manual field study, and (b) 21 popular C/C++ Github repositories not part of the manual field study (§6).
4. We demonstrate that the boundaries that slash identifies are suitable substitutes for the ones identified manually (by the respective developers) for two software-debloating tools [6, 2] and a configuration-error-checking tool [26].

## 2 Background

This section provides background on some concepts and patterns that we relied on in our manual field study.

### Program Phases

An example of a boundary is shown in Figure 2, which represents a scaled-down version of the UNIX word-count utility wc: wc reads a line from stdin, counts the number of lines and/or characters in the input stream, and prints the results on stdout. This program has two phases, the code for which is found in disjoint regions:

* The _initialization phase_ starts at the entry point of main (line 1), and ends at line 16.
* The _main-computation phase_ starts at line 18 and continues to the end of main (line 28).

When the configuration logic in the initialization phase is executed, a parameter expressed in some _external format_--here argv[1] as a C string--is translated to an _internal format_ and assigned to one or more program variables that host run-time configuration data. These variables are known as _configuration-hosting variables_ [1]. In Figure 2, after the configuration logic executes, the configuration-hosting variables count_char and count_line each hold internal-format values of 0 or 1. The main-computation logic then performs the primary processing function of the program, with its actions controlled by the values of count_char and count_line.
The two regions are tied together through the values of the configuration-hosting variables: when the main-computation phase executes, the values of count_char and count_line control which portions of the main logic execute. In wc, for instance, count_char controls whether lines 20 and 24 execute, and count_line controls whether lines 21 and 26 execute. In wc, there is a boundary at line 17 that physically separates the configuration logic from the main-computation phase.

Figure 2: A scaled-down version of the wc utility. The boundary could be just before line 17, just before line 18, or just before line 19. LMCAS executes wc up to the boundary during the course of its analysis but its users are required to annotate the boundary location _manually_.

### Configuration-Logic Phases

To inform the process of identifying a boundary, it is crucial to understand the configuration logic and patterns inside the initialization phase. We adopt the taxonomy of configuration design of Zhang et al. [28], which involves the following configuration phases:

1. **Parse and Assign:** run-time-configuration information is first parsed and translated. Translated values of configuration parameters are typically Booleans, integers, or strings. In a **command-line program**, the inputs are provided via command-line arguments. In C/C++ programs, command-line arguments are passed to main() via the parameters argc and argv: argc holds the number of command-line arguments, and argv[] is an array (of that length) of pointers; the elements of argv[] point to the different arguments passed to the program. The application then assigns values of argv[] elements to configuration-hosting variables according to a predefined argument-value mapping. Similar logic is used in **configuration-file programs**. They also permit command-line arguments, yet they receive further arguments using configuration files, whose location is typically provided via one command-line argument. The configuration file is frequently parsed using a system-call API. For example, _Nginx_ uses the Linux system call pread, and _DNSProxy_ uses the C library function fgets. The application then usually assigns values to configuration-hosting variables according to a predefined keyword-value mapping.
2. **Check** and **Exception/Error Handling:** in general, these steps are intertwined with the parse-and-assign step. They validate the provided inputs based on certain constraints, identify incorrect configuration parameters and--if present--provide user feedback and terminate the program.

Parse-assign-check steps are executed inside a loop until all configuration parameters are processed or an error arises. Once parameters pass checks, the main program can utilize these (translated) values to select functionalities. The completion of this processing denotes the transition from the initialization to the main-computation phase.

## 3 Understanding boundary Characteristics

This section describes our manual field study, conducted to determine (i) to what extent real-world programs contain a boundary that separates configuration logic from the main-computation logic, and (ii) for programs that do contain a boundary, what structural patterns exist that could be used to automate the process of identifying the boundary.

### Methodology

**Selection of subject programs.** We manually inspected 24 widely-used [2, 6, 21, 22, 29] end-user and server C/C++ programs, listed in Table 1.
The configurations of these programs are provided either through command-line arguments or configuration files. The end-user programs include utilities (i.e., sort, objdump, diff, and gzip). The server programs include web servers (i.e., Nginx), DNS servers (i.e., DNSProxy), and database programs (i.e., PostgreSQL). **Manual-inspection procedure.** We manually inspected the source code of the programs to see if we could identify a boundary location. The inspection was conducted by one person from our team, but a second \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **Program** & **kLOC** & **Config File** & **Multiple boundary** & **Inside main** \\ \hline \multicolumn{4}{|c|}{End-User Programs} \\ \hline curl-7.47.0 & 31.6 & ✓ & ✓ & ✗ \\ \hline date-8.32 & 56.9 & ✗ & ✗ & ✓ \\ \hline diff-2.8 & 37.2 & ✗ & ✗ & ✓ \\ \hline du-8.32 & 109.1 & ✗ & ✗ & ✓ \\ \hline echo-8.32 & 11.0 & ✗ & ✗ & ✓ \\ \hline gzip-1.2.4 & 26.7 & ✗ & ✓ & ✓ \\ \hline id-8.32 & 15.1 & ✗ & ✗ & ✓ \\ \hline kill-8.32 & 12.1 & ✗ & ✗ & ✓ \\ \hline objdump-2.33 & 1049.0 & ✗ & ✗ & ✓ \\ \hline psql-15 & 189.1 & ✗ & ✓ & ✓ \\ \hline readelf-2.33 & 413.7 & ✗ & ✓ & ✓ \\ \hline sort-8.32 & 55.9 & ✗ & ✗ & ✓ \\ \hline tcpdump-4.10.0 & 608.6 & ✗ & ✗ & ✓ \\ \hline uniq-8.32 & 14.8 & ✗ & ✗ & ✓ \\ \hline wc-8.32 & 17.8 & ✗ & ✓ & ✓ \\ \hline wget-1.17.1 & 165.5 & ✓ & ✓ & ✓ \\ \hline \multicolumn{4}{|c|}{Server Programs} \\ \hline bind-9.15.8 & 1755.1 & ✓ & ✓ & ✓ \\ \hline DNSProxy-1.17 & 3.4 & ✓ & ✓ & ✓ \\ \hline httpd-2.4.51 & 179.0 & ✓ & ✓ & ✓ \\ \hline knockd-0.5 & 10.9 & ✓ & ✓ & ✓ \\ \hline lighttpd-1.4.54 & 174.0 & ✓ & ✗ & ✗ \\ \hline mini-httpd1-1.19 & 16.4 & ✓ & ✗ & ✓ \\ \hline Nginx-1.19.0 & 589.2 & ✓ & ✓ & ✓ \\ \hline PostgreSQL-15 & 4626.3 & ✓ & Multi-element boundary \\ \hline \end{tabular} \end{table} Table 1: Results of the manual field study. Column 2 gives LOC (in thousands), based on readable LLVM IR. Column 3 indicates whether the program can receive additional configuration settings through a configuration file. Column 4 indicates if there are multiple acceptable (single-element) boundary locations. Column 5 specifies whether the location selected based on our manual inspection (typically the first acceptable boundary location) lies within or outside main. opinion was obtained for challenging cases, such as Nginx, httpd, and PostgreSQL. The manual field study was performed as follows: 1. We built the program and ran it with -help to display all runtime configuration parameters. For config-file programs, we also inspected the program's default configuration-file templates to identify the set of predefined keywords (e.g., Nginx uses the directive _gzip_ to enable/diable compression). 2. Next, we identified the entry-point function in the source code. Because the study considered C/C++ programs, we searched for a function named "main" that has two parameters named "argc" and "argv." 3. We identified the locations in the source code where the configuration parameters are parsed, assigned, and checked (thereby identifying the configuration-hosting variables). A regular-expression search (based on the knowledge gained from step (a) was sufficient for identifying such locations. We observed that some programs parse the configuration parameters outside of main; for instance, the main function of a command-line program might invoke another function and pass argv as a parameter. 
For configuration-file programs, we performed a regular-expression search to find (i) method names in APIs for reading/parsing files, and (ii) keywords used in the configuration file. To supplement source-code inspection, we ran the programs with a debugger (GDB) to track the use of argv and identify the location where the provided configuration parameters are parsed. Similarly, we ran the configuration-file programs with GDB, setting breakpoints at the configuration-file-parsing APIs.

(d) We sought to identify a location for the boundary. We looked to see if the location just after the end of the loop containing the parse-assign-check logic was acceptable. In some cases, the boundary location was a bit further along in the program because the values of configuration-hosting variables are sometimes set or adjusted after the parse-assign-check loop when one configuration feature overrides another.

### Results

**Categories of boundaries.** Our study identifies several types of boundaries within programs:

* **Single-element boundaries**: one or more sites exist that, individually, are each an acceptable boundary location (as in _wc_ from Figure 2).
* **Multi-element boundaries**: no single-element boundary exists, but a collection of sites separates the program's configuration logic from the main-computation phase.
* **"Blended" boundaries**: the application's configuration logic is "blended" into the main-computation phase, yielding no clear boundary of the foregoing two types.

The manual field study showed that 23 programs possess single-element boundaries. As illustrated in Figure 2, it is possible that there are multiple locations where a single-element boundary could be placed. (The term "multiple single-element boundaries" should not be confused with "multi-element boundary".) E.g., in Figure 2 the boundary could be just before line 17, just before line 18, or just before line 19. Column 4 of Table 1 indicates which of the programs had multiple single-element boundaries. The remaining program, PostgreSQL (a database server), has a multi-element boundary. It is a Swiss-Army-knife program, consisting of several stand-alone programs, each with its own boundary or boundaries. The corpus of programs used in the manual field study (Table 1) had no instances of "blended" boundaries. However, our evaluation dataset of previously unseen programs (Table 4) contains three programs with "blended" boundaries. Table 2 shows the numbers of programs in each boundary category.

**Field Study Results:** The study revealed that:

* 23 of the 24 programs have a single-element boundary that divides the program into its configuration logic and its main-computation logic.
* The assignments to configuration-hosting variables typically all occur inside a parse-assign-check loop that processes the command line or configuration file.
* Because of interactions among configuration features, the values of configuration-hosting variables are sometimes adjusted after the parse-assign-check loop.

### Findings: Properties of the boundary

Our study revealed a set of properties that were common across the 23 programs in which we were able to identify a single-element boundary. We use these properties in §4 and §5 to address the problem of _automatically_ identifying a suitable single-element boundary location. The main property that we can infer from Figure 2 is that the boundary divides a program's control-flow graph into two disjoint subgraphs.
This property implies that the boundary is a so-called _articulation point_ in the program (i.e., a vertex cut of the control-flow graph, of size 1). In addition, we observed that the boundary should be:

* **Reachable** from the entry point of the program.
* **Executed** exactly once, which eliminates the possibility that the boundary is inside a loop or a conditional statement.

To validate our hypothesis, we instrumented the programs by adding a print statement at the location where the boundary is identified (e.g., line 17 in Figure 2) and then executed the program several times with different sets of configuration parameters. If the print statement is executed, the run indicates the boundary is reachable from the entry; if the print statement is printed only once, the run indicates that the boundary is executed exactly once (for the given configuration).

In the debugging performed in step (c) of the manual inspection steps in §3.1, we dynamically traced the uses of argv and calls to configuration-file-parsing APIs as a way to pinpoint the locations where configurations are parsed, assigned, and checked. This use of dynamic tracing of argv during manual analysis suggests that a technique to identify boundaries automatically will need to perform dataflow analysis to track uses of argv statically.

**Findings:** Properties relevant to automatically identifying a program's boundary include:

1. Configuration-hosting variables are data dependent or control dependent on argv.
2. The boundary should be located after at least one loop.
3. The boundary represents an articulation point in the program's control-flow graph.
4. The boundary should be reachable from the entry point and executed only once.

## 4 The boundary-Identification Problem

This section provides an abstract overview of the boundary concept, presenting the process of boundary identification as a form of staging transformation (§4.1). It then defines the boundary-identification task (§4.2 and §4.3).

### An Idealized View

At an abstract level, automated boundary identification can be thought of as a kind of _staging transformation_ [10] that isolates (or "stages") the processing of a program's configuration parameters. Staging transformations were originally proposed to separate a program's computation into stages for optimization purposes. In our context, given a program \(P(x)\) with body \(f(x)\), where \(x\) represents some configuration parameter, we wish to consider \(P\) as having the form shown below in the second line: \[\begin{array}{rcl}&P(x)&=&f(x)\\ \rightarrow&P(x)&=&\textbf{let }t=\textit{translate}(x)\textbf{ in }g(t)\end{array} \tag{1}\] Here, \(\textit{translate}(\cdot)\) converts from the external configuration-specification format to the internal format, and \(t\) is a configuration-hosting variable. Thus,

* "**let \(t=\textit{translate}(x)\) in...**" represents the configuration logic of \(P\).
* "\(g(t)\)" represents the main logic, which performs the primary processing function of \(P\), based on the value of configuration-hosting variable \(t\).

The boundary-identification challenge is to find the code that constitutes _translate_ within function definition \(f\). This abstract characterization of boundary identification permits giving a "rational reconstruction" of some previous work. For instance, both PCHECK [26] and Zhang et al.
[28] propose methodologies to ensure that the value of a configuration parameter is checked against an appropriate constraint \(\varphi(\cdot)\) on the parameter before it is used. Thus, if one has program \(P(x)\) in the form shown on the second line of Eqn. (1), the essence of both PCHECK and the Zhang et al. paper is to transform \(P\) as follows: \[\begin{array}{rl}&P(x)\;=\;\textbf{let}\;t=\textit{translate}(x)\;\textbf{in}\;g(t)\\ \rightarrow&P(x)\;=\;\textbf{let}\;t=\textit{translate}(x)\\ &\textbf{in if}\;\varphi(t)\;\textbf{then}\;g(t)\;\textbf{else abort}\end{array} \tag{2}\] Furthermore, several issues discussed in [14, 20, 12, 11] can be characterized as "it is advantageous to separate the configuration logic from the program's main-computation logic for the sake of facilitating configuration tracking and analysis," which at an abstract level amounts to: _Given program \(P\) in the form_ \[\begin{array}{rl}&P(x)=\textbf{let}\;t=\textit{translate}(x)\;\textbf{in}\;g(t)\\ &\textit{Analyze the usage of}\;t\;\textit{in}\;g(t)\end{array} \tag{3}\] For instance, LOTRACK's usage analysis aims to identify code fragments corresponding to load-time configurations [12, 11] in Android and Java programs. Finally, LMCAS [2] relies on manual techniques to identify a program's boundary, and then performs partial evaluation [9] with respect to the values of configuration-hosting variables. Abstractly, LMCAS operates similarly to what was discussed above, except that \(P\) is now a two-argument program, \(P(x,y)\). \[\begin{array}{rl}&P(x,y)\;=\;\textbf{let}\;t=\textit{translate}(x)\;\textbf{in}\;g(t,y)\\ \rightarrow&P_{x}(y)\;=\;g_{t}(y)\end{array} \tag{4}\] Program \(P_{x}(y)\) is a version of \(P(x,y)\) specialized with respect to a specific value of \(x\). The body of \(P_{x}(y)\) is obtained by finding and evaluating \(t=\textit{translate}(x)\), and then running a partial evaluator on \(g\) with static input \(t\) to create \(g_{t}(y)\), which is a version of \(g(t,y)\) specialized on the value of \(t\). ### Terminology and Notation Because the boundary constitutes an articulation point in the program, we formulate the boundary-identification task as a vertex-cut graph-partitioning problem. **Definition 1**: _Let \(G=(V,E,v_{en},v_{ex})\) denote the Interprocedural Control-Flow Graph (ICFG) of a program \(P\). Vertices \(v_{en}\) and \(v_{ex}\) represent the entry vertex of main and the exit vertex of the program, respectively._ Without loss of generality, we assume that each vertex in \(G\) is reachable from \(v_{en}\) along a path in which each procedure-return edge is matched with its closest preceding unmatched procedure-call edge (a so-called "interprocedurally valid path" [24, SS7-3]). A vertex \(v\) in the control-flow graph \(G_{Q}\) of some procedure \(Q\) is said to be an _articulation point_ if removing \(v\), and all control-flow edges into and out of \(v\), partitions \(G_{Q}\) into two non-empty subgraphs. ### Problem Definition In the abstract view of the boundary-identification problem discussed in SS4.1, a command-line or configuration-file program \(P(x)=f(x)\) has the form shown on the second line of Eqn. (1). Eqn. (1) is stated in an abstract form, as if we were considering a program in a functional programming language. However, we need to translate this idea to something that is suitable for an imperative programming language, such as C/C++. In such a case, configuration parameter \(x\) will be argv.
Our goal is to identify _translate_(\(x\)), whose end is considered to be the boundary (but in an imperative program), which leaves us with two questions: 1. What is a suitable "choke point" in the program, analogous to the hand-off from "\(t=\textit{translate}(x)\)" to \(g(t)\) in Eqn. (1)? 2. What does "the program has finished _translate_(\(x\))" mean? With respect to question (1), a natural approach is in terms of the articulation points of the program's control-flow graph \(G\): the candidate choke points are the articulation points of \(G\) (denoted by \(V_{AP}\)). In general, \(G\) can have many articulation points. We need some other conditions to specify which member of \(V_{AP}\) we want: the boundary separates the vertices of \(G\) into the configuration logic (denoted by \(V_{c}\))--which is analogous to _translate_(\(x\))--and vertices belong to the main-computation logic (denoted by \(V_{m}\))--which is analogous to \(g(t)\). With respect to question (2), discovering the end of _translate_(\(x\)) entails identifying the _configuration-hosting variables_[1] (which are analogous to variable \(t\) in Eqn. (1)). These variables are either (a) assigned configuration values directly, or (b) control dependent on branch expressions involving configuration quantities. As supported by the findings from our manual study (SS3), the assignments to configuration-hosting variables (i) typically all occur inside a parse-assign-check loop that processes the command line or configuration file, but (ii) some additional assignments to them may occur after the parse-assign-check loop because of interactions among configuration features. Configuration-hosting variables are typically live variables in the main logic; moreover, they are used without their values being modified in the main-computation phase [12, 3]. We formalize this concept as follows: **Configuration-hosting variables (\(C_{\mathit{host}}\)).** 1. Let \(v_{x}\) denote the CFG vertex that models the passing of configuration parameter \(x\) to main. 2. Let \(V_{\mathit{host}}=\{v_{0},v_{1},\ldots,v_{n}\}\) denote the set of vertices that represent assignments to configuration-hosting variables: \(v_{i}\in V_{\mathit{host}}\) if 1. \(v_{i}\) is flow dependent on \(v_{x}\), denoted by \(v_{x}\longrightarrow_{f}v_{i}\), or 2. \(v_{i}\) is control dependent on a vertex \(w_{x}\) that uses \(x\). 3. The set of configuration-hosting variables \(C_{\mathit{host}}\) is the set of variables that are assigned to at some member of \(V_{\mathit{host}}\). For instance, in the scaled-down word-count program in Figure 2, the global variables count_chars and count_lines are assigned values at lines 6 and 9, respectively; these assignments are control-dependent on vertices that use argv (i.e., the branch-conditions on lines 3 and 4, which play the role of \(w_{x}\) in item (2b)). Thus, by item (2b), \(V_{\mathit{host}}\) consists of the assignments at lines 6 and 9, and by item (3), \(C_{\mathit{host}}\) is \(\{\)count_chars, count_lines\(\}\). With the concept of \(C_{\mathit{host}}\) in hand, we can now state the boundary-identification problem. **Problem Definition:** Find an articulation point \(B\) of CFG \(G\) that is reachable from \(v_{\mathit{en}}\), and 1. is located after a loop, 2. post-dominates every assignment to a member of \(C_{\mathit{host}}\), and 3. for each \(c\in C_{\mathit{host}}\), all paths from \(B\) to \(v_{\mathit{ex}}\) are free of definitions to \(c\). Return the closest \(B\) to the entry as the boundary. 
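To make the definition concrete, the following is a minimal word-count-style sketch in Python (our own analogue of the paper's Figure 2, which is a C program; the names and structure here are illustrative, not the figure's): the configuration-hosting variables are assigned only inside the parse-assign-check loop, and the single-element boundary is the point just after that loop.

```python
# Toy analogue of a parse-assign-check loop followed by main-computation logic.
import sys

def main(argv):
    count_chars = False          # configuration-hosting variables (C_host):
    count_lines = False          # assigned only inside the loop below
    files = []
    for arg in argv[1:]:         # parse-assign-check loop over the command line
        if arg == "-c":
            count_chars = True
        elif arg == "-l":
            count_lines = True
        elif arg.startswith("-"):
            sys.exit(f"unknown option {arg}")   # the "check" part of the loop
        else:
            files.append(arg)
    # ---- boundary: an articulation point after the loop; every path from here
    # ---- to the exit is free of further definitions of the C_host variables
    for path in files:           # main-computation logic, reads C_host only
        data = open(path, "rb").read()
        if count_chars:
            print(len(data), path)
        if count_lines:
            print(data.count(b"\n"), path)

if __name__ == "__main__":
    main(sys.argv)
```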
The control-flow vertex for line 16 of Figure 2 is an articulation point that meets the conditions of the problem definition: * it is located after the end of the for-loop on lines 2-16, * it post-dominates every assignment to count_chars and count_lines, and * all paths from that point to the end of the program are free of definitions to count_chars and count_lines. Finally, the articulation point at line 16 is the closest to the entry point in terms of distance along control-flow edges.

## 5 Automatic boundary Identification

This section presents our algorithm to solve the boundary-identification problem defined in SS4.3. The algorithm is given as Alg. 1. The discussion of Alg. 1 is structured in two parts, which correspond to lines 4-32 and 33-53, respectively. * **Identification of boundary Candidates (SS5.1).** This phase identifies a set of boundary candidates, which are a subset of the set of articulation points of the control-flow graph. * **boundary Identification (SS5.2).** This phase eliminates all boundary candidates that do not satisfy the three properties of a boundary given in the problem definition in SS4.3.

```
Algorithm 1: Automatic boundary identification
Input : Program P, SrcProcedure
Output: boolArray
 1-3   entryPoint = getEntryPointBasicBlock(P); build the ICFG G of P;
       initialize ConfigHostVars V_host and BoundaryCandidates BC
 4-32  identification of boundary candidates (Section 5.1)
 33-53 boundary identification (Section 5.2)
```

### boundary Candidates (Lines 4-32) This phase performs taint analysis and control-flow analysis to identify a set of boundary candidates. It performs taint analysis to identify the set of configuration-hosting variables \(C_{\mathit{host}}\) and the set of assignments \(V_{\mathit{host}}\) defined in SS4.3 (lines 5-13). Some adjustments are made when the latter assignments are either inside a loop (lines 15-19), or in a procedure other than main (lines 22-31). In particular, for each candidate not in main, a proxy location is considered just after the appropriate call in main that would reach the candidate. The algorithm performs control-flow analysis to identify the set of articulation points in \(G\) (line 20). The outcome of this phase is the intersection between the set of articulation points and the (adjusted) set of vertices that represent assignments to configuration-hosting variables (first performed at line 21 to reduce the cost of the proxy-finding loop, then at line 32).
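The following Python sketch is ours (slash itself operates on LLVM IR; the data structures here are simplified stand-ins) and illustrates the candidate-generation idea: a fixed-point taint pass marks assignments that are data- or control-dependent on argv, and the candidates are the (proxied) host-assignment sites that are also articulation points.

```python
# A simplified stand-in for the candidate-generation phase: the program is given
# as assignment facts plus control-dependence facts rather than LLVM IR.
def config_hosting_assignments(assignments, ctrl_dep_vars, seeds=("argv",)):
    """
    assignments   : list of (site, target_var, source_vars) tuples
    ctrl_dep_vars : dict mapping a site to the variables used by the branch
                    conditions it is control dependent on
    Returns V_host: the sites whose assignments are data- or control-dependent
    on the seed variable(s), i.e., argv.
    """
    tainted = set(seeds)
    v_host = set()
    changed = True
    while changed:                                   # iterate to a fixed point
        changed = False
        for site, target, sources in assignments:
            data_dep = any(s in tainted for s in sources)
            ctrl_dep = any(v in tainted for v in ctrl_dep_vars.get(site, ()))
            if data_dep or ctrl_dep:
                v_host.add(site)
                if target not in tainted:
                    tainted.add(target)
                    changed = True
    return v_host

def boundary_candidates(v_host, articulation_points, proxy_in_main):
    """Relocate sites outside main to their call-site proxy in main, then keep
    only those locations that are articulation points."""
    proxied = {proxy_in_main.get(site, site) for site in v_host}
    return proxied & articulation_points
```

For a wc-like program, V_host would contain the assignments to count_chars and count_lines.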
### boundary Identification (Lines 33-53) This phase eliminates all boundary candidates that do not satisfy the three conditions of a boundary given in the problem definition in SS4.3, namely, each \(C\in BC\) must (C1) be located after a loop, (C2) post-dominate every assignment to a member of \(C_{\mathit{host}}\), and (C3) for each \(\mathit{var}\in C_{\mathit{host}}\), have all paths from \(C\) to \(v_{ex}\) free of definitions to \(\mathit{var}\). The algorithm removes from \(BC\) any vertex \(C\) that fails to satisfy all three conditions; see lines 37-38, 40-44, and 46-49, respectively. To verify condition (C1), the algorithm checks whether candidate \(C\in BC\) is located after a loop (lines 37-38). To verify condition (C3), the algorithm can use standard techniques--e.g., IFDS-based interprocedural constant propagation [19]--for each configuration-hosting variable \(var\), starting at candidate \(C\in BC\). If the analysis reports that \(var\) is not constant at exit point \(v_{ex}\), then \(var\) might be (re)defined on some path from \(C\) to \(v_{ex}\), and hence condition (C3) is violated. This phase returns null if all boundary candidates are eliminated (lines 51-52); otherwise, it returns the candidate that is closest to \(v_{en}\), the entry point of procedure main (lines 50 and 53). The distance metric used to calculate the closest boundary candidate is the shortest path in terms of control-flow edges. ### Discussion **Limitations.** Alg. 1 gives an idealized algorithm that works on an ICFG, yet slash operates on the ICFG only partially. Specifically, for (a) finding post-dominators and (b) finding articulation points, our implementation works on individual CFGs in a procedure-by-procedure manner. Listing 1 sketches this variant:
```
AP, DTs := ∅;
foreach procedure P {
    DTs := DTs ∪ computePostDominators(CFG(P))
    AP  := AP  ∪ computeArticulationPoints(CFG(P))
}
```
Listing 1: Variant of _computePostDominators_ and _computeArticulationPoints_ used in our implementation of Algorithm 1.
Consequently, when a boundary candidate is located in some procedure p other than main, the candidate is relocated to the CFG of main by finding a proxy location for the candidate. We use the basic block of main that contains the call site that calls p (directly or transitively). A second limitation of slash is that it targets only single-element boundaries. When run on a multi-element-boundary program, it could either return the empty set (a correct answer with respect to the question of whether a single-element boundary exists) or some singleton set (which is a false positive). (In the latter case, our experience is that slash returns one of the elements of the multi-element boundary, and the rest of the elements are other boundary candidates.) When run on a program in the "blended"-boundary case, slash returns the empty set (correct with respect to the question of whether a single-element boundary exists). In this case, it never returns a singleton set because the intertwining of the configuration logic and the main-computation logic causes the properties required of a single-element boundary to be violated. **Time Complexity.** The overall worst-case running time of the algorithm is bounded by \(\mathcal{O}(|E|\cdot|D|^{3}+|N|^{2}+|E|)\), where \(N\) and \(E\) are the sets of nodes and edges, respectively, of the program's ICFG, and \(D\) is the domain(s) used in the data-flow problems (taint analysis and constant propagation) [19].
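To complement Listing 1, the following is a hedged Python sketch (ours, not slash's code) of the elimination-and-selection step of SS5.2: candidates are filtered by the three conditions and the surviving candidate closest to the entry is returned. The graph encoding and the precomputed inputs (after_loop, defs_after) are simplifying assumptions.

```python
from collections import deque

def bfs_distances(succ, src):
    """Distance from src in control-flow edges (BFS)."""
    dist, q = {src: 0}, deque([src])
    while q:
        v = q.popleft()
        for w in succ.get(v, ()):
            if w not in dist:
                dist[w] = dist[v] + 1
                q.append(w)
    return dist

def postdominates(succ, exit_node, b, v):
    """True iff every path from v to exit_node passes through b."""
    if v == b:
        return True
    seen, q = {v}, deque([v])
    while q:
        u = q.popleft()
        if u == exit_node:
            return False                  # found a path that avoids b
        for w in succ.get(u, ()):
            if w != b and w not in seen:
                seen.add(w)
                q.append(w)
    return True

def pick_boundary(succ, entry, exit_node, candidates,
                  host_assignments, after_loop, defs_after):
    """
    candidates       : articulation points reachable from the entry
    host_assignments : vertices that assign to configuration-hosting variables
    after_loop       : candidates known to lie after at least one loop     (C1)
    defs_after       : candidates with a later redefinition of some C_host (C3)
    """
    dist = bfs_distances(succ, entry)
    for c in sorted(candidates, key=lambda v: dist.get(v, float("inf"))):
        if c not in after_loop or c in defs_after:
            continue
        if all(postdominates(succ, exit_node, c, a) for a in host_assignments):
            return c                      # closest surviving candidate (C2 holds)
    return None                           # no single-element boundary found
```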
## 6 Evaluation Our experiments were designed to answer three questions: * **RQ1:** Can slash identify the boundary location correctly for command-line and configuration-file programs? (SS6.1 _shows that slash can identify the presence/absence of the boundary for \(87.5\%\) of the programs in the field study and for \(85.7\%\) of the programs in a previously unseen dataset._) * **RQ2:** How expensive is slash in terms of running time and memory usage? (SS6.2 _reports that total analysis time on \(23\) apps is \(8.5\) minutes, with max. memory consumption of \(4.4\) GB._) * **RQ3:** Can slash alleviate the manual efforts required to use existing debloating tools? (SS6.3 _demonstrates that slash's integration with two debloating tools eliminates a user-annotation requirement without breaking their functionality._) The evaluations were carried out on an Ubuntu 16.04 PC with an Intel i7-5600U CPU @ 2.6GHz and 16 GB RAM. ### RQ1: Accuracy of slash The accuracy of slash is defined as its ability to determine whether the subject program contains an acceptable single-element boundary. A correct answer means that slash identified one of the suitable boundary locations, or correctly indicated that the program lacks any suitable location. We conducted our evaluation using two sets of programs: (i) the programs considered in the manual field study (SS3, Table 1), and (ii) the 21 programs listed in Table 4, which were neither involved in the manual field study nor examined to determine boundary properties. The latter set was introduced to provide an unbiased test of accuracy results. #### 6.1.1 Accuracy based on Manual-Field-Study Dataset For each program in the evaluation dataset, we measured the accuracy of slash in identifying the correct boundary location using the following methodology: 1. Manually annotate the target program's source code with the (single-element) boundary location that is closest to the entry point of main in terms of distance along control-flow-graph edges, and generate the LLVM IR bitcode. This information serves as ground truth.
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Program**} & \multirow{2}{*}{**kLOC**} & \multirow{2}{*}{**\#ptr**} & \multirow{2}{*}{**\#alloc**} & \multicolumn{2}{c|}{**Correct**} & \multicolumn{2}{c|}{**Analysis**} & \multirow{2}{*}{**Memory**} \\ & & & & **boundary** & & **Time** & **Memory** \\ \hline \hline \multicolumn{7}{|c|}{**End-user Programs**} \\ \hline curl-7.47.0 & 31.6 & 10228 & 785 & ✗ & 119.6 & 1526 \\ \hline date-8.32 & 56.9 & 7613 & 979 & ✓ & 26.9 & 698 \\ \hline diff-2.8 & 37.2 & 5331 & 842 & ✓ & 2.9 & 213 \\ \hline du-8.32 & 109.1 & 19168 & 2756 & ✓ & 3.5 & 299 \\ \hline echo-8.32 & 11.0 & 1687 & 305 & ✓ & 0.5 & 72 \\ \hline gzip-1.2.4 & 26.7 & 2952 & 420 & ✓ & 0.6 & 112 \\ \hline id-8.32 & 15.1 & 2368 & 418 & ✓ & 1.8 & 135 \\ \hline kill-8.32 & 12.1 & 1819 & 348 & ✓ & 1.7 & 130 \\ \hline objdump-2.33 & 1049.0 & 209651 & 21984 & ✓ & 37.7 & 1261 \\ \hline psql-15 & 189.1 & 37921 & 3798 & ✓ & 7.4 & 305 \\ \hline readelf-2.33 & 413.7 & 76842 & 8325 & ✓ & 4.5 & 400 \\ \hline sort-8.32 & 55.9 & 9902 & 1549 & ✓ & 4.0 & 254 \\ \hline tcpdump-4.10.0 & 608.6 & 152999 & 11945 & ✓ & 24.1 & 1431 \\ \hline uniq-8.32 & 14.8 & 2340 & 442 & ✓ & 1.9 & 136 \\ \hline wc-8.32 & 17.8 & 3000 & 509 & ✓ & 1.8 & 138 \\ \hline wget-1.17.1 & 165.5 & 30069 & 4036 & ✓ & 2.7 & 276 \\ \hline \hline \multicolumn{7}{|c|}{**Server Programs**} \\ \hline bind-9.15.8 & 1755.1 & 326408 & 41391 & ✓ & 228.8 & 4292 \\ \hline DNSProxy-1.17 & 3.4 & 583 & 79 & ✓ & 0.1 & 53 \\ \hline httpd-2.4.51 & 179.0 & 75032 & 6927 & ✗(✓) & 2.2 & 278 \\ \hline knockd-0.5 & 10.9 & 2062 & 177 & ✓ & 0.1 & 57 \\ \hline lighttpd-1.4.54 & 174.0 & 34932 & 3527 & ✓ & 1.7 & 195 \\ \hline mini-httpd1-1.19 & 16.4 & 2935 & 323 & ✓ & 1.9 & 147 \\ \hline Nginx-1.19.0 & 589.2 & 116710 & 9307 & ✓ & 32.7 & 1232 \\ \hline PostgreSQL-15 & 4626.3 & 880507 & 126401 & ✗ & 575.6 & 8313.7 \\ \hline \end{tabular} \end{table} Table 3: Results of slash’s evaluation. Columns 3 & 4 represent the number of pointers and allocation sites, respectively. Column 5 indicates whether slash could identify the boundary location correctly. (“✗ (✓)” means that slash did not succeed “out of the box,” but was successful when provided with suitable stubs for two library functions.) Column 6 specifies slash’s average running time in seconds, and column 7 indicates the maximum amount of memory usage in MB (both over 10 runs). 2. Pass an un-annotated LLVM IR bitcode of the same target program to slash, which annotates one basic block of the bitcode as the boundary. 3. Check whether the basic block identified by slash matches the ground truth. If the check passes, then slash is successful in identifying the boundary. slash accurately reports the existence/absence of an acceptable boundary for 21 out of 24 programs (87.5%) in Table 3. slash fails to report an accurate boundary location for curl, httpd, and PostgreSQL because of the following reasons: * _A Swiss-army-knife program_: as mentioned in SS3, PostgreSQL requires a multi-element boundary. While this category is not within the purview of the slash, we still ran slash on PostgreSQL. slash was able to correctly identify the entire set of elements of the multi-element boundary as boundary candidates, but returned the one nearest to the program entry point as per line 50 of Alg. 1. 
* _Definitions of two argument-parsing functions are unavailable_: httpd uses libapr (Apache Portable Runtime) (specifically apr_getopt_init \begin{table} \begin{tabular}{|c|c|r|r|r|r|r|} \hline **Repo (Program)** & **Category** & **Popularity** & **kLOC** & boundary Type & **slash Results** & **Accuracy** \\ \hline AFL\_plusplus(afl-fuzz) & Software Testing & 3.7k & 164.9 & Single & P & ✓ \\ \hline blurhash(blurhash,\_encoder) & Image Processing & 13.8k & 58.6 & Blended & A & ✓ \\ \hline Caffe(caffe) & Machine Learning & 33.5k & 17.2 & Blended & A & ✓ \\ \hline fish-shell(fish) & Utility & 21.8k & 696.3 & Single & P & ✓ \\ \hline coreutils(chown) & Utility & 3.5k & 3.37 & Single & P & ✓ \\ \hline coreutils(rm) & Utility & 3.5k & 3.4 & Single & P & ✓ \\ \hline GoAccess(goaccess) & Web & 16.4k & 189.7 & Single & P & ✓ \\ \hline hashcat(hashcat) & Utility & 17.4k & 978 & Single & P & ✓ \\ \hline jq(q) & Utility & 25.3k & 300.1 & Single & P & ✓ \\ \hline masscan(masscan) & network & 21.3k & 223 & Single & P & ✗ \\ \hline memcached(memcached) & Data & 12.6k & 73.1 & Single & P & ✓ \\ \hline \(n^{3}\)(nnn) & Data & 16.4k & 46.4 & Single & P & ✓ \\ \hline Redis(redis) & Data & 60k & 1437.4 & Single & OOM & OOM \\ \hline rethinkdb(rethinkdb) & Data & 26.2k & 839.1 & Multi-element & P & ✗ \\ \hline skynet(skynet) & Games & 12k & 629.7 & Blended & A & ✓ \\ \hline Tesseract(tesseract) & Image Processing & 51.9k & 2204.6 & Single & P & ✓ \\ \hline the silver searcher(ag) & Utility & 25k & 28.2 & Single & P & ✓ \\ \hline runux(tmux) & Utility & 29.5k & 593.6 & Single & P & ✓ \\ \hline trojan(trojan) & Network & 17.8k & 346.6 & Single & P & ✓ \\ \hline twemproxy(nutracker) & Network & 11.8k & 152.8 & Single & P & ✓ \\ \hline wrk(wrk) & Web & 34.5k & 1203.3 & Single & P & ✓ \\ \hline \end{tabular} \end{table} Table 4: Results of slash’s evaluation based on the previously unseen programs. Popularity is based on the number of stars. Column 5 reports the boundary type based on the manual inspection of the source code. Column 6 indicates the outcome of slash’s analysis: (P) means a boundary is present; (A) (for “absent”) indicates that no boundary is present. and apr_getopt) to parse command-line arguments. Only the declarations of these functions exist in the LLVM bitcode; their definitions are not available, which prevents slash from performing taint analysis--and thus from identifying the correct boundary location. However, when provided with suitable stubs--i.e., taint-analysis summaries that describe the dependencies of outputs on inputs in apr_getopt_init and apr_getopt--slash is able to identify the boundary location correctly. * _Configuration logic and main-computation phase in the **same** procedure called from main_: slash always places the boundary inside main, just after the call site that contains the code identified as the configuration logic. However, in curl both the configuration logic and the main-computation logic reside in the **same** callee of main. slash is unable to identify the boundary correctly because no location in main separates the configuration logic from the main-computation logic. #### 6.1.2 Accuracy based on Previously Unseen Programs This dataset contains programs that were not employed to deduce the boundary's traits (SS3). **Selection of Subject Programs.** This dataset was acquired by cloning starred C/C++ projects from GitHub with \(3k\) stars or more, which yielded 100 repositories. 
We then excluded repositories that (a) include other programming languages (such as Python, JavaScript, etc.), (b) incorporate GUI functionality, (c) did not contain an entry point (i.e., firmware), or (d) did not build successfully. This process yielded the 21 repositories listed in Table 4. Of these repositories, 18 possess a single-element boundary, one has a multi-element boundary, and two lack any boundary (see Table 2). These findings corroborate the outcomes of the field study (SS3). For each program in the unseen dataset, we assessed the accuracy of slash using the following approach: 1. Generate the LLVM IR bitcode (without any instrumentation) and analyze it using slash. 2. Manually examine the source code to compare with slash's outcome. slash's result is considered correct if (i) the boundary identified by slash aligns with the manually identified location, or (ii) slash does not detect any boundary and the manual inspection confirms the absence of an acceptable single-element boundary (i.e., the program has multi-element boundaries or no boundary). Table 4 presents the boundary-identification results for the unseen programs. For 18 out of the 21 programs, slash accurately reports the existence/absence of an acceptable (single-element) boundary. The three cases for which slash reported an inaccurate result were as follows: * _rethinkdb_: slash returns a single location in main, rather than a set of locations that constitute a multi-element boundary. * _masscan_: Although the identified boundary satisfies all boundary properties, the code before this boundary does not actually parse the configurations. Instead, it configures the program to report debug information in case it crashes. A single variable, is_backtrace, is initialized inside a loop that parses argv, whereas the rest of the configuration-hosting variables are parsed inside another loop inside the function masscan_command_line, which is called from main. slash does mark the latter location as a candidate boundary initially, but it is then eliminated because it is not the closest to the entry of main (line 50 of Alg. 1). * _Redis_: Despite running on a server with 192 GB of memory, slash's data-flow analysis phase exhausted memory. ### RQ2: Performance of slash We measured the analysis time and memory usage for each program in Table 3 (averaged over 10 runs) using the UNIX time tool, which provides the total analysis time and the peak memory usage that a process uses. As shown in Fig. 3, analysis times and memory consumption scale roughly linearly with lines of code. (In both plots, the outliers are curl and bind on the high side and readelf and objdump on the low side.)
Figure 3: Lines of code versus time (left) and memory (right).
slash's taint analysis is influenced by a program's characteristics, such as the number of pointer variables (both # of pointer variables declared and # of statements that use a pointer variable), stores/loads, indirect call sites, etc. [22, 23], which affect the number of data-flow facts that need to be propagated through the target program and are not strictly linked to the number of lines of code. Table 3 provides information about the number of pointers and allocation sites in each program. We calculated the Pearson Correlation Coefficient to establish the strength of the relationship between kLOC, number of pointers, allocation sites, analysis time, and memory usage.
The Correlation Coefficient in general was over 0.95, which indicates a positive linear relationship between these factors. ### RQ3: Effectiveness of slash We discuss three case studies to demonstrate the benefits of integrating slash in state-of-the-art software-debloating tools [2, 6] and configuration-error detection tools [26]. The case studies explain how the integration of slash with these tools alleviates the manual effort required by a developer, thus making the tools easier to use. These tasks involve combing through source code to track argc and argv usage, and potentially debugging complex programs to ensure the boundary is reachable from the program's entry point. #### 6.3.1 Integration with LMCAS LMCAS [2] is a debloating tool that applies partial evaluation to specialize a program to a particular run-time configuration. Currently, the LMCAS user is required to annotate the program to specify the boundary. We incorporated slash into the LMCAS pipeline: a program's LLVM IR bitcode is annotated by slash with the program's boundary location, and then passed to LMCAS. We evaluated our extension of LMCAS on the programs (obtained from the LMCAS dataset) listed in Table 5. Our aim was to understand the efficiency gained by integrating slash into LMCAS. Improved efficiency means (1) reducing or eliminating the burden on the LMCAS user of identifying a program's boundary location, and (2) making sure that automatic boundary identification does not affect LMCAS's ability to create a correctly working debloated program. Correct functionality can be validated by running the debloated programs with the supplied test inputs, omitting the flags specifying the features for which they have been debloated. We matched the output of the debloated program with that of the original program, which was supplied the appropriate feature flags and the same inputs. If the output is the same, the debloated program is considered to have preserved the functionality. We also check that the slash-annotated programs do not crash the LMCAS debloating pipeline. Table 5 reports the results of this experiment. slash reduces the analysis time of the boundary-identification step in LMCAS from minutes to a few seconds. It also eliminates human error due to manual analysis. Finally, slash + LMCAS preserves the functionality of debloated programs (and slash does not break the debloating pipeline of LMCAS). #### 6.3.2 Integration with Temporal Specialization The temporal-specialization tool [6] reduces the attack surface of a server program by disabling unneeded system calls. The system calls to disable are determined by splitting server programs into phases of initialization and serving: any system call never used in the serving phase is to be disabled once the (manually identified) "transition point" between the phases is reached. Clearly, a transition point is the same concept as a boundary, so slash can be applied to the problem of transition-point identification. The functions for the initialization and serving phases are called the _master_ and _worker_, respectively. Our evaluation tested whether \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{boundary ident. 
time} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} \\ \cline{2-5} Program & \begin{tabular}{c} LMCAS \\ (minutes)\({}^{a}\) \\ \end{tabular} & \begin{tabular}{c} slash \\ (seconds) \\ \end{tabular} & \begin{tabular}{c} Correct \\ boundary \\ \end{tabular} & \begin{tabular}{c} Functionality \\ Preserved \\ \end{tabular} \\ \hline chown & 5 - 10 & 0.6 & ✓ & ✓ \\ \hline date & 5 - 10 & 1.8 & ✓ & ✓ \\ \hline gzip & 5 - 10 & 0.3 & ✓ & ✓ \\ \hline rm & 5 - 10 & 0.7 & ✓ & ✓ \\ \hline sort & 5 - 10 & 1.4 & ✓ & ✓ \\ \hline uniq & 5 - 10 & 0.4 & ✓ & ✓ \\ \hline \end{tabular} * Reported by the LMCAS authors [2]. \end{table} Table 5: Effectiveness of slash in facilitating boundary-identification for LMCAS. the function calls representing each phase are correctly separated. We performed the following steps: (i) for each program in Table 6 (obtained from the temporal-specialization dataset), we ran slash to identify the boundary; (ii) as ground truth, we used the master and worker functions employed in the temporal-specialization evaluation: If the configuration logic identified by slash includes a call to the master function, and the main-computation logic identified by slash contains a call to the worker function, we considered slash to have identified an appropriate transition point. As shown in Table 6, slash identified an appropriate transition point in each example, except for Redis, where the analysis again ran out of memory (OOM). #### 6.3.3 Pcheck PCHECK is an analysis tool that aims to detect configuration errors. It generates configuration-checking code to be invoked after program initialization. PCHECK requires users to identify the boundary manually. We could not integrate slash with PCHECK because its implementation is unavailable. PCHECK's dataset includes three Java and three C/C++ programs, but the only boundaries defined in the paper are for Squid (C++) and HDFS (Java). Thus, we focused our attention on Squid (786k LOC). slash successfully identified Squid's boundary in 41 seconds. \begin{table} \begin{tabular}{|c|c|c|c|} \hline & Correct & Master func. in & Worker func. in \\ Program & boundary & config. logic & main logic \\ \hline Bind & ✓ & ✓ & ✓ \\ \hline Memcached & ✓ & ✓ & ✓ \\ \hline Nginx & ✓ & ✓ & ✓ \\ \hline Apache & ✓ & ✓ & ✓ \\ \hline Lighttpd & ✓ & ✓ & ✓ \\ \hline Redis & OOM & OOM & OOM \\ \hline \end{tabular} \end{table} Table 6: Effectiveness of slash in facilitating transition-point identification for the temporal-specialization tool. ## 7 Threats to Validity We outline threats to the validity of our approach, along with the applied mitigations: * **Scope of the study** (Internal & external validity). We investigated programs whose configurations are provided through command-line input or configuration files. Some of our findings may not generalize to other kinds of software, such as event-driven programs (e.g., Android programs). For the field study, we selected a diverse set of widely used, mature programs. However, to avoid bias, we evaluated slash using popular programs from GitHub that we had _not_ used to identify boundary properties. * **Robustness of boundary properties** (External validity). The properties used by slash to infer boundary locations were inferred from the 24 programs in Table 1. Moreover, the properties used by slash do not depend on heuristics like function/variable names and data types like int/string. slash also does not exploit special idioms that are used by some programmers for parsing programs' configurations. 
For instance, we observed that the GNU Coreutils programs use a particular idiom (i.e., the invocation of the function getopt_long inside a while-loop) for parsing command-line parameters. Instead, we decided to have slash rely on high-level structural properties that are driven by program-configuration semantics. The evaluation of 21 unseen programs validates the identified boundary properties in common programs and confirms slash's effectiveness in boundary identification. * **Incorrect propagation of data flows** (External Validity). Our taint analysis, discussed in SS5.1, is sound under practical assumptions, such as system and libc calls behaving as expected. It does not account for setjmp and longjmp usage or dynamically loaded code via dlopen/dlsym. If these assumptions are violated, the analysis becomes unsound. Finally, there is the question of the soundness of slash's results when run on the three different kinds of boundary cases. * **Single-element**: masscan shows that slash is fallible, and can return an incorrect answer when a single-element boundary exists (see SS6.1.2). The masscan result constitutes both a false positive and a false negative. * **Multi-element**: in our limited experience, slash identifies the locations of a multi-element boundary as (individual) candidates, but because slash returns just a singleton location in main, the answer returned is a false positive. * **Blended**: slash returns null, because the "blended" case involves violations of single-element-boundary properties. ## 8 Related Work **Multi-cut for Program Decomposition.** Multi-cut algorithms [4] have been used in several program-optimization methods [8, 15, 13]. Ma et al. [13] presented a vertex-cut framework on LLVM IR graphs to partition coarse-grained dataflow graphs into parallel clusters to improve performance of applications in multi-core systems. In our work, only a degenerate form of min-cut is used: the algorithm identifies the set of articulation points, each member of which constitutes a cut-set of size 1. However, slash's static taint analysis is an improvement on the data-dependency analysis used in [13], which relies on dynamically generated traces. **Tracing Program Configurations.** LOTRACK [12, 11] applies taint analysis to identify all code that is influenced by load-time configurations in Android and Java programs. In Android applications, the identification of a boundary appears to be less of a problem, because Android apps (essentially plugins with a specific lifecycle in the Android framework) usually have their configuration logic completed (typically inside onCreate and before onStart) by the time their main activity executes. Hence, this program point can serve as the boundary. This observation does not hold for regular Java programs, so we foresee that slash can be leveraged to solve the boundary-identification challenge in that context. Finally, LOTRACK relies on the assumption that configuration APIs are known; however, identifying such APIs can be cumbersome. slash does not require configuration APIs; taint analysis of argv is sufficient to identify configuration-hosting variables, including those set through APIs that read configuration files. ## 9 Conclusions and Future Work This paper presents an algorithm and tool, called slash, to statically identify programs' configuration logic.
Our evaluation on widely used C/C++ command-line and configuration-file programs confirmed the existence of a boundary and found that slash _automatically_ identified a suitable boundary for 87.5% of the programs. Finally, we demonstrated an application of slash to reduce the manual-annotation burden in software-debloating and error-detection tools. Additionally, we envision that slash can be used as a linting tool to alert developers that they have intertwined a program's configuration logic with its main-computation logic. Thus, slash supports ongoing initiatives [5] to promote configurability as a first-class programming concept. In future work, we would like to examine the existence of boundaries in GUI programs and event-driven programs. Furthermore, multi-cut algorithms could allow slash to handle Swiss-Army-knife cases.
2301.08034
Cooperative Artificial Neural Networks for Rate-Maximization in Optical Wireless Networks
Recently, Optical wireless communication (OWC) have been considered as a key element in the next generation of wireless communications due to its potential in supporting unprecedented communication speeds. In this paper, infrared lasers referred to as vertical-cavity surface-emitting lasers (VCSELs) are used as transmitters sending information to multiple users. In OWC, rate-maximization optimization problems are usually complex due to the high number of optical access points (APs) needed to ensure coverage. Therefore, practical solutions with low computational time are essential to cope with frequent updates in user-requirements that might occur. In this context, we formulate an optimization problem to determine the optimal user association and resource allocation in the network, while the serving time is partitioned into a series of time periods. Therefore, cooperative ANN models are designed to estimate and predict the association and resource allocation variables for each user such that sub-optimal solutions can be obtained within a certain period of time prior to its actual starting, which makes the solutions valid and in accordance with the demands of the users at a given time. The results show the effectiveness of the proposed model in maximizing the sum rate of the network compared with counterpart models. Moreover, ANN-based solutions are close to the optimal ones with low computational time.
Ahmad Adnan Qidan, Taisir El-Gorashi, Jaafar M. H. Elmirghani
2023-01-19T12:25:37Z
http://arxiv.org/abs/2301.08034v1
# Cooperative Artificial Neural Networks for Rate-Maximization in Optical Wireless Networks ###### Abstract Recently, Optical wireless communication (OWC) have been considered as a key element in the next generation of wireless communications due to its potential in supporting unprecedented communication speeds. In this paper, infrared lasers referred to as vertical-cavity surface-emitting lasers (VCSELs) are used as transmitters sending information to multiple users. In OWC, rate-maximization optimization problems are usually complex due to the high number of optical access points (APs) needed to ensure coverage. Therefore, practical solutions with low computational time are essential to cope with frequent updates in user-requirements that might occur. In this context, we formulate an optimization problem to determine the optimal user association and resource allocation in the network, while the serving time is partitioned into a series of time periods. Therefore, cooperative ANN models are designed to estimate and predict the association and resource allocation variables for each user such that sub-optimal solutions can be obtained within a certain period of time prior to its actual starting, which makes the solutions valid and in accordance with the demands of the users at a given time. The results show the effectiveness of the proposed model in maximizing the sum rate of the network compared with counterpart models. Moreover, ANN-based solutions are close to the optimal ones with low computational time. Optical wireless networks, machine learning, interference management, optimization ## I Introduction The evolution of Internet-based technologies in recent days has led to challenges in terms of traffic congestion and lack of resources and secrecy that current wireless networks have failed to support. Therefore, optical wireless communication (OWC) has attracted massive interest from scientific researchers to provide unprecedented communication speeds. Basically, OWC sends information modulated on the optical band, which offers huge license free-bandwidth and high spectral and energy efficiency. In [1], light-emitting diodes (LEDs) were used as transmitters providing data rates in gigabit-per-second (Gbps) communication speeds. Despite the characteristics of LEDs, the modulation speed is limited, and they are usually deployed for providing illumination, and therefore, increasing the number of transmitters must be in compliance with the recommended illumination levels in such indoor environments. Alternatively, infrared lasers such as vertical-cavity surface-emitting lasers (VCSELs) were used in [2] to serve users at Terabit-per-second (Tbps) aggregate data rates, which makes OWC as a strong candidate in the next generation of wireless communications. However, the transmit power of the VCSEL can be harmful to human eyes if it operates at high power levels without considering eye safety regulations. Optimization problems for rate-maximization were formulated in [3, 4] to enhance the spectral efficiency of OWC networks. In particular, a resource allocation approach was designed in [3] to guarantee high quality of service for users with different demands. In [4], centralized and decentralized algorithms were proposed to maximize the sum rate of the network under the capacity constraint of the optical AP. It is worth pointing out that optimization problems in the context of rate-maximization are usually defined as complex problems that are time consumers. 
Recently, machine learning (ML) techniques have been considered to provide practical solutions for NP-hard optimization problems. In [5], a deep learning algorithm was used for power allocation in massive multiple-input multiple-output (MIMO) to achieve relatively high spectral efficiency at low loss. In [6], an artificial neural network (ANN) model was trained for resource allocation-based rate maximization in OWC network. It is shown that a closed form solution to the optimum solution of exhaustive search can be achieved at low complexity. However, the use of ML techniques in optical or RF wireless networks is still under investigation especially in complex scenarios where decisions for example in rate-maximization must be made promptly. In contrast to the work in the literature, in this paper, we design two ANN models working in cooperation to maximize the sum rate of a discrete-time OWC network in which the serving time is partitioned into consecutive periods of time. First, a multi user OWC system model is defined where a transmission scheme referred to as blind interference alignment (BIA) is applied for multiple access services. Then, an optimization problem is formulated to find the optimum user-association and resource allocation during a certain period of time. The computational time of solving such complex optimization problems exceeds the time during which the optimum solution must be determined. Therefore, two ANN models are designed and trained to maximize the sum rate of the network during the intended period of time prior to its starting by exploiting the records of the network in the previous period of time and performing prediction. The results show the ability of the trained ANN models in providing accurate solutions close to the optimum ones. ## II System Model We consider a discrete-time downlink OWC network as shown in Fig. 1, where multiple optical APs given by \(L\), \(l=\{1,\ldots,L\}\), are deployed on the ceiling to serve multiple users given by \(K\), \(k=\{1,\ldots,K\}\), distributed on the communication floor. Note that, the VCSEL is used as a transmitter, and therefore, each optical AP consists of \(L_{v}\) VCSELs to extend its coverage area. On the user side, a reconfigurable optical detector with \(M\) photodiodes providing a wide field of view (FoV) [7] is used to ensure that each user has more than one optical link available at a given time. In this work, the serving time in the network is partitioned into a set of time periods given by \(\mathcal{T}\), where \(t=\{1,\ldots,t,t+1,\ldots\mathcal{T}\}\), and the duration of each time period is \(\tau\). In this context, the signal received by a generic user \(k\), \(k\in K\), connected to AP \(l\) during the period of time \(t+1\) can be expressed as \[y^{[l,k]}(t+1)=\mathbf{h}_{t+1}^{[l,k]}(m^{[l,k]})^{T}\mathbf{x}(t+1)+z^{[l,k] }(t+1), \tag{1}\] where \(m\in M\) is a photodiode of user \(k\), \(\mathbf{h}_{t+1}^{[l,k]}(m^{[l,k]})^{T}\in\mathbb{R}_{+}^{l\times 1}\), is the channel matrix, \(\mathbf{x}(t+1)\) is the transmitted signal, and \(z^{[l,k]}(t+1)\) is real valued additive white Gaussian noise with zero mean and variance given by the sum of shot noise, thermal noise and the intensity noise of the laser. In this work, all the optical APs are connected through a central unit (CU) to exchange essential information for solving optimization problems. 
It is worth mentioning that the distribution of the users is known at the central unite, while the channel state information (CSI) at the transmitters is limited to the channel coherence time due to the fact that BIA is implemented for interference management [7, 8]. ### _Transmitter_ The VCSEL transmitter has Gaussian beam profile with multiple modes. For lasers, the power distribution is determined based on the beam waist \(W_{0}\), the wavelength \(\lambda\) and the distance \(d\) between the transmitter and user. Basically, the beam radius of the VCSEL at photodiode \(m\) of user \(k\) located on the communication floor at distance \(d\) is given by \[W_{d}=W_{0}\left(1+\left(\frac{d}{d_{Ra}}\right)^{2}\right)^{1/2}, \tag{2}\] where \(d_{Ra}\) is the Rayleigh range. Moreover, the spatial distribution of the intensity of VCSEL transmitter \(l\) over the transverse plane at distance \(d\) is given by \[I_{l}(r,d)=\frac{2P_{t,l}}{\pi W_{d}^{2}}\exp\left(-\frac{2r^{2}}{W_{d}^{2}} \right). \tag{3}\] Finally, the power received by photodiode \(m\) of user \(k\) from transmitter \(l\) is given by \[\begin{split}& P_{m,l}=\\ &\int_{0}^{r_{m}}I(r,d)2\pi rdr=P_{t,l}\left[1-\exp\left(\frac{-2 r_{m}^{2}}{W_{d}^{2}}\right)\right],\end{split} \tag{4}\] where \(r_{m}\) is the radius of photodiode \(m\). Note that, \(A_{m}=\frac{A_{rec}}{M}\), \(m\in M\), is the detection area of photodiode \(m\), assuming \(A_{rec}\) is the whole detection area of the receiver. In (4), the location of user \(k\) is considered right under transmitter \(l\), more details on the power calculations of the laser are in [2]. ### _Blind interference alignment_ BIA is a transmission scheme proposed for RF and optical networks to manage multi-user interference with no CSI at the transmitters [7, 8], showing superiority over other transmit precoding schemes with CSI such as zero-forcing (ZF). Basically, the transmission block of BIA allocates multiple alignments block to each user following a unique methodology. For instance, an AP with \(L_{v}=2\) transmitters serving \(K=3\) users, one alignment block is allocated to each user as shown in Fig. 2. For the general case where an optical AP composed of \(L_{v}\) transmitters serving \(K\) users, BIA allocates \(\ell=\left\{1,\ldots,(L_{v}-1)^{K-1}\right\}\) alignment blocks to each user over a transmission block consisting of \((L_{v}-1)^{K}+K(L_{v}-1)^{K-1}\) time slots. In this context, user \(k\) receives the symbol \(\mathbf{u}_{\ell}^{[l,k]}\) from AP \(l\) during the \(\ell\)-th alignment block as follows \[\mathbf{y}^{[l,k]}=\mathbf{H}^{[l,k]}\mathbf{u}_{\ell}^{[l,k]}+\sum_{l^{ \prime}=1,l^{\prime}\neq l}^{L}\sqrt{\alpha_{l^{\prime}}^{[l,k]}}\mathbf{H}^{[ l^{\prime},k]}\mathbf{u}_{\ell}^{[l^{\prime},k]}+\mathbf{z}^{[l,k]}, \tag{5}\] where \(\mathbf{H}^{[l,c]}\) is the channel matrix of user \(k\). It is worth mentioning that user \(k\) is equipped with a reconfigurable detector that has the ability to provide \(L_{v}\) linearly independent channel responses, i.e., \[\mathbf{H}^{[l,k]}=\left[\mathbf{h}^{[l,k]}(1)\quad\mathbf{h}^{[l,k]}(2)\quad \ldots\quad\mathbf{h}^{[l,k]}(L_{v})\right]\in\mathbb{R}_{+}^{L_{v}\times 1}. 
\tag{6}\] In (5), \(\alpha_{l^{\prime}}^{[l,k]}\) is the signal-to-interference ratio (SIR) received at user \(k\) due to other APs \(l\neq l^{\prime}\), and \(\mathbf{u}_{\ell}^{[l^{\prime},k]}\) represents the interfering symbols received from the adjacent APs during the alignment block \(\ell\) over which the desired symbol \(\mathbf{u}_{\ell}^{[l,k]}\) is received. It is worth pointing out that frequency reuse is usually applied to avoid inter-cell interference so that the interfering symbol \(\mathbf{u}_{\ell}^{[l^{\prime},k]}\) can be treated as noise. Finally, \(\mathbf{z}^{[l,k]}\) is defined as noise resulting from interference subtraction, and it is given by a covariance matrix, i.e., \[\mathbf{R_{z_{p}}}=\begin{bmatrix}(K)\mathbf{I}_{L_{v}-1}&\mathbf{0}\\ \mathbf{0}&1\end{bmatrix}. \tag{7}\] According to [7], the BIA-based data rate received by user \(k\) from its corresponding APs during the period of time \((t+1)\) is expressed as \[\begin{split}& r^{[l,k]}(t+1)=\\ & B_{t+1}^{[l,k]}\mathbb{E}\left[\log\det\left(\mathbf{I}_{L_{v}}+P_{\mathrm{str}}\mathbf{H}_{t+1}^{[l,k]}\mathbf{I}^{[l,k]}\mathbf{R_{z}}^{-1}(t+1)\right)\right],\end{split} \tag{8}\] where \(B_{t+1}^{[l,k]}=\dfrac{(L_{v}-1)^{K-1}}{(L_{v}-1)^{K}+K(L_{v}-1)^{K-1}}=\dfrac{1}{K+L_{v}-1}\) is the ratio of the alignment blocks allocated to each user connected to AP \(l\) over the entire transmission block, \(P_{\mathrm{str}}\) is the power allocated to each stream and \[\mathbf{R_{z}}(t+1)=\mathbf{R_{z_{p}}}+P_{\mathrm{str}}\sum_{l^{\prime}=1}^{L}\alpha_{l^{\prime}}^{[l,k]}\mathbf{H}_{t+1}^{[l^{\prime},k]}\mathbf{H}^{[l^{\prime},k]}, \tag{9}\] is the covariance matrix of the noise plus interference received from other APs \(l\neq l^{\prime}\).
Fig. 1: An OWC system with \(L\) optical APs serving \(K\) users.
## III Problem Formulation We formulate an optimization problem in a discrete-time OWC system aiming to maximize the sum rate of the users by determining the optimum user assignment and resource allocation simultaneously. It is worth mentioning that data rate maximization in the network must be achieved during each period of time \(t\); otherwise, it cannot be considered a valid solution, due to the fact that user conditions are subject to changes in the next period of time. Focussing on the period of time \(t+1\), the utility function of sum rate maximization is given by \[U(x,e)=\sum_{k\in K}\varphi\left(\sum_{l\in L}x_{t+1}^{[l,k]}R^{[l,k]}(t+1)\right), \tag{10}\] where \(x_{t+1}^{[l,k]}\) is an assignment variable that determines the connectivity of user \(k\) to optical AP \(l\), where \(x_{t+1}^{[l,k]}=1\) if user \(k\) is assigned to AP \(l\) during the period of time \(t+1\), and otherwise it equals 0. Moreover, the actual data rate of user \(k\) during \(t+1\) is \(R^{[l,k]}(t+1)=e_{t+1}^{[l,k]}r^{[l,k]}(t+1)\), where \(e_{t+1}^{[l,k]}\), \(0\leq e_{t+1}^{[l,k]}\leq 1\), determines the resources devoted from AP \(l\) to serve user \(k\), and \(r^{[l,k]}(t+1)\) is the user rate given by equation (8).
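As a quick numerical illustration of the objective in (10), the sketch below (ours, with made-up values) only evaluates the proportional-fairness utility for a fixed assignment and resource split; it does not solve the optimization problem stated next.

```python
import numpy as np

L, K = 2, 3                               # 2 APs, 3 users (illustrative sizes)
r = np.array([[1.0, 0.8, 0.5],
              [0.4, 0.9, 1.2]])           # r[l, k]: achievable BIA rates (arbitrary units)
x = np.array([[1, 0, 0],
              [0, 1, 1]])                 # x[l, k]: each user served by exactly one AP
e = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.5, 0.5]])           # e[l, k]: share of AP l's resources given to user k
R = (x * e * r).sum(axis=0)               # actual per-user rates R[k] = sum_l x*e*r
U = np.log(R).sum()                       # proportional-fairness utility of Eq. (10)
print(R, U)
```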
The sum rate maximization during the period of time \(t+1\) can be obtained by solving the optimization problem as follows \[\begin{split}\mathbf{P1}:&\max_{x,e}\quad\sum_{k\in K }\varphi\left(\sum_{l\in L}x_{t+1}^{[l,k]}R^{[l,k]}(t+1)\right)\\ &\text{s.t.}\quad\sum_{l\in L}x_{t+1}^{[l,k]}=1,\qquad\qquad \forall k\in K\\ &\quad\sum_{k\in K}x_{t+1}^{[l,k]}R^{[l,k]}(t+1)\leq\rho_{l},\\ &\qquad\qquad\qquad\qquad\qquad\qquad\forall k\in K\\ & R_{min}\leq x_{t+1}^{[l,k]}R^{[l,k]}(t+1)\leq R_{max},\\ &\qquad\qquad\qquad\qquad\qquad\forall l\in L,k\in K\\ & x_{t+1}^{[l,k]}\in\{0,1\},l\in L,k\in K,\end{split} \tag{11}\] where \(\varphi(.)=\log(.)\) is a logarithmic function that achieves proportional fairness among users [9], and \(\rho_{l}\) is the capacity limitation of AP \(l\). The first constraint in (11) guarantees that each user is assigned to only one AP, while the second constraint ensures that each AP is not overloaded. Moreover, the achievable user rate must be within a certain range as in the third constraint where \(R_{min}\) is the minimum data rate required by a given user and \(R_{max}\) is the maximum data rate that user \(k\) can receive. It is worth mentioning that imposing the third constraint helps in minimizing the waste of the resources and guarantees high quality of service. Finally, the last constraint defines the feasible region of the optimization problem. The optimization problem in (11) is defined is as a mixed integer non-linear programming (MINLP) problem in which two variables, \(x_{t+1}^{[l,k]}\) and \(e_{t+1}^{[l,k]}\), are coupled. Interestingly, some of the deterministic algorithms can be used to solve such complex MINLP problems with high computational time. However, the application of these algorithms in real scenarios is not practical to solve optimization problems like in (11), where the optimal solutions must be determined within a certain period of time. One of the assumptions for relaxing the main optimization problem in (11) is to connect each user to more than one AP, which means that the association variable \(x_{t+1}^{[l,k]}\) equals to 1. In this context, the optimization problem can be rewritten as \[\begin{split}\mathbf{P2}:&\max_{e}\quad\sum_{k\in K }\varphi\left(\sum_{l\in L}e_{t+1}^{[l,k]}r^{[l,k]}(t+1)\right)\\ &\text{s.t.}\quad\sum_{k\in K}e_{t+1}^{[l,k]}r^{[l,k]}(t+1)\leq \rho_{l},\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \forall k\in K\\ & R_{min}\leq e_{t+1}^{[l,k]}r^{[l,k]}(t+1)\leq R_{max},\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\forall l\in L,k \in K\\ & 0\leq e_{t+1}^{[l,k]}\leq 1,l\in L,k\in K.\end{split} \tag{12}\] Fig. 2: Transmission block of BIA for a use case. Note that, considering our assumption of full connectivity, the variable \(x_{t+1}^{[l,k]}\) is eliminated. Interestingly, the optimization problem in (12) can be solved in a distributed manner on the AP and user sides using the full decomposition method via Lagrangian multipliers [10]. 
Thus, the Lagrangian function is \[f\left(e,\mu,\xi_{\max},\lambda_{\min}\right)=\sum_{k\in K}\sum_{l \in L}\varphi\left(e_{t+1}^{[l,k]}r^{[l,k]}(t+1)\right)+\\ \sum_{l\in L}\mu_{t+1}^{[l]}\left(\rho_{l}-\sum_{k\in K}R^{[l,k]} (t+1)\right)+\sum_{k\in K}\xi_{t+1,\max}^{[k]}\\ \left(R_{max}-\sum_{l\in\mathbf{L}}R^{[l,k]}(t+1)\right)+\sum_{k \in K}\lambda_{t+1,\min}^{[k]}\\ \left(\sum_{l\in L}R^{[l,k]}(t+1)-R_{min}\right), \tag{13}\] where \(\mu\), \(\xi_{\max}\) and \(\lambda_{\min}\) are the Lagrange multipliers according to the first and second constraints in (13), respectively. However, the assumption of users assigned to more than one AP is unrealistic in real time scenarios where users might not see more than one AP at a given time due to blockage. Therefore, focusing on resource allocation more than user association as in (12) can cause relatively high waste of resources due to the fact that an AP might allocate resources to users blocked from receiving its LoS link. In the following, an alternative solution is proposed using ANN models. ### _Artificial neural network_ Our aim in (11) is to provide optimal solutions during the period of time \(t+1\). Therefore, our ANN model must have the ability to exploit the solutions of the optimization problem in the previous period of time \(t\). Given that, the problem in hand can be defined as time series prediction, Focussing on the optimization problem in (11), calculating the network assignment vector \(\mathbf{X}_{t+1}\) involves high complexity. Therefore, having an ANN model that is able to perform prediction for the network assignment vector can considerably minimize the computational time, while valid sub-optimum solutions are obtained within a certain period of time. As in [6], we design a convolutional neural network (CNN) to estimate the network assignment vector denoted by \(\widehat{\mathbf{X}}_{t}\) during a given period of time \(t\) based on user-requirements sent to the whole set of the APs through uplink transmission. It is worth mentioning that the CNN model must be trained over a data set generated from solving the original optimization problem as in the following sub-section. For prediction, we consider the use of long-short-term-memory (LSTM) model classified as a recurrent neural network (RNN) [11], which is known to solve complex sequence problems through time. Once, the network assignment vector is estimated during the period of time \(t\), it is fed into the input layer of the LSTM model trained to predict the network assignment vector \(\widetilde{\mathbf{X}}_{t+1}\) during the next period of time \(t+1\) prior to its starting. Note that, resource allocation can be performed in accordance with the predicted network assignment vector to achieve data rate maximization during the intended period of time. ### _Offline phase_ We first train the CNN model over a dataset with \(N\) size within each period of time to determine an accurate set of weight terms that can perfectly map between the information sent to the input layer of the ANN model and its output layer. Note that, the CNN model aims to estimate the network assignment vector at a given time. For instance during period of time \(t\), the CNN model provides an estimated network assignment vector within the interval \([\widehat{\mathbf{X}}_{t-\tau+1},\widehat{\mathbf{X}}_{t-\tau+2},\ldots, \widehat{\mathbf{X}}_{t}]\), which then can be fed into the input layer of the LSTM model to predict the network assignment vector \(\widetilde{\mathbf{X}}_{t+1}\). 
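As a rough illustration of the estimation/prediction chain described above, the following PyTorch sketch couples a small CNN, which maps the uplink user-requirement reports to an estimated assignment vector \(\widehat{\mathbf{X}}_{t}\), with an LSTM that predicts \(\widetilde{\mathbf{X}}_{t+1}\) from the last \(\tau\) estimates. The input encoding, layer sizes, and dimensions are our own assumptions for illustration; they are not the architecture used in the simulations of Section IV.

```python
import torch
import torch.nn as nn

L, K, F, TAU = 8, 4, 6, 4          # APs, users, features per user report, history length (assumed)

class AssignmentCNN(nn.Module):
    """Estimates the L*K assignment vector X_hat_t from the users' uplink reports."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(F, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(32 * K, L * K)
    def forward(self, reports):                  # reports: (batch, F, K)
        h = self.conv(reports).flatten(1)
        return torch.sigmoid(self.head(h))       # soft assignment scores in [0, 1]

class AssignmentLSTM(nn.Module):
    """Predicts X_tilde_{t+1} from the last TAU estimated assignment vectors."""
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=L * K, hidden_size=128, batch_first=True)
        self.head = nn.Linear(128, L * K)
    def forward(self, x_history):                # x_history: (batch, TAU, L*K)
        out, _ = self.lstm(x_history)
        return torch.sigmoid(self.head(out[:, -1]))

cnn, lstm = AssignmentCNN(), AssignmentLSTM()
reports = torch.randn(16, F, K)                  # one batch of hypothetical uplink reports
x_hat_t = cnn(reports)                           # estimate for period t
history = torch.stack([x_hat_t] * TAU, dim=1)    # stand-in for [X_hat_{t-tau+1}, ..., X_hat_t]
x_tilde_next = lstm(history)                     # prediction for period t+1
print(x_tilde_next.shape)                        # torch.Size([16, 32])
```

In training, the CNN targets would come from solving the labeled assignment problem offline, and the LSTM would be fit on sequences of such estimates, as described next.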
In this context, the CNN model must be trained during the period of time \(t\) over data points generated from solving the following problem \[\begin{split}\mathbf{P3}:\quad&\max_{x}\quad\sum_{k\in K}\varphi\left(\sum_{l\in L}x_{t}^{[l,k]}\frac{r^{[l,k]}(t)}{\sum_{k\in K}x_{t}^{[l,k]}}\right)\\ &\;\text{s.t.}\quad\sum_{l\in L}x_{t}^{[l,k]}=1,\qquad\forall k\in K,\\ &\qquad\quad\;x_{t}^{[l,k]}\in\{0,1\},\qquad\forall l\in L,\,k\in K.\end{split} \tag{14}\] This optimization problem is a rewritten form of the problem in (11) under the assumption of uniform resource allocation, i.e., \(e_{t}^{[l,k]}=\frac{1}{K_{l}}\), where \(K_{l}=\sum_{k\in K}x_{t}^{[l,k]}\). This assumption is made because, once the estimation and prediction of the network assignment vector have been carried out using the CNN and LSTM models, respectively, resource allocation is performed at each optical AP to satisfy the requirements of the users as in subsection III-C. The optimization problem in (14) can be solved through brute-force search with a complexity that increases exponentially with the size of the network. Note that, since the dataset is generated in an offline phase, this complexity is not an issue. For the LSTM model, the dataset is generated over \(\mathcal{T}\) consecutive periods of time. Then, it is processed to train the LSTM model so as to determine a set of weight terms that can accurately predict the network assignment vector during a certain period of time. Interestingly, the training of the LSTM model for predicting \(\widetilde{\mathbf{X}}_{t+1}\) during \(t+1\) is carried out over data points included in the dataset during the previous time duration \(\tau\), i.e., \([\widehat{\mathbf{X}}_{t-\tau+1},\widehat{\mathbf{X}}_{t-\tau+2},\ldots,\widehat{\mathbf{X}}_{t}]\).

### _Online application_

After generating the dataset and training the ANN models in an offline phase, they are applied at the optical APs to perform instantaneous data rate maximization during a certain period of time \(t+1\) by finding the optimum user association and resource allocation. Basically, the users send their requirements to the optical APs at the beginning of the period of time \(t\) through uplink transmission. Subsequently, this information is injected into the trained CNN model to estimate the network assignment vector \(\widehat{\mathbf{X}}_{t}\) during the interval \([t-\tau+1,t-\tau+2,\ldots,t]\), which is then used as the input of the LSTM model trained to predict the network assignment vector \(\mathbf{\widetilde{X}}_{t+1}\) for the next period of time prior to its actual start. Once the network assignment variable \(x_{t+1}^{[l,k]}\) is predicted for each user \(k\) during \(t+1\), resource allocation is determined at each AP according to equation (13) as follows \[\begin{split}\mathcal{L}(e,\mu,\xi_{\max},\lambda_{\min})=&\sum_{k\in K_{l}}\varphi\left(e_{t+1}^{[l,k]}r^{[l,k]}(t+1)\right)-\mu_{t+1}^{[l]}\sum_{k\in K_{l}}e_{t+1}^{[l,k]}r^{[l,k]}(t+1)\\ &-\sum_{k\in K_{l}}\left(\xi_{t+1,\max}^{[k]}-\lambda_{t+1,\min}^{[k]}\right)e_{t+1}^{[l,k]}r^{[l,k]}(t+1).\end{split} \tag{15}\] The optimum resources allocated to user \(k\) associated with AP \(l\) during \(t+1\) are determined by taking the partial derivative of \(\mathcal{L}(e,\mu,\xi_{\max},\lambda_{\min})\) with respect to \(e_{t+1}^{[l,k]}\).
Therefore, the optimum value satisfies \[\frac{\partial\varphi\left(e_{t+1}^{[l,k]}r^{[l,k]}(t+1)\right)}{\partial e_{t+1}^{[l,k]}}=r^{[l,k]}(t+1)\left(\mu_{t+1}^{[l]}+\xi_{t+1,\max}^{[k]}-\lambda_{t+1,\min}^{[k]}\right), \tag{16}\] whenever this stationary point lies in \([0,1]\). Otherwise, since \(\dfrac{\partial\mathcal{L}\left(e\right)}{\partial e}\) is monotonically decreasing in \(e_{t+1}^{[l,k]}\), the condition \(\dfrac{\partial\mathcal{L}\left(e\right)}{\partial e}\Big{|}_{e_{t+1}^{[l,k]}=0}\leq 0\) implies that the optimum value is \(e_{t+1}^{*[l,k]}=0\), while \(\dfrac{\partial\mathcal{L}\left(e\right)}{\partial e}\Big{|}_{e_{t+1}^{[l,k]}=1}\geq 0\) implies that \(e_{t+1}^{*[l,k]}=1\). At this point, the gradient projection method is applied to solve the dual problem, and the Lagrangian multipliers in (15) are updated as follows \[\mu_{t+1}^{[l]}(i)=\left[\mu_{t+1}^{[l]}(i-1)-\Omega_{\mu}\left(\rho_{l}-\sum_{k\in K_{l}}R^{[l,k]}(t+1)\right)\right]^{+}, \tag{17}\] \[\xi_{t+1,\max}^{[k]}(i)=\left[\xi_{t+1}^{[k]}(i-1)-\Omega_{\xi}\left(R_{\max}-R^{[l,k]}(t+1)\right)\right]^{+}, \tag{18}\] \[\lambda_{t+1,\min}^{[k]}(i)=\left[\lambda_{t+1}^{[k]}(i-1)-\Omega_{\lambda}\left(R^{[l,k]}(t+1)-R_{\min}\right)\right]^{+}, \tag{19}\] where \(i\) denotes the iteration index of the gradient algorithm, \(\Omega_{\mu}\), \(\Omega_{\xi}\) and \(\Omega_{\lambda}\) are step sizes, and \([\,\cdot\,]^{+}\) denotes projection onto the nonnegative orthant. The Lagrangian variables act as coordination signals between the users and the APs to maximize the sum rate of the network, while ensuring that no AP is overloaded and that the users receive their demanded rates. Note that the resources are determined based on the predicted network assignment vector \(\mathbf{\widetilde{X}}_{t+1}\). Therefore, at the beginning of the period of time \(t+1\), each AP sets its link price according to (17), and the users update and broadcast their demands as in (18) and (19). These values remain fixed during the whole time interval, while the trained CNN estimates a new assignment vector to feed the LSTM model in order to predict \(\mathbf{\widetilde{X}}_{t+2}\) for the next period of time \(t+2\).

## IV Performance evaluations

We consider an indoor environment with dimensions 5 m \(\times\) 5 m \(\times\) 3 m, where \(L=8\) APs are deployed on the ceiling, each equipped with \(L_{v}\) transmitters. On the communication floor, located 2 m from the ceiling, \(K\) users are distributed randomly with different activities. Note that each user \(k\) is equipped with a reconfigurable detector that gives it the opportunity to connect to most of the APs; more details can be found in [4]. All the other simulation parameters are listed in Table 1. The accuracy of the trained ANN model, i.e., the LSTM, in performing prediction is depicted in Fig. 3 in terms of the mean square error (MSE) versus the number of epochs. It can be seen that the training and validation losses decrease with the number of epochs regardless of the dataset size considered, since the optimal weights needed to perform the underlying calculations are learned over time. However, increasing the dataset size from 5000 to \(10^{4}\) further reduces both the validation and training errors. It is worth noticing that the MSE increases if more periods of time (\(\mathcal{T}=5\)) are assumed for the same dataset size, which is due to an increase in the prediction error. This issue can be avoided by training the ANN model over a larger dataset with more than \(10^{4}\) data points.

Fig. 3: The performance of the ANN model trained for prediction.
The figure also shows that our ANN model is not overfitting and can produce accurate solutions in the online application, where unexpected scenarios are more likely to occur. In Fig. 4, the sum rate of the network is shown against different values of the beam waist \(W_{0}\), which is known to be a vital parameter in laser-based OWC that influences the power received at the user end. It is shown that the sum rate of the users increases with the beam waist, due to the fact that more transmit power is directed towards the users and less interference is received from the neighboring APs. Interestingly, our cooperative ANN models provide accurate solutions close to the optimal ones, which otherwise involve high computational time. Note that solving the optimization problem in (12) results in lower sum rates compared to our ANN-based solutions, which is expected due to the assumption of full connectivity, i.e., \(x_{t+1}^{[l,k]}=1\), which in turn leads to wasted resources. Moreover, the proposed models show superiority over the conventional scheme proposed in [7], in which each AP serves users located within a distance that determines whether the received signal is useful or noise; therefore, users are served regardless of their demands, the available resources and the capacity limitations of the APs. Fig. 5 shows the sum rate of the network versus a range of SNR values using the trained ANN models. It can be seen that determining the optimal user assignment and resource allocation using the ANN models results in higher sum rates compared to the scenarios of full connectivity and distance-based user association. This is because, in our model, each user is assigned to an AP that has enough resources to satisfy its demands and to positively impact the sum rate of the network. Interestingly, as in [7], BIA achieves a higher sum rate than ZF due to its ability to serve multiple users simultaneously with no CSI, while the performance of ZF is dictated by the need for CSI.

## V Conclusions

In this paper, sum rate maximization is addressed in a discrete-time laser-based OWC system. We first define the system model, consisting of multiple APs serving users distributed on the receiving plane. Then, the user rate is derived considering the application of BIA, which manages multi-user interference without the need for CSI at the transmitters. Moreover, an optimization problem is formulated to maximize the sum rate of the network during a certain period of time. Finally, CNN and LSTM models are designed and trained to provide instantaneous solutions during the validity of each period of time. The results show that solving the formulated model achieves higher sum rates compared to other benchmark models, and that the trained ANN models are able to obtain accurate and valid solutions close to the optimal ones.
2302.11035
Color-avoiding connected spanning subgraphs with minimum number of edges
We call a (not necessarily properly) edge-colored graph edge-color-avoiding connected if after the removal of edges of any single color, the graph remains connected. For vertex-colored graphs, similar definitions of color-avoiding connectivity can be given. In this article, we investigate the problem of determining the maximum number of edges that can be removed from a color-avoiding connected graph so that it remains color-avoiding connected. First, we prove that this problem is NP-hard, then we give a polynomial-time approximation algorithm for it. To analyze the approximation factor of this algorithm, we determine the minimum number of edges of color-avoiding connected graphs on a given number of vertices and with a given number of colors. Furthermore, we also consider a generalization of edge-color-avoiding connectivity to matroids.
József Pintér, Kitti Varga
2023-02-21T22:32:01Z
http://arxiv.org/abs/2302.11035v2
# Color-avoiding connected spanning subgraphs with minimum number of edges ###### Abstract We call a (not necessarily properly) edge-colored graph edge-color-avoiding connected if after the removal of edges of any single color, the graph remains connected. For vertex-colored graphs, similar definitions of color-avoiding connectivity can be given. In this article, we investigate the problem of determining the maximum number of edges that can be removed from a color-avoiding connected graph so that it remains color-avoiding connected. First, we prove that this problem is NP-hard, then we give a polynomial-time approximation algorithm for it. To analyze the approximation factor of this algorithm, we determine the minimum number of edges of color-avoiding connected graphs on a given number of vertices and with a given number of colors. Furthermore, we also consider a generalization of edge-color-avoiding connectivity to matroids. 1 Footnote 1: Department of Stochastics, Institute of Mathematics, Budapest University of Technology and Economics. 2 Footnote 2: ELKH-BME Stochastics Research Group. 3 Footnote 3: Department of Computer Science and Information Theory, Faculty of Electrical Engineering and Informatics, Budapest University of Technology and Economics. 4 Footnote 4: ELKH-ELTE Egerry Research Group. **Keywords: approximation algorithms, color-avoiding connectivity, complexity, matroids, spanning subgraphs** ## 1 Introduction The robustness of networks against random errors and targeted attacks has attracted a great deal of research interest. The robustness of a network refers to its capacity to maintain some degree of connectivity after the removal of some edges or vertices of the network. Although the standard frameworks of error or attack tolerance in complex networks can be really useful in industrial practices, we can develop a more efficient framework if we take into account that some parts of the network might share some vulnerabilities. A characteristic example is the case of public transport networks, where the edges of the underlying graph are colored according to the mode of transportation such as rail, road, ship or air transport. Experience shows that excessive snowing usually has a greater impact on the railway than on underground transportation. In extreme cases, these weather conditions might even paralyze the whole railway traffic, which we can think of (from a network theoretical point of view) that all the edges corresponding to railway transportation disappear from the network. Thus it is useful to know which vertices in the network are available from each other without using any edges corresponding to the railway transportation. In this manner, we can consider the network reliable if even after the elimination of any single mode of transportation, the whole or a significant part of the network remains connected. Another example is the case of communication networks where the vertices represent routers, which are colored according to which country the corresponding router is registered to. If, for safety reasons, we want to ensure that no country can intercept our message, then we need multiple paths in the network between the sender and the receiver such that each country is avoided in at least one of these paths, and send our message divided into many parts through these paths. These concepts were introduced as color-avoiding connectivity, first for vertex-colored graphs by Krause et al. 
[10] in 2016 with the motivation to develop a framework which can treat the heterogeneity of multiple vulnerable classes of vertices, and they demonstrated how this can be exploited to maintain functionality of a complex network by utilizing multiple paths, mostly on communication networks. Krause et al. extended this original theory in [11]. They analyzed how the color frequencies affect the robustness of the networks. For unequal color frequencies, they found that the colors with the largest frequencies control vastly the robustness of the network, and colors of small frequency only play a little role. In [8], color-avoiding connectivity was further extended from vertex-colored graphs to edge-colored ones. Giusfredi and Bagnoli investigated color-avoiding percolation in diluted lattices [6] and also showed that color-avoiding connectivity can be formulated as a self-organized critical problem, in which the asymptotic phase space can be obtained in one simulation [5]. Rath et al. [18] investigated the color-avoiding bond percolation of edge-colored Erdos-Renyi random graphs. They analyzed the fraction of vertices contained in the giant edge-color-avoiding connected component and proved that its limit can be expressed in terms of probabilities associated to edge-colored branching process trees. The work [14] of Lichev and Schapira includes some simplification and generalization of these results as well as some finer results on the size of the largest edge-color-avoiding connected component. Lichev also described the phase transition of the largest edge-color-avoiding connected component between the supercritical and the intermediate regime [13]. Molontay and Varga [15] investigated the computational complexity of finding the color-avoiding connected components of a graph. They also generalized the concept of color-avoiding connectivity by making the vertices or edges more vulnerable by assigning a list of colors to them. A similar concept called courteous edge-coloring was studied by DeVos et al. [2] in 2006. Graphs with 1-courteous edge-colorings are exactly the edge-color-avoiding connected graphs. In that article, they gave interesting upper bounds on the number of colors needed to courteously color an arbitrary graph. When operating a network, we might want to reduce the maintenance cost while retaining some desired properties of the network. In this work, we investigate the problem of finding color-avoiding connected spanning subgraphs with minimum number of edges in edge- or vertex-colored graphs. We also consider a generalization of edge-color-avoiding connectivity to matroids and investigate a similar problem. First, we prove that all these problems are NP-hard, then we present polynomial-time approximation algorithms for them. ## 2 Color-avoiding connected graphs and courteously colored matroids In this article, we study color-avoiding connected graphs and courteously colored matroids. First, we recall some important definitions and notation. The set of positive integers is denoted by \(\mathbb{Z}_{+}\). For two sets \(X\) and \(Y\), the _set difference_ of \(X\) and \(Y\) is denoted by \(X-Y\). Given a graph \(G=(V,E)\) and a subset of edges \(E^{\prime}\subseteq E\), let \(G-E^{\prime}\) denote the graph that is obtained from \(G\) by deleting the edges of \(E^{\prime}\) from it. If \(E^{\prime}=\{e\}\) for some edge \(e\) of \(G\), then \(G-\{e\}\) is abbreviated by \(G-e\). 
A graph is called _\(k\)-edge-connected_ if it remains connected whenever fewer than \(k\) edges are removed. Similarly, a graph is called _\(k\)-vertex-connected_ if it has more than \(k\) vertices and remains connected whenever fewer than \(k\) vertices are removed. A _matroid_\(\mathcal{M}=(S,\mathcal{I})\) is a pair formed by a finite (possibly empty) _ground set_\(S\) and a family of subsets \(\mathcal{I}\subseteq 2^{S}\) called _independent sets_ satisfying the _independence axioms_: 1. \(\emptyset\in\mathcal{I}\), * for any \(X,Y\subseteq S\) with \(X\subseteq Y\), if \(Y\in\mathcal{I}\), then \(X\in\mathcal{I}\), * for any \(X,Y\in\mathcal{I}\) with \(|X|<|Y|\), there exists \(e\in Y-X\) such that \(X\cup\{e\}\in\mathcal{I}\). The maximal independent subsets of \(S\) are called _bases_. The _rank_ of a set \(X\subseteq S\) in the matroid, denoted by \(r(X)\), is the maximum size of an independent subset of \(X\). The _rank of the matroid_ is the rank of its ground set. If \(\mathcal{M}\) is a matroid on the ground set \(S\) and \(T\subseteq S\), then the _restriction_ of \(\mathcal{M}\) to \(T\), or in other words, the _deletion_ of \(S-T\) from \(\mathcal{M}\), is the matroid \(\mathcal{M}\big{|}_{T}:=(T,\mathcal{I}^{\prime})\), or also denoted by \(\mathcal{M}\setminus(S-T):=(T,\mathcal{I}^{\prime})\), where \(\mathcal{I}^{\prime}:=\{X\subseteq T\mid X\in\mathcal{I}\}\). Furthermore, if the rank of \(\mathcal{M}\big{|}_{T}\) equals that of \(\mathcal{M}\), i.e. \(r(T)=r(S)\), then we say that \(\mathcal{M}\big{|}_{T}\) is a _rank-preserving restriction_ of \(\mathcal{M}\). A _coloring_ of a matroid is an arbitrary assignment of colors to the elements of its ground set. A _graphic matroid_ is a matroid whose independent sets can be represented as the edge sets of forests of a graph. Since the number of independent sets can be exponential in the size of the ground set, the usual requirement for a matroid algorithm to be polynomial is to be polynomial in the size of the ground set and not in the size of the input matroid. For this, it is assumed that the input matroid is given by an _oracle_ - in our case, by an _independence oracle_, and with an independence oracle call we can determine whether a subset of the ground set is independent in the matroid - and when analyzing the complexity of the matroid algorithm, the oracle calls are counted as single steps. Now we present the definitions of color-avoiding connectivity in edge- and vertex-colored graphs and a generalization of edge-color-avoiding connectivity to matroids. **Definition 1**.: We say that a (not necessarily properly) edge-colored graph \(G\) is _edge-color-avoiding connected_ if after the removal of the edges of any single color from \(G\), the remaining graph is connected. For two small examples on the definition of edge-color-avoiding connectivity, see Figure 1. The following lemma directly follows from the definition. **Lemma 2**.: Let \(G\) be an edge-colored graph in which every edge has a different color. Then \(G\) is edge-color-avoiding connected if and only if it is \(2\)-edge-connected. Clearly, an edge-colored graph is edge-color-avoiding connected if and only if after the removal of the edges of any single color, there exists a spanning tree in the remaining graph. This motivates the introduction of the following definition for matroids, which we call, after DeVos et al. [2], courteously colored matroids. **Definition 3**.: Let \(\mathcal{M}\) be a matroid, whose ground set is colored. 
We say that \(\mathcal{M}\) is a courteously colored matroid if after the deletion of the elements of any single color from \(\mathcal{M}\), the rank of the matroid does not change. Thus a matroid is courteously colored if and only if after the deletion of the elements of any single color from the ground set, at least one basis remains intact. In particular, a graphic matroid is courteously colored if and only if each component of the corresponding graph is edge-color-avoiding connected. Another simple example is the case of uniform matroids. The ground set of a uniform matroid \(U_{n,k}\) is of size \(n\), and its independent sets are those subsets of the ground set whose cardinality is at most \(k\) for some integer \(0\leq k\leq n\). We note that \(U_{n,k}\) is graphic if and only if \(k\in\{0,1,n-1,n\}\). It is not difficult to see that the Figure 1: An example for an edge-color-avoiding connected graph (left) – after the removal of edges of any single color, there remains a Hamiltonian path –, and an example for a not edge-color-avoiding connected graph (right) – after the removal of the blue (denoted by squares) edges, the bottom right vertex becomes isolated. uniform matroid \(U_{n,k}\) is courteously colored if and only if there exists no color which is assigned to at least \(n-k+1\) elements. Now we present two definitions of color-avoiding connectivity for (not necessarily properly) vertex-colored graphs describing slightly different phenomena. **Definition 4**.: We say that two vertices \(u\) and \(v\) of a (not necessarily properly) vertex-colored graph \(G\) are _vertex-\(c\)-avoiding connected_ for some color \(c\) if there exists a \(u\)-\(v\) path, and either at least one of \(u\) and \(v\) is of color \(c\), or at least one \(u\)-\(v\) path does not contain a vertex of color \(c\). If any two vertices of \(G\) are vertex-\(c\)-avoiding connected for any color \(c\), then we say that \(G\) is _vertex-color-avoiding connected_. **Definition 5**.: We say that two vertices \(u\) and \(v\) of a (not necessarily properly) vertex-colored graph \(G\) are _internally vertex-\(c\)-avoiding connected_ for some color \(c\) if there exists a \(u\)-\(v\) path containing no internal vertices of color \(c\). If any two vertices of \(G\) are internally vertex-\(c\)-avoiding connected for any color \(c\), then we say that \(G\) is _internally vertex-color-avoiding connected_. Note that if a graph is internally vertex-color-avoiding connected, then it is vertex-color-avoiding connected as well, but not every vertex-color-avoiding connected graph is internally vertex-color-avoiding connected. For some small examples on the definitions of vertex- and internally vertex-color-avoiding connectivity, see Figure 2. Although vertex-color-avoiding connectivity does not imply internally vertex-color-avoiding connectivity in general, the following lemma shows that this implication holds for those graphs in which every vertex has a different color. The proof of this lemma directly follows from the definitions. **Lemma 6**.: Let \(G\) be a vertex-colored graph in which every vertex has a different color. Then the following are equivalent. 1. The graph \(G\) is vertex-color-avoiding connected. 2. The graph \(G\) is internally vertex-color-avoiding connected. 3. The graph \(G\) is 2-vertex-connected. For convenience, let us introduce the following notation. 
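The definitions above translate directly into simple brute-force checks. The following Python sketch, using the networkx package, tests edge-color-avoiding connectivity (Definition 1) and internally vertex-color-avoiding connectivity (Definition 5); it is meant only as a readable illustration of the definitions, the small example graphs and colorings are our own, and no attempt is made at efficiency.

```python
import networkx as nx

def edge_color_avoiding_connected(G, edge_color):
    """G: nx.Graph; edge_color: dict mapping each edge (u, v) to a color."""
    for c in set(edge_color.values()):
        H = nx.Graph()
        H.add_nodes_from(G.nodes)
        H.add_edges_from(e for e in G.edges
                         if edge_color.get(e, edge_color.get((e[1], e[0]))) != c)
        if not nx.is_connected(H):
            return False
    return True

def internally_vertex_color_avoiding_connected(G, vertex_color):
    """Checks Definition 5 pair by pair: a u-v path with no internal vertex of color c."""
    nodes = list(G.nodes)
    for c in set(vertex_color.values()):
        for i, u in enumerate(nodes):
            for v in nodes[i + 1:]:
                # keep u, v and every vertex whose color is not c
                keep = {w for w in G.nodes if vertex_color[w] != c} | {u, v}
                if not nx.has_path(G.subgraph(keep), u, v):
                    return False
    return True

# Tiny illustrative examples (our own choices of graphs and colorings).
C4 = nx.cycle_graph(4)
ec = {(0, 1): 'red', (1, 2): 'blue', (2, 3): 'green', (3, 0): 'yellow'}
print(edge_color_avoiding_connected(C4, ec))                    # True

K3 = nx.complete_graph(3)
vc = {0: 'red', 1: 'blue', 2: 'green'}
print(internally_vertex_color_avoiding_connected(K3, vc))       # True
```

Both outputs are True here: the 4-cycle with four distinct edge colors is 2-edge-connected, as Lemma 2 requires, and in a triangle every pair of vertices is adjacent, so no internal vertices are needed on any path.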
**Notation 7**.: Given an edge- or a vertex-colored graph \(G\) and a color \(c\), we denote by \(G_{\overline{c}}\) the graph which can be obtained from \(G\) by removing the edges or the vertices of color \(c\) from it. Given a matroid \(\mathcal{M}\) whose ground set is colored and given a color \(c\), we denote by \(\mathcal{M}_{\overline{c}}\) the matroid which can be obtained from \(\mathcal{M}\) by deleting the elements of color \(c\) from it. Figure 2: An example for a vertex- and internally vertex-color-avoiding connected graph (left) – after the removal of vertices of any single color, there remains a Hamiltonian path, thus for any two vertices, there exists a path containing no internal vertices of the removed color –, and an example for a vertex- but not internally vertex-color-avoiding connected graph (middle) – there exists no path between the bottom left and the top right vertices which avoids internal vertices of color blue (denoted by squares), thus these two vertices are not internally vertex-blue-avoiding connected, however, they are vertex-blue-avoiding connected since the color of the top right vertex is blue. Finally, an example of a graph which is neither vertex- nor internally vertex-color-avoiding connected (right) – the red (denoted by triangle) and green (denoted by rhombus) vertices are neither vertex- nor internally vertex-blue-avoiding connected. Main results In this section, we study the problems of finding edge-, vertex- and internally vertex-color-avoiding connected spanning subgraphs with minimum number of edges in edge- or vertex-colored graphs, and also finding courteously colored rank-preserving restrictions to a set of minimum size in courteously colored matroids. ### Complexity It is not difficult to see that an edge- or vertex-colored graph \(G\) has a color-avoiding connected spanning subgraph if and only if \(G\) is color-avoiding connected, and similarly, a colored matroid \(\mathcal{M}\) has a courteously colored rank-preserving restriction if and only if \(\mathcal{M}\) is courteously colored. **Theorem 8**.: Given a matroid \(\mathcal{M}=(S,\mathcal{I})\) whose ground set is colored and given a positive integer \(m\), it is NP-complete to decide whether \(\mathcal{M}\) has a courteously colored rank-preserving restriction to a subset \(T\subseteq S\) of size at most \(m\). Furthermore, this problem remains NP-complete even for graphic matroids. Proof.: The problem is clearly in NP. Now we show that the problem is NP-hard even for graphic matroids. As we observed earlier, a graphic matroid is courteously colored if and only if each component of the corresponding graph is edge-color-avoiding connected. Lemma 2 implies that for those connected graphs in which every edge has a different color and for the choice of \(m=\left|V(G)\right|\), our problem is equivalent to deciding whether the graph contains a Hamiltonian cycle, which is known to be NP-complete [4]. **Corollary 9**.: Given an edge-colored graph \(G=(V,E)\) and a positive integer \(m\), it is NP-complete to decide whether \(G\) has an edge-color-avoiding connected spanning subgraph with at most \(m\) edges. Now we prove an analogue of Corollary 9 for the concepts of vertex- and internally vertex-color-avoiding connectivity. **Theorem 10**.: Given a vertex-colored graph \(G\) and a positive integer \(m\), it is NP-complete to decide whether \(G\) has a vertex-color-avoiding connected spanning subgraph with at most \(m\) edges. 
Similarly, it is also NP-complete to decide whether \(G\) has an internally vertex-color-avoiding connected spanning subgraph with at most \(m\) edges. Proof.: Both problems are clearly in NP. Now we show that they are NP-hard. If every vertex has a different color, then by Lemma 6, for the choice of \(m=\left|V(G)\right|\), both problems are equivalent to deciding whether the graph contains a Hamiltonian cycle, which is known to be NP-complete [4]. As is clear from the proof of Theorems 8 and 10, in the case of those connected graphs whose edges or vertices are all of different colors, we want to find a 2-edge- or a 2-vertex-connected spanning subgraph with minimum number of edges, respectively, which are NP-hard problems. However, there exist polynomial-time approximation algorithms for these problems. Khuller and Vishkin [9] provided a 3/2-approximation algorithm for 2-edge-connectivity and a 5/3-approximation algorithm for 2-vertex-connectivity: they modified the depth-first search algorithm so that it does not just find a spanning tree, but a minimally 2-edge- or 2-vertex-connected1 spanning subgraph. Gabow et al. [3] presented a \(\left(1+\frac{2}{k}\right)\)-approximation algorithm for finding a \(k\)-edge-connected spanning subgraph with minimum number of edges with the use of linear programming. Currently, the best known approximation factor for finding a 2-edge-connected spanning subgraph with minimum number of edges is \(\frac{4}{3}\) by Hunkenschroder et al. [7], and for finding 2-vertex-connected spanning subgraphs with minimum number of edges, Cheriyan and Thurimella [1] gave a \(\frac{3}{2}\)-approximation algorithm, but there exist better performing algorithms if the input graph satisfies some additional conditions - for example, see [12, 16]. ### Approximation algorithm for courteosly colored matroids In the following, we present a polynomial-time approximation algorithm for finding a courteously colored rank-preserving restriction of a matroid to a set of minimum size. To shorten the description of the algorithm, let us define the following subroutine. ``` 0: a matroid \(\mathcal{M}=(S,\mathcal{I})\) and a subset \(T\subseteq S\). 0: a set \(T^{\prime}\subseteq S\) for which \(r(T^{\prime})=r(S)\) and \(T\subseteq T^{\prime}\). \(T^{\prime}\gets T\) for\(s\in S\)do if\(r(T^{\prime})<r\big{(}T^{\prime}\cup\{s\}\big{)}\)then \(T^{\prime}\gets T^{\prime}\cup\{s\}\) return\(T^{\prime}\) ``` **Subroutine** IncreaseRank **Input:** a matroid \(\mathcal{M}=(S,\mathcal{I})\) and a subset \(T\subseteq S\). **Output:** a set \(T^{\prime}\subseteq S\) for which \(r(T^{\prime})=r(S)\) and \(T\subseteq T^{\prime}\). \(T^{\prime}\gets T\) for\(s\in S\)do if\(r(T^{\prime})<r\big{(}T^{\prime}\cup\{s\}\big{)}\)then \(T^{\prime}\gets T^{\prime}\cup\{s\}\) return\(T^{\prime}\) ``` **Algorithm 1** Finding courteously colored rank-preserving restrictions Now we are ready to present Algorithm 1. ``` 0: a courteously colored matroid \(\mathcal{M}=(S,\mathcal{I})\) with \(S\neq\emptyset\), colored with a color set \(C\). 
\(T\leftarrow\) a basis of \(\mathcal{M}\) for\(c\in C\)do \(T_{\overline{c}}\leftarrow\{s\in T\mid\text{$s$ is not of color $c$}\}\) if\(r(T_{\overline{c}})<r(S)\)then \(T\gets T\cup\text{IncreaseRank}(\mathcal{M}_{\overline{c}},T_{\overline{c}})\) for\(s\in T\)do if\(\big{(}\mathcal{M}\big{|}_{T}\big{)}\setminus\{s\}\) is a courteously colored rank-preserving restriction of \(\mathcal{M}\)then \(T\gets T-\{s\}\) return\(\mathcal{M}\big{|}_{T}\) ``` **Algorithm 2** Finding courteously colored rank-preserving restriction **Remark 11**.: Note that if the rank of the input matroid \(\mathcal{M}\) is zero, then for any color \(c\), the rank of \(\mathcal{M}_{\overline{c}}\) is also zero and the only basis of \(\mathcal{M}\) is the empty set. Thus the algorithm selects the empty set at the first step and does not add any elements to it later, so the output is \(T=\emptyset\). This is clearly the optimal solution. To analyze the case when the rank of the input matroid is at least one, first we prove the following theorem. **Theorem 12**.: Let \(\mathcal{M}=(S,\mathcal{I})\) be a courteously colored matroid colored with exactly \(k\in\mathbb{Z}_{+}\) colors, and let \(r=r(S)\). If \(r\geq 1\), then \(k\geq 2\) and \(|S|\geq\left\lceil\frac{k\cdot r}{k-1}\right\rceil\), and these lower bounds are tight. Proof.: Suppose to the contrary that there exists a courteously colored matroid \(\mathcal{M}\) colored with \(k=1\) color \(c\) and with rank \(r\geq 1\). Then by the definition of courteous colorings, \(\mathcal{M}_{\overline{c}}\) has rank \(r\). On the other hand, the ground set of \(\mathcal{M}_{\overline{c}}\) is the empty set, which is a contradiction. Now consider the case \(k\geq 2\). Let \(\mathcal{M}\) be a courteously colored matroid with rank \(r\), where the elements of the ground set are colored with exactly \(k\) colors. Then for any of these \(k\) colors \(c\), the matroid \(\mathcal{M}_{\overline{c}}\) has rank \(r\), thus its ground set has at least \(r\) elements. That sums up to at least \(k\cdot r\) elements, where every element is counted exactly \(k-1\) times, so the number of elements is at least \(\left\lceil\frac{k\cdot r}{k-1}\right\rceil\). To show that these lower bounds are tight, we construct courteously colored graphic matroids of rank \(r\) on \(\left\lceil\frac{k\cdot r}{k-1}\right\rceil\) elements which are colored with exactly \(k\) colors for any \(k\geq 2\). More precisely, we construct an edge-color-avoiding connected graph \(G=(V,E)\) on \(r+1\) vertices and with \(r+\left\lceil\frac{r}{k-1}\right\rceil=\left\lceil\frac{k\cdot r}{k-1}\right\rceil\) edges, where the edges are colored with exactly \(k\) colors. Let \[C:=\{0,1,\ldots,k-1\}\] be the color set, let \[V:=\{v_{0},\ldots,v_{r}\}\] with \(r\geq k-1\), and let \[E_{i}:=\left\{v_{j}v_{j+1}\mid j\in\{0,1,\ldots,r-1\}\text{ and }j\equiv i\pmod{k-1}\right\}\] be the set of edges of color \(i\) for any \(i\in\{0,1,\ldots,k-2\}\), and let \[E_{k-1}:=\left\{v_{j}v_{\max(j+k-1,r)}\mid j\in\{0,\ldots,r-1\}\text{ and }j \equiv 0\pmod{k-1}\right\}\] be the set of edges of color \(k-1\). Note that \(G\) might have a pair of parallel edges between the vertices \(v_{r-1}\) and \(v_{r}\); for an example see Figure 3. It is not difficult to show that \(G\) is edge-color-avoiding connected and has \(r+\left\lceil\frac{r}{k-1}\right\rceil=\left\lceil\frac{k\cdot r}{k-1}\right\rceil\) edges. In the following theorem, we analyze Algorithm 1. 
**Theorem 13**.: Algorithm 1 is a polynomial-time \(\frac{2(k-1)}{k}\)-approximation algorithm for finding a courteously colored rank-preserving restriction of a courteously colored matroid - given by an independence oracle - whose elements are colored with exactly \(k\in\mathbb{Z}_{+}\) colors to a set of minimum size. Moreover, there exist inputs for which the approximation ratio is exactly \(\frac{2(k-1)}{k}\). Proof.: Let \(\mathcal{M}=(S,\mathcal{I})\) be the input matroid whose ground set is courteously colored with exactly \(k\in\mathbb{Z}_{+}\) colors and let \(r:=r(S)\). If \(k=1\), then by Theorem 12, \(r=0\) must hold, and by Remark 11, in this case Algorithm 1 finds an optimal solution. Now assume \(k\geq 2\). In the first step, we simply select a basis of \(\mathcal{M}\). Note that this step already guarantees the rank-preserving property of the output. Now we show that in the second phase, the algorithm selects some additional elements of the ground set to ensure that the output is courteously colored. Since \(\mathcal{M}\) is courteously colored, \(\mathcal{M}_{\overline{e}}\) has the same rank as \(\mathcal{M}\), namely \(r\), thus we can select some additional elements from the ground set of \(\mathcal{M}_{\overline{e}}\) so that the obtained set of selected elements has rank \(r\) as well. Therefore, at the end of the second phase, the restriction of \(\mathcal{M}\) to the so far selected elements is indeed courteously colored. In the third phase, the algorithm deselects some elements while maintaining the desired properties of the output. Therefore, the algorithm finds an appropriate subset of the ground set. Note that the elements of the output are not necessarily colored with exactly \(k\) colors. Now we prove that the ground set of the output contains at most \(2\frac{k-1}{k}\cdot\frac{k\cdot r}{k-1}=2r\) elements, which is, by Theorem 12, at most \(2\frac{k-1}{k}\) times as many as the minimum number of elements of a courteously colored matroid whose elements are colored with at most \(k\) colors, implying that Algorithm 1 is a \(\frac{2k-1}{k}\)-approximation algorithm. In the first step, the algorithm selects a basis \(B\) of \(\mathcal{M}\), which clearly consists of \(r\) elements. Figure 3: An edge-color-avoiding connected graph on \(8\) vertices and with minimum number of edges colored with exactly \(4\) colors. In the second phase, for each color \(c\), if the deletion of the elements of color \(c\) decreases the rank of the set of the so far selected elements, then the algorithm selects some additional elements of some colors different from \(c\) to avoid this happening. More precisely, if the deletion of the elements of color \(c\) decreases the rank of the set of the so far selected elements by \(x_{c}\), then the algorithm selects \(x_{c}\) new elements of some colors different from \(c\). For any color \(c\), let \(y_{c}\) denote the number of elements of color \(c\) in the basis \(B\) found in the first step. Since \(B\) is a basis, the deletion of \(y_{c}\) elements from \(B\) decreases its rank by exactly \(y_{c}\). However, there might be some additional selected elements of colors different from \(c\), thus the algorithm selects at most \(y_{c}\) elements for every color \(c\) in the second phase. Therefore, at the end of the second phase, at most \[r+\sum_{c\in C}y_{c}=r+|B|=2r\] elements are selected. In the third phase, the algorithm only deselects some elements, thus the ground set of the output indeed contains at most \(2r\) elements. 
Next we prove that the algorithm runs in polynomial time if \(\mathcal{M}\) is given by an independence oracle. The first step is selecting a basis which can be done in \(O\big{(}|S|\big{)}\) time with the greedy algorithm. In the second phase, for every color \(c\in C\), we remove the elements of color \(c\) from the set of the so far selected elements - which can be done in \(O\big{(}|S|\big{)}\) time -, then we select some additional elements with the use of Subroutine IncreaseRank - which can be done in \(O\big{(}|S|\big{)}\) time as well. Thus the algorithm takes \(O\big{(}|C|\cdot|S|\big{)}\) steps in the second phase. In the third phase, the algorithm checks for every selected element \(s\) whether for any color \(c\in C\), the deletion of \(s\) and all the selected elements of color \(c\) decreases the rank of the set of the so far selected elements. For a given selected element \(s\) and a given color \(c\), this can be done in \(O\big{(}|S|\big{)}\) time with the greedy algorithm. Thus the algorithm takes \(O\big{(}|C|\cdot|S|^{2}\big{)}\) steps in the third phase. Therefore, the algorithm runs in \(O\big{(}|C|\cdot|S|^{2}\big{)}\), i.e. in polynomial time. Finally, we present some courteously colored matroids, more precisely, some edge-color-avoiding connected graphs, for which the approximation ratio is exactly \(\frac{2(k-1)}{k}\). Let us define the edge-colored graph \(G=(V,E)\) as follows. Let \[C:=\{0,1,\ldots,k-1\}\] be the color set, let \[V:=\{v_{0},\ldots,v_{n-1}\}\] with \(k-1\mid n-1\), and let \[E_{i}:=\big{\{}v_{j}v_{j+1}\bigm{|}j\in\{0,1,\ldots,n-2\}\text{ and }j\equiv i \pmod{k-1}\big{\}}\] and \[E_{i}^{\prime}:=\big{\{}v_{j}v_{j+1}\bigm{|}j\in\{0,1,\ldots,n-2\}\text{ and }j+1\equiv i\pmod{k-1}\big{\}}\] be the sets of edges of color \(i\) for any \(i\in\{0,1,\ldots,k-2\}\), and let \[E_{k-1}:=\big{\{}v_{j}v_{j+k-1}\bigm{|}j\in\{0,\ldots,n-2\}\text{ and }j\equiv 0 \pmod{k-1}\big{\}}\] be the set of edges of color \(k-1\). For an example, see Figure 4. By Theorem 12, the subgraph \(\big{(}V,\cup_{i\in C}E_{i}\big{)}\) is an optimal solution with \(\frac{k\cdot(n-1)}{k-1}\) edges. However, the output of the algorithm can also be the subgraph \(\big{(}V,\cup_{i\in C\setminus\{k-1\}}\big{(}E_{i}\cup E_{i}^{\prime}\big{)} \big{)}\). To see this, first note that the edges of both \(\cup_{i\in C\setminus\{k-1\}}E_{i}\) and of \(\cup_{i\in C\setminus\{k-1\}}E_{i}^{\prime}\) form Hamiltonian paths \(v_{0}v_{1}\ldots v_{n-1}\), which are disjoint with each other. Thus the algorithm can select the edges of \(\cup_{i\in C\setminus\{k-1\}}E_{i}\) in the first step (these edges obviously form a spanning tree), then it can select all the edges of \(\cup_{i\in C\setminus\{k-1\}}E_{i}^{\prime}\) in the second phase (all the so far selected edges clearly form an edge-color-avoiding connected graph), and then it does not deselect any edges in the third phase. Clearly, this output has \(2n-1\) edges, therefore the approximation ratio in this case is indeed \(\frac{2(k-1)}{k}\). As a consequence, we also obtained the following result. **Corollary 14**.: Let \(\mathcal{M}=(S,\mathcal{I})\) be a courteously colored matroid with rank \(r\) such that \(\mathcal{M}\setminus\{s\}\) is not courteously colored for all \(s\in S\). Then \(|S|\leq 2r\) and this upper bound is sharp. 
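To illustrate how Algorithm 1 can be implemented when the matroid is given by a rank oracle, the following Python sketch runs its three phases (basis selection, per-color augmentation, greedy deletion) on a graphic matroid built from an edge-colored graph. The rank-oracle interface and the example instance are our own assumptions, and the sketch favors readability over the running time analyzed in Theorem 13.

```python
def graphic_rank(n, edge_list):
    """Rank of an edge subset in the graphic matroid of a graph on vertices 0..n-1."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    rank = 0
    for u, v in edge_list:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            rank += 1
    return rank

def courteous(n, edges, color, subset):
    """Is the restriction to `subset` a courteously colored rank-preserving restriction?"""
    full = graphic_rank(n, edges)
    if graphic_rank(n, [edges[i] for i in subset]) < full:
        return False
    return all(graphic_rank(n, [edges[i] for i in subset if color[i] != c]) == full
               for c in set(color))

def algorithm1(n, edges, color):
    ground = list(range(len(edges)))
    # Phase 1: greedily pick a basis (a spanning forest).
    T = []
    for i in ground:
        if graphic_rank(n, [edges[j] for j in T + [i]]) > graphic_rank(n, [edges[j] for j in T]):
            T.append(i)
    # Phase 2: for every color, restore the full rank after that color is deleted.
    for c in set(color):
        keep = [i for i in T if color[i] != c]
        for i in ground:
            if color[i] != c and graphic_rank(n, [edges[j] for j in keep + [i]]) > \
                    graphic_rank(n, [edges[j] for j in keep]):
                keep.append(i)
        T = sorted(set(T) | set(keep))
    # Phase 3: greedily delete redundant elements.
    for i in list(T):
        candidate = [j for j in T if j != i]
        if courteous(n, edges, color, candidate):
            T = candidate
    return T

# Example: a 4-cycle whose edges all have distinct colors (cf. Lemma 2).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
color = ['a', 'b', 'c', 'd']
print(algorithm1(4, edges, color))   # all four edge indices are kept
```

On this example no element can be dropped, in line with Lemma 2: with all edge colors distinct, edge-color-avoiding connectivity coincides with 2-edge-connectivity, and the 4-cycle is minimally 2-edge-connected.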
For a construction of a courteously colored matroid (more precisely, of an edge-color-avoiding connected graph) of rank \(r\), which has maximum number of elements (edges) for the property that none of the elements can be deleted such that the matroid remains courteously colored, see Figure 5. **Remark 15**.: Using Theorem 12 and Corollary 14, one can design a simpler (but for large networks, less efficient) polynomial-time \(\frac{2(k-1)}{k}\)-approximation algorithm for finding courteously colored rank-preserving restrictions of a matroid \(\mathcal{M}\) to a set of minimum size: one by one greedily delete those elements of the ground set from \(\mathcal{M}\) after whose deletion the obtained matroid is courteously colored and has the same rank as \(\mathcal{M}\). By the same reasoning as that for Algorithm 1, we get that there exist inputs for which the approximation ratio of this greedy algorithm is exactly \(\frac{2(k-1)}{k}\). ### Approximation algorithm for edge-color-avoiding connected graphs As we observed before, the underlying graph of a courteously colored graphic matroid is not necessarily edge-color-avoiding connected, thus we also present a separate polynomial-time approximation algorithm for finding edge-color-avoiding connected spanning subgraphs with minimum number of edges. (Note that by Corollary 9, this problem is NP-hard.) To simplify the description of the algorithm, we introduce the following simple subroutines. The subroutine \(\mathbf{Graph}(V,E)\) creates a graph with vertex set \(V\) and edge set \(E\), the subroutine \(\mathbf{SpanningTree}(G)\) returns the edges of a spanning tree of a connected graph \(G\), and the subroutine \(\mathbf{ConnectedComponents}(G)\) returns the family of the vertex sets of the connected components of \(G\). The subroutine \(\mathbf{ContractVertices}(G,\mathcal{W})\), whose inputs are a graph \(G\) and a partition \(\mathcal{W}\) of its vertex set, returns a graph which can be obtained from \(G\) by contracting the vertices of each set \(W\in\mathcal{W}\) into a single vertex. Finally, the subroutine \(\mathbf{BeforeContraction}(G,H,E^{\prime})\), whose inputs are a graph \(G\), and a graph \(H\) that is obtained from \(G\) by contracting some of its vertices, and an edge set \(E^{\prime}\subseteq E(H)\), returns a set of edges in \(G\) corresponding to \(E^{\prime}\). Now we are ready to present Algorithm 2. Figure 4: An edge-color-avoiding connected graph colored with \(k=3\) colors and on \(n=7\) vertices, which contains an edge-color-avoiding connected spanning subgraph with \(\frac{k-n-1}{k-1}=9\) edges – for example, such a subgraph is spanned by the green (denoted by rhombi) edges and the lower red (denoted by triangles) and blue (denoted by squares) edges – and an edge-color-avoiding connected spanning subgraph with \(2(n-1)=12\) edges – that subgraph is spanned by the red and blue edges. The output of Algorithm 1 can be this latter subgraph, resulting in an approximation ratio of exactly \(\frac{2(k-1)}{k}\). Figure 5: An edge-color-avoiding connected graph on \(n=8\) vertices and with \(2(n-1)=14\) edges colored with exactly \(k=4\) colors, and having the property that none of the edges can be removed such that the graph remains edge-color-avoiding connected. ``` 0: an edge-color-avoiding connected graph \(G=(V,E)\) colored with a color set \(C\). 0: an edge-color-avoiding connected spanning subgraph \(G^{\prime}\) of \(G\). 
\(E^{\prime}\leftarrow\text{SpanningTree}(G)\) \(G^{\prime}\leftarrow\text{Graph}(V,E^{\prime})\) for\(c\in C\)do if\(G^{\prime}_{\overline{c}}\) is not connected then \(\mathcal{W}\leftarrow\text{ConnectedComponents}(G^{\prime}_{\overline{c}})\) \(H\leftarrow\text{ContractVertices}(G_{\overline{c}},\mathcal{W})\) \(E^{\prime}\gets E^{\prime}\cup\text{BeforeContraction}\big{(}G,\;H,\; \text{SpanningTree}(H)\big{)}\) \(G^{\prime}\leftarrow\text{Graph}(V,E^{\prime})\) for\(e\in E^{\prime}\)do if\(G^{\prime}-e\) is edge-color-avoiding connected then \(G^{\prime}\gets G^{\prime}-e\) return\(G^{\prime}\) ``` **Algorithm 2** Finding edge-color-avoiding connected spanning subgraphs Analogously to Theorem 8, it can be proved that Algorithm 2 is a \(\frac{2(k-1)}{k}\)-approximation algorithm of running time \(O\big{(}|C|\cdot|V|^{2}\big{)}\) for the problem of finding an edge-color-avoiding connected spanning subgraph with minimum number of edges of a given graph \(G=(V,E)\) colored with a color set \(C\) (for a detailed proof, see [17]). ### Approximation algorithm for vertex-color-avoiding connected graphs In the following, we give a polynomial-time approximation algorithm for finding a vertex-color-avoiding connected spanning subgraph with minimum number of edges. The algorithm is presented as Algorithm 3. ``` 0: a vertex-color-avoiding connected graph \(G=(V,E)\) colored with a color set \(C\). 0: a vertex-color-avoiding connected spanning subgraph \(G^{\prime}\) of \(G\). \(E^{\prime}\leftarrow\text{SpanningTree}(G)\) \(G^{\prime}\leftarrow\text{Graph}(V,E^{\prime})\) for\(c\in C\)do if\(G^{\prime}_{\overline{c}}\) is not connected then \(\mathcal{W}\leftarrow\text{ConnectedComponents}(G^{\prime}_{\overline{c}})\) \(H\leftarrow\text{ContractVertices}(G_{\overline{c}},\mathcal{W})\) \(E^{\prime}\gets E^{\prime}\cup\text{BeforeContraction}\big{(}G,\;H,\; \text{SpanningTree}(H)\big{)}\) \(G^{\prime}\leftarrow\text{Graph}(V,E^{\prime})\) for\(e\in E^{\prime}\)do if\(G^{\prime}-e\) is vertex-color-avoiding connected then \(G^{\prime}\gets G^{\prime}-e\) return\(G^{\prime}\) ``` **Algorithm 3** Finding vertex-color-avoiding connected spanning subgraphs **Remark 16**.: Note that if the number of colors is \(k=1\), then any tree is vertex-color-avoiding connected. As a result, in this case, Algorithm 3 always returns a tree. Now let us consider the case \(k=2\) and let \(G\) be an arbitrary vertex-color-avoiding connected graph whose vertices are colored with exactly two colors, red and blue. Then both the red and the blue vertices must span a connected graph in \(G\). Thus \(G\) must contain a spanning tree \(T\) in which there exists an edge \(e\) so that one of the two components of \(T-e\) consists of red vertices and the other one consists of blue vertices. It is not difficult to see that in the case \(k=2\), the output of Algorithm 3 is always such a spanning tree. To analyze the approximation factor of Algorithm 3, first we prove the following theorem. **Theorem 17**.: Let \(G\) be a vertex-color-avoiding connected graph on \(n\) vertices colored with exactly \(k\in\mathbb{Z}_{+}\) colors. Then \[\left|E(G)\right|\geq\begin{cases}n-1&\text{if }k\leq 2,\\ n&\text{if }k\geq 3,\end{cases}\] and this lower bound is sharp. To prove this theorem, we use the following lemma. By a _cut-vertex_, we mean a vertex whose removal disconnects the graph. 
**Lemma 18**.: If \(v\) is a cut-vertex in a vertex-color-avoiding connected graph \(G\), then all but one component of \(G-v\) consist only of vertices of the color of \(v\). Proof.: Let \(G\) be a vertex-color-avoiding connected graph with a cut-vertex \(v\) of color \(c\). Suppose to the contrary that there are at least two components in \(G-v\) which contain vertices of some colors different from \(c\). Let \(v_{1}\) and \(v_{2}\) be two vertices such that their colors are different from \(c\) and they are in different components in \(G-v\). Then \(v_{1}\) and \(v_{2}\) are not vertex-\(c\)-avoiding connected, which is a contradiction. Now we prove Theorem 17. Proof of Theorem 17.: First, let \(k=1\), i.e., assume that every vertex is of the same color. By the definition of vertex-color-avoiding connectivity, a graph whose every vertex is of the same color is vertex-color-avoiding connected if and only if it is connected. This means that the graph must have at least \(n-1\) edges, and clearly any tree on \(n\) vertices achieves this lower bound. Now let \(k=2\). By the definition of vertex-color-avoiding connectivity, \(G\) must be connected, so it must have at least \(n-1\) edges. Now we show that the vertices of any tree can be colored so that with this vertex-coloring, the tree is vertex-color-avoiding connected. Let \(T\) be an arbitrary tree and let \(e\) be an arbitrary edge of \(T\). The removal of \(e\) would disconnect \(T\) into \(2\) components; let us color the vertices of one component with blue and those of the other one with red. Then with this vertex-coloring, the tree \(T\) is clearly a vertex-color-avoiding connected graph with \(n-1\) edges. Now let \(k\geq 3\). First, we show that any vertex-color-avoiding connected graph \(G\) on \(n\) vertices colored with exactly \(k\geq 3\) colors must have at least \(n\) edges. Suppose to the contrary that there exists such a graph with \(n-1\) edges. By the definition of vertex-color-avoiding connectivity, \(G\) must be connected. Thus \(G\) is a tree. Clearly, \(n\geq k\geq 3\), thus \(G\) has at least two leaves. If the non-leaf vertices are all of the same color \(c\), then there exist at least two leaves \(u\) and \(v\) of a color different from \(c\). Then \(u\) and \(v\) are not vertex-\(c\)-avoiding connected, which is a contradiction. Therefore, there exist two adjacent non-leaf vertices \(v_{1}\) and \(v_{2}\) of colors \(c_{1}\) and \(c_{2}\), respectively, where \(c_{1}\neq c_{2}\). By Lemma 6, since \(v_{1}\) is a cut-vertex, all components of \(G-v_{1}\) but one contain vertices only of color \(c_{1}\). The remaining one component must be the one which contains \(v_{2}\). The same can be told about \(v_{2}\) and \(c_{2}\). It is easy to see that by the adjacency of \(v_{1}\) and \(v_{2}\) this means that all of the vertices must be of color \(c_{1}\) or \(c_{2}\). Which contradicts the fact that the graph is colored with exactly \(k\geq 3\) colors. Thus any vertex-color-avoiding connected graph on \(n\) vertices colored with exactly \(k\geq 3\) colors must have at least \(n\) edges. Now we present a vertex-color-avoiding connected graph \(G=(V,E)\) on \(n\) vertices colored with exactly \(k\geq 3\) colors that has \(n\) edges. 
Let \[V=\{v_{1},v_{2},\dots,v_{n}\}\] with \(n\geq 3\) and let \[E=\{v_{1}v_{2},\,v_{2}v_{3},\,\dots,\,v_{n-1}v_{n},\,v_{n}v_{1}\}.\] Let the color function \(c\) be \[c\colon V(G)\to\{1,2,\dots,k\}\qquad v\mapsto\begin{cases}i&\text{if }v=v_{i} \text{ and }i\leq k,\\ k&\text{otherwise}.\end{cases}\] For an example, see Figure 6. It is not difficult to see that with this vertex-coloring, \(G\) is vertex-color-avoiding connected and has \(n\) edges. In the following theorem, we analyze Algorithm 3. **Theorem 19**.: Algorithm 3 is a polynomial-time \(2\)-approximation algorithm for the problem of finding a vertex-color-avoiding connected spanning subgraph with minimum number of edges. Proof.: Since \(G\) is vertex-color-avoiding connected, it is connected as well, thus it has a spanning tree. In the first step, the algorithm selects the edges of a spanning tree. Now we show that in the second phase, the algorithm selects some additional edges to ensure the vertex-color-avoiding connectivity of the output. Since \(G\) is vertex-color-avoiding connected, the graph \(G_{\overline{c}}\) is connected for any color \(c\) and any graph obtained from \(G_{\overline{c}}\) by contracting some disjoint sets of vertices is also connected, thus it has a spanning tree. Therefore, by selecting of the edges of such a spanning tree, the obtained graph of the selected edges becomes vertex-color-avoiding connected. In the third phase, the algorithm deselects some edges while maintaining the vertex-color-avoiding connectivity of the output. Thus, the algorithm finds a vertex-color-avoiding connected spanning subgraph. Now we prove that the output has at most \(2n-3\) edges, which is, by Theorem 17, at most twice as many as the minimum number of edges of a vertex-color-avoiding connected graph whose vertices are colored with exactly \(k\geq 3\) colors, implying that Algorithm 3 is a \(2\)-approximation algorithm. In the first step, the algorithm selects the edges of a spanning tree \(T\), which clearly has \(n-1\) edges. In the second phase, for each color \(c\), if the removal of the vertices of color \(c\) disconnects the graph of the so far selected edges, then the algorithm selects some additional edges to avoid this happening. More precisely, if the removal of the vertices of color \(c\) from the graph of the selected edges leaves \(x_{c}\) connected components, then the algorithm selects \(x_{c}-1\) new edges of some colors different form \(c\). Since \(T\) is a spanning tree, the deletion of a vertex \(v\) from it creates exactly \(d_{T}(v)\) components, where \(d_{T}(v)\) denotes the degree of \(v\) in \(T\). Thus the removal of all the vertices of color \(c\) from the graph of the so far selected edges disconnects this graph into at most \[1+\sum_{\begin{subarray}{c}v\in V:\\ c(v)=c\end{subarray}}\big{(}d_{T}(v)-1\big{)}\] components, where \(c(v)\) denotes the color of a vertex \(v\). Therefore, at the end of the second phase, at most \[(n-1)+\sum_{c\in C}\sum_{\begin{subarray}{c}v\in V:\\ c(v)=c\end{subarray}}\big{(}d_{T}(v)-1\big{)}=(n-1)+\sum_{v\in V}d_{T}(v)-n=(n -1)+2(n-1)-n=2n-3\] edges are selected. In the third phase, the algorithm only deselects some edges, thus the output has at most \(2n-3\) edges. Similarly to Algorithm 2, Algorithm 3 also runs in \(O\big{(}|C|\cdot|V|^{2}\big{)}\) time. As a consequence, we also obtained the following. 
**Corollary 20**.: Let \(G=(V,E)\) be a vertex-color-avoiding connected graph on \(n\) vertices such that \(G-e\) is not vertex-color-avoiding connected for any \(e\in E\). Then \(|E|\leq 2n-3\) and if the vertices are colored with exactly \(k=3\) colors, then this upper bound is sharp. Moreover, if \(k=1\) or \(k=2\), then \(|E|=n-1\). Figure 6: A vertex-color-avoiding connected graph on \(n=6\) vertices colored with exactly \(k=4\) colors and with \(n=6\) edges. For a construction of a vertex-color-avoiding connected graph on \(n\) vertices with \(2n-3\) edges where no edge can be removed such that the graph remains vertex-color-avoiding connected, see Figure 7. By Corollary 20, a simple greedy algorithm (i.e., an analogous version of the one described in Remark 15) also performs in polynomial time and with the same approximation factor as Algorithm 3. ### Approximation algorithm for internally vertex-color-avoiding connected graphs Now we give a polynomial-time approximation algorithm for finding an internally vertex-color-avoiding connected spanning subgraph with minimum number of edges. To simplify the description of the algorithm, we introduce the following subroutines. The subroutine \(\mathbf{DifferentColorNeighbor}(v,G)\) checks whether the vertex \(v\) has a neighbor whose color is different from the color of \(v\) in the vertex-colored graph \(G\), and the subroutine \(\mathbf{DifferentColorEdge}(v,G)\) returns an edge which connects \(v\) to such a neighbor. The algorithm is presented as Algorithm 4.

```
Input: an internally vertex-color-avoiding connected graph \(G=(V,E)\) colored with a color set \(C\).
Output: an internally vertex-color-avoiding connected spanning subgraph \(G^{\prime}\) of \(G\).

if \(|C|=1\) then
  return \(G\)
else
  \(E^{\prime}\leftarrow\mathrm{SpanningTree}(G)\)
  \(G^{\prime}\leftarrow\mathrm{Graph}(V,E^{\prime})\)
  for \(c\in C\) do
    if \(G^{\prime}_{\overline{c}}\) is not connected then
      \(\mathcal{W}\leftarrow\mathrm{ConnectedComponents}(G^{\prime}_{\overline{c}})\)
      \(H\leftarrow\mathrm{ContractVertices}(G_{\overline{c}},\mathcal{W})\)
      \(E^{\prime}\leftarrow E^{\prime}\cup\mathrm{BeforeContraction}\big{(}G,\;H,\;\mathrm{SpanningTree}(H)\big{)}\)
      \(G^{\prime}\leftarrow\mathrm{Graph}(V,E^{\prime})\)
  for \(v\in V\) do
    if not \(\mathrm{DifferentColorNeighbor}(v,G^{\prime})\) then
      \(E^{\prime}\leftarrow E^{\prime}\cup\big{\{}\mathrm{DifferentColorEdge}(v,G)\big{\}}\)
      \(G^{\prime}\leftarrow\mathrm{Graph}(V,E^{\prime})\)
  for \(e\in E^{\prime}\) do
    if \(G^{\prime}-e\) is internally vertex-color-avoiding connected then
      \(G^{\prime}\leftarrow G^{\prime}-e\)
  return \(G^{\prime}\)
```

**Algorithm 4** Finding internally vertex-color-avoiding connected spanning subgraphs To analyze the approximation factor of Algorithm 4, first we prove the following theorem. Figure 7: A vertex-color-avoiding connected graph on \(n=8\) vertices colored with exactly \(k=3\) colors having \(2n-3=13\) edges and having the property that none of the edges can be removed such that the graph remains vertex-color-avoiding connected. **Theorem 21**.: Let \(G\) be an internally vertex-color-avoiding connected graph on \(n\) vertices colored with exactly \(k\) colors. Then \[\big{|}E(G)\big{|}\geq\begin{cases}\binom{n}{2}&\text{if }k=1,\\ \left\lceil\frac{2k-1}{2k-2}n-\frac{k}{k-1}\right\rceil&\text{if }k\geq 2, \end{cases}\] and this lower bound is sharp. Proof.: First, assume \(k=1\). It is not difficult to see that in this case, only the complete graphs are internally color-avoiding connected.
Thus the minimum number of edges of an internally color-avoiding connected graph on \(n\) vertices, where all the vertices are colored with the same color, is indeed \(\binom{n}{2}\). Now assume \(k\geq 2\) and let \(G=(V,E)\) be an internally vertex-color-avoiding connected graph on \(n\) vertices colored with a color set \(C\), where \(|C|=k\geq 2\) and each color in \(C\) is used at least once. Let \(x\) be the number of edges whose endpoints are of the same color, and let \(n_{c}\) be the number of vertices of color \(c\) for any \(c\in C\). Now we give two lower bounds on the number of edges of \(G\) using the variable \(x\), and then we combine them to eliminate \(x\). Let us prove the first lower bound, namely \[(k-2)\cdot|E|\geq(k-1)n-k-x.\] Since \(G\) is internally vertex-color-avoiding connected, the graph \(G_{\overline{c}}\) is connected for any \(c\in C\), so it has at least \(n-n_{c}-1\) edges. Summing this up for all the colors, we count at least \[\sum_{c\in C}n-n_{i}-1=kn-n-k=(k-1)n-k\] edges, where the edges whose endpoints have the same color are counted \(k-1\) times, and the other edges (i.e. those ones whose endpoints are not of the same color) are counted \(k-2\) times. This implies our first lower bound. Now let us prove the second lower bound, namely \[k\cdot|E|\geq kn-k+x.\] Since \(G\) is internally vertex-\(c\)-avoiding connected for any \(c\in C\) and there are \(k\geq 2\) colors, removing the edges whose both endpoints are of color \(c\) leaves the graph connected, so the remaining graph has at least \(n-1\) edges. Summing up these remaining edges for all the colors, we count \(k(n-1)\) edges, where the edges whose endpoints have different colors are counted \(k\) times, and the other edges (i.e. those ones whose endpoints are of the same color) are counted \(k-1\) times. This implies our second lower bound. Adding these two inequalities together, we obtain \[(2k-2)\cdot|E|\geq(2k-1)n-2k,\] thus \[|E|\geq\left\lceil\frac{2k-1}{2k-2}n-\frac{k}{k-1}\right\rceil.\] To show that this lower bound is tight, we construct an internally vertex-color-avoiding connected graph on \(n\) vertices colored with exactly \(k\geq 2\) colors that has \(\left\lceil\frac{2k-1}{2k-2}n-\frac{k}{k-1}\right\rceil\) edges. Clearly, there exist unique integers \(m,\ell\) such that \(n=(2k-2)m+\ell+3\) and \(0\leq\ell\leq 2k-3\). First, consider the case when \(\ell=0\). Let \(G=(V,E)\) and \(c\colon V\to\{1,\ldots,k\}\) be defined as follows. Let \[V=\big{\{}v_{i,j}\bigm{|}i\in\{1,\ldots,2k-2\}\text{ and }j\in\{1,\ldots,m\} \big{\}}\cup\{v_{1,m+1},\,v_{1,m+2},\,v_{2k-2,m+1}\},\] let \[E=\big{\{}v_{1,j}v_{1,j+1}\;\big{|}\;j\in\{1,\ldots,m+1\}\big{\}}\cup \big{\{}v_{2k-2,j}v_{2k-2,j+1}\;\big{|}\;j\in\{1,\ldots,m\}\big{\}}\\ \cup\big{\{}v_{i,j}v_{i+1,j}\;\big{|}\;i\in\{1,\ldots,2k-3\}\text{ and }j\in\{1,\ldots,m\}\big{\}}\cup\big{\{}v_{1,m+1}\,v_{2k-2,m+1},\;v_{1,m+2}\,v_ {2k-2,m+1}\big{\}},\] and let \[c\colon V\to\{1,\ldots,k\}\qquad v\mapsto\begin{cases}1&\text{if }v=v_{1,j}\text{ for some }j\in\{1,\ldots,m+2\},\\ \big{\lfloor}\frac{i}{2}\big{\rfloor}+1&\text{if }v=v_{i,j}\text{ for some }i\in\{2,3,\ldots,2k-3\}\text{ and }j\in\{1,\ldots,m\},\\ k&\text{if }v=v_{2k-2,j}\text{ for some }j\in\{1,\ldots,m+1\}.\end{cases}\] Now consider the case when \(\ell\neq 0\). Let \(G=(V,E)\) and \(c\colon V\to\{1,\ldots,k\}\) be defined as follows. 
Let \[V=\big{\{}v_{i,j}\;\big{|}\;i\in\{1,\ldots,2k-2\}\text{ and }j\in\{1, \ldots,m\}\big{\}}\cup\big{\{}v_{i,m+1}\;\big{|}\;i\in\{1,\ldots,\ell\}\big{\}}\\ \cup\big{\{}v_{1,m+2},\,v_{2k-2,m+1},\,v_{2k-2,m+2}\big{\}},\] let \[E=\big{\{}v_{1,j}v_{1,j+1}\;\big{|}\;j\in\{1,\ldots,m+1\}\big{\}} \cup\big{\{}v_{2k-2,j}v_{2k-2,j+1}\;\big{|}\;j\in\{1,\ldots,m+1\}\big{\}}\\ \cup\big{\{}v_{i,j}v_{i+1,j}\;\big{|}\;i\in\{1,\ldots,2k-3\}\text{ and }j\in\{1,\ldots,m\}\big{\}}\cup\big{\{}v_{i,m+1}v_{i+1,m+1}\;\big{|}\;i\in\{1,\ldots,\ell-1\}\big{\}}\\ \cup\big{\{}v_{\ell,m+1}\,v_{2k-2,m+1},\;v_{1,m+2}\,v_{2k-2,m+2} \big{\}},\] and let \[c\colon V\to\{1,\ldots,k\}\qquad v\mapsto\begin{cases}1&\text{if }v=v_{1,j}\text{ for some }j\in\{1,\ldots,m+2\},\\ \big{\lfloor}\frac{i}{2}\big{\rfloor}+1&\text{if }v=v_{i,j}\text{ for some }i\in\{2,3,\ldots,2k-3\}\text{ and }j\in\{1,\ldots,m+1\},\\ k&\text{if }v=v_{2k-2,j}\text{ for some }j\in\{1,\ldots,m+2\}.\end{cases}\] For some examples, see Figure 8. It is not difficult to see that these graphs are internally vertex-color-avoiding connected and have \(\big{\lceil}\frac{2k-1}{2k-2}n-\frac{k}{k-1}\big{\rceil}\) edges. Now we are ready to analyze Algorithm 4. **Theorem 22**.: Algorithm 4 is a polynomial-time \(\big{(}3\cdot\frac{2k-2}{2k-1}\big{)}\)-approximation algorithm for the problem of finding an internally vertex-color-avoiding connected spanning subgraph with minimum number of edges in an internally vertex-color-avoiding connected graph whose vertices are colored with exactly \(k\in\mathbb{Z}_{+}\) colors. Figure 8: Some internally vertex-color-avoiding connected graphs on \(n\in\{9,10,\ldots,14\}\) vertices colored with exactly \(k=4\) colors and with \(\big{\lceil}\frac{2k-1}{2k-2}n-\frac{k}{k-1}\big{\rceil}\) of edges. Proof.: If \(k=1\), then by Theorem 21, the input of Algorithm 4 must be a complete graph, and so does the output, thus in this case the algorithm finds an optimal solution. Now assume \(k\geq 2\). By the same reasoning as that for Algorithm 3, after the second phase, the graph of the so far selected edges is vertex-color-avoiding connected, however, it is not necessarily internally vertex-color-avoiding connected: some vertex \(v\) might not be internally vertex-\(c(v)\)-avoiding connected to some other vertices, where \(c(v)\) denotes the color of \(v\). Now we show that in the third phase, the algorithm selects some additional edges to ensure the internally vertex-color-avoiding connectivity of the output. Let \(v\) and \(w\) be two arbitrary vertices of \(G\). It is enough to show that after the third phase, there exists an internally vertex-\(c(v)\)-avoiding path between \(v\) and \(w\) in the graph of the so far selected edges. Since the algorithm only selects some additional edges in the third phase, if there exists an internally vertex-\(c(v)\)-avoiding path between \(v\) and \(w\) in the graph of the selected edges at the end of the second phase, then such a path also exists in the graph of the selected edges at the end of the third phase. So assume that at the end of the second phase, there exists no internally vertex-\(c(v)\)-avoiding path between \(v\) and \(w\) in the graph of the selected edges. _Case 1: \(w\) is not of color \(c(v)\)._ Then in the third phase, the algorithm selects an edge \(uv\) where the color of \(u\) is different from \(c(v)\). 
Since \(u\) and \(w\) are vertex-\(c(v)\)-avoiding connected in the graph of the selected edges at the end of the second phase and neither \(u\) nor \(w\) are of color \(c(v)\), there exists a vertex-\(c(v)\)-avoiding path between \(u\) and \(w\) in the graph of the selected edges at the end of the second phase. Extending this path with the edge \(uv\), we obtain an internally vertex-\(c(v)\)-avoiding path between \(v\) and \(w\) in the graph of the selected edges at the end of the third phase. _Case 2: \(w\) is of color \(c(v)\)._ Then in the third phase, the algorithm selects two edges \(u_{1}v\) and \(u_{2}w\) where neither \(u_{1}\) nor \(u_{2}\) is of color \(c(v)\). Since \(u_{1}\) and \(u_{2}\) are vertex-\(c(v)\)-avoiding connected in the graph of the selected edges at the end of the second phase and neither \(u_{1}\) nor \(u_{2}\) are of color \(c(v)\), there exists a vertex-\(c(v)\)-avoiding path between \(u_{1}\) and \(u_{2}\) in the graph of the selected edges at the end of the second phase. Extending this path with the edges \(u_{1}v\) and \(u_{2}w\), we obtain an internally vertex-\(c(v)\)-avoiding path between \(v\) and \(w\) in the graph of the selected edges at the end of the third phase. In the fourth phase, the algorithm deselects some edges while maintaining the internally vertex-color-avoiding connectivity of the output. Therefore, the algorithm finds an internally vertex-color-avoiding connected spanning subgraph. Now we prove that the spanning subgraph has at most \[3n-4\leq 3\cdot\frac{2k-2}{2k-1}\cdot\left\lceil\frac{2k-1}{2k-2}n-\frac{k}{ k-1}\right\rceil\] edges, which is, by Theorem 21, at most \(3\cdot\frac{2k-2}{2k-1}\) times as many as the minimum number of edges of an internally vertex-color-avoiding connected graph whose vertices are colored with exactly \(k\) colors, implying that Algorithm 4 is a \(\left(3\cdot\frac{2k-2}{2k-1}\right)\)-approximation algorithm. By the same reasoning as that for Algorithm 3, at the end of the second phase, at most \(2n-3\) edges are selected. It is not difficult to see that the edges selected in the third phase cannot induce a cycle. Thus the algorithm selects at most \(n-1\) edges in the third phase. In the fourth phase, the algorithm only deselects some edges, thus the output has indeed at most \[(2n-3)+(n-1)=3n-4\] edges. Similarly to Algorithm 3, Algorithm 4 also runs in \(O\big{(}|C|\cdot|V|^{2}\big{)}\) time. Observe that we also obtained the following. **Corollary 23**.: Let \(G=(V,E)\) be an internally vertex-color-avoiding connected graph on \(n\) vertices such that \(G-e\) is not internally vertex-color-avoiding connected for any \(e\in E\). Then \(|E|\leq 3n-4\). By Corollary 23, a simple greedy algorithm (i.e., an analogous version of the one described in Remark 15) also performs in polynomial time and with the same approximation factor as Algorithm 4. ## 4 Conclusions In this article, we considered the problems of finding edge-, vertex-, and internally vertex-color-avoiding connected spanning subgraphs with minimum number of edges and finding courteously colored rank preserving restrictions of a matroid to a set of minimum size. We proved that all of these problems are NP-hard and provided polynomial-time approximation algorithms for them. We also gave sharp lower bounds on the number of edges in edge-, vertex- and internally vertex-color-avoiding connected graphs and on the number of elements in courteously colored matroids. 
Some possible generalizations of this framework are color-avoiding \(k\)-edge- and \(k\)-vertex-connectivity. An edge-colored graph \(G\) is called _edge-color-avoiding \(k\)-vertex-_ or \(k\)_-edge-connected_ if after the removal of the edges of any single color from \(G\), the remaining graph is \(k\)-vertex- or \(k\)-edge-connected, respectively. A vertex-colored graph \(G\) is called _vertex-color-avoiding \(k\)-vertex-_ or \(k\)_-edge-connected_ if \(G\) is \(k\)-vertex- or \(k\)-edge-connected and after the removal of the vertices of any single color from \(G\), the remaining graph is \(k\)-vertex- or \(k\)-edge-connected, respectively. We say that two vertices \(u\) and \(v\) of a vertex-colored graph are _internally vertex-\(c\)-avoiding \(k\)-vertex-_ or \(k\)_-edge-connected_ for some color \(c\) if there exist \(k\) pairwise internally vertex-disjoint or \(k\) pairwise edge-disjoint \(u\)-\(v\) paths containing no internal vertices of color \(c\), respectively. If any two vertices of a vertex-colored graph are internally vertex-\(c\)-avoiding \(k\)-vertex- or \(k\)-edge-connected for any color \(c\), then the graph is called _internally vertex-color-avoiding \(k\)-vertex-_ or \(k\)_-edge-connected_, respectively. Then we can consider the problems of finding color-avoiding \(k\)-edge- or \(k\)-vertex-connected spanning subgraphs with minimum number of edges, or we can also investigate an edge-weighted version of them in which, instead of minimizing the number of edges, we want to minimize the sum of the edge-weights of such spanning subgraphs. Clearly, these problems are also NP-hard since they are NP-hard even in the case \(k=1\) without edge-weights; however, developing approximation algorithms for them seems to be an interesting problem. Acknowledgement. The authors would like to express their gratitude to Roland Molontay for useful conversations and for his support during the research process. The research presented in this paper was supported by the Ministry of Culture and Innovation and the National Research, Development and Innovation Office within the framework of the Artificial Intelligence National Laboratory Programme. The research of Kitti Varga was supported by the Hungarian National Research, Development and Innovation Office - NKFIH, grant number FK128673.
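As a small practical companion to the algorithms above (our own illustration, not part of the original paper), the following Python/networkx sketch spells out the two connectivity tests that Algorithms 3 and 4 repeatedly rely on: that \(G_{\overline{c}}\) stays connected for every color \(c\), and that every pair of vertices is joined by a path with no internal vertex of a given color. Vertex colors are assumed to be stored in a node attribute named `color`; no attempt is made to match the \(O\big{(}|C|\cdot|V|^{2}\big{)}\) running times stated above.

```python
import networkx as nx

def minus_color(G, color, keep=()):
    """Subgraph of G after deleting every vertex of the given color,
    except for the vertices listed in `keep`."""
    nodes = [v for v, d in G.nodes(data=True) if d["color"] != color or v in keep]
    return G.subgraph(nodes)

def colors(G):
    return {d["color"] for _, d in G.nodes(data=True)}

def satisfies_vertex_cac_condition(G):
    """Condition used by Algorithms 3 and 4: G is connected and, for every
    color c, the graph G minus the vertices of color c stays connected
    (vacuously true if color c covers all vertices)."""
    if not nx.is_connected(G):
        return False
    for c in colors(G):
        H = minus_color(G, c)
        if H.number_of_nodes() > 0 and not nx.is_connected(H):
            return False
    return True

def is_internally_vertex_cac(G):
    """For every color c and every pair u, v there must be a u-v path
    whose internal vertices avoid color c (the k = 1 case of the notion
    recalled in the conclusions)."""
    nodes = list(G)
    for c in colors(G):
        for i, u in enumerate(nodes):
            for v in nodes[i + 1:]:
                H = minus_color(G, c, keep=(u, v))
                if not nx.has_path(H, u, v):
                    return False
    return True
```

Both checks are deliberately naive; they are only meant to make the definitions testable on small examples such as the graphs of Figures 6-8.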
2310.08775
When Machine Learning Models Leak: An Exploration of Synthetic Training Data
We investigate an attack on a machine learning model that predicts whether a person or household will relocate in the next two years, i.e., a propensity-to-move classifier. The attack assumes that the attacker can query the model to obtain predictions and that the marginal distribution of the data on which the model was trained is publicly available. The attack also assumes that the attacker has obtained the values of non-sensitive attributes for a certain number of target individuals. The objective of the attack is to infer the values of sensitive attributes for these target individuals. We explore how replacing the original data with synthetic data when training the model impacts how successfully the attacker can infer sensitive attributes.
Manel Slokom, Peter-Paul de Wolf, Martha Larson
2023-10-12T23:47:22Z
http://arxiv.org/abs/2310.08775v3
# When Machine Learning Models Leak: An Exploration of Synthetic Training Data ###### Abstract We investigate an attack on a machine learning model that predicts whether a person or household will relocate in the next two years, i.e., a propensity-to-move classifier. The attack assumes that the attacker can query the model to obtain predictions and that the marginal distribution of the data on which the model was trained is publicly available. The attack also assumes that the attacker has obtained the values of non-sensitive attributes for a certain number of target individuals. The objective of the attack is to infer the values of sensitive attributes for these target individuals. We explore how replacing the original data with synthetic data when training the model impacts how successfully the attacker can infer sensitive attributes.1 Footnote 1: Original paper published at PSD 2022. The paper was subsequently updated. Keywords: Synthetic data, propensity to move, attribute inference, machine learning. ## 1 Introduction Governmental institutions charged with collecting and disseminating information may use machine learning models to produce estimates, such as imputing missing values or inferring variables that cannot be directly observed. When such estimates are published, it is also useful to publish the machine learning model itself, so that researchers using the estimates can evaluate it closely, or even produce their own estimates. Moreover, society also asks for more insight into the models that are used, e.g., to address possible discrimination caused by decisions based on machine learning models. Unfortunately, machine learning models can be attacked in a way that allows an attacker to recover information about the data set that they were trained on [18]. For this reason, publishing machine learning models can lead to a risk that information in the training set is leaked. In this paper, we carry out a case study of an attribute inference attack on a machine learning classifier to better understand the nature of the risk. The classifier that we study predicts _propensity to move_, i.e., whether an individual or household will relocate their home within the next two years. The attack scenario assumes that the classifier has been released to the public, and that an attacker wishes to learn a sensitive attribute for a group of victims, i.e., target individuals. The attacker has non-sensitive information about these target individuals that is used for the attack and has scraped information about other people from the Web. Our experimental investigation first confirms that a machine learning classifier is able to predict propensity to move for individuals in its training data set as well as for previously "unseen" individuals, reproducing [2]. We then attack this classifier and demonstrate that an attacker can learn sensitive attributes both for individuals in the training data as well as for previously "unseen" individuals. However, for "unseen" individuals the attack is somewhat less successful. We reason that data synthesis might allow us to create training data that is far enough from the original data that any real individual would have the somewhat higher resistance to attack enjoyed by an "unseen" individual. Based on this idea, we create a synthetic training set, train a machine learning classifier on that set, and repeat the attacks.
Interestingly, the resulting classifier is just as susceptible to attack as the original classifier, which was trained on the original data. We relate this finding to the success of an attack that infers sensitive information from individuals using priors and not the machine learning model. Our findings point to the direction that future research must pursue in order to create synthetic data that could reduce the risk of attack when used to train machine learning models. ## 2 Threat Model Our goal is to test whether a machine learning model trained on synthetic data can replace a machine learning model trained on original data. The idea is to release a machine learning model trained on synthetic data such that there is no leak of original data. The synthetic data serves as a replacement of the original data. In this section, we specify our goal more formally in the form of a threat model. Inspired by [23], a threat model follows three main dimensions. First, the threat model describes the adversary by looking at the resources at the adversary's disposal and the adversary's objective. In other words, it specifies what the attacker is capable of and what the attacker's goal is. Second, it describes the vulnerability, including the opportunity that makes an attack possible. Then, the threat model specifies the nature of the countermeasures that can be taken to prevent the attack. Table 1 provides the specifications of our threat model for each of the dimensions. As resources, we assume that the attacker has access to our released machine learning classifier. In addition to the ML model, the attacker has a subset of the data that is used to train an attacker model. The adversary's objective is to infer sensitive information about individuals. In our experiments, the attack model is trained using a subset of the data in addition to the released machine learning model that predicts propensity-to-move. The opportunity for attack is the possession of original data including sensitive attributes. Finally, the countermeasure that we are investigating is data synthesis. ## 3 Background and Related Work In this section, we give a brief overview of basic concepts and related work on predicting the propensity to move, on privacy in machine learning, and on model inversion attribute inference attacks. ### 3.1 Propensity to Move The propensity to move is defined as desires, expectations, or plans to move to another dwelling [4]. Multiple factors come into play when understanding and estimating the propensity to move in a population. In [4], the authors have grouped those factors into two categories: (1) _Residential satisfaction_, which is defined as the satisfaction with the dwelling and its location or surroundings. Residential satisfaction is divided into housing satisfaction and neighborhood satisfaction. (2) _Household characteristics_, which is related to demographic and socioeconomic characteristics of the household. The gender and age of a household are important demographic attributes. For instance, a male household has different mobility patterns than a female household. Also, education and income of the household are important socioeconomic attributes. In [9], the authors investigated the possible relationship between involuntary job loss and regional mobility.
In a survey, the German socio-economic panel [9] looked at whether job loss increases the probability to relocate to a different region and whether displaced workers who relocate to another region after job loss have better labor market outcomes than those staying in the same area. They found that job loss has a strong positive effect on the propensity to relocate. In [16], the authors examined the residential moving behavior of older adults in the Netherlands. [16] used data collected from the Housing Research Netherlands (HRN) to provide insights into the housing situation of the Dutch population and their living needs. A logistic regression model was used to assess the likelihood that respondents would report that they are willing to move in the upcoming two years. Among their key findings, they showed that older adults with a propensity to move are more often motivated by unsatisfactory conditions in the current neighborhood. Further results revealed that older adults are more likely to have moved to areas with little deprivation, little nuisance, and a high level of cohesion. In [2], the authors studied the possibility of replacing a survey question about moving desires by a model-based prediction. To do so, they used machine learning algorithms to predict moving behavior from register data. The results showed that the models are able to predict the moving behavior about equally well as the respondents of the survey. In [3], the authors used data collected by the British Household Panel Survey. The data was collected using face-to-face interviews. They examined the reasons why people desire to move and how these desires affect their moving behavior. The results show that the reasons people report for desiring to move vary considerably over the life course. People are more likely to relocate if they desire to move for targeted reasons like job opportunities than if they desire to move for more diffuse reasons relating to area characteristics. In [17], the authors studied the social capital and propensity to move of four different resident categories in two Dutch restructured neighborhoods. They defined social capital as the benefit of cursory interactions, trust, shared norms, and collective action. Using a logistic regression model, they showed that (1) age, length of residency, employment, income, dwelling satisfaction, dwelling type and perceived neighborhood quality significantly predict residents' propensity to move and (2) social capital is of less importance than suggested by previous research. \begin{table} \begin{tabular}{c c} \hline \hline **Component** & **Description** \\ \hline _Adversary: Objective_ & Specific attributes about individuals \\ \hline _Adversary: Resources_ & The attacker has access to the released classifier and has a subset of data \\ \hline _Vulnerability: Opportunity_ & Possession of original data and inference of individuals’ sensitive data \\ \hline _Countermeasure_ & Make access to original data and model unreliable \\ \hline \hline \end{tabular} \end{table} Table 1: Threat model addressed by our approach ### 3.2 Privacy in Machine Learning In this section, we will discuss challenges and possible solutions in privacy-preserving techniques. Existing works can be divided into three categories according to the roles of machine learning (ML) in privacy [18]: First, _making the ML model private_. This category includes making the ML model (its parameters) and the data private. Second, _using ML to enhance privacy protection_.
In this category, ML is used as a tool to enhance privacy protection of the data. Third, _ML-based privacy attack_. The ML model is used as an attack tool of the attacker. Based on the threat model, both data and the prediction model are important. Predicting and estimating the propensity to move requires access to models as well as to data. However, since the propensity-to-move data contains sensitive attributes such as income, gender, age, and education level, it is treated as sensitive and, once collected from individuals, it cannot be shared with third parties. One possible solution is to generate synthetic data that captures the distribution of the original data and generates artificial, yet realistic data. The synthetic data offers a replacement for the original data to enable model training, model validation and model explanation. In order to attempt to protect the machine learning model before release or sharing, we propose to train our model on the synthetic data instead of the original data. The goal is to test whether it is possible to release a machine learning model trained on synthetic data without leaking sensitive information. Synthetic data generation is based on two main steps: First, we train a model to learn the joint probability distribution in the original data. Second, we generate a new artificial data set from the same learned distribution. In recent years, advances in machine learning and deep learning models have offered us the possibility to learn a wide range of data types. Synthetic data was first proposed for Statistical Disclosure Control (SDC) [7]. The SDC literature distinguishes between two types of synthetic data [7]. First, _fully synthetic data sets_ create entirely synthetic data based on the original data set. Second, _partially synthetic data sets_ contain a mix of original and synthetic values. They replace only observed values for variables that bear a high risk of disclosure with synthetic values. In this paper, we are interested in fully synthetic data. For data synthesis, we used an open source and widely used R toolkit: _Synthpop_. We used a CART model for synthesis since it has been shown to perform well for other types of data [8]. Data synthesis is based on sequential modeling by decomposing a multidimensional joint distribution into conditional and univariate distributions. In other words, the synthesis procedure models and generates one variable at a time, conditionally to previous variables: \[f_{x_{1},x_{2},\dots,x_{n}}=f_{x_{1}}\times f_{x_{2}|x_{1}}\times\dots\times f_{x_{n}|x_{1},x_{2},\dots,x_{n-1}} \tag{1}\] Synthesis using a CART model has two important parameters. First, the order in which the variables are synthesized, called _visiting.sequence_. This parameter has an important impact on the quality of the synthetic data since it specifies the order in which the conditional synthesis will be applied. Second, the _stopping rules_, which dictate the number of observations that are assigned to a node in the tree. ### 3.3 Attribute Inference Attack Privacy attacks in machine learning [5, 22] include membership inference attacks [24], model reconstruction attacks such as attribute inference [29], model inversion attacks [11, 10], and model extraction attacks [28]. Here, we focus on a form of model inversion attacks, namely, the attribute inference attack. Model inversion attacks try to recover sensitive features or the full data sample based on output labels and partial knowledge (subset of data) of some features [22, 19].
[19] provided a summary of possible assumptions about adversary capabilities and resources for different model inversion attribute inference attacks. In [11, 10], the authors introduced two types of model inversion attacks: Black-box attack and white-box attack. The difference between black-box attack and white-box attack lies in the amount of resources that are available for the adversary. In [19], the authors proposed two types of model inversion attacks: (1) confidence score-based model inversion attack and (2) label-only model inversion attack. The first attack assumed that the adversary has access to the target model's confidence scores, whereas the second assumed that the adversary has access to the target model's label predictions only. Other attacks such as [13] assumed that the attacker does not have access to target individuals non-sensitive features. Attribute inference attackAn attribute inference attack or attribute disclosure occurs if an attacker is able to learn new information about a specific individual, i.e., the values of certain attributes. Examples from the Statistical Disclosure Control (SDC) literature include [7, 15]. Here, we study attribute inference attack as prediction. An attacker trains a model to predict the value of an unknown sensitive attribute from a set of known attributes given access to raw or synthetic data [25, 14]. We implemented our attribute inference attack using _adversarial robustness toolbox5_. In order to perform an attribute inference attack, we assume that the attacker has access to a subset of data, a marginal prior distribution representing possible values for the sensitive features in the training data, and the released ML model's predictions. Using this resources, an attacker is able to train a model to learn sensitive information. This attack is called black-box attack because the predictions of the model, but not the architecture or the weights are available to the attacker. Further details about our black-box attack will be discussed in section 4.3. Footnote 5: [https://github.com/Trusted-AI/adversarial-robustness-toolbox](https://github.com/Trusted-AI/adversarial-robustness-toolbox) In addition to black-box attack, we use two other attack models as baselines for comparison, namely, _random attack_ and _baseline attack_. Both attacks assume that the attacker does not have access to the released ML model. First, the random attack has only access to the marginal prior distribution of the sensitive feature that is being targeted. Our random attack uses random classifier with a stratified strategy, i.e., it generates random predictions that respect the class distribution of the training data. Second, the baseline attack also access to the prior distribution of the sensitive feature. However, in addition it also uses a ML model, i.e., a random forest classifier, to infer sensitive attributes. Recall that only the black-box attack is related to our threat model defined in Section 2. The random and baseline attacks provide comparative conditions, which the black-box attack must outperform. Measuring success of inferencePrior work on synthetic data disclosure risk [26] looked at either matching probability by comparing perceived match risk, expected match risk, and true match risk [20], or Bayesian estimation approach by assuming that an attacker seeks a Bayesian posteriori distribution [21]. In this paper, our black-box attack is considered successful if its accuracy outperforms the accuracy of a random attack. 
In other words, we assume that going beyond a random guess can reveal sensitive information about individuals. This type of measurement is similar to previous work on model inversion attribute inference attacks [11, 10, 13], which measure the difference between the adversary's predictive accuracy given the model and the best, i.e., ideal, accuracy that could be achieved without the model [29]. Methods for measuring success are discussed in [1], which also covers the precise or probabilistic measures conventionally used in the SDC community, i.e., using matching or Bayesian estimates. ## 4 Experimental Setup In this section, we describe our data sets, the utility measures obtained by applying different machine learning algorithms, and the adversary's resources. ### 4.1 Data Set For our experiments, we used an existing data set on the propensity to move. The data was collected by [2], who linked several registers from the Dutch System of Social Statistical Datasets (SSD). The data set has around 150K individuals, including 100K individuals drawn randomly from register data and 50K individuals sampled from the Housing Survey 2015 (HS2015) respondents. The resulting data set used in [2] has 700 variables, containing for each individual: (1) "y01", the binary target variable indicating whether (=1) or not (=0) a person moved in year \(j\), where \(j=2013,2015\). The target attribute "y01" is imbalanced and dominated by class 0. (2) time-independent personal variables, (3) time-dependent personal, household, and housing variables, (4) information about regional variables. Feature Selection. Different from [2], we applied feature selection to reduce the number of features. Some features can be noise and potentially reduce the performance of the models. Also, reducing the number of features helps to reduce the complexity of the synthesis and to better understand the output of the ML model. To do so, we applied _SelectKBest_ from _Sklearn_. We use the chi2 method as the scoring function. We selected the top \(K=30\) features with the highest scores. Our final data set contains the 30 best features for a total of 150K individuals.6 In addition to the 30 features, we added gender (binary), income (categorical with five categories), and age (categorical with seven categories) as sensitive features that will be used in our attribute inference attack later (Section 5.2). Gender, age, and income have balanced classes. Similar to [2], we found that the most important features are age (lft), time since latest change in household composition (inhehalgr3), and time since latest move or number of moves (rinobjectnummer). Footnote 6: We note that reducing the number of features does not have an impact on the success rate of the attack because there is redundancy in some variables, since they go up to 17 years back [2]. Data Splits. As mentioned earlier, our propensity to move data was collected in 2013 and 2015. Following [2], we use the 2013 data to train our classifier and the 2015 data to test the classifier and to carry out the attacks. The 2015 data contains individuals who were present in the 2013 data set, and also new individuals. We split the 2015 data set into two parts, "original individuals" (inclusive) and "new in 2015 individuals" (exclusive), in order to test our classifier and our attacks both on individuals who were in the training set and on "unseen" individuals. ### 4.2 Utility Measures Machine Learning Algorithms. We selected a number of machine learning algorithms to predict propensity-to-move.
The chosen machine learning techniques provide insight into the importance of the features and are easy to interpret and understand [2]. In our experiments in Section 5.1, we used: _decision tree_ where a tree is created/ learned by splitting the source set into subsets based on an attribute value test. This process is repeated on each derived subset in a recursive manner. Extra trees and random forest are part of ensemble methods. In _random forest_, each tree in the ensemble is built from a sample drawn with replacement (i.e., a bootstrap sample) from the training set. _Extra trees_ fits a number of randomized decision trees on various sub-samples of the data set and uses averaging to improve the predictive accuracy and control overfitting. _Naive Bayes_ is a probabilistic machine learning algorithm based on applying Bayes' theorem with strong (naive) independence assumptions between the features. _KNN_, K-nearest neighbors, is a non-parametric machine learning algorithm. KNN uses proximity to make predictions about the grouping of an individual data point. Metrics for Evaluating Performance of ML ModelsSimilar to [2] and since our target propensity-to-move attribute is imbalanced, we used: F1-score, as a harmonic mean of precision and recall score. Matthews Correlation Coefficient (MCC), and Area Under the Curve (AUC) that measures the ability of a classifier to distinguish between classes. ### Adversary Resources In Section 3.3, we provided description of our attack models. The attacker is interested to infer target individual sensitive features. Below, we briefly discuss different attack models used in our experiments along with different resources that are available for the attacker. * _Random attack_: uses a subset of data and marginal prior distribution. * _Baseline attack_: uses a subset of data, marginal prior distribution, and random forest classifier. * _Black-box attack_: uses a subset of data, marginal prior distribution, released ML model, and random forest classifier. In random attack model, a random classifier randomly infers target individual's sensitive features i.e., gender, age, income. In baseline attack model, a random forest classifier is trained on a subset of data and marginal prior distribution to predict sensitive features. Last but not least, a black-box attack model has access to the released ML model's predictions, in addition to having access to subset of data and marginal prior distribution. Then, a random forest classifier is trained to infer target individual's sensitive features. Understanding the vulnerability of a model to attribute inference attack requires using right metric to evaluate different attack models. Since our sensitive target features (gender, age, income) are balanced [10], we used precision, recall to measure the effectiveness of the attacks. Precision measures the ability of the classifier not to label as positive a sample that is negative. Precision is the ratio of \(tp/(tp+fp)\) where \(tp\) is the number of true positives and \(fp\) the number of false positives. Recall measures the ability of the classifier to find all the positive samples. Recall is the ratio of \(tp/(tp+fn)\) where \(tp\) is the number of true positives and \(fn\) the number of false negatives. We also measure accuracy which is defined as the fraction of predictions that our classifier got right. 
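Before turning to the experimental results, the following is a minimal scikit-learn sketch (ours, for illustration only) of how the three attack models described above could be assembled and scored with accuracy, precision, and recall. All names (`X_known`, `sensitive`, `released_model`, `X_target`, `s_target`) are placeholders we introduce here; the paper's actual implementation builds on the adversarial robustness toolbox and may differ in detail.

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score

def report(name, y_true, y_pred):
    # macro-averaged scores; the sensitive targets (gender, age, income)
    # have roughly balanced classes, so macro averaging is a reasonable choice
    print(f"{name}: acc={accuracy_score(y_true, y_pred):.3f} "
          f"prec={precision_score(y_true, y_pred, average='macro'):.3f} "
          f"rec={recall_score(y_true, y_pred, average='macro'):.3f}")

def run_attacks(X_known, sensitive, released_model, X_target, s_target):
    """X_known, sensitive: the attacker's data subset with the sensitive attribute;
    released_model: the published propensity-to-move classifier (query access);
    X_target, s_target: target individuals and ground-truth sensitive values."""
    # (1) random attack: draws predictions from the class prior only
    rnd = DummyClassifier(strategy="stratified").fit(X_known, sensitive)
    report("random  ", s_target, rnd.predict(X_target))

    # (2) baseline attack: random forest on the known, non-sensitive attributes
    base = RandomForestClassifier(random_state=0).fit(X_known, sensitive)
    report("baseline", s_target, base.predict(X_target))

    # (3) black-box attack: appends the released model's prediction for each
    #     individual as an extra feature (assuming here that the released model
    #     can be queried on the known attributes alone)
    def aug(X):
        return np.column_stack([X, released_model.predict(X)])
    bb = RandomForestClassifier(random_state=0).fit(aug(X_known), sensitive)
    report("blackbox", s_target, bb.predict(aug(X_target)))
```

Mirroring the success criterion above, the black-box attack counts as successful if its scores exceed those of the random attack.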
## 5 Experimental Results Now, that we have defined our threat model including the adversary resources and capabilities, and utility measures to evaluate the quality of synthetic data and machine learning algorithms, we turn to discuss our experimental results. ### Evaluation of Machine Learning Algorithms Table 2 shows our results of classification performance of propensity to move, and confirms the results of [2]. As expected, all classifiers outperform the random baseline, with classifiers using trees generally the stronger performers. We also see that when the test set includes only individuals already present in the training set (_inclusive_), the performance is better than when it includes only "unseen" individuals (_exclusive_). Note that if the data for the _inclusive_ individuals were identical in the training and test set, we would have expected very high classification scores. However, the data is not identical because it was collected on two different occasions with two years intervening, and individuals' situations would presumably have changed. _Reproducing Burger et al.,'s [2] results_ In Table 2, results show that all machine learning classifiers outperform random classifier. Overall we observe that our results are in line with [2] across different metrics. This confirms that we can still predict individuals moving behavior in the same level as in [2] even after reducing number of features. In addition to reproducing [2], we looked at another prediction model where train and test individuals are exclusive/different. We found that it is also possible to predict moving behavior of new individuals from 2015 based on a classifier trained on different individuals from 2013. _Measuring the utility of synthetic data_ In order to evaluate the quality of synthetic data, we run machine learning algorithms on synthesized training set (2013 data). We used \(TSTR\)[12] evaluation strategy where we train classifiers on 2013 synthetically generated data and we test on 2015 original data. Results in Table 2 show that the performance of machine learning algorithms trained on synthetic data is very close and comparable to the performance of machine learning algorithms trained on original data. This confirms that the synthetic training set can replace the original training set. In the remainder of the paper, we will focus on decision tree model. We will assume that we are releasing a decision tree model. ### Model Inversion Attribute Inference Attack In this section, we present the results of our experiments on attribute inference attack using the three attack models: (1) random attack, (2) baseline attack, (3) black-box attack (Section 4.3). Recall that we assume that the adversary can have access to three different subsets of data (Section 2). 1. Inclusive individuals (2013): the attacker has access to a subset of the data that is used from 2013 to train the released machine learning algorithm. 2. Inclusive individuals (2015): the attacker has access to a more recent subset of data from 2015, but for the same set of individuals that are used to train the released machine learning algorithm. 3. Exclusive individuals (2015): the attacker has access to a recent subset of data from 2015, but the individuals are different from individuals that are used to train the released machine learning algorithm. Table 3 shows results of different attribute inference attacks for three type of sensitive features gender, age and income. 
We notice that attack always achieves \begin{table} \begin{tabular}{c c c c c c c} \hline \multicolumn{2}{c}{_Machine Learning_} & \multicolumn{4}{c}{_Training and test_} & \multicolumn{2}{c}{_Training and test_} \\ \multicolumn{2}{c}{_Algorithms_} & \multicolumn{4}{c}{_individuals are exclusive individuals are inclusive_} \\ \cline{3-8} \multicolumn{2}{c}{} & **AUC** & **MCC** & **F1-score** & **AUC** & **MCC** & **F1-score** \\ \hline \multirow{8}{*}{_Original Data_} & _Random_ & 0.4962 & -0.0105 & 0.2139 & 0.5014 & 0.0029 & 0.1633 \\ \cline{2-8} & _NaiveBayes_ & 0.5656 & -0.0328 & 0.5491 & 0.6815 & 0.2204 & 0.2992 \\ \cline{2-8} & _RandomForest_ & 0.7061 & 0.3210 & 0.6322 & 0.7532 & 0.3121 & 0.4460 \\ \cline{2-8} & _DecisionTree_ & 0.6372 & 0.2692 & 0.5376 & 0.6568 & 0.2292 & 0.3057 \\ \cline{2-8} & _ExtraTrees_ & 0.7226 & 0.3197 & 0.6325 & 0.7597 & 0.3212 & 0.4525 \\ \cline{2-8} & _KNN_ & 0.6304 & 0.2074 & 0.4104 & 0.6717 & 0.1744 & 0.2235 \\ \hline \multirow{8}{*}{_Synthetic Data_} & _Random_ & 0.4991 & -0.025 & 0.2261 & 0.5011 & 0.0022 & 0.1657 \\ \cline{2-8} & _NaiveBayes_ & 0.5658 & 0.045 & 0.5451 & 0.6822 & 0.2029 & 0.2578 \\ \cline{1-1} \cline{2-8} & _RandomForest_ & 0.7053 & 0.3282 & 0.6343 & 0.7467 & 0.3133 & 0.4471 \\ \cline{1-1} \cline{2-8} & _DecisionTree_ & 0.6489 & 0.2598 & 0.4878 & 0.6618 & 0.2125 & 0.3078 \\ \cline{1-1} \cline{2-8} & _ExtraTrees_ & 0.7188 & 0.3185 & 0.6321 & 0.7557 & 0.3138 & 0.4464 \\ \cline{1-1} \cline{2-8} & _KNN_ & 0.6067 & 0.1152 & 0.1857 & 0.6542 & 0.1637 & 0.2070 \\ \hline \end{tabular} \end{table} Table 2: Classification performance of propensity-to-move measured in terms of AUC, MCC, and F1-score on **original data** and **synthetic data**. (Right) the data splitting is similar to [2]. The training set individuals and test set individuals are inclusive. (Left) A different data splitting where we train the model on individuals data from 2013, then, we test the model on different individuals from 2015. better than random scores, which demonstrates the viability of the attack. Comparing the row "Original" for the three individuals sets and across all three sets of sensitive attributes (columns), we see that the attack is less successful for the "Exclusive" individuals who were unseen in the training data of the classifier. This fact might lead us to wonder whether training the classifier on synthetic data might lead to less successful attacks, since the individuals in the training data would be in some way "different" with the target individuals. This, however, turns out not to be the case. Comparing the row "Synthetic" for the three individuals sets and across all three sets of sensitive attributes (columns), we see that if the training data is synthesized using the original training data, the model is just as susceptible to attack as when trained on the original data. This point is less surprising when we take into account the high success of the "Random" attack. This attack recovers sensitive attributes of individuals without access to the trained machine learning model. Instead, priors are used. We assume that the information of the priors is also retained in the trained model. These results demonstrate the magnitude of the challenge that we face, if we wish to release a trained machine learning model publically. ## 6 Conclusion and Future Work In this paper, we have investigated an attack on a machine learning model trained to predict individual's propensity-to-move i.e., in the next two years. 
We have found that an attacker is able to infer sensitive attributes both for individuals in the training data as well as for "unseen" individuals. However, we observed that for "unseen" individuals, the attribute inference attack is somewhat less successful. This result is consistent with the training data used to train the ML model having a different distribution than the "unseen" individuals. To explore the ability of synthetic data to protect against attribute inference attacks, we created fully synthetic data using a CART model. The ML model trained on synthetic data maintained prediction performance, but was found to leak in the same way as the original classifier. This result is not particularly surprising. Synthetic data mimics properties of the original data including overall structure, correlation between features, and the joint distributions [25]. Our result is interesting because, until now, the SDC community working with synthetic data has mainly focused on measuring the risk of identity disclosure rather than attribute disclosure [26]. In the identity disclosure literature, synthetic data has been shown to provide protection [6, 27]. Our work draws attention to the fact that a lot of work is still needed to protect against attribute disclosure [1]. A potential solution to protect against attribute inference attacks is to apply privacy-preserving techniques during synthesis, e.g., data perturbation or masking sensitive attributes. Also, it would be interesting to explore different combinations of ML and conventional models to synthesize and carry out attribute attacks. From an evaluation perspective, future work should look at other metrics [14] (e.g., from an SDC and/or ML perspective) to evaluate and quantify the success of attribute inference attacks for a given target individual. Finally, future research should expand the threat model that we have adopted in this research (Section 2) and consider other attack scenarios in which the attacker has access to more limited resources, e.g., assuming that the attacker does not have access to all attributes in the data.
2302.07657
Dynamic Flows with Time-Dependent Capacities
Dynamic network flows, sometimes called flows over time, extend the notion of network flows to include a transit time for each edge. While Ford and Fulkerson showed that certain dynamic flow problems can be solved via a reduction to static flows, many advanced models considering congestion and time-dependent networks result in NP-hard problems. To increase understanding of these advanced dynamic flow settings we study the structural and computational complexity of the canonical extensions that have time-dependent capacities or time-dependent transit times. If the considered time interval is finite, we show that already a single edge changing capacity or transit time once makes the dynamic flow problem weakly NP-hard. In case of infinite considered time, one change in transit time or two changes in capacity make the problem weakly NP-hard. For just one capacity change, we conjecture that the problem can be solved in polynomial time. Additionally, we show the structural property that dynamic cuts and flows can become exponentially complex in the above settings where the problem is NP-hard. We further show that, despite the duality between cuts and flows, their complexities can be exponentially far apart.
Thomas Bläsius, Adrian Feilhauer, Jannik Westenfelder
2023-02-15T13:42:26Z
http://arxiv.org/abs/2302.07657v1
# Dynamic Flows with Time-Dependent Capacities ###### Abstract Dynamic network flows, sometimes called flows over time, extend the notion of network flows to include a transit time for each edge. While Ford and Fulkerson showed that certain dynamic flow problems can be solved via a reduction to static flows, many advanced models considering congestion and time-dependent networks result in NP-hard problems. To increase understanding of these advanced dynamic flow settings we study the structural and computational complexity of the canonical extensions that have time-dependent capacities or time-dependent transit times. If the considered time interval is finite, we show that already a single edge changing capacity or transit time once makes the dynamic flow problem weakly NP-hard. In case of infinite considered time, one change in transit time or two changes in capacity make the problem weakly NP-hard. For just one capacity change, we conjecture that the problem can be solved in polynomial time. Additionally, we show the structural property that dynamic cuts and flows can become exponentially complex in the above settings where the problem is NP-hard. We further show that, despite the duality between cuts and flows, their complexities can be exponentially far apart. ## 1 Introduction Network flows are a well established way to model transportation of goods or data through systems representable as graphs. Dynamic flows (sometimes called flows over time) include the temporal component by considering the time to traverse an edge. They were introduced by Ford and Fulkerson [2], who showed that maximum dynamic flows in static networks can be found using _temporally repeated flows_, which send flow over paths of a static maximum flow as long as possible. Since capacities in real-world networks tend to be more dynamic, several generalizations have been considered in the literature. One category here is congestion modeling networks, where transit times of edges can depend on the flow routed over them [6, 7]. Other generalizations model changes in the network independently from the routed flow [4, 11, 9]. This makes it possible to model known physical changes to the network and allows for situations, where we have estimates of the overall congestion over time that is caused by external entities that are not part of the given flow problem. There are also efforts to include different objectives for the flow, e.g., for evacuation scenarios, it is beneficial for a flow to maximize arrival for all times, not just at the end of the considered time interval [1]. Most problems modeling congestion via flow-dependent transit times are NP-hard. If the transit time depends on the current load of the edge, the flow problems become strongly NP-hard and no \(\varepsilon\) approximation exists unless P = NP [6]. If the transit time of an edge instead only depends on its inflow rate while flow that entered the edge earlier is ignored the flow problems are also strongly NP-hard [7]. When allowing to store flow at vertices, pseudo-polynomial algorithms are possible if there are time-dependent capacities [4] and if there additionally are time-dependent transit times [11, 9]. In the above mentioned evacuation scenario, one aims at finding the so-called earliest arrival flow (EAF). It is also NP-hard in the sense that it is hard to find the average arrival time of such a flow [1]. Moreover, all known algorithms to find EAFs have worst case exponential output size for all known encodings [1]. 
In this paper, we study natural generalizations of dynamic flows that have received little attention so far, allowing time-dependent capacities or time-dependent transit times. We prove that finding dynamic flows with time-dependent capacities or time-dependent transit times is weakly NP-hard, even if the graph is acyclic and only a single edge experiences a capacity change at a single point in time. This shows that a single change in capacity already increases the complexity of the - otherwise polynomially solvable - dynamic flow problem. It also implies that the dynamic flow problem with time-dependent capacities is not FTP in the number of capacity changes. The above results hold in the setting where the considered time interval is finite. If we instead consider infinite time, the results remain the same for time-dependent transit times. For time-dependent capacities, two capacity changes make the problem weakly NP-hard. We conjecture that it can be solved in polynomial when there is only one change. Beyond these results on the computational complexity, we provide several structural insights. For static flows, one is usually not only interested in the flow value but wants to output a maximum flow or a minimum cut. The concept of flows translates more or less directly to the dynamic setting [2], we need to consider time-dependent flows and cuts if we have time-dependent capacities or transit times. In this case, instead of having just one flow value per edge, the flow is a function over time. Similarly, in a dynamic cut, the assignment of vertices to one of two partitions changes over time. The cut-flow duality, stating that the capacity of the minimum cut is the same as the value of the maximum flow also holds in this and many related settings [5, 8, 11]. Note that the output complexity can potentially be large if the flow on an edge or the partition of a vertex in a cut changes often. For dynamic flows on static graphs (no changes in capacities or transit times) vertices start in the target vertices' partition and at some point change to the source partition, but never the other way [10], which shows that cuts have linear complexity in this setting. In case of time-dependent capacities or transit times, we show that flow and cut complexity are sometimes required to be exponential. Specifically, for all cases where we show weak NP-hardness, we also give instances for which every maximum flow and minimum cut have exponential complexity. Thus, even a single edge changing capacity or transit time once can jump the output complexity from linear to exponential. Moreover, we give examples where the flow complexity is exponential while there exists a cut of low complexity and vice versa. We note that the scenario of time-dependent capacities has been claimed to be strongly NP-complete [9] before. However, we suspect the proof to be flawed as one can see that this scenario can be solved in pseudo-polynomial time. Moreover, the above mentioned results on the solution complexity make it unclear whether the problem is actually in NP. In Appendix 0.A, we point out the place where we believe the proof for strong NP-hardness is flawed. ## 2 Preliminaries We consider dynamic networks \(G=(V,E)\) with directed edges and designated source and target vertices \(s,t\in V\). Edges \(e=(v,w)\in E\) have a time-dependent non negative _capacity_\(u_{e}\colon[0,T]\to\mathbb{R}_{0}^{+}\), specifying how much flow can enter \(e\) via \(v\) at each time. 
We allow \(u_{e}\) to be non-continuous but only for finitely many points in time. In addition, each edge \(e=(v,w)\) also has a non negative _transit time_\(\tau_{e}\in\mathbb{R}^{+}\), denoting how much time flow takes to move from \(v\) to \(w\) when traversing \(e\). Note that the capacity is defined on \([0,T]\), i.e., time is considered from \(0\) up to a _time horizon_\(T\). Let \(f\) be a collection of measurable functions \(f_{e}\colon[0,T-\tau_{e}]\to\mathbb{R}\), one for each edge \(e\in E\), assigning every edge a flow value depending on the time. The restriction to the interval \([0,T-\tau_{e}]\) has the interpretation that no flow may be sent before time \(0\) and no flow should arrive after time \(T\) in a valid flow. To simplify notation, we allow time values beyond \([0,T-\tau_{e}]\) and implicitly assume \(f_{e}(\Theta)=0\) for \(\Theta\notin[0,T-\tau_{e}]\). We call \(f\) a _dynamic flow_ if it satisfies the _capacity constraints_\(f_{e}(\Theta)\leq u_{e}(\Theta)\) for all \(e\in E\) and \(\Theta\in[0,T-\tau_{e}]\), and _strong flow conservation_, which we define in the following. The _excess flow_\(\operatorname{ex}_{f}(v,\Theta)\) of a vertex \(v\) at time \(\Theta\) is the difference between flow sent to \(v\) and the flow sent from \(v\) up to time \(\Theta\), i.e., \[\operatorname{ex}_{f}(v,\Theta)\coloneqq\int_{0}^{\Theta}\sum_{e=(u,v)\in E}f_ {e}(\zeta-\tau_{e})-\sum_{e=(v,u)\in E}f_{e}(\zeta)\,\mathrm{d}\zeta.\] We have strong flow conservation if \(\operatorname{ex}_{f}(v,\Theta)=0\) for all \(v\in V\setminus\{s,t\}\) and \(\Theta\in[0,T]\). The _value_ of \(f\) is defined as the excess of the target vertex at the time horizon \(|f|\coloneqq\operatorname{ex}_{f}(t,T)=-\operatorname{ex}_{f}(s,T)\). The _maximum dynamic flow problem with time-dependent capacities_ is to find a flow of maximum value. We refer to its input as _dynamic flow network_. A cut-flow duality similar to the one of the static maximum flow problem holds for the maximum dynamic flow problem with the following cut definition. A _dynamic cut_ or _cut over time_ is a partition of the vertices \((S,V\setminus S)\) for each point in time, where the source vertex \(s\) always belongs to \(S\) while the target \(t\) never belongs to \(S\). Formally, each vertex \(v\in V\) has a boolean function \(S_{v}\colon[0,T]\to\{0,1\}\) assigning \(v\) to \(S\) at time \(\Theta\) if \(S_{v}(\Theta)=1\). As for the flow, we extend \(S_{v}\) beyond \([0,T]\) and set \(S_{v}(\Theta)=1\) for \(\Theta>T\) for all \(v\in V\) (including \(t\)). The _capacity_\(\operatorname{cap}(S)\) of a dynamic cut \(S\) is the maximum flow that could be sent on edges from \(S\) to \(V\setminus S\) during the considered time interval \([0,T]\), i.e., \[\operatorname{cap}(S)=\int_{0}^{T}\sum_{\begin{subarray}{c}(v,w)\in E\\ S_{v}(\Theta)=1\\ S_{w}(\Theta+\tau_{e})=0\end{subarray}}u_{(v,w)}(\Theta)\,\mathrm{d}\Theta.\] An edge \((v,w)\)_contributes_ to the cut \(S\) at time \(\Theta\) if it contributes to the above sum, so \(S_{v}(\Theta)=1\wedge S_{w}(\Theta+\tau_{e})=0\). Note that this is similar to the static case, but in the dynamic variant the delay of the transit time needs to be considered. Thus, for the edge \(e=(v,w)\) we consider \(v\) at time \(\Theta\) and \(w\) at time \(\Theta+\tau_{e}\). Moreover, setting \(S_{v}(\Theta)=1\) for all vertices \(v\) if \(\Theta>T\) makes sure that no point in time beyond the time horizon contributes to \(\operatorname{cap}(S)\). 
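To make the preceding definitions concrete, the following is a minimal sketch (ours, not from the paper) of how a dynamic flow with piecewise constant rates can be represented and how its value \(|f|=\operatorname{ex}_{f}(t,T)\) is evaluated; the names `PiecewiseConstant` and `flow_value` are illustrative choices and not part of any established library.

```python
# Minimal sketch: piecewise-constant flow rates on [0, T] and the excess ex_f(t, T),
# i.e. the value |f| of a dynamic flow, evaluated as an integral of step functions.

from bisect import bisect_right

class PiecewiseConstant:
    """Right-continuous step function given by breakpoints [(t_0, c_0), (t_1, c_1), ...];
    the value is c_i on [t_i, t_{i+1}) and 0 before t_0."""
    def __init__(self, pieces):
        self.pieces = sorted(pieces)

    def at(self, t):
        i = bisect_right([p[0] for p in self.pieces], t) - 1
        return self.pieces[i][1] if i >= 0 else 0.0

    def integral(self, a, b, shift=0.0):
        """Integrate f(t - shift) over [a, b]; the shift models the transit-time delay."""
        cuts = sorted({a, b} | {p[0] + shift for p in self.pieces if a < p[0] + shift < b})
        grid = [a] + [c for c in cuts if a < c < b] + [b]
        total = 0.0
        for lo, hi in zip(grid, grid[1:]):
            total += (hi - lo) * self.at((lo + hi) / 2 - shift)
        return total

def flow_value(flow, edges, T, target):
    """ex_f(target, T): inflow into `target` minus outflow, integrated over [0, T]."""
    val = 0.0
    for (v, w), tau in edges.items():
        if w == target:
            val += flow[(v, w)].integral(0, T, shift=tau)
        if v == target:
            val -= flow[(v, w)].integral(0, T)
    return val

# Example: a single edge (s, t) with transit time 1 and capacity 2 on [0, 3];
# sending flow at rate 2 during [0, 2] gives value 4 for time horizon T = 3.
edges = {("s", "t"): 1.0}
flow = {("s", "t"): PiecewiseConstant([(0.0, 2.0), (2.0, 0.0)])}
print(flow_value(flow, edges, T=3.0, target="t"))  # -> 4.0
```

The same bookkeeping extends to evaluating \(\operatorname{cap}(S)\) for a dynamic cut, since both quantities are integrals of piecewise constant functions over \([0,T]\).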
Theorem 2.1 (Min-Cut Max-Flow Theorem [8, 11]): _For a maximum flow over time \(f\) and a minimum cut over time \(S\) it holds \(|f|=cap(S)\)._ Proof: The theorem by Philpott [8, Theorem 1] is more general than the setting considered here. They in particular allow for time-dependent storage capacities of vertices. We obtain the here stated theorem by simply setting them to constant zero. The theorem by Tjandra [11, Theorem 3.4] is even more general and thus also covers the setting with time-dependent transit times. Though the general definition allows the capacity functions to be arbitrary, for our constructions it suffices to use piecewise constant capacities. We note that in this case, there always exists a maximum flow that is also piecewise constant, assigning flow values to a set of intervals of non-zero measure. The property that the intervals have non-zero measure lets us consider an individual point \(\Theta\) in time and talk about the contribution of an edge to a cut or flow at time \(\Theta\), as \(\Theta\) is guaranteed to be part of a non-empty interval with the same cut or flow. For the remainder of this paper, we assume that all flows have the above property. We define the following additional useful notation. We use \(S(\Theta)\coloneqq\{v\in V\mid S_{v}(\Theta)=1\}\) and \(\bar{S}(\Theta)\coloneqq\{v\in V\mid S_{v}(\Theta)=0\}\) to denote the cut at time \(\Theta\). Moreover, a vertex \(v\) changes its partition at time \(\Theta\) if \(S_{v}(\Theta-\varepsilon)\neq S_{v}(\Theta+\varepsilon)\) for every sufficiently small \(\varepsilon>0\). We denote a change from \(S\) to \(\bar{S}\) with \(S_{v}\xrightarrow{\Theta}\bar{S}_{v}\) and a change in the other _direction_ from \(\bar{S}\) to \(S\) with \(\bar{S}_{v}\xrightarrow{\Theta}S_{v}\). We denote the number of partition changes of a vertex \(v\) in a cut \(S\) with \(\operatorname{ch}_{v}(S)\). Moreover the total number of changes in \(S\) is the _complexity_ of the cut \(S\). For a flow \(f\), we define changes on edges as well as the complexity of \(f\) analogously. In the above definition of the maximum dynamic flow problem we allow time-dependent capacities but assume constant transit times. Most of our results translate to the complementary scenario where transit times are time-dependent while capacities are constant. In this setting \(\tau_{e}(\Theta)\) denotes how much time flow takes to traverse \(e\), if it enters at time \(\Theta\). Similarly to the above definition, we allow \(\tau_{e}\) to be non-continuous for finitely many points in time. Additionally we look at the scenarios where infinite time (\(\Theta\in(-\infty,\infty)\)) is considered instead of only considering times in \([0,T]\). This removes structural effects caused by the boundaries of the considered time interval. Intuitively, because we are working with piecewise constant functions with finitely many discontinuities, there exists a point in time \(\Theta\) that is sufficiently late that all effects of capacity changes no longer play a role. From that time on, one can assume the maximum flow and minimum cut to be constant. The same holds true for a sufficiently early point in time. Thus, to compare flow values it suffices to look at a finite interval \(I\). 
Formally, \(f\) is a maximum dynamic flow with _infinite considered time_ if it is constant outside of \(I\) and maximum on \(I\), such that for any larger interval \(J\supset I\) there exists a large enough interval \(K\supset J\) so that some maximum flow with considered time interval \(K\) agrees with \(f\) during \(J\). Minimum cuts with infinite considered time are defined analogously. Such maximum flows and minimum cuts always exist, as temporally repeated flows provide optimal solutions to dynamic flows and we only allow finitely many changes to capacity or traversal time. We will need the set of all positive integers up to \(k\) and denote it \([k]\coloneqq\{i\in\mathbb{N}^{+}|i\leq k\}\). ## 3 Computational Complexity In this section, we study the computational complexity of the dynamic flow problem with time-dependent capacities or transit times. We consider finite and infinite time. For all cases except for a single capacity change with infinite considered time, we prove NP-hardness. We start by showing hardness in the setting where we have time-dependent capacities with only one edge changing capacity once. Our construction directly translates to the setting of infinite considered time with one edge changing capacity twice. For the case of time-dependent transit times, we prove hardness for one change even in the infinite considered time setting. This also implies hardness for one change when we have a finite time horizon. We reduce from the _partition problem_, which is defined as follows. Given a set of positive integers \(S=\{b_{1},\ldots,b_{k}\}\) with \(\sum_{i=1}^{k}b_{i}=2L\), is there a subset \(S^{\prime}\subset S\) such that \(\sum_{a\in S^{\prime}}a=L\)? Theorem 2.2: _The dynamic flow problem with time-dependent capacities is weakly NP-hard, even for acyclic graphs with only one capacity change._ Proof: Given an instance of the partition problem, we construct \(G=(V,E)\) as shown in Figure 1 and show that a solution to partition is equivalent to a flow of value \(1\) in \(G\). Every \(b_{i}\in S\) corresponds to a vertex \(x_{i}\) which can be reached from \(x_{i-1}\) with one edge of transit time \(b_{i}\) and one bypass edge of transit time zero. The last of these vertices \(x_{k}\) is connected to the target \(t\) with an edge only allowing flow to pass during \([L+1,L+2]\), where the lower border is ensured by the capacity change of \((x_{k},t)\) and the upper border is given by the time horizon \(T=L+3\). The source \(s\) is connected to \(x_{0}\) with an edge of low capacity \(\frac{1}{L+1}\), so that the single flow unit that can enter this edge in \([0,T-2]\) can pass \((x_{k},t)\) during one time unit. Since a solution to the partition problem is equivalent to a path of transit time \(L\) through the \(x_{i}\), we additionally provide paths of transit time \(0,1,\ldots,L-1\) bypassing the \(b_{i}\) edges via the \(y_{i}\), so that a solution for partition exists if and only if flow of value \(1\) can reach \(t\). To provide the bypass paths, we set \(\ell\in\mathbb{N}_{0}\) so that \(L=2^{\ell+1}+r,\ r\in\mathbb{N}_{0},r<2^{\ell+1}\) and define vertices \(y_{i},i\in\mathbb{N}_{0},i\leq\ell\). They create a path of transit time \(L-1\) where the edges' transit times are powers of two and one edge of transit time \(r\), and all edges can be bypassed by an edge with transit time zero. This allows all integer transit times from \(0\) up to \(L-1\). All edges except for \((s,x_{0})\) have unit capacity when they are active. 
Given a solution \(S^{\prime}\) to the partition problem, we can route flow leaving \(s\) during \([0,1]\) through the \(x_{i}\) along the non-zero transit time edges if and only if the corresponding \(b_{i}\) is in \(S^{\prime}\). Flow leaving \(s\) in \([1,L+1]\) can trivially reach \(x_{k}\) during \([L+1,L+2]\) using the bypass paths, providing a maximum flow of \(1\). Only one unit of flow can reach \(x_{0}\) until \(L+2\); considering the time horizon \(T=L+3\) and the transit time of \((x_{k},t)\), the flow can have value at most \(1\). Given a flow that sends one unit of flow to \(t\), we can see that the flow has to route all flow that can pass \((s,x_{0})\) during \([0,L+1]\) to \(t\). Due to the integrality of transit times, the flow leaving \(s\) during \([0,1]\) has to take exactly time \(L\) to traverse from \(x_{0}\) to \(x_{k}\). The bypass paths via \(y_{0}\) are too short for this. As such, this time is the sum of edge transit times taken from the partition instance and zeroes from bypass edges, and there exists a solution \(S^{\prime}\) to the partition problem that consists of the elements corresponding to the non-zero transit time edges taken by this flow. Figure 1: Graph constructed for the reduction of the partition problem to dynamic flow with time-dependent capacities. Flow leaving \(s\) at time zero can only reach \(t\) if it takes exactly time \(L\) to traverse from \(x_{0}\) to \(x_{k}\), thus choosing a partition. Black numbers are transit times, blue numbers indicate capacities, all unspecified capacities are \(1\), and the time horizon is \(T=L+3\). Corollary 1: _The dynamic flow problem with time-dependent capacities and infinite considered time is weakly NP-hard, even for acyclic graphs with only two capacity changes._ Proof: In the proof of Theorem 2.2, we restricted the flow on the edge from \(x_{k}\) to \(t\) to have non-zero capacity only at time \([L+1,L+2]\). For Theorem 2.2, we achieved the lower bound with one capacity change and the upper bound with the time horizon. Here, we can use the same construction but use a second capacity change for the upper bound. For the case of time-dependent transit times, we use a similar reduction. We start with the case of infinite considered time. Theorem 2.3: _The dynamic flow problem with infinite considered time and time-dependent transit times is weakly NP-hard, even for acyclic graphs with only one transit time change._ Proof: Similarly to the proof of Theorem 2.2, we give a reduction from the partition problem. The constructed graph can be seen in Figure 2. We want to link the existence of a transit time \(L\) path to a maximum flow sending \(1\) unit of flow per time unit from \(s\). For this, we start the graph with an edge \((s,x_{0})\) whose transit time gets reduced from \(1\) to zero at time \(\Theta=0\). This results in \(2\) units of flow reaching \(x_{0}\) at time \(\Theta=0\), while only a flow of \(1\) can traverse \((x_{0},t)\). This means that flow of \(1\) has to pass through the \(x_{i}\) and \(y_{i}\). The paths through the \(x_{i}\) and the bypass paths through the \(y_{i}\) function as in the proof of Theorem 2.2, but here the bypass edges also provide paths of transit times \(L+1\) to \(2L\). Because the edge \((x_{k},t)\) has capacity \(u_{(x_{k},t)}=\frac{1}{2L+1}\), the flow routed through \(x_{k}\) has to arrive at \(x_{k}\) using at least \(2L+1\) paths with different transit times. The paths from \(x_{0}\) to \(x_{k}\) have integer transit times between zero and \(2L\), so to route the extra unit of flow arriving at \(x_{0}\) at time \(\Theta=0\), flow needs to be routed through one path of each integer transit time between \(0\) and \(2L\). The bypass paths do not offer a path of transit time \(L\), so, as in the proof of Theorem 2.2, this flow unit can be completely routed through the network if and only if the partition problem has a solution. 
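To illustrate the shape of these reductions, here is a rough sketch (our own encoding, not code from the paper) of how a partition instance \(\{b_{1},\ldots,b_{k}\}\) with \(\sum_{i}b_{i}=2L\) is turned into the capacity-change network of Figure 1; the \(y_{i}\)-sub-network is abstracted to the set of bypass transit times \(\{0,\ldots,L-1\}\) it realizes, and time-dependent capacities are stored as lists of (time, value) breakpoints.

```python
# Rough sketch of the reduction behind Theorem 2.2 / Figure 1 (our own encoding):
# a partition instance with sum 2L becomes a dynamic flow network in which one unit
# of flow reaches t iff some subset of the b_i sums to L.

from fractions import Fraction

def partition_to_network(b):
    k, total = len(b), sum(b)
    assert total % 2 == 0
    L = total // 2
    T = L + 3                                             # time horizon used in the proof
    edges = []                                            # (tail, head, transit, capacity breakpoints)
    edges.append(("s", "x0", 0, [(0, Fraction(1, L + 1))]))   # throttled source edge
    for i, bi in enumerate(b, start=1):
        edges.append((f"x{i-1}", f"x{i}", bi, [(0, 1)]))      # "take b_i" edge
        edges.append((f"x{i-1}", f"x{i}", 0,  [(0, 1)]))      # bypass b_i with delay 0
    # the y_i-gadget of the paper realizes every integer bypass delay 0, ..., L-1
    bypass_delays = list(range(L))
    # (x_k, t) only opens at time L+1; the upper end is enforced by the horizon T
    edges.append((f"x{k}", "t", 1, [(0, 0), (L + 1, 1)]))
    return edges, bypass_delays, T

edges, delays, T = partition_to_network([3, 1, 1, 2, 3, 2])   # 2L = 12, so L = 6
for e in edges:
    print(e)
print("bypass delays realized by the y_i:", delays, " time horizon:", T)
```

Deciding whether one unit of flow can reach \(t\) in this network is then exactly the question of whether some subset of the \(b_{i}\) sums to \(L\).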
To translate this result to the case of a finite time horizon, note that we can use the above construction and choose the time horizon sufficiently large to obtain the following corollary. Corollary 2: _The dynamic flow problem with time-dependent transit times is weakly NP-hard, even for acyclic graphs with only one transit time change._ Proof: Using the construction of the proof for Theorem 2.3, we can restrict time to the interval \([-1,2L+1]\); then a solution to the partition problem is equivalent to the existence of a flow of value \(2L+2\). To get a considered time interval from zero to a time horizon, we let the transit time change of \((s,x_{0})\) occur at time \(\Theta=1\) instead and set \(T=2L+2\). This leaves one remaining case: infinite considered time and a single capacity change. For this case, we can show that there always exists a minimum dynamic cut where each vertex changes its partition at most once and all partition changes are of the same direction. Furthermore, for given partitions before and after the changes, a linear program can be used to find the optimal transition as long as no vertex changes partition more than once and all partition changes are of the same direction. This motivates the following conjecture. Conjecture 1: The minimum cut problem in a dynamic flow network with only a single change in capacity and infinite considered time can be solved in polynomial time. Figure 2: Graph constructed for the reduction of the partition problem to dynamic flow with infinite considered time and time-dependent transit times. The flow units leaving \(s\) at times \(-1\) and zero can only reach \(t\) if they take the direct \((x_{0},t)\) edge and \(2L+1\) paths with different transit times to \(x_{k}\). Black numbers are transit times, blue numbers specify capacity, all unspecified capacities are \(1\). ## 4 The Complexity of Maximum Flows and Minimum Cuts We first construct a dynamic flow network such that all maximum flows and minimum cuts have exponential complexity. Afterwards, we show that there are also instances that require exponentially complex flows but allow for cuts of low complexity and vice versa. These results are initially proven for a single change in capacity and are then shown to also hold in the setting with time-dependent transit times, likewise with only one change in transit time required. ### Exponentially Complex Flows and Cuts We initially focus on the complexity of cuts and only later show that it transfers to flows. Before we start the construction, note that the example in Figure 3 shows how the partition changes of two vertices \(a\) and \(b\) can force a single vertex \(v\) to change its partition back and forth. This type of enforced partition change of \(v\) is at the core of our construction. More specifically, we first give a structure with which we can force vertices to mimic the partition changes of other vertices, potentially with fixed time delay. 
The _mimicking gadget_ links two non-terminal vertices \(a,b\in V\setminus\{s,t\}\) using edges \((a,b),(b,t)\in E\) with capacities \(u_{(a,b)}=\alpha,u_{(b,t)}=\beta\). The following lemma shows what properties \(\alpha\) and \(\beta\) need to have such that the mimicking gadget lives up to its name, i.e., that \(b\) mimics \(a\) with delay \(\tau_{(a,b)}\). A visualization of the mimicking gadget is shown in Figure 4. Lemma 1: _Let \(G\) be a graph that contains the mimicking gadget as a sub-graph, such that_ \[\alpha>\sum_{w|(b,w)\in E}u_{(b,w)}\quad\text{and}\quad\beta>\sum_{w|(w,b)\in E \setminus(a,b)}u_{(w,b)}.\] _Then, \(S_{b}(\Theta)=S_{a}(\Theta-\tau_{(a,b)})\) for every minimum cut \(S\) and times \(\Theta\in(\tau_{(a,b)},T-\tau_{(b,t)})\)._ Figure 3: Example where \(a\) changes from \(\bar{S}\) (red) to \(S\) (blue) at time \(1\) and \(b\) changes from \(S\) to \(\bar{S}\) at time \(2\). Only edges from \(S\) to \(\bar{S}\) contribute to the cut (bold edges). Assuming \(v\) starts in \(\bar{S}\) and \(u_{(a,v)}<u_{(v,b)}\) as well as \(\tau_{(a,v)}=\tau_{(v,b)}=0\), \(v\) has to change to \(S\) at time \(1\) and back to \(\bar{S}\) at time \(2\) in a minimum cut (top row). The bottom row illustrates the alternative (more expensive) behavior of \(v\). Proof: We first show \(a\in S(\Theta-\tau_{(a,b)})\implies b\in S(\Theta)\). With the partition of \(a\) fixed, we look at the possible contribution to \(\operatorname{cap}(S)\) of edges incident to \(b\) at time \(\Theta\). For \(b\in\bar{S}(\Theta)\) the contribution is at least \(\alpha\), because \(\Theta\in(\tau_{(a,b)},T-\tau_{(b,t)})\) ensures that \((a,b)\) can contribute to \(S\). For \(b\in S(\Theta)\) the contribution is at most \(\sum_{w|(b,w)\in E}u_{(b,w)}<\alpha\). Because \(S\) is a minimum cut, we obtain \(b\in S(\Theta)\). The other direction \(a\in\bar{S}(\Theta-\tau_{(a,b)})\implies b\in\bar{S}(\Theta)\) holds for similar reasons. For \(b\in S(\Theta)\) the contribution is at least \(\beta\). For \(b\in\bar{S}(\Theta)\) the contribution is at most \(\sum_{w|(w,b)\in E\setminus(a,b)}u_{(w,b)}<\beta\), since the edge \((a,b)\) cannot contribute while \(a\in\bar{S}(\Theta-\tau_{(a,b)})\). Note that Lemma 1 does not restrict the edges incident to \(a\). Thus, we can use it rather flexibly to transfer partition changes from one vertex to another. To enforce exponentially many partition changes, we next give a gadget that can double the number of partition changes of one vertex. To this end, we assume that, for every integer \(i\in[k]\), we already have access to vertices \(a_{i}\) with _period_\(p_{i}\coloneqq 2^{i}\), i.e., \(a_{i}\) changes partition every \(p_{i}\) units of time. Note that \(a_{1}\) is the vertex with the most changes. With this, we construct the so-called binary counting gadget that produces a vertex \(v\) with period \(p_{0}=1\), which results in it having twice as many changes as \(a_{1}\). Roughly speaking, the binary counting gadget, shown in Figure 6, consists of the above mentioned vertices \(a_{i}\) together with additional vertices \(b_{i}\) such that \(b_{i}\) mimics \(a_{i}\). Between the \(a_{i}\) and \(b_{i}\) lies the central vertex \(v\) with edges from the vertices \(a_{i}\) and edges to the \(b_{i}\). Carefully chosen capacities and synchronization between the \(a_{i}\) and \(b_{i}\) result in \(v\) changing partition every step. To iterate this process using \(v\) as vertex for the binary counting gadget of the next level, we need to ensure functionality with the additionally attached edges of the mimicking gadget. The _binary counting gadget_\(H_{k}\) shown in Figure 6 is formally defined as follows. 
It contains the above mentioned vertices \(a_{i},b_{i}\) for \(i\in[k]\) and the vertex \(v\). Additionally, it contains the source \(s\) and target \(t\). On this vertex set, we have five types of edges. All of them have transit time \(1\) unless explicitly specified otherwise. The first two types are the edges \((a_{i},b_{i})\) and \((b_{i},t)\) for \(i\in[k]\), which form a mimicking gadget. We set \(\tau_{(a_{i},b_{i})}=p_{i}+1\), which makes \(b_{i}\) mimic the changes of \(a_{i}\) with delay \(p_{i}+1\). Moreover, we set \(u_{(a_{i},b_{i})}=\alpha_{i}\coloneqq 2^{i-1}+2\varepsilon\) and \(u_{(b_{i},t)}=\beta_{i}\coloneqq 2^{i-1}+\varepsilon\). We will see that these \(\alpha_{i}\) and \(\beta_{i}\) satisfy the requirements of the mimicking gadget in Lemma 1. Figure 4: Gadget linking the partitions of two vertices \(a\) and \(b\), so that \(b\) mimics \(a\) with a delay of \(\tau_{(a,b)}\); \(\alpha,\beta\) are capacities. The third and fourth types of edges are \((a_{i},v)\) and \((v,b_{i})\) for \(i\in[k]\) with capacities \(u_{(a_{i},v)}=u_{(v,b_{i})}=2^{i-1}\). These edges serve to force the partition changes of \(v\), similar to the simple example in Figure 3. Finally, we have the edge \((s,v)\) with capacity \(u_{(s,v)}=1-\varepsilon\). It serves to fix the initial partition of \(v\) and introduce some asymmetry to ensure functionality even if additional edges are attached to \(v\). Our plan is to prove that the binary counting gadget \(H_{k}\) works as desired by induction over \(k\). We start by defining the desired properties that will serve as induction hypothesis. Definition 1: Let \(G\) be a graph. We say that \(H_{k}\) is a _valid binary counting gadget_ in \(G\) if \(H_{k}\) is a subgraph of \(G\) and every minimum cut \(S\) has the following properties. * For \(i\in[k-1]\), the vertex \(a_{i}\) has period \(p_{i}\). It changes its partition \(2^{k-i}\) times starting with a change from \(S\) to \(\bar{S}\) at time \(0\) and ending with a change at time \(2^{k}-2^{i}\). * For \(i=k\), \(a_{k}\) changes from \(S\) to \(\bar{S}\) at time \(0\) and additionally back to \(S\) at time \(2^{k}\). Note that in a valid binary counting gadget the \(a_{i}\) and \(v\) form a binary counter from \(0\) to \(2^{k}-1\) when regarding \(\bar{S}\) as zero and \(S\) as \(1\), with \(v\) being the least significant bit (shifted back two time steps); see Figure 5. Lemma 2: _Let \(G\) be a graph containing the valid binary counting gadget \(H_{k}\) such that the central vertex \(v\) has no additional incoming edges, the sum of the capacities of additional outgoing edges of \(v\) is less than \(1-\varepsilon\), and no additional edges are incident to the \(b_{i}\). Then, for every minimum cut \(S\), the central vertex \(v\) has period \(p_{0}=1\) and changes \(2^{k}\) times, starting with a change from \(S\) to \(\bar{S}\) at time \(2\) and ending with a change at time \(2^{k}+1\)._ Proof: First note that the mimicking gadget allows us to cause an inverted counter behavior for the \(b_{i}\), affecting \(v\) one time step before the corresponding \(a_{i}\). For \(\Theta\in\{0,\ldots,2^{k}-1\}\), the \(a_{i}\) change \[S_{a_{i}}\xrightarrow{\Theta\equiv 0\bmod 2^{i+1}}\bar{S}_{a_{i}},\qquad\bar{S}_{a_{i}}\xrightarrow{\Theta\equiv 2^{i}\bmod 2^{i+1}}S_{a_{i}},\qquad\bar{S}_{a_{k}}\xrightarrow{2^{k}}S_{a_{k}}.\] Figure 5: Visualization of the partition change patterns of a valid binary counting gadget \(H_{4}\). In blue sections the vertex is in \(S\) and in red sections it is in \(\bar{S}\). 
Figure 6: Visualization of the binary counting gadget, which allows doubling the number of partition changes of a single vertex, ensuring \(2^{k}\) changes of vertex \(v\) assuming that vertices \(a_{i},i<k\) change partition \(\text{ch}_{a_{i}}=2^{k-i}\) times each, with \(\text{ch}_{a_{k}}=2\) and correct timing; black numbers are capacities. The delay of \(p_{i}+1\) for inverting the counter is chosen because the activated states of the \(a_{i}\) and \(b_{i}\) are opposite, i.e., \(S_{a_{i}}(\Theta-\tau_{(a_{i},v)})=1\) allows contribution of \(u_{(a_{i},v)}\), but \(S_{b_{i}}(\Theta+\tau_{(v,b_{i})})=0\) allows contribution of \(u_{(v,b_{i})}\). The reset step at time \(1\), changing all \(b_{i}\) to \(S\), can be omitted, as all vertices start in \(S\). So the necessary delay between \(a_{i}\) and \(b_{i}\) is \(2^{i}+1\). The desired change pattern therefore is \[\bar{S}_{b_{i}}\xrightarrow{\Theta\equiv 1\bmod 2^{i+1}}S_{b_{i}},\qquad\bar{S}_{b_{k}}\xrightarrow{2^{k+1}+1}S_{b_{k}},\qquad S_{b_{i}}\xrightarrow{\Theta\equiv 2^{i}+1\bmod 2^{i+1}}\bar{S}_{b_{i}}\] for \(\Theta\in\{3,\ldots,2^{k}+1\}\). This pattern is achieved by the functionality of the mimicking gadget shown in Lemma 1 and the fact that \(G\) cannot have additional edges incident to any \(b_{i}\). To see that the partition changes of \(v\) have to occur in the claimed way for any minimum cut \(S\), we look at the edges incident to \(v\). All other edges' contribution to any cut is already fixed. An edge \((a_{i},v)\) contributes its capacity \(2^{i-1}\) to \(\operatorname{cap}(S)\) if and only if \(a_{i}\in S(\Theta-1)\) and \(v\in\bar{S}(\Theta)\) (\(s\in S(\Theta)\) always holds, so \((s,v)\) contributes \(1-\varepsilon\) if \(v\in\bar{S}(\Theta)\)) for some time \(\Theta\). Likewise, the only way for \((v,b_{i})\) to contribute to \(\operatorname{cap}(S)\) is \(v\in S(\Theta)\) and \(b_{i}\in\bar{S}(\Theta+1)\) for some time \(\Theta\). Evaluating these contributions for the given partition changes of \(a_{i},b_{i}\), we see that the counter of the \(b_{i}\) contributions is half a counting step (which corresponds to one time unit) ahead of the \(a_{i}\) contribution counter. For the last change of \(a_{k}\), affecting \(v\) at time \(2^{k}+1\), the \(a_{i}\) counter gets larger than the \(b_{i}\) counter instead of equaling it. Formally, for \(\Theta\in[1,2^{k}+1)\) the contribution of \((a_{i},v)\) and \((v,b_{i})\) edges is \[\sum_{i|S_{a_{i}}(\Theta-1)=1}u_{(a_{i},v)}=\left\lfloor\frac{\Theta-1}{2}\right\rfloor\quad\text{and}\quad\sum_{i|S_{b_{i}}(\Theta+1)=0}u_{(v,b_{i})}=\left\lfloor\frac{\Theta}{2}\right\rfloor,\] and for \(\Theta\in[2^{k}+1,2^{k}+2)\) \[\sum_{i|S_{a_{i}}(\Theta-1)=1}u_{(a_{i},v)}=2^{k}-1>2^{k-1}=\sum_{i|S_{b_{i}}( \Theta+1)=0}u_{(v,b_{i})}.\] With the additional edge \((s,v)\), the side with more potential to contribute to the cut changes at every \(\Theta\in\{2,\ldots,2^{k}+1\}\), forcing \(v\) to change its partition every time to ensure minimality of the cut. So the change pattern of \(v\) is \[S_{v}\xrightarrow{\Theta\equiv 0\bmod 2}\bar{S}_{v}\quad\text{and}\quad\bar{S}_{v}\xrightarrow{\Theta\equiv 1\bmod 2}S_{v}\] for \(\Theta\in\{2,\ldots,2^{k}+1\}\). Edges leaving \(v\) can only contribute to \(\operatorname{cap}(S)\) if \(v\in S(\Theta)\), so whenever the gadget already ensures \(v\in\bar{S}(\Theta)\), added outgoing edges cannot impede the gadget's behavior. 
When \(H_{k}\) ensures \(v\in S(\Theta)\), we have \[\sum_{i|S_{a_{i}}(\Theta-1)=1}u_{(a_{i},v)}\geq\sum_{i|S_{b_{i}}(\Theta+1)=0}u _{(v,b_{i})}.\] To minimize \(\operatorname{cap}(S)\), the assignment \(v\in S(\Theta)\) remains necessary to minimize \(\operatorname{cap}(S)\), because of the edge \((s,v_{k})\) with capacity \(1-\varepsilon\), which is larger than the sum over the capacities of all added edges. Note that Lemma 2 provides the first part towards the induction step of constructing a valid \(H_{k+1}\) from a valid \(H_{k}\). In the following, we show how to scale periods of the \(a_{i}\) such that \(a_{i}\) from \(H_{k}\) can serve as the \(a_{i+1}\) from \(H_{k+1}\) and \(v\) can serve as the new \(a_{1}\). Afterwards, it remains to show two things. First, additional edges to actually build \(H_{k+1}\) from \(H_{k}\) can be introduced without losing validity. And secondly, we need the initial step of the induction, i.e., the existence of a valid \(H_{1}\) even in the presence of only one capacity change. We say that a minimum dynamic cut \(S\)_remains optimal under scaling and translation of time_ if \(S\) is a minimum cut on graph \(G=(V,E)\) with transit times \(\tau_{e}\), capacities \(u_{e}(\Theta)\) and time interval \([0,T]\) if and only if \(\hat{S}\) with \(\hat{S}_{v}(r\cdot\Theta+T_{0})\coloneqq S_{v}(\Theta)\ \forall\Theta\in[0,T]\) is a minimum cut on \(\hat{G}=(V,E)\) with transit times \(\hat{\tau}_{e}\coloneqq r\cdot\tau_{e}\), capacities \(\hat{u}_{e}(r\cdot\Theta+T_{0})\coloneqq u_{e}(\Theta)\) and time interval \([T_{0},r\cdot T+T_{0}]\) for any \(r\in\mathbb{R}^{+},T_{0}\in\mathbb{R}\). Lemma 3: _Any dynamic cut remains optimal under scaling and translation of time. It also remains optimal under scaling of capacities._ Proof: The capacity of a cut is unaffected by translation of time. Scaling time scales the capacity of any cut by the same factor. So the relative difference of the capacity of different cuts is not affected by scaling and translation of time. Scaling capacities alters the capacity of every cut by the same factor, so the relative difference in capacity of cuts remains unchanged. With this, we can combine binary counting gadgets of different sizes to create a large binary counting gadget \(H_{\ell}\) while only requiring a single capacity change. Lemma 4: _For every \(\ell\in\mathbb{N}^{+}\), there exists a polynomially sized, acyclic dynamic flow network with only one capacity change that contains a valid binary counting gadget \(H_{\ell}\)._ Proof: To create the necessary change patterns for the \(a_{i}\) of \(H_{\ell}\), we chain binary counting gadgets of increasing size, beginning with \(H_{2}\) up to \(H_{\ell}\) together creating the graph \(G_{\ell}\) as shown in Figure 7. The coarse idea is to ensure the behavior of all \(a_{k,i}\) by having them mimic the central vertex \(v_{k-i}\) of the correct smaller binary counting gadget. Timewise, the binary counting gadgets \(H_{k}\) are scaled by \(\Delta_{k}\coloneqq 2^{-k}\) and translated by \(T_{0,k}\coloneqq 2+3(1-2\Delta_{k})+\Delta_{k}\). So the first change from \(S\) to \(\bar{S}\) of the \(a_{k,i}\) in \(H_{k}\) should happen at time \(T_{0,k}\) and the period between two successive changes of \(a_{k,i}\) is \(p_{k,i}\coloneqq\Delta_{k}\cdot 2^{i}=\Delta_{k-i}\), which is the period between two successive changes of \(v_{k-i}\). 
To correctly synchronize the different \(H_{k}\), the delay for the mimicking between the binary counting gadgets is set to \(\tau_{(v_{k-i},a_{k,i})}=3\cdot(\Delta_{k-i}-2\Delta_{k})+\Delta_{k}\). This is chosen so that \(T_{0,k-i}+2\Delta_{k-i}+\tau_{(v_{k-i},a_{k,i})}=T_{0,k}\). We set \(\varepsilon\coloneqq\frac{1}{6}\) for the capacity of \((s,v_{k})\) in \(H_{k}\). To ensure that the connecting mimicking gadgets do not exceed the permitted capacities leaving \(v_{k}\) of \(H_{k}\), the capacities of all edges in the \(H_{k}\) are scaled by factor \(\lambda_{k}\coloneqq\frac{1}{5^{k}}\). In accordance with this scaling and the capacity requirements of Lemma 1, the capacities of the mimicking gadget connecting \(v_{k-i}\) to \(a_{k,i}\) are \(u_{(v_{k-i},a_{k,i})}=\lambda_{k}\gamma_{i}\) with \(\gamma_{i}\coloneqq 2^{i}+4\varepsilon\) and \(u_{(a_{k,i},t)}=\lambda_{k}\varepsilon\). Ensuring the partition changes of \(v_{0}\) can be done with one capacity change as shown in Figure 8, using a path \(s,v_{0},t\). Transit times for \(H_{\text{start}}\) are \(\tau_{(s,v_{0})}=1\) and \(\tau_{(v_{0},t)}=T-3\), capacities are \(u_{(s,v_{0})}=2\) and \(u_{(v_{0},t)}(\Theta)=1,\Theta<2\) changing to \(u_{(v_{0},t)}(\Theta)=3,\Theta\geq 2\). The time horizon is set to \(T\coloneqq 6\). \(G_{\ell}\) clearly has polynomial size in regard to \(\ell\) and there is only one capacity change. The previously shown functionality of the binary counting gadget in Lemma 2 and the mimicking gadget in Lemma 1 are the basis for showing, that the contained \(H_{\ell}\) is valid. Lemma 3 provides that those gadgets' functionality is also given under the shifted and compressed time in which they are used for the construction of \(G_{\ell}\). To prove the correct behavior of the constructed graph, it needs to be shown that the mimicking gadgets adhere to the capacity restrictions established earlier, and that their attachment to smaller binary counting gadgets does not impede the behavior of those counting gadgets. The correctness of the mimicking gadgets' capacities can easily be seen. The \(a_{k,i}\) have no incoming edges outside of the mimicking gadget, so \(u_{(a_{k,i},t)}>0\) fulfills the requirement for \((a_{k,i},t)\). There are two additional edges leaving \(a_{k,i}\), one to \(b_{k,i}\) and one to \(v_{k}\). The combined capacity of all outgoing edges of \(a_{k,i}\) is therefore \(\lambda_{k}(\alpha_{i}+2^{i-1}+\varepsilon)=\lambda_{k}(2^{i}+3\varepsilon)< \lambda_{k}\gamma_{i}\), so the restriction for \((v_{k-i},a_{k,i})\) holds. To see that the chaining of binary counting gadgets does not impede the behavior of smaller binary counting gadgets, notice that the capacities of the edges leaving the \(v_{k}\) are chosen to not cross the established threshold: \[\sum_{i\in\mathbb{N}^{+},i\leq\ell-k}\lambda_{k+i}\gamma_{i}=\lambda_{k}\sum_{ i\in\mathbb{N}^{+},i\leq\ell-k}\left(\frac{2}{5}\right)^{i}+\frac{4}{6\cdot 5^{i}} <\lambda_{k}(\frac{2}{3}+\frac{1}{6})=\lambda_{k}(1-\varepsilon)\] Note that the behavior of \(v_{0}\) is also unimpeded by the connections to the binary counting gadgets. This follows from the same argument for \(k=0\) as well as the observation, that the changes of \(v_{0}\) are - ignoring connections to the binary counting gadgets - always ensured by a capacity difference of at least \(\lambda_{0}\). We use induction to show that the binary counting gadgets' \(a_{k,i}\) change at the required times for any minimum cut \(S\). 
More specifically, we show \[S_{v_{k}}\xrightarrow{\Theta\equiv T_{0,k}\bmod 2\Delta_{k}}\bar{S}_{v_{k}} \xrightarrow{\Theta\equiv T_{0,k}+\Delta_{k}\bmod 2\Delta_{k}}S_{v_{k}}\] for all \(\Theta\in\{T_{0,k}+2\Delta_{k},T_{0,k}+3\Delta_{k},\ldots,T_{0,k}+1+\Delta_{k}\}\). The correct startup behavior of \(v_{0}\) requires two partition changes. For now we ignore the edges from the attachment to the binary counting gadgets, since they do not affect behavior, as argued above. The partition change \(S\xrightarrow{2}\bar{S}\) directly follows from the capacity change of \((v_{0},t)\) at time \(\Theta=2\) increasing the potential contribution of \(v_{0}\in S(2)\). The other partition change \(\bar{S}\xrightarrow{3}S\) is a result of the approaching time horizon \(T\), which reduces the potential contribution of \(v_{0}\in S(3)\) to zero. Now assume, for a fixed \(k\in\mathbb{N}\), gadgets \(H_{\text{start}}\) to \(H_{k-1}\) work correctly, producing the desired changes. Because of the functionality of the mimicking gadget, with the delay \(\tau_{(v_{k-i},a_{k,i})}\), by induction \(a_{k,i}\) experiences changes \(S_{a_{k,i}}\xrightarrow{\Theta}\bar{S}_{a_{k,i}}\) at times \[\Theta\equiv T_{0,k-i}+\tau_{(v_{k-i},a_{k,i})}\bmod 2\Delta_{k-i}\equiv T_{0,k} \bmod 2\Delta_{k-i}\] and changes \(\bar{S}_{a_{k,i}}\xrightarrow{\Theta}S_{a_{k,i}}\) at times \[\Theta\equiv T_{0,k-i}+\Delta_{k-i}+\tau_{(v_{k-i},a_{k,i})}\bmod 2\Delta_{k-i} \equiv T_{0,k}+\Delta_{k-i}\bmod 2\Delta_{k-i}\] beginning with \(\Theta=T_{0,k-i}+2\Delta_{k-i}+\tau_{(v_{k-i},a_{k,i})}=T_{0,k}\) up to \(\Theta=T_{0,k-i}+1+\Delta_{k-i}+\tau_{(v_{k-i},a_{k,i})}=T_{0,k}+1-\Delta_{k-i}\). This means that beginning at \(T_{0,k}\) the \(a_{k,i}\) form a binary counter increasing every \(2\Delta_{k}\) with the additional change \(\bar{S}_{a_{k}}\xrightarrow{T_{0,k}+1}S_{a_{k}}\). Now Lemma 2 (multiplying the time with factor \(\Delta_{k}\) and adding the initial offset of \(T_{0,k}\)) provides the desired change timings for \(v_{k}\). The correct change pattern of the \(a_{\ell,i}\) required for the validity of \(H_{\ell}\) follows from the stronger induction hypothesis for the \(v_{k}\), as seen above during the induction step. Figure 7: Construction linking binary counting gadgets to ensure \(\operatorname{ch}_{v_{\ell}}=2^{\ell}\) partition changes at \(v_{\ell}\) in a minimum cut; purple edges represent mimicking gadgets, numbers are capacities. To be able to use the complexity of minimum cuts to show complexity of maximum flows, we need the following lemma. Lemma 5: _Every edge contributing to the capacity of some minimum cut has to be saturated by every maximum flow during the time where it contributes to a cut. Moreover, every edge \(e=(v,w)\) with \(v\in\bar{S}(\Theta)\) and \(w\in S(\Theta+\tau_{e})\) for some minimum cut \(S\) may not route flow at time \(\Theta\) for any maximum flow._ Proof: For any minimum cut \(S\), flow is routed from \(s\in S\) to \(t\in\bar{S}\). This means that any path along which flow is routed has to contain at least one edge allowing flow to move from \(S\) to \(\bar{S}\). All edges allowing flow to traverse from \(S\) to \(\bar{S}\) contribute to \(\operatorname{cap}(S)\). As such, if one contributing edge was not saturated by a flow \(f\), the value \(|f|\) would be smaller than \(\operatorname{cap}(S)\). Then the cut-flow duality of Theorem 2.1 provides that \(f\) cannot be a maximum flow. 
Given a minimum cut \(S\), flow of \(f\) routed over an edge \(e=(v,w)\) with \(v\in\bar{S}(\Theta)\) and \(w\in S(\Theta+\tau_{e})\) has to cross multiple edges from \(S\) to \(\bar{S}\) to reach \(t\), one before \(e\) and one after it. So even if \(f\) saturates all edges contributing to \(\operatorname{cap}(S)\) as discussed above, less flow than \(\operatorname{cap}(S)\) can reach \(t\). With this, Theorem 2.1 again provides that \(f\) is not a maximum flow. To obtain the following theorem, it only remains to observe that the structure of the minimum cut in the construction of Lemma 4 also implies exponentially complex maximum flows, using Lemma 5. Further note that discretization of time is possible. Theorem 4.1: _There exist dynamic flow networks with only one capacity change where every minimum cut and maximum flow has exponential complexity. This even holds for acyclic networks and discrete time._ Proof: A valid binary counting gadget \(H_{\ell}\) has a central vertex \(v_{\ell}\) that experiences \(2^{\ell}\) partition changes in any minimum cut \(S\), as shown in Lemma 2. Lemma 4 provides the existence of a polynomially sized, acyclic dynamic flow network \(G_{\ell}\) containing a valid \(H_{\ell}\), so any minimum cut in \(G_{\ell}\) has exponential complexity. The construction of \(G_{\ell}\) uses only transit times and change timings that are multiples of \(\Delta_{\ell}\), so discretizing time to units of length \(\Delta_{\ell}\) provides a discrete-time dynamic flow network with the same properties. To see that this partition change pattern with exponentially many changes also implies that any maximum flow has to have exponential complexity, we need Lemma 5, which shows that the minimum cuts impose restrictions on maximum flows. The partition change patterns of \(a_{\ell,1}\) (changing every \(2\Delta_{\ell}\)) and \(v_{\ell}\) (changing every \(\Delta_{\ell}\)) in \(S\) result in \((a_{\ell,1},v_{\ell})\) changing from an edge from \(S\) to \(\bar{S}\) to an edge from \(\bar{S}\) to \(S\) exponentially often. This implies exponentially many changes of \(f_{(a_{\ell,1},v_{\ell})}\) in any maximum flow \(f\) in \(G_{\ell}\). Note that the construction from Lemma 4 requires a specific time horizon \(T\). In the case of infinite considered time, two capacity changes suffice to obtain the same result. Figure 8: Construction of \(H_{\text{start}}\), providing the partition changes of \(v_{0}\) needed for \(G_{\ell}\) with one capacity change, numbers are capacities, \(\tau_{(v_{0},t)}=T-3\). Corollary 3: _Theorem 4.1 also holds for infinite considered time with two capacity changes._ Proof: The starting gadget \(H_{\rm start}\) shown in Figure 8 can be modified to force the two changes \(S\xrightarrow{2}\bar{S}\) and \(\bar{S}\xrightarrow{3}S\) of \(v_{0}\) with infinite considered time by additionally changing the capacity of \((v_{0},t)\) to zero at time \(\Theta=3\). The rest of the proof of Theorem 4.1 is not changed by the introduction of infinite considered time. Note that if Conjecture 1 holds, two capacity changes are necessary in this setting. As mentioned in the introduction, the above complexity results transfer to the setting where we have time-dependent transit times instead of time-dependent capacities. The result of Corollary 3 can even be strengthened to only require a single transit time change. 
Corollary 4: _Theorem 4.1 also holds in the setting of static capacities and time-dependent transit times with a single change, with finite time horizon and with infinite considered time._ Proof: For infinite considered time, we give the starting gadget \(H_{\rm start}\) presented in Figure 9. Here \((v_{0},t)\) is always saturated for any maximum flow \(f\), except during \([2,3]\), when no flow can be routed over it due to the increase in transit time by \(1\) of \((s,v_{0})\). The corresponding minimum cut \(S\) requires \(v_{0}\) to be in \(S\) always except during \([2,3]\), when \(v_{0}\) has to be in \(\bar{S}\). These are the changes \(H_{\rm start}\) needs to provide as the induction start in the proof of Lemma 4. This clearly also works for only considering time in \([0,6]\) as in the proof of Theorem 4.1. Note that the construction of Theorem 4.1 causes every minimum cut and every maximum flow to have exponential complexity. In the following, we show that exponentially many changes in cut or flow can occur independently. Specifically, we provide constructions that require exponentially complex flows but allow for cuts of low complexity and vice versa. Figure 9: Construction of \(H_{\rm start}\), providing the partition changes of \(v_{0}\) needed for \(G_{\ell}\) with one change in transit time, black numbers are transit times, blue numbers are capacities. ### Complex Flows and Simple Cuts (and Vice Versa) All above constructions require all minimum cuts _and_ all maximum flows to have exponential complexity. Here, we show that flows and cuts can be independent in the sense that their required complexity can be exponentially far apart (in both directions). Theorem 4.2: _There exist acyclic dynamic flow networks with only one capacity change where every maximum flow has exponential complexity, while there exists a minimum cut of constant complexity. The same is true for static capacities and time-dependent transit times._ Proof: Figure 10 shows a graph with these properties. This is achieved by only allowing flow to enter \(v_{0}\) during \([0,1]\), but it has to leave \(v_{k}\) during \([0,2^{k}]\) due to the reduced capacity of \((v_{k},t)\) for a time horizon \(T\geq 2^{k}\). Apart from \(v_{k}\), all \(v_{i}\) are connected to the next \(v_{i+1}\) with a pair of edges, with transit times \(2^{k-i-1}\) and zero; all these edges have capacity \(1\). So all \(2^{k}\) paths of different transit time through the \(v_{i}\) have to be used to route flow for a maximum flow. Every second of those paths has an even transit time, so flow has to traverse the edge with transit time zero between \(v_{k-1}\) and \(v_{k}\) every second integer time interval, which results in exponentially many changes in flow over that edge. However, assigning all \(v_{i}\) to \(\bar{S}\) for all time is a minimum cut without partition changes. This generalizes to time-dependent transit times, as we can block the edge \((s,v_{0})\) at time \(\Theta=1\) by increasing its transit time to \(T\) at that time. Theorem 4.3: _There exist acyclic dynamic flow networks with only one capacity change where every minimum cut has exponential complexity, while there exists a maximum flow of linear complexity. The same is true for static capacities and time-dependent transit times._ Proof: Figure 11 shows a graph where any minimum cut needs to contain exponentially many changes, but a maximum flow with linearly many changes exists. 
Figure 10: Example of a graph where any maximum flow contains exponentially many changes, but there is a minimum cut with no changes; black numbers are transit times, blue numbers are capacities, unspecified capacities are \(1\). The idea of this construction is that only \(\frac{1}{2^{k}+1}\) flow can enter \(s^{\prime}\) at any time and flow can only leave from \(x_{2k}\) to \(t\) during one integer interval after \(2^{k}\) and before the time horizon \(T=2^{k}+1\). All other edges have capacity 1. There is a set of bypass paths through the \(y_{i}\) that allows flow to be routed from \(s^{\prime}\) to \(x_{2k}\) in any integer time up to \(2^{k}-1\), by connecting \(y_{i}\) to \(y_{i+1}\) with a pair of edges with transit times \(2^{k-i-1}\) and zero. So \(\frac{2^{k}}{2^{k}+1}\) flow can be routed through the graph. The section where exponential cuts will be necessary consists of the \(x_{i}\). The initial \(x_{0}\) can be reached from \(s^{\prime}\) with transit time 1; internally, each \(x_{i},i<k\) is connected to the next \(x_{i+1}\) with a pair of edges with transit times \(2^{i+1}\) and zero. The later \(x_{i},i\geq k\) are connected to the next \(x_{i+1}\) with a pair of edges of transit time \(2^{2k-i}\) and zero. If the bypass paths are used, no additional flow can move through the upper paths via the \(x_{i}\) because of the capacity of \((s,s^{\prime})\) and the lack of a transit time \(2^{k}\) path through the \(x_{i}\). There is no maximum flow that saturates any edge except for \((s,s^{\prime})\) at any time, so, because of the cut-flow duality of Theorem 2.1, this is the only edge that can contribute to the capacity of a minimum cut. Since there are integer transit time paths from \(s^{\prime}\) to \(x_{2k}\) for any transit time up to \(2^{k}-1\), we know that \(s^{\prime}\) has to be in \(\bar{S}\) during \([1,2^{k}+1)\) to prevent any other edges from contributing to the cut. This already results in the capacity of the cut being at least \(\frac{2^{k}}{2^{k}+1}\), so \(s^{\prime}\) has to be in \(S\) during \([0,1)\). To prevent any other contributions to the capacity of the cut, \(x_{k}\) needs to change partition exponentially often. Observe that any path's transit time from \(s^{\prime}\) to \(x_{k}\) plus the 1 from \((s^{\prime},x_{0})\) marks a timing where \(x_{k}\) has to be in the \(S\) partition. Likewise, \(2^{k}\) minus the transit time of a path from \(x_{k}\) to \(x_{2k}\) marks a timing where \(x_{k}\) has to be in the \(\bar{S}\) partition. So during any integer interval before \(2^{k}\) starting with an odd time, \(x_{k}\) has to be in \(S\) and at any such interval starting at an even time, it has to be in \(\bar{S}\). The flow through this graph can easily be represented with linear complexity, with at most one change in flow rate per edge. Using the bypass paths as described above, no flow gets routed through any \(x_{i}\). The bypass edges are ordered to ensure that flow moves through any \(y_{i}\) during \([2^{k}-2^{k-i}+1,2^{k}+1)\) when using all different path transit times as required for this maximum flow. This means that each edge from \(y_{i}\) to \(y_{i+1}\) in the bypass edges sends \(\frac{2^{i}}{2^{k}+1}\) during \([2^{k}-2^{k-i}+1,2^{k}-2^{k-i-1}+1)\) for the \(2^{k-i-1}\) transit time edge and during \([2^{k}-2^{k-i-1}+1,2^{k}+1)\) for the edge with transit time zero. 
Flow over the remaining edges is easily representable as well; \((s,s^{\prime})\) is saturated during \([1,T)\), \((x_{2k},t)\) sends \(\frac{2^{k}}{2^{k}+1}\) during \([2^{k},T)\), and \((s^{\prime},y_{0})\) and \((y_{k},x_{2k})\) only exist to improve the visual representation; the flow over them is determined by \((s,s^{\prime})\) and \((x_{2k},t)\), respectively. This generalizes to time-dependent transit times, as we can activate the edge \((x_{2k},t)\) at time \(\Theta=2^{k}\) by decreasing its transit time from \(T\) to 0 at that time.
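Both proofs in this subsection rest on chains of parallel edges whose transit times are powers of two. The following small sketch (ours, not from the paper) enumerates the resulting path transit times and checks the parity argument used in the proof of Theorem 4.2, namely that even-length paths are exactly those using the zero-transit edge of the last pair.

```python
# Sketch of the path structure behind Theorem 4.2: one pair of parallel edges per level,
# with transit times 2^(k-i-1) and 0, realizes every integer transit time 0, ..., 2^k - 1,
# and a path has even transit time iff it skips the transit-1 edge of the last pair.

from itertools import product

k = 4
levels = [2 ** (k - i - 1) for i in range(k)]        # [8, 4, 2, 1] for k = 4

transit_of = {}                                       # choice vector -> total transit time
for choice in product([0, 1], repeat=k):              # 1 = take the delayed edge at that level
    transit_of[choice] = sum(t for t, c in zip(levels, choice) if c)

# every integer transit time 0, ..., 2^k - 1 is realized exactly once
assert sorted(transit_of.values()) == list(range(2 ** k))

# even total <-> the last (transit-1) edge is skipped, i.e. the zero-transit edge is used
even_iff_zero_edge = all((total % 2 == 0) == (choice[-1] == 0)
                         for choice, total in transit_of.items())
print(even_iff_zero_edge)  # True
```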
2301.06019
Proportion of blocking curves in a pencil
Let $\mathcal{L}$ be a pencil of plane curves defined over $\mathbb{F}_q$ with no $\mathbb{F}_q$-points in its base locus. We investigate the number of curves in $\mathcal{L}$ whose $\mathbb{F}_q$-points form a blocking set. When the degree of the pencil is allowed to grow with respect to $q$, we show that the geometric problem can be translated into a purely combinatorial problem about disjoint blocking sets. We also study the same problem when the degree of the pencil is fixed.
Shamil Asgarli, Dragos Ghioca, Chi Hoi Yip
2023-01-15T05:13:35Z
http://arxiv.org/abs/2301.06019v1
# Proportion of blocking curves in a pencil ###### Abstract. Let \(\mathcal{L}\) be a pencil of plane curves defined over \(\mathbb{F}_{q}\) with no \(\mathbb{F}_{q}\)-points in its base locus. We investigate the number of curves in \(\mathcal{L}\) whose \(\mathbb{F}_{q}\)-points form a blocking set. When the degree of the pencil is allowed to grow with respect to \(q\), we show that the geometric problem can be translated into a purely combinatorial problem about disjoint blocking sets. We also study the same problem when the degree of the pencil is fixed. Key words and phrases:Pencil, plane curves, blocking sets, finite field 2020 Mathematics Subject Classification: Primary: 14H50, 51E21; Secondary: 14C21, 14N05, 14G15, 51E20 ## 1. Introduction Throughout the paper, \(p\) denotes a prime, \(q\) denotes a power of \(p\), and \(\mathbb{F}_{q}\) denotes the finite field with \(q\) elements. Recall that a set of points \(B\subseteq\mathbb{P}^{2}(\mathbb{F}_{q})\) is a _blocking set_ if every \(\mathbb{F}_{q}\)-line meets \(B\). For example, a union of \(q+1\) distinct \(\mathbb{F}_{q}\)-points on a line forms a blocking set. A blocking set \(B\) is _trivial_ if it contains all the \(q+1\) points of a line, and is _nontrivial_ if it is not a trivial blocking set. Blocking sets have been studied extensively in finite geometry and design theory. Inspired by the rich interaction between finite geometry and algebraic curves [10], the concept of blocking curves was formally introduced in [1]. Given a projective plane curve \(C\subset\mathbb{P}^{2}\) defined over \(\mathbb{F}_{q}\), recall that \(C(\mathbb{F}_{q})\) denotes the set of \(\mathbb{F}_{q}\)-rational points on \(C\). We say that \(C\) is a _blocking curve_ if \(C(\mathbb{F}_{q})\) is a blocking set; otherwise, it is _nonblocking_. Moreover, \(C\) is _nontrivially blocking_ if \(C(\mathbb{F}_{q})\) is a nontrivial blocking set. In our previous papers [1] and [1], we showed that irreducible blocking curves of low degree \(d\) (namely, satisfying \(d^{6}<q\)) do not exist, and that blocking curves of high degree are rare. In other words, our past results suggest that a random curve over \(\mathbb{F}_{q}\) is very likely to be nonblocking. These results rely on combinatorial properties of blocking sets as well as tools from algebraic geometry and arithmetic statistics. The main purpose of this paper is to understand a refined distribution of nonblocking curves by asking the following question: **Question 1.1**.: Let \(\mathcal{L}=\langle F,G\rangle\) be a pencil of plane curves such that \(\{F=0\}\) and \(\{G=0\}\) have no common \(\mathbb{F}_{q}\)-points. Is there a quantitative lower bound for the number of curves in \(\mathcal{L}\) defined over \(\mathbb{F}_{q}\) which are nonblocking? To elaborate further, let \(\mathcal{L}=\langle F,G\rangle\) be a pencil that has no \(\mathbb{F}_{q}\)-points in its base locus. We refer to Example 3.1 which illustrates that the condition on the base locus is natural and necessary. Given such a pencil, consider the \(q+1\) curves \[C_{[s:t]}=\{sF+tG=0\}\] where \([s:t]\) ranges in \(\mathbb{P}^{1}(\mathbb{F}_{q})\). These curves will be called \(\mathbb{F}_{q}\)_-members_ of \(\mathcal{L}=\langle F,G\rangle\). Since \(\mathcal{L}\) has no \(\mathbb{F}_{q}\)-points in its base locus, the sets of \(\mathbb{F}_{q}\)-rational points on these \(q+1\) curves are pairwise disjoint and cover all the \(\mathbb{F}_{q}\)-points of the plane. 
Indeed, if \(P\in\mathbb{P}^{2}(\mathbb{F}_{q})\) is any \(\mathbb{F}_{q}\)-point, it belongs to a unique member \(C_{[s:t]}\) with \([s:t]=[-G(P):F(P)]\). In summary, the collection of sets \(\{C_{[s:t]}(\mathbb{F}_{q})\}_{[s:t]\in\mathbb{P}^{1}(\mathbb{F}_{q})}\) forms a partition of \(\mathbb{P}^{2}(\mathbb{F}_{q})\) into \(q+1\) parts, with the understanding that some of the parts may be empty. We will say that the pencil \(\mathcal{L}\)_induces_ the partition \(\{C_{[s:t]}(\mathbb{F}_{q})\}_{[s:t]\in\mathbb{P}^{1}(\mathbb{F}_{q})}\). Let us explain a heuristic that suggests a plausible answer to Question 1.1. Since there are \(q+1\) distinct \(\mathbb{F}_{q}\)-members of \(\mathcal{L}\), and together they cover \(q^{2}+q+1\) points, it follows that the average number of \(\mathbb{F}_{q}\)-points on a given \(\mathbb{F}_{q}\)-member of \(\mathcal{L}\) is exactly: \[\frac{q^{2}+q+1}{q+1}<q+1.\] Since there does not exist a blocking set with fewer than \(q+1\) points, it is immediate that \(\mathcal{L}\) contains at least one nonblocking curve. Note that the averaging argument does not tell us whether or not the "median" number of points on a random \(\mathbb{F}_{q}\)-member in \(\mathcal{L}\) is less than \(q+1\). Nevertheless, it is natural to ask whether at least half of the \(\mathbb{F}_{q}\)-members in the pencil must be nonblocking. Making this last question slightly weaker, we may instead ask the following. **Question 1.2**.: Let \(\mathcal{L}=\langle F,G\rangle\) be a pencil of plane curves such that \(\{F=0\}\) and \(\{G=0\}\) have no common \(\mathbb{F}_{q}\)-points. Does there exist a universal constant \(c_{0}>0\) such that at least \(c_{0}q\) distinct \(\mathbb{F}_{q}\)-members of \(\mathcal{L}\) are nonblocking? Using Blokhuis' theorem [1] that a nontrivial blocking set over \(\mathbb{F}_{p}\) has at least \(\frac{3}{2}(p+1)\) points, it can be shown that Question 1.2 has a positive answer when \(p\) is prime, and in fact, \(c_{0}=1/3\) works in this case. See Remark 3.4 for further details. However, Question 1.2 turns out to have a negative answer in general. **Theorem 1.3**.: _Let \(q=p^{n}\) be a prime power with \(n\) even. There exists a pencil of plane curves over \(\mathbb{F}_{q}\) with no \(\mathbb{F}_{q}\)-points in its base locus which contains only \(\sqrt{q}\) many nonblocking curves._ The key ingredient in the proof of Theorem 1.3 is to establish the connection between Question 1.2 and the question of determining the maximum number of disjoint blocking sets in \(\mathbb{P}^{2}(\mathbb{F}_{q})\), first studied by Beutelspacher and Eugeni [1]. The latter question can be formulated more generally and abstractly in the setting of hypergraph coloring; see [1] for related discussions. We will prove in Proposition 3.2 that every pencil with no \(\mathbb{F}_{q}\)-points in its base locus has at least \(\sqrt{q}\) many nonblocking curves, so Theorem 1.3 is sharp. In Question 1.2, the constant \(c_{0}>0\) is required to be universal. If we allow the constant to depend on the degree of the curves, then Question 1.2 has a positive answer. More precisely, we will prove the following effective result. **Theorem 1.4**.: _Given a pencil of plane curves of degree \(d\leq q\) in \(\mathbb{P}^{2}\) defined over \(\mathbb{F}_{q}\) with no \(\mathbb{F}_{q}\)-points in its base locus, at least \(\frac{q+1}{d+1}\) distinct \(\mathbb{F}_{q}\)-members of the pencil are nonblocking curves._ The inequality \(d\leq q\) in the hypothesis is natural. 
Indeed, if \(d>q\), then \(0<\frac{q+1}{d+1}<1\), and the conclusion still holds, because we have already seen above that the pencil must have at least one nonblocking \(\mathbb{F}_{q}\)-member. ### Outline of the paper In Section 2, we show that every partition of the \(q^{2}+q+1\) points of \(\mathbb{P}^{2}(\mathbb{F}_{q})\) into \(q+1\) sets can be realized by a pencil of plane curves. This key result, Proposition 2.1, allows us to prove our main Theorem 1.3 in Section 3. We discuss the case of fixed-degree pencils and prove Theorem 1.4 in Section 4. ## 2. Realizing partitions by pencils of curves The goal of this section is to prove the following result, which shows that any partition of \(\mathbb{P}^{2}(\mathbb{F}_{q})\) into \(q+1\) sets (where some sets could be empty) is induced by a pencil of plane curves. **Proposition 2.1**.: _Suppose \(U_{1},U_{2},\ldots,U_{q+1}\) form a partition of \(\mathbb{P}^{2}(\mathbb{F}_{q})\). Then there exists a pencil of curves \(\mathcal{L}=\langle F,G\rangle\) whose base locus has no \(\mathbb{F}_{q}\)-points such that \(\mathcal{L}\) induces the partition \(\{U_{i}\}_{i=1}^{q+1}\)._ The proof of Proposition 2.1 relies on two lemmas. **Lemma 2.2**.: _Given any \(Q\in\mathbb{P}^{2}(\mathbb{F}_{q})\), there exists a homogeneous polynomial \(S_{Q}\in\mathbb{F}_{q}[x,y,z]\) of degree \(3(q-1)\) such that \(S_{Q}(Q)=1\), while \(S_{Q}(P)=0\) for each point \(P\neq Q\) in \(\mathbb{P}^{2}(\mathbb{F}_{q})\)._ Proof.: Suppose \(Q:=[a_{0}:b_{0}:c_{0}]\in\mathbb{P}^{2}(\mathbb{F}_{q})\). Without loss of generality, we may assume that \(c_{0}\neq 0\). Let \(L_{1}\) and \(L_{2}\) be two (distinct) \(\mathbb{F}_{q}\)-lines passing through the point \(Q\); we let their equations be \(\alpha_{1}x+\beta_{1}y+\gamma_{1}z=0\) and \(\alpha_{2}x+\beta_{2}y+\gamma_{2}z=0\) for some \(\alpha_{i},\beta_{i},\gamma_{i}\in\mathbb{F}_{q}\). Consider the homogeneous polynomial \(S_{Q}\) of degree \(3(q-1)\) given by \[S_{Q}(x,y,z)=z^{q-1}\cdot\left(z^{q-1}-(\alpha_{1}x+\beta_{1}y+\gamma_{1}z)^{q -1}\right)\cdot\left(z^{q-1}-(\alpha_{2}x+\beta_{2}y+\gamma_{2}z)^{q-1}\right).\] Since the \(z\)-coordinate of \(Q\) is nonzero by assumption, we get \(S_{Q}(Q)=1\). On the other hand, \(S_{Q}(P)=0\) for each point \(P\neq Q\) in \(\mathbb{P}^{2}(\mathbb{F}_{q})\). Indeed, given such a point \(P=[a_{1}:b_{1}:c_{1}]\), we have two cases: \(c_{1}=0\) and \(c_{1}\neq 0\). If \(c_{1}=0\), then \(S_{Q}(P)=0\) is immediate. If \(c_{1}\neq 0\), then \(P\) cannot be on both \(L_{1}\) and \(L_{2}\), and we again obtain \(S_{Q}(P)=0\). **Remark 2.3**.: In Lemma 2.2, we arranged \(S_{Q}\) to have degree divisible by \(q-1\). This allows us to view \(S_{Q}\) as a well-defined function \(\mathbb{P}^{2}(\mathbb{F}_{q})\longrightarrow\mathbb{F}_{q}\). Indeed, if \([x_{0}:y_{0}:z_{0}]\) and \([x_{1}:y_{1}:z_{1}]\) represent the same point in \(\mathbb{P}^{2}(\mathbb{F}_{q})\), then \((x_{1},y_{1},z_{1})=(\lambda x_{0},\lambda y_{0},\lambda z_{0})\) for some \(\lambda\in\mathbb{F}_{q}^{*}\). As a result, \(S_{Q}(x_{1},y_{1},z_{1})=\lambda^{3(q-1)}S_{Q}(x_{0},y_{0},z_{0})=S_{Q}(x_{0}, y_{0},z_{0})\), confirming that \(S_{Q}\) is a well-defined function on \(\mathbb{P}^{2}(\mathbb{F}_{q})\). 
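As a sanity check of Lemma 2.2 and Remark 2.3, the following small computation (ours, not from the paper) evaluates \(S_{Q}\) over a prime field \(\mathbb{F}_{p}\) with \(p=5\). The two lines through \(Q\) are chosen explicitly here, a choice the lemma leaves free, and all function names are illustrative.

```python
# Verify Lemma 2.2 for q = p = 5: S_Q has degree 3(q-1), equals 1 at Q, and 0 elsewhere.
p = 5

def proj_points(p):
    """Representatives of P^2(F_p): [1:b:c], [0:1:c], [0:0:1]."""
    pts = [(1, b, c) for b in range(p) for c in range(p)]
    pts += [(0, 1, c) for c in range(p)]
    pts += [(0, 0, 1)]
    return pts

def S_Q(Q, P):
    """Evaluate S_Q at P, following the proof with Q = [a0:b0:c0], c0 != 0, and the
    two lines L1: x - (a0/c0) z = 0 and L2: y - (b0/c0) z = 0 through Q."""
    a0, b0, c0 = Q
    assert c0 % p != 0
    x, y, z = P
    inv_c0 = pow(c0, p - 2, p)
    l1 = (x - a0 * inv_c0 * z) % p
    l2 = (y - b0 * inv_c0 * z) % p
    def e(t):                      # t^(q-1): 1 if t != 0 in F_p, else 0
        return pow(t, p - 1, p)
    return (e(z) * (e(z) - e(l1)) * (e(z) - e(l2))) % p

Q = (1, 2, 1)
vals = [S_Q(Q, P) for P in proj_points(p)]
assert vals.count(1) == 1 and all(v in (0, 1) for v in vals)
print("S_Q equals 1 at Q and 0 at the other", len(vals) - 1, "points")
```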
**Lemma 2.4**.: _For any function \(f\colon\mathbb{P}^{2}(\mathbb{F}_{q})\longrightarrow\mathbb{F}_{q}\) which is not identically zero, there exists a homogeneous polynomial \(R_{f}\in\mathbb{F}_{q}[x,y,z]\) of degree \(3(q-1)\) such that \(R_{f}(a,b,c)=f([a:b:c])\) for each \([a:b:c]\in\mathbb{P}^{2}(\mathbb{F}_{q})\)._ Proof.: Borrowing the notation from Lemma 2.2, we set \[R_{f}:=\sum_{Q\in\mathbb{P}^{2}(\mathbb{F}_{q})}f(Q)\cdot S_{Q}\] which satisfies the desired condition. We have now gathered the tools to prove Proposition 2.1. Proof of Proposition 2.1.: Given a partition of \(\mathbb{P}^{2}(\mathbb{F}_{q})\) into \(q+1\) sets \(U_{1},\ldots,U_{q+1}\), there exists a function \(\varphi:\mathbb{P}^{2}(\mathbb{F}_{q})\longrightarrow\mathbb{P}^{1}(\mathbb{F}_{q})\) with the property that the preimages \(\varphi^{-1}(Q)\) for the \(q+1\) points \(Q\in\mathbb{P}^{1}(\mathbb{F}_{q})\) provide exactly the same partition of \(\mathbb{P}^{2}(\mathbb{F}_{q})\) as \(U_{1},\ldots,U_{q+1}\). We can find two functions \(f\colon\mathbb{P}^{2}(\mathbb{F}_{q})\longrightarrow\mathbb{F}_{q}\) and \(g\colon\mathbb{P}^{2}(\mathbb{F}_{q})\longrightarrow\mathbb{F}_{q}\) such that for each \(P\in\mathbb{P}^{2}(\mathbb{F}_{q})\), \[\varphi(P)=[f(P):g(P)]. \tag{1}\] By Lemma 2.4, we have two homogeneous polynomials \(R_{f}\) and \(R_{g}\) both of degree \(3(q-1)\) which induce \(f\) and \(g\), respectively. Let \(F=-R_{g}\) and \(G=R_{f}\). We claim that the pencil \(\mathcal{L}=\langle F,G\rangle\) induces the partition \(U_{1},\ldots,U_{q+1}\). First, \(\mathcal{L}\) has no \(\mathbb{F}_{q}\)-points in its base locus since \(f(P)\) and \(g(P)\) cannot be zero simultaneously for any \(P\in\mathbb{P}^{2}(\mathbb{F}_{q})\) by (1). Next, for any \([u:v]\in\mathbb{P}^{1}\) and a point \(P\in\mathbb{P}^{2}\), we have \[uF(P)+vG(P)=0\ \ \Longleftrightarrow\ \ [u:v]=[G(P):-F(P)]=[f(P):g(P)]=\varphi(P).\] Thus, a given \(\mathbb{F}_{q}\)-point \(P\) belongs to the \(\mathbb{F}_{q}\)-member of the pencil \(\mathcal{L}\) parametrized by \([u:v]\in\mathbb{P}^{1}(\mathbb{F}_{q})\) if and only if \(P\in\varphi^{-1}([u:v])\). Consequently, the sets of \(\mathbb{F}_{q}\)-points contained in the \(q+1\) distinct \(\mathbb{F}_{q}\)-members of the pencil \(\mathcal{L}=\langle F,G\rangle\) precisely correspond to the partition \(U_{1},\ldots,U_{q+1}\). **Remark 2.5**.: It is possible to generalize Proposition 2.1 to the setting where \(U_{1},U_{2},\ldots,U_{q+1}\) together cover \(\mathbb{P}^{2}(\mathbb{F}_{q})\), although they are no longer assumed to be pairwise disjoint. In order to obtain a pencil of curves that induces the sets \(U_{1},U_{2},\ldots,U_{q+1}\), it is necessary that the equality \[U_{i}\cap U_{j}=B:=\bigcap_{k=1}^{q+1}U_{k}\] is satisfied for each \(i\neq j\). Under this assumption, one can prove that there exists a pencil \(\mathcal{L}\) of plane curves with \(\mathbb{F}_{q}\)-members \(C_{1},C_{2},\ldots,C_{q+1}\) such that \(C_{i}(\mathbb{F}_{q})=U_{i}\) for each \(1\leq i\leq q+1\). The proof is almost identical to the current proof of Proposition 2.1. We start by picking \[\varphi\colon\mathbb{P}^{2}(\mathbb{F}_{q})\setminus B\longrightarrow\mathbb{P}^{1}(\mathbb{F}_{q})\] so that the preimages \(\varphi^{-1}(\alpha_{i})\) of the points \(\alpha_{1},\ldots,\alpha_{q+1}\) in \(\mathbb{P}^{1}(\mathbb{F}_{q})\) coincide with the pairwise disjoint sets \(U_{i}\setminus B\) for \(i=1,\ldots,q+1\). 
By using Lemma 2.4, we can find homogeneous polynomials \(F\) and \(G\) which both vanish on \(B\) such that the pencil \(\mathcal{L}=\langle F,G\rangle\) satisfies the desired properties. ## 3. Lower bounds on the number of nonblocking curves The averaging argument presented in the introduction showed that if a pencil of plane curves has no \(\mathbb{F}_{q}\)-points in its base locus, there must be at least one nonblocking curve in the pencil. We explain what happens when the hypothesis on the base locus is dropped. If the set of \(\mathbb{F}_{q}\)-points of the base locus is itself a blocking set, then clearly all the curves in the pencil are blocking. Additional examples of pencils whose \(\mathbb{F}_{q}\)-members are all blocking can be constructed using the explicit families of blocking curves in [1, Proposition 6.1]; in such pencils, the base locus may have a large number of \(\mathbb{F}_{q}\)-points. It may be tempting to believe that there should be at least some nonblocking curves if the base locus is small. However, we will now exhibit a pencil \(\mathcal{L}\) of plane curves of degree \(d\geq 2\) over \(\mathbb{F}_{q}\) containing _exactly one_\(\mathbb{F}_{q}\)-point in its base locus such that every \(\mathbb{F}_{q}\)-member of \(\mathcal{L}\) is a blocking curve. This example shows that the hypothesis that the base locus has no \(\mathbb{F}_{q}\)-points cannot be relaxed. **Example 3.1**.: Let \(m=\max\{0,q-3\}\) and consider the polynomial \(H(x,y,z)\) of degree \(q^{2}+q+1\) which is the product of all linear polynomials \(ax+by+cz\), where \([a:b:c]\in\mathbb{P}^{2}(\mathbb{F}_{q})\). Let \[F(x,y,z)=x^{m}H(x,y,z)+y^{q^{2}+q+1+m}\] and \[G(x,y,z)=x^{m}H(x,y,z)+z^{q^{2}+q+1+m}.\] Then the only \(\mathbb{F}_{q}\)-point in the intersection of \(\{F=0\}\) and \(\{G=0\}\) is \([1:0:0]\). Thus, \(\mathcal{L}=\langle F,G\rangle\) is a pencil of plane curves with exactly one \(\mathbb{F}_{q}\)-point in its base locus. However, for each \([a:b]\in\mathbb{P}^{1}(\mathbb{F}_{q})\), the curve \(C_{[a:b]}=\{aF+bG=0\}\) intersects any given \(\mathbb{F}_{q}\)-line \(L\) at some \(\mathbb{F}_{q}\)-point. Indeed, specializing \(aF+bG=0\) along the line \(L\) yields \[ay^{q^{2}+q+1+m}+bz^{q^{2}+q+1+m}=0. \tag{2}\] Since \(m\) was chosen so that \(\gcd(q-1,q^{2}+q+1+m)=1\), the map \(t\mapsto t^{q^{2}+q+1+m}\) is an automorphism of the cyclic group \((\mathbb{F}_{q}^{*},\cdot)\). Thus, regardless of the line \(L\) along which we specialize, the equation (2) has a nonzero solution (that is, not both \(y\) and \(z\) are zero) in \(\mathbb{F}_{q}\) for each \([a:b]\in\mathbb{P}^{1}(\mathbb{F}_{q})\). Consequently, every \(\mathbb{F}_{q}\)-member of \(\mathcal{L}\) is a blocking curve. An alternative construction can be carried out by using the generalized version of Proposition 2.1 (see Remark 2.5) where \(U_{1},U_{2},\ldots,U_{q+1}\) consist of \(\mathbb{F}_{q}\)-points of \(q+1\) distinct lines passing through a common point. The pencil produced using that method will have degree \(3(q-1)\). We have seen that having even a single \(\mathbb{F}_{q}\)-point in the base locus may result in all of the \(\mathbb{F}_{q}\)-members being blocking curves. Thus, for the remainder of the paper, we will work under the hypothesis that there are no \(\mathbb{F}_{q}\)-points in the base locus of a given pencil. Our first quantitative lower bound on the number of nonblocking curves is implicit in the finite geometry literature [1, Theorem 2.2]. 
We include a simple proof for the sake of completeness; the ideas and notation used in this proof will also be needed in the remarks that follow. **Proposition 3.2**.: _Every pencil of plane curves over \(\mathbb{F}_{q}\) with no \(\mathbb{F}_{q}\)-points in its base locus has at least \(\sqrt{q}\) many \(\mathbb{F}_{q}\)-members which are nonblocking._ Proof.: Let \(\mathcal{L}=\langle F,G\rangle\) be a pencil of plane curves with no \(\mathbb{F}_{q}\)-points in its base locus. If \(\mathcal{L}\) has an \(\mathbb{F}_{q}\)-member \(C\) whose set of \(\mathbb{F}_{q}\)-rational points is a trivial blocking set, then there exists an \(\mathbb{F}_{q}\)-line \(L\) such that \(L(\mathbb{F}_{q})\subseteq C(\mathbb{F}_{q})\). In this case, the other \(q\) members of \(\mathcal{L}\) will be nonblocking by virtue of being disjoint from \(L(\mathbb{F}_{q})\). Thus, we may assume that each \(\mathbb{F}_{q}\)-member of the pencil is not trivially blocking. Suppose we have \(m\) blocking and \(q+1-m\) nonblocking \(\mathbb{F}_{q}\)-members of \(\mathcal{L}\). Since a nontrivial blocking set has at least \(q+\sqrt{q}+1\) points [1], the following inequality holds: \[m(q+\sqrt{q}+1)\leq\sum_{\begin{subarray}{c}C\text{ is }\mathbb{F}_{q}\text{-member of }\mathcal{L}\\ C\text{ is blocking}\end{subarray}}\#C(\mathbb{F}_{q})\leq\sum_{C\text{ is }\mathbb{F}_{q}\text{-member of }\mathcal{L}}\#C(\mathbb{F}_{q})=q^{2}+q+1.\] It follows that \[m\leq\frac{q^{2}+q+1}{q+\sqrt{q}+1}=q-\sqrt{q}+1.\] Thus, the number of nonblocking \(\mathbb{F}_{q}\)-members is \(q+1-m\geq\sqrt{q}\), as desired. As a complement to the previous proposition, we now present a quick proof of Theorem 1.3 which guarantees the existence of a pencil which has exactly \(\sqrt{q}\) many nonblocking members. Recall that if \(q\) is a square, a _Baer subplane_ in \(\mathbb{P}^{2}(\mathbb{F}_{q})\) is a subplane of size \(q+\sqrt{q}+1\). It is well-known that Baer subplanes are blocking sets. Proof of Theorem 1.3.: Since \(q\) is a square, it is known that \(\mathbb{P}^{2}(\mathbb{F}_{q})\) can be partitioned into \[\frac{q^{2}+q+1}{q+\sqrt{q}+1}=q-\sqrt{q}+1\] Baer subplanes [1, Theorem 4.3.6]. Now, Proposition 2.1 implies that we can find a pencil of curves which has exactly \(\sqrt{q}\) many \(\mathbb{F}_{q}\)-members which are nonblocking. **Remark 3.3**.: When \(q\) is a square, we have seen that the number of nonblocking curves in a pencil must be at least \(\sqrt{q}\), and this lower bound is sharp. When \(q\) is not a square, we know each nontrivial blocking set has size at least \(q+1+cq^{2/3}\) [1], so the same argument in Proposition 3.2 can be adapted to show that there are at least \(c^{\prime}q^{2/3}\) many nonblocking \(\mathbb{F}_{q}\)-members in any pencil with no \(\mathbb{F}_{q}\)-points in its base locus. This can be seen by mimicking the same proof as in Proposition 3.2 to get the following upper bound on the number of blocking curves in the given pencil: \[\frac{q^{2}+q+1}{q+1+q^{2/3}}=\frac{q^{2}+q-q^{4/3}}{q+1+q^{2/3}}+\frac{q^{4/3}}{q+1+q^{2/3}}\approx q+1-q^{2/3}+\frac{q^{4/3}}{q+1+q^{2/3}}\approx q+1-q^{2/3}+q^{1/3}.\] **Remark 3.4**.: When \(q=p\) is prime, we can improve the lower bound in Proposition 3.2 significantly by relying on Blokhuis' theorem that a nontrivial blocking set in \(\mathbb{P}^{2}(\mathbb{F}_{p})\) has at least \(\frac{3}{2}(p+1)\) points [1]. 
By imitating the proof of Proposition 3.2, we see that the number of blocking curves satisfies \(m\leq\frac{p^{2}+p+1}{\frac{3}{2}(p+1)}<\frac{2}{3}(p+1)\). It follows that the number of nonblocking curves is \(p+1-m>\frac{1}{3}(p+1)\), thereby answering Question 1.2 affirmatively in the case when \(q=p\) is a prime. On the other hand, it is possible to construct \((1/3-o(1))p\) disjoint blocking sets in \(\mathbb{P}^{2}(\mathbb{F}_{p})\)[2]. Thus, Proposition 2.1 implies the existence of a pencil in which at least a \(1/3-o(1)\) fraction of the \(\mathbb{F}_{p}\)-members are blocking, or equivalently, at most a \(2/3+o(1)\) fraction are nonblocking. It is conjectured by Kriesell that \(\mathbb{P}^{2}(\mathbb{F}_{q})\) can be partitioned into \(\lfloor q/2\rfloor\) blocking sets (he verified the conjecture for \(q\leq 8\)) [2, Page 150], so the best possible constant in Question 1.2 is conjecturally \(1/2\) (when \(q=p\) is prime). **Remark 3.5**.: Szonyi [15, Theorem 5.7] showed that if \(q=p^{2}\) with \(p\) prime, a nontrivial blocking set in \(\mathbb{P}^{2}(\mathbb{F}_{q})\) that does not contain a Baer subplane has size at least \(3(q+1)/2\). Note that this result is analogous to Blokhuis' result over \(\mathbb{F}_{p}\) with \(p\) prime. Let us explain how this observation helps us answer Question 1.2 affirmatively in the case \(q=p^{2}\) for certain pencils. We say that a pencil of curves over \(\mathbb{F}_{q}\) is _generic_ if the number of singular members (defined over the algebraic closure \(\overline{\mathbb{F}_{q}}\)) is finite. The terminology is justified by the fact that a generic line (over \(\overline{\mathbb{F}_{q}}\)) in the parameter space of plane curves of degree \(d\) meets the discriminant hypersurface in finitely many points. It follows that a given pencil \(\mathcal{L}\) is either generic or every \(\overline{\mathbb{F}_{q}}\)-member of \(\mathcal{L}\) is a singular curve. Thus, non-generic pencils are extremely special. When \(d=o(\sqrt{q})\), we can show that any generic pencil has at least \((1/3-o(1))q\) members that are not blocking. We may assume that no curve in the pencil is trivially blocking, for otherwise, \(q\) of the curves are nonblocking. For each generic pencil there are at most \(3(d-1)^{2}=o(q)\) many singular curves by [1, Proposition 7.4]. Note that, by Bezout's theorem, no smooth curve in the pencil contains a Baer subplane (otherwise, some line would intersect the curve in at least \(\sqrt{q}+1>d\) points). Thus, if a smooth member in the pencil is blocking, then it has size at least \(3(q+1)/2\); therefore, there are at most \(2q/3\) smooth blocking curves in our pencil. Hence, we have more than \(q/3-3(d-1)^{2}=(1/3-o(1))q\) nonblocking curves in our pencil. ## 4. Proof of Theorem 1.4 In this section, we will show that Question 1.2 has a positive answer if the constant \(c_{0}\) is allowed to depend on \(d\). More precisely, we will prove Theorem 1.4, which guarantees that at least a \(\frac{1}{d+1}\) fraction of the \(\mathbb{F}_{q}\)-members of a given pencil are nonblocking. As a preparation, we start by proving a lower bound on the number of \(\mathbb{F}_{q}\)-rational points on a blocking plane curve. The following lemma is implicitly contained in [1, Section 2.5]. For the sake of completeness, we present a self-contained proof. **Lemma 4.1**.: _Suppose \(C\) is a plane curve of degree \(d\) defined over \(\mathbb{F}_{q}\). 
If \(C(\mathbb{F}_{q})\) is a nontrivial blocking set, then \(\#C(\mathbb{F}_{q})>q+\frac{q+\sqrt{q}}{d}\)._ Proof.: Let \(t_{i}\) denote the number of \(\mathbb{F}_{q}\)-lines \(L\) such that \(C\cap L\) has exactly \(i\) distinct \(\mathbb{F}_{q}\)-points. Let \(N=\#C(\mathbb{F}_{q})\) denote the number of \(\mathbb{F}_{q}\)-points of \(C\). Since \(C\) is nontrivially blocking, it cannot contain any \(\mathbb{F}_{q}\)-line as a component. Thus, \(t_{0}=0\), and \(t_{i}=0\) for \(i>d\) by Bezout's theorem. Moreover, a standard double counting argument on point-line incidences (see for example [1, Lemma 2.9]) leads to the following three identities: \[\sum_{i=1}^{d}t_{i}=q^{2}+q+1,\qquad\sum_{i=1}^{d}i\cdot t_{i}=(q+1)N,\qquad \sum_{i=2}^{d}\binom{i}{2}\cdot t_{i}=\binom{N}{2}.\] By combining the first two identities, we obtain \[q^{2}+q+1=(q+1)N-\sum_{i=2}^{d}(i-1)t_{i}, \tag{3}\] while the third identity implies \[\sum_{i=2}^{d}(i-1)t_{i}\geq\frac{N(N-1)}{d}. \tag{4}\] The inequalities (3) and (4) together yield \[q^{2}+q+1\leq(q+1)N-\frac{N(N-1)}{d}. \tag{5}\] Since \(C(\mathbb{F}_{q})\) is a _nontrivial_ blocking set, we have \(N\geq q+\sqrt{q}+1\) by [1]. Now, the inequality (5) implies \[(q+1)N\geq q^{2}+q+1+\frac{(q+\sqrt{q}+1)(q+\sqrt{q})}{d}>(q+1)q+\frac{(q+1)(q+\sqrt{q})}{d}.\] It follows that \(N>q+\frac{q+\sqrt{q}}{d}\), as desired. **Remark 4.2**.: One can obtain a slightly stronger conclusion \(N>\left(\frac{d}{d-1}\right)q\cdot(1+o(1))\) by analyzing the inequality (5) more carefully. We now proceed with the proof of Theorem 1.4 on the number of nonblocking curves in a pencil of fixed degree. Proof of Theorem 1.4.: Given a pencil \(\mathcal{L}\) with no \(\mathbb{F}_{q}\)-points in its base locus, suppose \(m\) of the \(\mathbb{F}_{q}\)-members are blocking and \(q+1-m\) of the \(\mathbb{F}_{q}\)-members are nonblocking. If any \(\mathbb{F}_{q}\)-member of \(\mathcal{L}\) is trivially blocking, then applying the same argument as at the beginning of Proposition 3.2, we see that the other \(q\) members of the pencil are nonblocking. So, we may assume that all the blocking \(\mathbb{F}_{q}\)-members are nontrivially blocking. By applying Lemma 4.1, we obtain the following inequality: \[m\left(\frac{qd+q+\sqrt{q}}{d}\right)<\sum_{\begin{subarray}{c}C\text{ is }\mathbb{F}_{q}\text{-member of }\mathcal{L}\\ C\text{ is blocking}\end{subarray}}\#C(\mathbb{F}_{q})\leq\sum_{C\text{ is }\mathbb{F}_{q}\text{-member of }\mathcal{L}}\#C(\mathbb{F}_{q})=q^{2}+q+1.\] Therefore, using the hypothesis \(d\leq q\), we obtain \[m\leq\frac{(q^{2}+q+1)d}{qd+q+\sqrt{q}}=\frac{(q+1)(q+\frac{1}{q+1})d}{(d+1)(q+\frac{\sqrt{q}}{d+1})}\leq\frac{d}{d+1}(q+1).\] We conclude that the number of nonblocking \(\mathbb{F}_{q}\)-members of the pencil \(\mathcal{L}\) is \(q+1-m\geq\frac{q+1}{d+1}\). **Remark 4.3**.: The hypothesis \(d\leq q\) in Theorem 1.4 is natural. However, when \(\sqrt{q}\leq d\leq q\), Theorem 1.4 only guarantees about \(\frac{q+1}{\sqrt{q}+1}\approx\sqrt{q}\) many nonblocking curves, which is essentially the bound already proved in Proposition 3.2. For Theorem 1.4 to yield more refined results, we would need to assume \(d<\sqrt{q}\). On the other hand, if \(d\) gets too small compared to \(q\), namely \(d<q^{1/6}\), then we can refine Theorem 1.4 for generic pencils (as defined in Remark 3.5). More precisely, when \(q>d^{6}\), the lower bound in Theorem 1.4 can be improved significantly, so that the answer to Question 1.2 becomes positive with \(c_{0}=1-o(1)\). 
This follows from the observation that the number of singular members in a generic pencil is at most \(3(d-1)^{2}=o(q)\) as stated in Remark 3.5, and our earlier result that a smooth member in the pencil is not blocking whenever \(q>d^{6}\)[1, Theorem 1.2]. We also mention that there are other refined sufficient conditions on nonblocking smooth curves in [1]. ## Acknowledgements The second author is supported by an NSERC Discovery grant. The third author is supported by a doctoral fellowship from the University of British Columbia.
2305.03496
Joule heating of an emitter on the cathode surface by field electron emission current with an account of the nonisolation of the apex
This work is devoted to the investigation of the nonstationary problem of the thermal conductivity of a nanoemitter on the surface of a massive copper cathode when a field electron emission current passes through it. At the same time, the dependence of volume resistivity, thermal conductivity on temperature, and size effects have been taken into account. The influence of the Nottingham effect has been considered. The dependence of the equilibrium temperature of the emitter apex on the field enhancement factor for different values of the electric field strength has been found. Based on the assumption that the initial stage of the breakdown begins when the emitter apex melts, the conditions for the occurrence of a vacuum breakdown and the influence of the Nottingham effect have been analyzed. https://doi.org/10.1116/6.0002474
M. Diachenko, S. Lebedynskyi, R. Kholodov
2023-05-03T12:33:36Z
http://arxiv.org/abs/2305.03496v1
# Joule heating of a nanoemitter on the cathode surface ###### Abstract This work is devoted to the investigation of the non-stationary problem of the thermal conductivity of a nanoemitter on the surface of a massive copper cathode when a field electron emission current passes through it. At the same time, the dependence of volume resistivity, thermal conductivity on temperature, and size effects have been taken into account. The influence of the Nottingham effect has been considered. The dependence of the equilibrium temperature of the emitter apex on the field enhancement factor for different values of the electric field strength has been found. Based on the assumption that the initial stage of the breakdown begins when the emitter apex melts, the conditions for the occurrence of a vacuum breakdown and the influence of the Nottingham effect have been analyzed. ## I Introduction The physical nature of vacuum high-voltage breakdown, which can occur in elementary particle accelerators, in particular in CLIC (Compact LInear Collider, CERN), is quite complex and, despite numerous studies, a complete theory of this process still does not exist. The electron emission from the cathode surface is usually local, due to the presence of emitters in the form of tips and other inhomogeneities on the surface. The presence of such objects leads to a local increase in the electric field strength, an increase in the field electron emission current density, and accordingly, heating of the emitter due to the dissipation of Joule energy. The process of resistive heating of the emitter on the cathode surface is an important stage of high-vacuum breakdown and is therefore relevant for theoretical research. Due to the development of modern accelerators of charged particles, in which the gradients reach values at which high-vacuum breakdowns occur inside the accelerator structure, in particular at linear electron-positron colliders, many theoretical works dedicated to the study of these processes have recently appeared. Thus, in Refs. [1; 2; 3; 4; 5; 6; 7], the peculiarities of heating a nano-sized emitter on the surface of a massive cathode were investigated using molecular dynamics methods. Namely, in Ref. [1], a molecular dynamic model was developed to describe the evolution of the emitter within the framework of the atomic molecular dynamics approach. In Ref. [2], a model was developed to describe the evolution of a tip of a cylindrical shape caused by current, which is based on the methods of molecular dynamics, including resistive heating and electronic thermal conductivity. The consistency of the obtained simulation results with analytical expressions was also shown. In Ref. [3], a method for calculating emission currents and the Nottingham effect was developed, while considering different modes of electron emission (thermal, field and intermediate). In Ref. [4], the dependence of the field electron emission current on the applied electric field was analyzed by the Fowler-Nordheim methods and the generalized formalism of the emission taking into account the temperature. It was shown that the usual Fowler-Nordheim equation leads to a significant underestimation of the electron emission current and can lead to a significant overestimation of the field enhancement factor, especially in the range of relatively low electric fields. In Ref. 
[5], the processes taking place in a metal nanoemitter in the form of a truncated cone with a hemispherical apex under conditions of intense electron emission were investigated. Multiscale simulations were also used, simultaneously including field-induced forces, electron emission, the Nottingham effect, and Joule heating. The possibility of evaporation of large fractions of the tip was shown, which can explain the origin of neutral atoms necessary for plasma initiation. In Ref. [6], a combined method of molecular dynamics and electrodynamics was used to model the thermal evaporation of nanotips under the influence of a high electric field. In that work, copper emitters with different initial geometries were investigated, and the simulations showed that the aspect ratio of the emitters has a significant effect on thermal evaporation. In Ref. [7], the Nottingham effect was studied in detail for nano-sized emitters of a complex shape. We should also note Refs. [8; 9; 10; 11; 12], which are devoted to the simulation of interelectrode processes that occur at the initial stage of vacuum breakdown development. In Ref. [8], a physical model was presented, based on the one-dimensional Particle In Cell (PIC) method, which describes the formation of plasma under the assumption that an emitter which enhances the field is initially present on the cathode. The model includes field electron emission and evaporation of neutral atoms as plasma initiation mechanisms and takes into account several surface processes and collisions between particles. As a result of the simulation, it was shown that plasma formation is possible when the current density is \(0.5-1~{}A/\mu m^{2}\), the ratio of neutral atoms to emission electrons is \(0.01-0.05\), and the time scales are \(1-10~{}ns\). In Refs. [9] and [10], the previous model (Ref. [8]) was improved to a two-dimensional PIC model. Within the framework of these works, the temporal and spatial evolution of the vacuum arc created in the presence of an emitter on the cathode surface was considered. A current-voltage characteristic of the arc was qualitatively investigated, and the conditions under which the arc occurs were considered. A comparison with the results of similar programs was also made, and good consistency with them was shown. In Refs. [11] and [12], the two-dimensional PIC model was improved, taking into account collisions between particles using the Monte Carlo method, and more subtle effects were also considered. The works of Refs. [13; 14; 15; 16] should also be noted, in which both analytical and numerical studies of the current heating of carbon nanotubes are carried out. In particular, in Refs. [13] and [14], the spatial dependence of the temperature distribution along a carbon nanotube has been studied under various regimes of electron emission, taking into account the effects of Henderson cooling and Nottingham heating. In this paper, within the framework of the theory of thermal conductivity, we consider a model problem in which a cylindrical nanoemitter is located on the cathode surface. At the same time, the Joule heating by the field electron emission current, the dependence of the resistivity on the temperature for a nano-sized tip, size effects, and the Nottingham effect are taken into account. It is shown that the process of evaporation of neutral atoms from the surface of the emitter can be neglected. 
An analysis of the conditions for the occurrence of a vacuum breakdown is carried out based on the assumption that it begins when the emitter apex melts, and the influence of the non-isolation of the tip on these conditions is investigated. ## II Theoretical background In this section, we will consider the theoretical basis of the heating of a copper emitter by the field electron emission current. We will consider a tip of cylindrical shape with a heat source in the form of Joule heating while taking into account the dependence of volume resistivity on temperature and size effects. The nanoemitter will be located on a massive copper cathode. In the case of a cylindrical tip, we can switch to the one-dimensional case and the heat conduction equation has the following form \[c\rho\frac{\partial T}{\partial t}=\kappa\frac{\partial^{2}T}{\partial x^{2}}+q\left(T\right), \tag{1}\] where \(c\) is the specific heat capacity, \(\rho\) is the density of the tip material, and \(\kappa\) is the thermal conductivity. In Eq. (1), \(q\) determines the power of heat sources, and in the case of Joule heating by current, the expression for \(q\) has the form \[q\left(T\right)=\rho_{r}\left(T\right)j^{2}, \tag{2}\] where \(\rho_{r}\) is the resistivity, \(j\) is the current density. In the problem of thermal conductivity of nano-sized tips of cylindrical shape, when the diameter is smaller than the electron mean free path, it is also necessary to take into account size effects. These effects significantly affect the resistivity of a substance and the thermal conductivity. In accordance with Ref. [2], the expression for the resistivity can be written in the form \[\rho_{r}=\frac{\eta}{r}\frac{\rho_{0}}{T_{0}}T, \tag{3}\] where \(\rho_{0}\) is the value of resistivity at a temperature \(T_{0}\), \(r\) is the radius of the cylinder, and \(\eta=70\)\(nm\). This expression is obtained from the results of computer simulation and within the values of \(0.5\)\(nm<r<10\)\(nm\) the error is less than \(6\%\). The size effects also affect the coefficient of thermal conductivity, and according to the Wiedemann-Franz-Lorenz law, it can be written as \[\kappa=\frac{LT}{\rho_{r}}=\frac{rLT_{0}}{\eta\rho_{0}}, \tag{4}\] where \(L\) is the Lorenz number. Note that there are more sophisticated models of the electrical resistivity and the thermal conductivity (Refs. [1] and [2]), but in this work, to simplify the numerical solution algorithm of the non-stationary thermal conductivity equation, we limit ourselves to the above dependencies. The current density of the field emission from the emitter on the metal surface due to electron tunneling through the potential barrier at the metal-vacuum interface is described by the Murphy-Good model (Ref. [17]): \[j=\lambda_{T}\frac{a}{t^{2}\left(y\right)}\frac{F^{2}}{\varphi}\exp\left(-b\frac{\varphi^{3/2}}{F}\vartheta\left(y\right)\right), \tag{5}\] where \(\lambda_{T}\) is a temperature correction factor \[\lambda_{T}=\frac{\pi ksT}{\sin(\pi ksT)},\] and where we have introduced the notation \[s=\frac{3b\sqrt{\varphi}\,t(y)}{2F},\ \ y=d\frac{\sqrt{F}}{\varphi},\ \ a=\frac{e^{3}}{16\pi^{2}\hbar},\ \ b=\frac{4\sqrt{2m}}{3\hbar e},\] \(d\approx 3.79\cdot 10^{-4}eVcm^{1/2}V^{-1/2}\). Eq. (5) also takes into account the local enhancement of the electric field at the emitter apex, and thus the local field \(F=\beta E\), where \(E\) is the macroscopic field and \(\beta\) is the enhancement factor. 
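For orientation, the current density (5) is straightforward to evaluate numerically. The following Python sketch is our own illustration (not the authors' code); it uses the approximate forms of \(t(y)\) and \(\vartheta(y)\) quoted below in Eq. (6), together with standard numerical values of the Fowler-Nordheim constants \(a\), \(b\) and \(d\) expressed in \(eV\) and \(V/m\) — the exact digits of those constants are our assumption and should be checked against a preferred reference.

```python
import numpy as np

# Constants in eV-based units (assumed standard Fowler-Nordheim values);
# F is the local field in V/m and phi the work function in eV.
A_FN = 1.541434e-6        # a = e^3/(16 pi^2 hbar)      [A eV V^-2]
B_FN = 6.830890e9         # b = 4 sqrt(2m)/(3 hbar e)   [eV^-3/2 V m^-1]
D_SCHOTTKY = 3.794686e-5  # d, so that y = d sqrt(F)/phi [eV (V/m)^-1/2]
K_B = 8.617333e-5         # Boltzmann constant           [eV/K]

def t_y(y):
    return 1.0 + (y**2 / 9.0) * (1.0 - np.log(y))

def theta_y(y):
    return 1.0 - y**2 + (y**2 / 3.0) * np.log(y)

def murphy_good_j(F, phi=4.5, T=300.0):
    """Field-emission current density of Eq. (5) in A/m^2, using the
    approximations of Eq. (6) for t(y) and theta(y)."""
    y = D_SCHOTTKY * np.sqrt(F) / phi
    s = 3.0 * B_FN * np.sqrt(phi) * t_y(y) / (2.0 * F)
    lam_T = np.pi * K_B * T * s / np.sin(np.pi * K_B * T * s)
    return lam_T * A_FN * F**2 / (phi * t_y(y)**2) * np.exp(-B_FN * phi**1.5 * theta_y(y) / F)

# Local field at the apex for E = 170 MV/m and beta = 50:
print(f"j = {murphy_good_j(50 * 170e6):.2e} A/m^2")  # of order 1e12 A/m^2
```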
If the emitter has the shape of a cylinder, then it is approximately assumed that this factor is determined by the ratio of the height to the radius of the emitter \(\beta=h/r\). Henceforth, we will use this approximation for the coefficient \(\beta\). For numerical calculations, the functions \(t\left(y\right)\) and \(\vartheta\left(y\right)\) can be approximately taken in the following form (Ref. [18]) \[t\left(y\right)\approx 1+\frac{y^{2}}{9}\left(1-lny\right),\ \ \vartheta\left(y\right)\approx 1-y^{2}+\frac{y^{2}}{3}lny. \tag{6}\] The error with this choice of functions (6) is less than \(0.6\%\) for values of \(y\) in the range from \(0\) to \(1\). In the numerical calculations conducted in this article, parameter y has a maximum value of \(0.78\), which is within the relevant range of values. It should be emphasized that the Fowler-Nordheim formula is obtained for the case of zero temperature of the metal, but in real conditions, the temperature differs from zero due to the heating of the emitter by the field electron emission current, so the effect of temperature on the process under study should also be taken into account. But within the framework of this work, we will neglect this dependence. To solve Eq. (1), it is also necessary to specify the initial and boundary conditions. In this work, the initial temperature of the emitter will be \(T_{0}=293.15\ K\), a constant temperature will be maintained on the lower base of the tip (the point of contact with the cathode), which coincides with the initial temperature \(T_{0}\). For the upper base, we will consider two cases, when it is isolated (there is no heat flow through the top base of the tip) and when it is not isolated due to the sublimation process and the Nottingham effect. For the first case, the initial and boundary conditions of the problem are as follows \[T|_{t=0}=T_{0},\ \ T|_{x=0}=T_{0},\ \ \ \frac{\partial T}{\partial x}\bigg{|}_{x=h}=0. \tag{7}\] Let us first consider the stationary case, when the change in temperature over time can be neglected, then the heat conduction equation Eq. (1) can be rewritten in the form \[\frac{\partial^{2}T}{\partial x^{2}}+\zeta T=0, \tag{8}\] where \(\zeta=q/(\kappa T)\). In this case, the boundary conditions have the form Eq. (7). The temperature distribution can be found from the equation (8) in the form \[T(x)=T_{0}\left[\cos(\sqrt{\zeta}x)+tg(\sqrt{\zeta}h)\sin(\sqrt{\zeta}x) \right]. \tag{9}\] From Eq. (9), the temperature on the upper base of the cylinder is determined as \[T|_{x=h}=\frac{T_{0}}{\cos(\sqrt{\zeta}h)}. \tag{10}\] Taking into account the formula (10), we can obtain an expression for the current density at which melting of the tip occurs. To do this, it is necessary to substitute the melting temperature \(T_{m}\) into the left-hand side of Eq. (10) and solve it for the current density. This way, we can obtain \[j_{m}=\frac{T_{0}\sqrt{L}}{\eta\rho_{0}\beta}\arccos\frac{T_{0}}{T_{m}}, \tag{11}\] where \(T_{m}\) is the melting point, \(j_{m}\) is the current density at which the tip temperature reaches the melting point. ## III Numerical calculations In this section, we will consider the non-stationary problem of the thermal conductivity of a cylindrical tip during Joule heating by a current with density (5). We will numerically solve the equation (1) using the implicit Euler scheme, as well as the Thomas algorithm. 
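As a compact illustration of this procedure, the sketch below (our own illustrative code, not the authors') performs one implicit-Euler step by the forward/backward Thomas sweep that is written out in Eqs. (15)-(23) below, for the dimensionless equation (13) with the conditions (14) introduced next and a constant dimensionless source strength \(Q\); in the paper itself \(Q\) is built from the emission current (5).

```python
import numpy as np

def step_implicit_euler(T, Q, dx, dt):
    """One implicit-Euler step for Eq. (13), dT/dt = d2T/dx2 + Q*T, on a grid
    T[0..N-1] with T[0] held at 1 and an insulated top boundary, using the
    forward/backward (Thomas) sweep of Eqs. (16)-(23)."""
    N = len(T)
    B = 2.0 + dx**2 / dt - Q * dx**2
    alpha = np.zeros(N)
    xi = np.zeros(N)
    alpha[0], xi[0] = 0.0, 1.0                    # lower boundary, Eq. (19)
    for i in range(1, N - 1):                     # forward sweep, Eq. (18)
        G = -(dx**2 / dt) * T[i]
        alpha[i] = 1.0 / (B - alpha[i - 1])
        xi[i] = (xi[i - 1] - G) / (B - alpha[i - 1])
    Tnew = np.empty_like(T)
    # insulated apex, Eq. (23)
    Tnew[N - 1] = (xi[N - 2] + 0.5 * dx**2 / dt * T[N - 1]) / \
                  (1.0 - alpha[N - 2] + 0.5 * dx**2 / dt - 0.5 * dx**2 * Q)
    for i in range(N - 2, 0, -1):                 # backward sweep, Eq. (17)
        Tnew[i] = alpha[i] * Tnew[i + 1] + xi[i]
    Tnew[0] = 1.0
    return Tnew

# Example: march the dimensionless problem towards its steady state.
N, dx, dt, Q = 200, 1.0 / 199, 5e-3, 1.5
T = np.ones(N)
for _ in range(4000):
    T = step_implicit_euler(T, Q, dx, dt)
print(f"dimensionless apex temperature: {T[-1]:.3f}")
# should approach 1/cos(sqrt(Q)) ~ 2.95, the stationary result of Eq. (10)
```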
To do this, we will reduce the equation of thermal conductivity to a dimensionless form by introducing the following dimensionless quantities \[\tilde{T}=\frac{T}{T_{0}},\ \ \tilde{x}=\frac{x}{h},\ \ \tilde{t}=\frac{t}{t_{0}},\ \ t_{0}=\frac{\eta\rho_{0}c\rho h^{2}}{rL \rho_{0}}. \tag{12}\] Considering the introduced dimensionless quantities (12), the thermal conductivity equation can be rewritten in the following form \[\frac{\partial\tilde{T}}{\partial\tilde{t}}=\frac{\partial^{2}\tilde{T}}{ \partial\tilde{x}^{2}}+Q\tilde{T}, \tag{13}\] where \[Q=\frac{1}{L}\bigg{(}\frac{\eta\rho_{0}}{T_{0}}\beta j\bigg{)}^{2}.\] ### Case of an isolated tip apex First, let us consider the case when the emitter apex is isolated. We will consider the initial and boundary conditions in the form (7). At the same time, these conditions can be rewritten in terms of dimensionless quantities as follows \[\tilde{T}\big{|}_{t=0}=1,\ \ \tilde{T}\big{|}_{x=0}=1,\ \ \frac{\partial \tilde{T}}{\partial\tilde{x}}\bigg{|}_{x=1}=0. \tag{14}\] Using Euler's implicit scheme, the heat conduction equation can be written \[\frac{\tilde{T}_{i}^{n+1}-\tilde{T}_{i}^{n}}{\delta\tilde{t}}=\frac{\tilde{T}_ {i+1}^{n+1}-2\tilde{T}_{i}^{n+1}+\tilde{T}_{i-1}^{n+1}}{\delta\tilde{x}^{2}}+ Q\tilde{T}_{i}^{n+1}, \tag{15}\] where \(\delta\tilde{x}\), \(\delta\tilde{t}\) are steps by coordinate and time, index \(n\) corresponds to the grid node number by time, and \(i\) by coordinate. The equation (15) can be rewritten in a more convenient form for applying the Thomas algorithm \[\tilde{T}_{i+1}^{n+1}-B_{i}\tilde{T}_{i}^{n+1}+\tilde{T}_{i-1}^{n+1}=G_{i}. \tag{16}\] The expression (16) has the following notation \[G_{i}=-\frac{\delta\tilde{x}^{2}}{\delta\tilde{t}}\tilde{T}_{i}^{n},\ \ B_{i}=2+\frac{\delta\tilde{x}^{2}}{\delta\tilde{t}}-Q\delta\tilde{x}^{2}.\] As part of the Thomas algorithm, it is assumed that there are such sets of numbers \(\alpha_{i}\) and \(\xi_{i}\) for which the equality holds \[\tilde{T}_{i}^{n+1}=\alpha_{i}\tilde{T}_{i+1}^{n+1}+\xi_{i}. \tag{17}\] Given the expression (16), we can find recurrence relations for these coefficients \[\alpha_{i}=\frac{1}{B_{i}-\alpha_{i-1}},\ \ \xi_{i}=\frac{\xi_{i-1}-G_{i}}{B_{i}- \alpha_{i-1}}. \tag{18}\] To determine the coefficients \(\alpha_{i}\), \(\xi_{i}\) (18), it is necessary to find \(\alpha_{0}\) and \(\xi_{0}\), which can be obtained from the lower boundary condition \[\alpha_{0}=0,\ \ \xi_{0}=1. \tag{19}\] Since the calculation scheme has the second order of accuracy in terms of \(\delta\tilde{x}\), it is necessary to discretize the upper boundary condition with \(O\left(\delta\tilde{x}^{2}\right)\) accuracy. For this, it is necessary to expand the function \(\tilde{T}(x)\) into a Taylor series around the point \(\tilde{x}=1\) to terms of the second order with respect to \(\delta\tilde{x}\) \[\tilde{T}_{N-2}^{n+1}=\tilde{T}_{N-1}^{n+1}-\delta\tilde{x}\left.\frac{\partial \tilde{T}}{\partial\tilde{x}}\right|_{N-1}^{n+1}+\frac{\delta\tilde{x}^{2}}{2 }\left.\frac{\partial^{2}\tilde{T}}{\partial\tilde{x}^{2}}\right|_{N-1}^{n+1}. \tag{20}\] Taking into account the upper boundary condition, that is, the isolation of the upper base of the tip, and the heat conduction equation (13), the following relation can be written \[\tilde{T}_{N-2}^{n+1}=\tilde{T}_{N-1}^{n+1}+\frac{\delta\tilde{x}^{2}}{2}\left( \frac{\tilde{T}_{N-1}^{n+1}-\tilde{T}_{N-1}^{n}}{\delta\tilde{t}}-Q\tilde{T}_{ N-1}^{n+1}\right). 
\tag{21}\] Thus, the temperature at the coordinate grid node \(N-2\) at the moment \(n+1\) according to (21) is determined by the temperature value at the last node \(N-1\). The expression for \(\tilde{T}_{N-2}^{n+1}\) can also be written in terms of Thomas algorithm coefficients as follows \[\tilde{T}_{N-2}^{n+1}=\alpha_{N-2}\tilde{T}_{N-1}^{n+1}+\xi_{N-2}. \tag{22}\] Considering the expressions (21) and (22), we can find \[\tilde{T}_{N-1}^{n+1}=\frac{\xi_{N-2}+\frac{\delta\tilde{x}^{2}}{2\delta\tilde{t}}\tilde{T}_{N-1}^{n}}{1-\alpha_{N-2}+\frac{\delta\tilde{x}^{2}}{2\delta\tilde{t}}-\frac{\delta\tilde{x}^{2}}{2}Q}. \tag{23}\] In this way, the solution of the heat conduction equation (13) by the Thomas algorithm is reduced to the following stages. First, the coefficients \(\alpha_{0}\) and \(\xi_{0}\) are calculated from the lower boundary condition; then \(\alpha_{i}\) and \(\xi_{i}\) are found by the forward sweep (\(i=1,2,\ldots,N-1\)); at the last stage, the temperatures \(\tilde{T}_{i}^{n+1}\) at the remaining nodes (\(i=N-2,N-3,\ldots,1\)) are found by the backward sweep according to the formula (17). This constitutes one time iteration. First, for numerical calculations according to the above scheme, we will choose the following parameters of the problem: \(r=2.2\ nm\), \(h=100\ nm\), \(E=170\ MV/m\); the initial and boundary conditions are given by (7). In this paper, we will also consider an emitter made of oxygen-free electronic copper (C10100), the parameters of which are given in Ref. [19]. It should be noted that for these calculations, we do not take into account the dependence of the current density on the temperature in Eq. (5), since the main task was to compare the numerical results with the analytical solution for the stationary case. However, the temperature effect is taken into account in further calculations. Figure 1 shows the results of the numerical calculation, from which it can be seen that for the given parameters, the greatest heating is observed on the tip apex and the temperature distribution ceases to change over time. Figure 2 shows the dependence of the temperature of the emitter apex on time. As can be seen from this figure, after a time of the order of \(t_{0}\), the temperature reaches a stationary mode and coincides with the analytical value (10) for the stationary case. It should be noted that for the given calculation parameters, \(t_{0}\approx 2.8\ ns\). Figure 3 shows the dependence of the temperature on the coordinate after the onset of the stationary mode (\(t=30t_{0}\)). This figure also shows a good agreement between numerical calculations and analytical expressions. Figure 1: Dependence of tip temperature on coordinate and time for the case of an isolated apex. Figure 3: Temperature distribution along the coordinate after heating during time \(10t_{0}\). Figure 2: Dependence of the temperature of the tip apex on time. The dependence of the temperature of the emitter apex on the field enhancement factor \(\beta\), that is, on the radius of the tip at a fixed height, for different values of the electric field strength was also considered. The results of numerical calculations are demonstrated in Fig. 4. If it is assumed that the vacuum breakdown begins when the emitter apex reaches the melting point, then the \(\beta\) coefficient when the breakdown begins can be found for different values of the electric field (a crude stationary estimate based on Eq. (11) is sketched below). 
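The following Python sketch (our own illustration) evaluates the melting current density of Eq. (11); the copper parameters used here — the room-temperature resistivity, the Lorenz number and the melting point — are assumed textbook values rather than numbers taken from the paper, and the estimate neglects the temperature dependence of the emission current, so it only reproduces the order of magnitude of the breakdown condition discussed next.

```python
import numpy as np

# Assumed copper parameters (not taken from the paper, except eta from Eq. (3)).
RHO_0 = 1.68e-8    # Ohm m, resistivity at T_0
ETA = 70e-9        # m, size-effect length scale of Eq. (3)
LORENZ = 2.44e-8   # W Ohm / K^2, Lorenz number
T_0, T_MELT = 293.15, 1358.0  # K

def j_melt(beta):
    """Current density of Eq. (11) at which the stationary apex temperature
    reaches the melting point, for an emitter with enhancement factor beta."""
    return T_0 * np.sqrt(LORENZ) / (ETA * RHO_0 * beta) * np.arccos(T_0 / T_MELT)

# For beta = 50 about 1e12 A/m^2 is needed to melt the apex, i.e. of the same
# order as the emission current estimated after Eq. (5).
print(f"j_melt(50) = {j_melt(50.0):.2e} A/m^2")
```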
Figure 6 shows such a dependence, and as we can see, for an electric field strength of \(100\ MV/m\), the enhancement factor is \(76.3\), at lower values, breakdown does not occur. If the field strength is \(170\ MV/m\), then the coefficient is \(47.4\), which is consistent with experimental results (Ref. [20]), in which it was approximately equal to \(50\) for a given value of the electric field. From the interpolation given in the figure 4, it is also possible to determine \(\beta\) for other values of the electric field. ### Case of a non-isolated tip apex In previous calculations, we used the approximation that the emitter is isolated. Henceforth, we will take into account the non-isolation of its apex due to the Nottingham effect. We will show that the process of sublimation can be neglected under the investigated conditions. To solve the heat conduction equation, we will use the computational scheme as in the previous subsection, but to take into account the boundary condition on the upper base of the tip, we will also use the method of successive iterations. #### iii.2.1 Sublimation To evaluate the influence of the sublimation process, we will calculate the mass of the substance evaporated from the tip at the melting temperature. We will use the Hertz-Knudsen equation, according to which the rate of sublimation of neutral atoms from the cathode surface is determined as follows \[w_{eup}=\sqrt{\frac{M}{2\pi RT}}p, \tag{24}\] where \(M\) is the molar mass of the substance, \(R\) is the universal gas constant, and \(p\) is the saturated vapor pressure, which is a function of temperature and is determined from an empirical formula that looks like this (Ref. [21]) \[\lg\tilde{p}=A+\frac{B}{\tilde{T}}+C\lg\tilde{T}+D\tilde{T}\cdot 10^{-3}. \tag{25}\] It should be noted that in this paper we consider the process of heating the emitter before its melting, so in the expression (25) we will use empirical parameters for the solid phase of the substance, which take the values (Ref. [21]) \(A=7.810\), \(B=-17687\), \(C=-0.2638\), \(D=-0.1486\). Let us calculate the mass of the substance that evaporates as a result of the sublimation process during the time \(10t_{0}\). Taking into account the expression (25), the rate of this process is \(w_{eup}\approx 5\cdot 10^{-5}\ kg/m^{2}s\). Then \(m_{eup}\approx 1.4\cdot 10^{-28}\ kg\), which is much smaller than the mass of a copper atom. Thus, the process of sublimation can be neglected when we consider the process of heating the tip before the start of melting. Henceforth, the evaporation of neutral atoms will not be considered in numerical calculations. #### iii.2.2 Nottingham effect In this subsection, we will take into account the Nottingham effect. It consists of the difference in the average energy of electrons that come from the depth of the cathode to its surface and electrons that leave it. At the same time, if the temperature of the cathode surface is lower than the inversion temperature, then the average energy of the electrons leaving the surface will be lower than the energy of the electrons coming from the depth of the metal, and heating of the tip apex will be observed (the Nottingham heating). If the temperature is greater than the inversion temperature, Henderson cooling will be observed on the contrary. 
Figure 4: Dependence of the equilibrium temperature of the emitter apex on \(\beta\) at different values of the applied electric field. The inversion temperature is determined by the expression \[T_{inver}=5.67\cdot 10^{-5}\frac{\beta E}{\sqrt{\varphi}}, \tag{26}\] where \(E\) is the electric field strength in \(V/cm\), \(\varphi\) is the work function of the electron in \(eV\), \(T_{inver}\) is the inversion temperature in \(K\). Let us estimate the temperature (26) for the parameters of the problem. Thus, for the value of the electric field \(E=170\ MV/m\), the enhancement factor \(\beta=50\) and the work function of the electron \(\varphi=4.5\ eV\), we have that \(T_{inver}\approx 2272\ K\). As we can see, the inversion temperature is much higher than the melting temperature of copper, so we will consider only the case when the Nottingham heating occurs. For the case when \(T_{emis}<1.2\cdot T_{inver}\), the difference in the average energy of electrons leaving the metal surface and electrons coming from the depth of the cathode has the form (Ref. [22])
If \(E=170~{}MV/m\), then this factor is 47.3, which is consistent with both experimental and theoretical results (Ref. [2]), in which this factor was approximately equal to 50 for a given value of the electric field. It is shown that the sublimation of neutral atoms until the moment of melting of the emitter apex can be neglected. It is numerically shown that taking into account the Nottingham effect leads to a decrease in the breakdown electric field strength, so for \(\beta=50\), the decrease is approximately 12%. ###### Acknowledgements. The publication is based on the research provided by the grant support of the National Academy of Sciences of Ukraine (NASU) for research laboratories/groups of young scientists of the National Academy of Sciences of Ukraine to research priority areas of development of science and technology in 2021-2022 under contract No 16/01-2021 (3). Figure 5: Temperature distribution along the coordinate after heating during time \(30t_{0}\), taking into account the Nottingham effect. Figure 6: Dependence of the breakdown electric field strength on the field enhancement factor \(\beta\) for the cases of isolated emitter apex and in consideration of the Nottingham heating. ## Author declarations ### Conflict of Interest The authors have no conflicts to disclose. ## Data availability The data that support the findings of this study are available from the corresponding author upon reasonable request.
2302.08564
First-principles calculations of magnetic states in pyrochlores using a source-corrected exchange and correlation functional
We present a first-principles investigation of the spin-ice state in Dy$_2$Ti$_2$O$_7$ using a magnetic source-free exchange and correlation functional, implemented in the Castep electronic-structure code. By comparing results from the conventional local spin-density approximation, we show that a spin-ice state in Dy$_2$Ti$_2$O$_7$ can be reliably obtained by removing the magnetic sources from the exchange and correlation contributions to the potential, and we contrast this against the computed ground states of other frustrated pyrochlore magnets.
Z. Hawkhead, N. Gidopoulos, S. J. Blundell, S. J. Clark, T. Lancaster
2023-02-16T20:13:15Z
http://arxiv.org/abs/2302.08564v1
First-principles calculations of magnetic states in pyrochlores using a source-corrected exchange and correlation functional ###### Abstract We present a first-principles investigation of the spin-ice state in Dy\({}_{2}\)Ti\({}_{2}\)O\({}_{7}\) using a magnetic source-free exchange and correlation functional, implemented in the Castep electronic-structure code. By comparing results from the conventional local spin-density approximation, we show that a spin-ice state in Dy\({}_{2}\)Ti\({}_{2}\)O\({}_{7}\) can be reliably obtained by removing the magnetic sources from the exchange and correlation contributions to the potential, and we contrast this against the computed ground states of other frustrated pyrochlore magnets. ## I Introduction Materials that show long-range, non-collinear magnetic spin textures occur widely in Nature. A system with collinear order has a global quantisation axis along which all the spins are aligned or antialigned, while for a non-collinear state, each ordered spin can potentially have a different local direction. Such non-collinear states are commonly found, for example, in systems where the spin-orbit interaction leads, in a low-energy approximation, to single-ion anisotropy or to a Dzyaloshinski-Moriya interaction. Following the initial incorporation of collinear magnetism into density functional theory (DFT) within the local density approximation, attempts were made to describe non-collinear magnetism using the formalism of DFT. Starting in the 1980s, Kubler _et al._, [1] made progress in describing non-collinear spins via a self-consistent method. To incorporate spin-orbit coupling required the development of a fully relativistic treatment [2], in which all electrons are treated non-collinearly within the spin-polarised coupled Dirac equation. After development of the methods, the main hindrance in the use of DFT for non-collinear magnetism is the accuracy of the functionals. The optimised effective potential (OEP) method [3; 4; 5] has been shown [6] to successfully describe the magnitudes of magnetic moments, and therefore represents a candidate route to extend DFT's exchange and correlation (xc) functionals. Sharma _et al._[7] extended the spin-density formalism to non-collinear spin via the OEP, which has the advantage that it does not rely on the local collinearity of spins required when applying standard functionals. Eich and Gross [8] made a similar extension within a local density-like approximation. Non-collinearity arises in a number of magnetic compounds that adopt the pyrochlore lattice, a three-dimensional (3D) arrangement of corner-sharing tetrahedra, which is known to exhibit a high degree of geometrical frustration. For example, the pyrochlore oxide Dy\({}_{2}\)Ti\({}_{2}\)O\({}_{7}\)[9] exhibits an interesting non-collinear magnetic structure owing to the strong Ising-like crystal-field anisotropy at each Dy site. The Dy\({}^{3+}\) ions are magnetic and the \(J=15/2\) manifold is split by the crystal field, leading to a \(\approx\)10\(\mu_{\rm B}\) ground state moment. The ground state is separated from crystal-field excited states by a gap of a few hundred Kelvin [10]. The crystal-field anisotropy constrains the magnetic moments to lie along the local \(\langle 111\rangle\) axes [11; 12]. An effective ferromagnetic coupling between these moments results from the combination [13] of dominant long-range dipolar interactions (\(D=\)1.41 K) with antiferromagnetic nearest-neighbour exchange (\(J=-3.72\) K) [14]. 
As a result of this combination of interactions and the local anisotropy, below about 1 K the system settles into a disordered spin-ice state. This state is characterized by a '2-in 2-out' spin configuration (meaning that two spins point in and two spins point out of each tetrahedron), analogous to proton displacement vectors in Pauling's model of hydrogen disorder in water ice, the residual configurational entropy measured for these materials being close to Pauling's predicted value for ice [15; 16]. The excitations in spin ice are created by reversing a single spin, which thereby produces a pair of magnetic monopoles which can move independently through the lattice [17]. Studying these monopole excitations has become of great current interest [18; 19; 20], but an important scientific aim is to understand the local electronic properties of Dy\({}_{2}\)Ti\({}_{2}\)O\({}_{7}\) in more detail, particularly as the monopole transport may arise from the precise local arrangement of spins [20; 21]. Hitherto there have been limited first-principles simulations of the spin-ice state in Dy\({}_{2}\)Ti\({}_{2}\)O\({}_{7}\), and the spin-ice physics is generally understood within the low-energy description of the underlying magnetic energy-levels described above, invoking only crystal field levels and single-ion anisotropy effects. The bulk of the first principles work carried out on pyrochlore materials focuses on electronic and structural effects. Early work investigating the electronic structure of a range of pyrochlores, including Dy\({}_{2}\)Ti\({}_{2}\)O\({}_{7}\), had success matching x-ray emission spectra [22]. Similar success has also been achieved in using DFT to study further pyrochlore materials [23; 24]. DFT has also been applied to compare the results of neutron scattering experiments in magnetic pyrochlores by calculating the phonon spectra [25]. Little work has been done from first principles on the magnetic configurations of Dy\({}_{2}\)Ti\({}_{2}\)O\({}_{7}\), although there have been attempts at understanding magnetic behaviours in other magnetic pyrochlore materials [26; 27; 28]. Since the ordered spin-ice magnetic structure is inherently non-collinear and highly degenerate, it provides a challenge for spin-DFT calculations, particularly with the xc functionals currently at our disposal. Here we present results of spin-DFT calculations where we make use of a source-corrected version of the well-known local spin-density approximation (LSDA) functional to stabilise the spin-ice state. By making comparisons to calculations performed using the conventional LSDA, we show that realising the spin ice state is possible because of the use of the source-corrected functional. We also present calculations on two further pyrochlore materials, Nd\({}_{2}\)Zr\({}_{2}\)O\({}_{7}\) and Sm\({}_{2}\)Ti\({}_{2}\)O\({}_{7}\), which are known to host an ordered _all-in, all-out_ (AIAO) spin texture [29; 30; 31]. These latter examples show that the source-free LSDA correctly predicts that the AIAO state is of lower energy than a spin-ice state, which is not stable in the two materials considered. ## II Source-free methods ### Theory The natural description of magnetism in electronic-structure theory uses the spin density, or equivalently the magnetisation, which is a continuous vector field that is discretised onto a grid in order to perform calculations. 
For calculations of magnetic properties there are then two key parameters: the charge density \(n(\mathbf{r})\) and the magnetisation density \(\mathbf{m}(\mathbf{r})\). The accuracy of a DFT calculation depends on the choice of xc functional [32; 33], with different functionals capturing different aspects of a physical system. In standard DFT we have an xc functional \(E_{\rm xc}[n]\) and an associated xc-potential, \(V_{\rm xc}(\mathbf{r})\), given by the functional derivative: \[V_{\rm xc}(\mathbf{r})=\frac{\delta E_{\rm xc}[n(\mathbf{r})]}{\delta n(\mathbf{r})}. \tag{1}\] In calculating the xc spin-potential for a non-collinear density we cannot use the analogue of Eq. 1, since the density in a non-collinear treatment is not a scalar field. Instead the electron potentials can be expressed as 2\(\times\)2 matrices, where we use spinors to account for the vector magnetisation [34]. We can express the xc spin potential in terms of a non-spin potential and a magnetic field via a four-component potential [34; 35; 36] \[\mathcal{V}_{\rm xc}(\mathbf{r})=V_{\rm xc}(\mathbf{r})\mathrm{I}_{2}+\mu_{\rm B} \mathbf{B}_{\rm xc}(\mathbf{r})\cdot\mathbf{\sigma}, \tag{2}\] where the magnetic field \(\mathbf{B}_{\rm xc}\) is the vector part of the xc potential, \(\mathbf{\sigma}\) is the vector of Pauli spin matrices, \(\mu_{\rm B}\) is the Bohr magneton, and \(\mathrm{I}_{2}\) is the \(2\times 2\) identity matrix. We can then relate each term in Eq. 2 separately to \(E_{\rm xc}\): (i) the scalar potential \(V_{\rm xc}(\mathbf{r})\) is found using Eq. 1, taking the density to be the scalar part of the non-collinear density; (ii) the vector term, or the magnetic field, is given by \[\mathbf{B}_{\rm xc}(\mathbf{r})=-\frac{\delta E_{\rm xc}[n(\mathbf{r}),\mathbf{m}(\mathbf{r})]}{ \delta\mathbf{m}(\mathbf{r})}. \tag{3}\] Much research has gone into developing xc functionals specific to non-collinear spin [7; 8]. However, there is no widely-used, accurate functional that consistently replicates experimentally-observed magnetic states, and most currently available xc functionals are simple extensions to functionals designed for collinear systems. To make use of functionals such as the LSDA, at each point in space we rotate the vector spin-density such that it lies along the \(z\)-axis, allowing us to decompose it into spin-up (\(n_{\uparrow}\)) and spin-down (\(n_{\downarrow}\)) densities which can be used to calculate \(E_{\rm xc}\)[1; 37]. We then take the functional derivative of \(E_{\rm xc}[n_{\uparrow},n_{\downarrow}]\) with respect to \(n_{\uparrow}\) and \(n_{\downarrow}\). This yields \(\mathbf{B}_{\rm xc}\) along the \(z\)-axis. Then, we carry out the inverse rotation on \(\mathbf{B}_{\rm xc}\) and we finally obtain a non-collinear vector \(\mathbf{B}_{\rm xc}\), which however remains locally parallel (at every point in real space) to the magnetisation density, or the spin density. As the resulting energy is calculated point-wise, the method imposes a non-physical constraint that the magnetisation must be locally collinear with \(\mathbf{B}_{\rm xc}(\mathbf{r})\). Sharma _et al._[35] highlight another important problem with standard functionals. From the Maxwell equations, for any arbitrary magnetic field, \(\mathbf{B}\), the divergence of the field should be zero (\(\mathbf{\nabla}\cdot\mathbf{B}=0\)), which follows from the absence of magnetic sources. 
However, this condition is not met for the common functionals, LSDA and PBE, which give results consistent with magnetic sources existing on the surfaces of a sample of the material. Sharma _et al._[35] suggest a method for improving these functionals that we reproduce in outline here. Starting from the Helmholtz theorem, a vector field can be decomposed into two components: one of which is divergence free and one that is curl free [38], \[\mathbf{B}(\mathbf{r})=\mathbf{\nabla}\times\mathbf{A}+\mathbf{\nabla}\phi. \tag{4}\] To ensure that the magnetic field is source free we must explicitly subtract the term \(\mathbf{\nabla}\phi\) that contributes to the divergence of the field. We have the freedom to select the gauge and chose \(\phi\) such that it is the solution to the Poisson equation, \[\nabla^{2}\phi(\mathbf{r})=-4\pi\mathbf{\nabla}\cdot\mathbf{B}. \tag{5}\] The source-free magnetic field \(\tilde{\mathbf{B}}\) can then be constructed using \[\tilde{\mathbf{B}}(\mathbf{r})\equiv\mathbf{B}(\mathbf{r})+\frac{1}{4\pi}\nabla\phi(\mathbf{r}), \tag{6}\] and we can then obtain the xc magnetic field in terms of the substitution, \[\mathbf{B}_{\rm xc}(\mathbf{r})\to s\tilde{\mathbf{B}}_{\rm xc}(\mathbf{r}), \tag{7}\] where \(s\) is an empirical scaling parameter. We have implemented this source-free method in the plane-wave pseudopotential code Castep. The source-free LSDA described by Sharma _et al._[35] is an alteration of the existing functional. Each time the xc energy and potential is calculated, the potential is used to construct the xc magnetic field \(\mathbf{B}_{\mathrm{xc}}(\mathbf{r})\). Using the procedure described above, we can then calculate the source-free field \(\mathbf{\tilde{B}}_{\mathrm{xc}}(\mathbf{r})\) and reconstruct the spin-potential. It is this potential that is then used in the Hamiltonian of the system. Critically, this is not a one-shot approach: the correction is calculated every time the xc potential is required and is therefore self-consistent. To implement the method, we make use of the plane-wave basis set in order to efficiently solve the Poisson equation in Eq. 5. In a plane-wave basis, \(\phi(\mathbf{r})\) can be expressed as \[\phi(\mathbf{r})=\sum_{\mathbf{G}_{j}}c_{\mathbf{G}_{j}}e^{\mathrm{i}\,\mathbf{G}_{j}\cdot\mathbf{ r}}, \tag{8}\] where \(\mathbf{G}_{j}\) are the reciprocal lattice vectors and \(c_{\mathbf{G}_{j}}\) are the Fourier expansion coefficients of \(\phi(\mathbf{r})\). The Fourier expansion allows us to efficiently compute the Laplacian of the scalar potential \(\phi\), \[\nabla^{2}\phi(\mathbf{r})=-\sum_{\mathbf{G}_{j}}|\mathbf{G}_{j}|^{2}c_{\mathbf{G}_{j}}e^{ \mathrm{i}\,\mathbf{G}_{j}\cdot\mathbf{r}}. \tag{9}\] We can also express \(\mathbf{B}_{\mathrm{xc}}(\mathbf{r})\) in terms of its Fourier coefficients \(\mathbf{b}_{\mathbf{G}_{j}}\): \[\mathbf{B}_{\mathrm{xc}}(\mathbf{r})=\sum_{\mathbf{G}_{j}}\mathbf{b}_{\mathbf{G}_{j}}e^{\mathrm{i} \,\mathbf{G}_{j}\cdot\mathbf{r}}. \tag{10}\] For a given vector \(\mathbf{G}_{j}\), the Fourier coefficients of \(\mathbf{\nabla}\cdot\mathbf{B}_{\mathrm{xc}}(\mathbf{r})\) follow from \[\mathscr{F}[\mathbf{\nabla}\cdot\mathbf{B}_{\mathrm{xc}}(\mathbf{r})](\mathbf{G}_{j})= \mathrm{i}\,\mathbf{G}_{j}\cdot\mathbf{b}_{\mathbf{G}_{j}}, \tag{11}\] where \(\mathscr{F}\) denotes the FT. By rearranging Eq. 5 and substituting the reciprocal space expressions in Eq. 9 and Eq. 
11, we come to an expression for the Fourier coefficients of \(\phi(\mathbf{r})\), \[c_{\mathbf{G}_{j}}=\frac{\mathrm{i}\,\mathbf{G}_{j}\cdot\mathbf{b}_{\mathbf{G}_{j}}}{4\pi|\mathbf{G}_{j}|^{2}}. \tag{12}\] Knowing these coefficients, we can build the source-free magnetic field and reconstruct the spin-potential using Eq. 2. We note that the corrected xc potential is no longer strictly local as we use knowledge of the gradient of the potential when solving the Poisson equation. ### Tests on elemental magnetic materials To test the validity of the implementation of the source-corrected LSDA, we performed calculations on a range of elemental magnets and magnetic compounds. The main focus of the testing is on body-centred cubic (bcc) or \(\alpha\)-Fe, where we performed magnetic calculations on a geometry-optimised unit cell. Total energies are converged to better than 10 meV using a 7\(\times\)7\(\times\)7 Monkhorst Pack (MP) \(\mathbf{k}\)-point grid and a plane-wave cut-off energy of 1600 eV. We included spin-orbit coupling (SOC) and used norm-conserving relativistic pseudopotentials throughout. Similarly, other calculations are converged to better than 100 meV/atom. Each of the materials tested has a ferromagnetic phase which we investigate using a quantisation axis aligned with the crystallographic \(c\)-direction. We initialised spin along this direction in each material to ensure that the energy minimisation returned a state with long-range magnetic order. A useful way to assess the correction to \(\mathbf{B}_{\mathrm{xc}}\) is to visualize the magnetic field lines, since it is easy to demonstrate that the source terms have been removed from the field. In Fig. 1, we compare the field lines due to \(\mathbf{B}_{\mathrm{xc}}\) in \(\alpha\)-Fe. Figure 1: Field lines for \(\mathbf{B}_{\mathrm{xc}}\) in \(\alpha\)-Fe calculated with the source-corrected and source-uncorrected LSDA. (a,b) xc field calculated using the conventional LSDA functional. (c,d) xc fields calculated using the source-corrected LSDA functional. Colour of the field lines represents the magnitude of the field at that point, from white at low fields to red at high fields. Vectors show the direction of the field flow with size and colour representing magnitude. Where the field magnitude is lower than 10% of the maximum, vectors have been omitted for clarity. Magnetic moments lie in the negative \(c\)-direction in both cases. Using both the conventional LSDA [Fig. 1(a,b)] and the source-free LSDA [Fig. 1(c,d)] we find magnetic moments on each atom aligned along the \(c\)-axis. For the LSDA [Fig. 1(a,b)], the field lines are parallel throughout the entire infinite crystal, implying the presence of a magnetic source at the surface of the system. It is clear that these field lines for \(\alpha\)-Fe are also globally collinear with the magnetisation, which is one of the issues with the LSDA highlighted by Sharma _et al._[35]. If instead we study the field lines due to the source-corrected functional [Fig. 1(c,d)] we see that they form closed loops around the Fe ions. As we have used the same quantisation axis in both calculations, the magnetisation is also collinear with the \(c\)-axis when using the source-free LSDA. However, it is clear that the field lines for \(\mathbf{B}_{\mathrm{xc}}\) are no longer constrained to be locally collinear with the magnetisation when the source terms are removed. We see similar results for the \(\mathbf{B}_{\mathrm{xc}}\) field lines in hexagonal close-packed (hcp) Co (Fig. 2). 
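The reciprocal-space construction behind these source-free fields (Eqs. 4-12) can be sketched in a few lines of NumPy. The snippet below is an illustration of the algebra only, not the Castep implementation; the grid size, cell length and test field are assumptions made for the example. Note that when Eqs. 5 and 6 are combined, the \(4\pi\) prefactors cancel and the correction reduces to the standard transverse projection used here.

```python
import numpy as np

def source_free_projection(B, cell_length, s=1.0):
    """Remove the curl-free (source) part of a periodic vector field B.

    B           : array of shape (3, n, n, n), Cartesian components of B_xc on a cubic grid.
    cell_length : edge length of the cubic cell.
    s           : empirical scaling parameter of Eq. 7.
    """
    n = B.shape[1]
    g1d = 2.0 * np.pi * np.fft.fftfreq(n, d=cell_length / n)     # angular wavenumbers
    G = np.stack(np.meshgrid(g1d, g1d, g1d, indexing="ij"))      # shape (3, n, n, n)
    G2 = np.sum(G * G, axis=0)
    G2[0, 0, 0] = 1.0                                            # avoid division by zero at G = 0

    b = np.fft.fftn(B, axes=(1, 2, 3))                           # Fourier coefficients b_G
    G_dot_b = np.sum(G * b, axis=0)                              # (G . b_G); the factor i of Eq. 11 cancels below
    b_longitudinal = G * (G_dot_b / G2)                          # curl-free part of each coefficient
    b_longitudinal[:, 0, 0, 0] = 0.0                             # the uniform (G = 0) component is left untouched
    b_source_free = b - b_longitudinal
    return s * np.real(np.fft.ifftn(b_source_free, axes=(1, 2, 3)))

# Example: a deliberately divergent field; the projected field has zero divergence.
n, L = 24, 1.0
x = np.linspace(0.0, L, n, endpoint=False)
X, _, _ = np.meshgrid(x, x, x, indexing="ij")
B = np.stack([np.sin(2 * np.pi * X), np.zeros_like(X), np.zeros_like(X)])
B_sf = source_free_projection(B, L)
```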
Using the LSDA [Fig. 2(a)] we again see parallel field lines aligning with the magnetisation which lies along the \(c\)-axis for both Co ions. From the source-corrected LSDA [Fig. 2(b)] we see the non-collinearity of the field lines. Close to the Co ions, field lines emerge from the centre of the ion and terminate again at the centre of the ion. The behaviour further from the Co in the interstitial region is more complicated. An example of the effects of the source-corrected functional on the xc field lines in a non-elemental magnet, FeTe, is shown in the Supplemental Material [39]. In this case, the field lines flow between the layers of Fe ions past the Te ions. It is less obvious that the xc field lines in FeTe represent a source-free field. We lose the simplicity of the elemental magnets by including non-magnetic ions which complicate the exchange interactions. However, we still see the improved non-collinearity arising from the source-free functional. ## III Spin Ice Ground States We now turn to the application of the source-corrected methodology to the problem of the spin-ice magnetic structures. In order to model a spin-ice state in Dy\({}_{2}\)Ti\({}_{2}\)O\({}_{7}\) we used a primitive unit cell including 22 atoms, with a lattice parameter of 7.19 Å calculated by DFT structural relaxation [40]. The spin structure was calculated with a 5\(\times\)5\(\times\)5 MP \(\mathbf{k}\)-point grid with a plane-wave cut-off energy of 840 eV. Convergence testing for Dy\({}_{2}\)Ti\({}_{2}\)O\({}_{7}\) is shown in the SM [39]. For calculations on Nd\({}_{2}\)Zr\({}_{2}\)O\({}_{7}\) and Sm\({}_{2}\)Ti\({}_{2}\)O\({}_{7}\) we used plane-wave cut-off energies of 843 eV and 860 eV respectively with a 5\(\times\)5\(\times\)5 MP \(\mathbf{k}\)-point grid. The SCF calculation was performed using the ensemble density functional theory (EDFT) minimisation scheme. We performed identical calculations treating xc with both conventional LSDA and the newly-implemented source-free LSDA to compare the resulting spin configurations. In the case of both conventional LSDA and the source-corrected LSDA we initialise a non-collinear spin on each of the Dy ions along the local \(\langle 111\rangle\) direction in the spin-ice configuration. Based on testing of the scaling parameter \(s\) in the SM [39], we conclude that there is no _a priori_ reason to use any value other than \(s=1\) in Eq. 7. We first examine the magnetic field lines of \(\mathbf{B}_{\mathrm{xc}}\) in Dy\({}_{2}\)Ti\({}_{2}\)O\({}_{7}\). The xc field lines for both functionals are shown in Fig. 3, where it is still possible to see that these remain locally collinear around the Dy ions in the case of the conventional LSDA [Fig. 3(a)], aligning with the magnetisation which is localised around these ions. In the interatomic regions it is less clear that the field lines are collinear, largely due to the lack of significant spin density. Instead these regions are dominated by numerical noise. However, for the source-free functional [Fig. 3(b)], the field lines display more obvious non-collinearity and no longer follow the magnetisation. We show below that by better capturing the physics of the internal fields using the source-corrected LSDA, we are able to realise the observed ground-state magnetic structure. 
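The spin-ice starting configuration used above (a moment on each Dy ion along its local \(\langle 111\rangle\) axis in a '2-in 2-out' arrangement) can be written down explicitly. The sketch below is purely illustrative: the site ordering, the sign convention for 'in' versus 'out', and the moment magnitude are assumptions for the example, not the actual Castep input.

```python
import numpy as np

# Local <111> axes for the four sites of one pyrochlore tetrahedron (one choice of sign convention).
local_axes = np.array([
    [ 1.0,  1.0,  1.0],
    [ 1.0, -1.0, -1.0],
    [-1.0,  1.0, -1.0],
    [-1.0, -1.0,  1.0],
]) / np.sqrt(3.0)

def two_in_two_out(in_sites=(0, 1), moment=5.0):
    """Initial spin vectors (in mu_B) for one tetrahedron obeying the ice rule.

    Sites listed in `in_sites` get +axis ('in' under the chosen convention), the
    other two get -axis ('out'). Any of the six 2-in-2-out choices is equivalent.
    """
    signs = np.array([1.0 if i in in_sites else -1.0 for i in range(4)])
    return moment * signs[:, None] * local_axes

spins = two_in_two_out()   # four non-collinear starting moments
```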
The improvement in non-collinearity provided by the source-free LSDA leads us to realise a spin-ice structure in Dy\({}_{2}\)Ti\({}_{2}\)O\({}_{7}\) which we find is not possible using conventional LSDA, despite using the same spin-ice initialisation. The resulting spin structures are shown in Fig. 4 where we have taken our spin density and projected it onto the Dy ions using Mulliken analysis. These calculations were performed a number of times, and in each case for the given convergence parameters, we realise a spin-ice state using the source-free LSDA functional. Conversely, while conventional LSDA results in a non-collinear configuration of spins, there is no well-defined magnetic structure and a different arrangement is found each time [e.g. Fig. 4(a)], indicative of the randomly-initialised orbitals falling into a different local minimum each time we perform the calculation. By removing the unphysical source terms from the \(\mathbf{B}_{\mathrm{xc}}\) it appears we improve the energy landscape which aids in the minimisation, and the source-free functional is then able to reliably reproduce the spin-ice structure in Dy\({}_{2}\)Ti\({}_{2}\)O\({}_{7}\) [Fig. 4(b)]. Figure 2: Magnetic field lines in hcp-Co calculated with (a) the conventional LSDA and (b) the source-free LSDA. Colour of the field lines represents the magnitude of the field at that point, from white at low fields to red at high fields. As was seen with the magnetic moment of our set of test materials (see Ref. [39]), the magnetic moment on the Dy\({}^{3+}\) is increased under the source-free functional, from \(5.0\mu_{\rm B}\) using conventional LSDA to \(5.2\mu_{\rm B}\) with the source-free functional. We note that these are significantly smaller than the ordered-moment sizes seen in experiment. Although this could have motivated a different choice of the scaling parameter \(s\), we did not do this and set \(s=1\) throughout. To further test the capabilities of the source-free functional, we performed calculations on Dy\({}_{2}\)Ti\({}_{2}\)O\({}_{7}\) with the initial spin in an AIAO configuration. We see in Fig. 5 that the final spin orientation has not remained in the AIAO state; instead, it has begun to fall into the spin-ice state seen above, and we correspondingly find that the total energy of the spin-ice state in Fig. 4(b) is lower than that of the all-in, all-out calculation. To show that the source-free functional provides a systematic improvement to the magnetism in this wider class of materials, we have also performed calculations on Nd\({}_{2}\)Zr\({}_{2}\)O\({}_{7}\) and Sm\({}_{2}\)Ti\({}_{2}\)O\({}_{7}\) which are known to have an AIAO ground-state magnetic structure [29; 31; 41]. We initialised each calculation with both a spin-ice state and an AIAO state. In the case of Nd\({}_{2}\)Zr\({}_{2}\)O\({}_{7}\), the final spin configurations are shown in Fig. 6(top). For the AIAO initialisation, we find that the resultant spin state is very similar to the initialisation, with some deviation from the local \(\langle 111\rangle\) direction. However, when we compare it to the result of a computation following initialisation in the spin-ice structure, which is known not to be a stable state in this material, we see a random arrangement of spins. This is much like the conventional LSDA calculations of Dy\({}_{2}\)Ti\({}_{2}\)O\({}_{7}\) shown in Fig. 4(a). The final AIAO state is found to be 0.4 eV lower in energy than the state found by initialising a spin-ice configuration. 
We see similar results for Sm\({}_{2}\)Ti\({}_{2}\)O\({}_{7}\) [Fig. 6(bottom)], where we return an AIAO state and we fail to find a spin-ice configuration. In this latter case, the total energy for the AIAO state is found to be 1.0 eV lower than that for the state found following spin-ice initialisation. Interestingly, by initialising the spin-ice state in Sm\({}_{2}\)Ti\({}_{2}\)O\({}_{7}\), the final state is found to be similar to the \(\psi_{2}\) ground state seen in a different pyrochlore, Er\({}_{2}\)Ti\({}_{2}\)O\({}_{7}\)[42; 43; 44]. ## IV Conclusions In conclusion, we have implemented a recently-developed xc functional in the plane-wave code Castep, which provides a correction to the LSDA that removes magnetic sources from the resulting xc magnetic field, with the aim of improving the ability to describe non-collinear magnetic spin states. We have demonstrated our implementation of this functional by performing calculations on a number of simple magnetic materials. We applied the functional to the famous spin-ice material Dy\({}_{2}\)Ti\({}_{2}\)O\({}_{7}\) and find that where the LSDA is unable to capture the spin-ice state when initialised in this configuration, the source-free functional reproducibly realises a spin-ice configuration. We hope that the availability of the source-free xc functional in Castep might allow the calculation of exotic magnetic textures which were previously inaccessible to DFT. Figure 3: Magnetic field lines of \(\mathbf{B}_{\rm xc}\) for a primitive unit cell of Dy\({}_{2}\)Ti\({}_{2}\)O\({}_{7}\) calculated using the (a) conventional LSDA and (b) source-corrected LSDA. The opacity of the field lines represents the relative strength of the field. For the conventional LSDA field there is a local collinearity with the spin projected onto the Dy atoms, whereas for the source-free functional \(B_{\rm xc}\) is no longer aligned with the magnetisation. Dy, Ti and O ions are shown in green, silver and red respectively. Figure 4: Example magnetic configurations of Dy\({}_{2}\)Ti\({}_{2}\)O\({}_{7}\) calculated using (a) conventional LSDA, and (b) source-free LSDA, shown in the conventional unit cell. Dy ions and Ti ions are shown in green and silver respectively. The red arrows show the non-collinear spin density projected onto a local atomic basis for the Dy ions. Oxygen atoms are not shown. ## V Acknowledgments This work used the ARCHER UK National Supercomputing Service ([http://www.archer.ac.uk](http://www.archer.ac.uk)) and is supported by EPSRC (UK) (EP/P022782/1). We are grateful for computational support from Durham Hamilton HPC. This work is supported by EPSRC (UK) [EP/N032128/1]. Research data will be hosted at XXX.
2301.11565
Aligning Three-Decade Surge in Urban Cooling with Global Warming
Rising demand for space cooling has been placing enormous strain on various technological, environmental, and societal dimensions, resulting in issues related to energy consumption, environmental sustainability, health and well-being, affordability, and equity. Holistic approaches that combine energy efficiency optimization, policy-making, and societal adaptation must be rapidly promoted as viable, timely solutions. We interpret the 30-year upward trend and spikes in urban cooling demand from the perspective of climate change, urbanization, and background climates, focusing on five representative cities: Hong Kong, Sydney, Montreal, Zurich, and London. An unequivocal, worrying upward trend in cooling demand is observed in meteorological data from 1990 to 2021, using cooling degree hours (CDH) as a city-scale metric. The surge in cooling energy demand can be largely attributed to global warming, urban heat islands, and extreme heat events. Further, our quantification of the impact of the base temperature, in relation to the historical CDH, reveals that a 20% energy saving could be achieved instantly within a rather broad range of temperature and humidity by increasing the setpoint temperature by one degree, while characteristic sensational and physiological levels can be maintained at 'acceptable' and 'physiological thermal neutrality' respectively. However, the potential of reducing cooling demand can be nonlinearly and significantly lowered due to the presence of compound high relative humidity and high air temperature. To reduce cooling energy demand rapidly in a warming climate, we highlight the necessity of promoting hard and soft behavioral adaptation along with regulatory intervention for the operation of space cooling systems.
Haiwei Li, Yongling Zhao, Ronita Bardhan, Pak-Wai Chan, Dominique Derome, Zhiwen Luo, Diana Urge-Vorsatz, Jan Carmeliet
2023-01-27T07:04:31Z
http://arxiv.org/abs/2301.11565v2
Three decades' trends and extremes of building cooling demand in Hong Kong, Sydney, Montreal, Zurich and London ###### Abstract This brief communication interprets three decades' evolution of building cooling demand of urban and rural areas through the lens of five representative cities, i.e., Hong Kong, Sydney, Montreal, Zurich, and London. The upward trend and extremes in building cooling demand, estimated from cooling degree hours (CDH) using meteorological data from 1990 to 2021, are largely explained by global warming, urban heat islands, and extreme heat events. The quantification of the impact of base temperatures further reveals that a 20% energy saving could be achieved by increasing the setpoint temperature by one degree, which could potentially be achieved through regulatory intervention in building system operation. ## Main The building sector is responsible for 30% of global energy consumption and 27% of total energy emissions[1]. At present, 56% of the world's population lives in urban areas, which is expected to continually increase to 68% by 2050[2], escalating the energy burden in cities. The energy demand for space heating or cooling is determined by building design and operation, building physical properties and occupancy activities, socio-economic development, and most importantly, climatic conditions[3, 4, 5]. Climate change encompasses global warming. The earth's average surface temperature has risen by approximately 1 \({}^{\circ}\)C since the late 19\({}^{\mathrm{th}}\) century[6]. Measures to mitigate and adapt to the warming climate are urgently needed[7]. Furthermore, the frequency, duration, and severity of extreme temperature events such as heatwaves and record-breaking high temperatures are aggravated[8]. Moreover, urban areas experience higher air or surface temperatures than their surrounding sub-urban or rural areas, which is called the urban heat island (UHI)[9, 10]. This phenomenon appears frequently alongside urbanization, with enlarged urban heat storage capacity, long-wave radiation trapping, reduced evapotranspiration and convection efficiency, and increased anthropogenic heat. UHI exacerbates the warming effects in cities, leading to high heat-related illness and mortality rates, increasing energy demand, air pollution associated with high temperatures and reduced air circulation, and a potential energy crisis among a large portion of the global population. Understanding the long-term evolution of cooling energy demands and the causal factors of the trends and extremes is crucial for predicting future energy emissions, formulating mitigation strategies, and facilitating the implementation of the Paris Agreement[11] and Sustainable Development Goals (SDGs)[12]. We analyze three decades of building cooling demand and its principal drivers, namely the warming background climate driven by global warming, the urban heat island (UHI), heatwaves, and city population growth, together with some potential mitigation measures, for five cities residing in different climates: Hong Kong, Sydney, Zurich, Montreal, and London. The cooling demand for urban and rural (or suburban) areas is quantified by yearly cooling degree hours (CDH) calculated from climatological standard 30-year (from 1990 to 2021) observation data[13]. The calculation of CDH is summarized in the Methods section. We remark that compared to the commonly used average temperature, CDH, a measure accumulated over time, is a more robust metric for assessing heat-related cooling demand. Fig. 
1 shows the yearly CDH in the five cities from 1990 to 2021, demonstrating the yearly cooling energy demand during the last three decades. The cooling season refers to the period that requires cooling, inducing an air-conditioning load. The main climate driver of energy demand is the background ambient temperature during the cooling season[14, 15]. Other climate factors, such as humidity levels and wind speeds, accompanied by the air temperature, reflect human heat sensation and influence the climate nexus of cooling demand[16]. The background climatology determines the distinct difference in the cooling demand magnitudes in the five cities. Hong Kong, showing CDH values in the range of 25,000 to 33,000 \({}^{\circ}\)C\(\cdot\)h, is in the humid summer subtropical zone (Köppen climate classification: Cwa), and Sydney, showing a CDH in the range of 2,000 to 10,000 \({}^{\circ}\)C\(\cdot\)h, is in the all-year humid subtropical zone (Cfa). Both Hong Kong and Sydney have a cooling season from spring to autumn, while the cooling season for Montreal, Zurich, and London is mostly summer, from June to August. Montreal (1,000-6,000 \({}^{\circ}\)C\(\cdot\)h) is in the humid continental zone (Dfb); Zurich (1,000-7,000 \({}^{\circ}\)C\(\cdot\)h) and London (0-3,000 \({}^{\circ}\)C\(\cdot\)h) belong to the marine west coast (oceanic) climate zone (Cfb). The results show that the warmer subtropical cities, Hong Kong and Sydney, have distinctly higher cooling demand and longer cooling seasons than the continental and oceanic climate cities, Montreal, Zurich, and London. Also, the influence of latitude is clearly observed for the last three cities, showing lower cooling demand for the higher-latitude city London compared to Montreal and Zurich. Statistically, the average CDH increase per decade is 14,619 \({}^{\circ}\)C\(\cdot\)h in urban Hong Kong, 16,997 \({}^{\circ}\)C\(\cdot\)h in rural Hong Kong; 15,907 \({}^{\circ}\)C\(\cdot\)h in urban Sydney, 13,491 \({}^{\circ}\)C\(\cdot\)h in rural Sydney; 5,783 \({}^{\circ}\)C\(\cdot\)h in urban Montreal, 3,285 \({}^{\circ}\)C\(\cdot\)h in rural Montreal; 7,184 \({}^{\circ}\)C\(\cdot\)h in urban Zurich, 4,365 \({}^{\circ}\)C\(\cdot\)h in rural Zurich; and 2,604 \({}^{\circ}\)C\(\cdot\)h in urban London, 793 \({}^{\circ}\)C\(\cdot\)h in rural London. Figure 1: (a) Yearly cooling degree hours (CDH) of urban and rural areas (or suburban) in Hong Kong, Sydney, Montreal, Zurich, and London from 1990 to 2021. The base temperature for CDH calculation is 22 \({}^{\circ}\)C. The graphs in the center of each circular plot display the land types about 10 km around the weather stations. Note the different maximum scale for Hong Kong (40,000) compared to the other cities' maximum of 8,000. (b) Population growth (bars) and annual mean urban air temperature rise (lines) from 1990 to 2021. We remark that the sub-figures have different scales of the y-axis for different cities. All five cities experienced pronounced growing trends in cooling energy demand in the last three decades. The increasing trend in CDH values has a robust association with temperature change, including increasing time-averaged temperature, increasing peak temperatures, and heat events during the cooling season. The increase in cooling demand occurs mainly during the cooling season, particularly in the summer. The percentage of CDH increase in the summer period is 47.6% (Hong Kong), 56.5% (Sydney), 72.5% (Montreal), 86.4% (Zurich), and 78.7% (London) of the total CDH increase in all four seasons. 
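The decadal increases and seasonal shares quoted above follow from straightforward aggregation of the hourly series. A schematic pandas sketch is given below; the file name and column layout are assumptions, and for the southern-hemisphere city of Sydney the summer months would be December-February rather than June-August.

```python
import pandas as pd

# Assumed layout: hourly air temperatures with a datetime index, one column per station.
temps = pd.read_csv("hourly_temperature.csv", index_col=0, parse_dates=True)

T_BASE = 22.0
cdh_hourly = (temps - T_BASE).clip(lower=0.0)                    # hourly (t_oa - t_b)+ contributions

yearly = cdh_hourly.groupby(cdh_hourly.index.year).sum()         # yearly CDH per station
decadal = yearly.groupby((yearly.index // 10) * 10).mean()       # decadal mean CDH
decadal_increase = decadal.diff()                                # change between successive decades

# Summer (JJA) share of the CDH increase between the first and the last decade
summer = cdh_hourly[cdh_hourly.index.month.isin([6, 7, 8])]
summer_yearly = summer.groupby(summer.index.year).sum()
summer_decadal = summer_yearly.groupby((summer_yearly.index // 10) * 10).mean()
summer_share = (summer_decadal.iloc[-1] - summer_decadal.iloc[0]) / (decadal.iloc[-1] - decadal.iloc[0])
```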
Cities that have hot summer climates, Hong Kong and Sydney are more sensitive to temperature change, presenting more evident growth in cooling demand. In other words, the increased cooling load driven by climate change is placed on top of the high-demand cooling seasons and is more evident in the high-demand cities. ### Urban vs. rural areas The level of cooling demand in urban areas is higher than that in rural areas in terms of both the magnitudes and growth rate. The reduction of evapotranspiration and convection efficiency and the increase of anthropogenic heat in urban areas are considered the main contributors to urban warming and UHI[17]. As presented in Fig.1, the land types of urban areas of the cities have distinct differences compared to the land types of rural areas. Urban areas have more impervious heat-storing built-ups and less vegetation or water bodies than rural areas, meaning low water availability and evapotranspiration in urban environments, leading to high urban-rural temperature differences and higher cooling demand in urban areas. The convection efficiency, which is associated with aerodynamic resistance changes, represents the heat dissipation or heat transfer from building surfaces to the atmosphere[9]. High aerodynamic resistance of urban areas results in low efficient convection in comparison with rural areas, which reduces the convection efficiency and increases the UHI intensity and cooling demand[18]. Meanwhile, the growing population density (Fig.1b) and some other socio-economic factors, such as increased annual income and energy prices, also contribute to anthropogenic heat generation and increasing trends of cooling demand[19]. Some literature has incorporated population weighting and other weightings to analyze the socio-economic sensitivity[20, 21]. Although it is not available to clearly distinguish the urban and rural socio-economic factors in all five cities, the general growth of population and GDP, and the relative higher growth in urban areas indicate potential higher cooling demand in urban areas than the actual CDH values of Fig. 1. Re-introducing green spaces and water surfaces into the urban area could increase both evapotranspiration and convection efficiency, reduce the energy demand for cooling to the electricity grid and hence reduce anthropogenic heat emission. The base temperature is a fundamental consideration in CDH analysis, which is chosen based on the relationship between local climate, occupancy activities, building properties, and the cooling applications in a building [22]. Prior research uses base temperatures ranging from 18 \({}^{\circ}\)C to 28 \({}^{\circ}\)C for CDH calculations [23, 24, 25]. Upgrading the building cooling system, using higher cooling setpoints, and improving the thermal insulation of the building could increase the base temperature of the building. Fig. 2 (a) reports that, as the base temperature is increased, the value of CDH decreases significantly. The cooling demand is reduced the most, by about 20%, as the base temperature is increased by the first degree, from 22 \({}^{\circ}\)C to 23 \({}^{\circ}\)C. The results imply that a relatively small improvement by carrying out building envelope or energy system retrofitting could achieve huge energy saving potential. With the implementation of building passive designs and retrofitting, higher setpoint temperatures of 27-28 \({}^{\circ}\)C could be adapted in future scenarios [26]. 
Figure 2: (a) Urban CDH of Hong Kong, Sydney, Montreal, Zurich, and London for different base temperatures from 22\({}^{\circ}\)C to 27\({}^{\circ}\)C. The increase in base temperature can be interpreted as the potential impact of building retrofitting (e.g., envelope insulation). (b) The number of days with maximum air temperature \(\geq\)25, 30, 35\({}^{\circ}\)C in the urban areas of Hong Kong, Sydney, Montreal, Zurich, and London, based on the calculation of climate norms stated in the WMO 2017 Guidelines [13]. Renovating the existing building stocks in terms of improving energy performance is crucial worldwide, as pointed out by the Commercial Building Disclosure (CBD) program in Australia[27] and the Annex projects launched by the International Energy Agency (IEA)[28]. Effective regulatory intervention on building energy retrofitting and operation codes should be a preferred instrument for policymakers aiming to reduce building cooling energy demand and related emissions in the building sector[29]. ### Spikes of CDH and high-temperature events High-temperature events can be distinguished by the number of days with maximum temperature exceeding 25 \({}^{\circ}\)C, 30 \({}^{\circ}\)C, and 35 \({}^{\circ}\)C, shown in Fig. 2 (b), as proposed in the WMO Guidelines. These events can also be seen in the yearly CDH, where spikes in CDH can be interpreted as indicators of the occurrence of extreme heat events, for instance, heatwaves. The cooling load is more than doubled in the years with exceptionally high frequency and high duration of summer heat events. The increasing trend of CDH is relatively smooth in Hong Kong, Sydney, and Montreal, while the number of spikes in CDH is more frequently seen in western Europe, e.g., London and Zurich. Scientists have identified Europe as a 'heatwave spot' since its increase in heat extreme occurrences has been much faster than for other regions in the world over the past decades[30], which is due not only to natural climate drivers, such as atmospheric circulation and jet stream states, oceanic circulation and change of sea-surface temperatures, but also anthropogenic drivers, such as the increasing greenhouse gas emissions. Zurich and London suffered from the record-breaking heatwave that prevailed in Europe in the year 2003. That severe heatwave was considered to be the warmest period of the last 500 years, which not only caused energy consumption to increase, but also burdened health and emergency services in Europe, leading to tens of thousands of deaths[31, 32]. Recently, an exceptional heatwave event affected the U.K. in July 2022, reaching 40 \({}^{\circ}\)C for the first time and causing over 2,800 excess deaths in the elderly population[33]. Urgent and effective climate-sensitive urban planning with sustainable and resilient mitigation measures is critical to tackling future energy demand spikes. ### Methods Cooling degree hour (CDH) calculation utilizes the outdoor air temperature by quantifying to what degree and for how long the outdoor air temperature is higher than a base temperature, with a resolution of one hour. The mathematical expression is given in Eq. 1, as defined by ASHRAE[34]. \[\text{CDH}=(\text{1 hour})\sum\nolimits_{\text{hours}}(t_{oa}-t_{b})^{+} \tag{1}\] where \(t_{b}\) is the base temperature and \(t_{oa}\) is the outdoor ambient temperature for every hour. The positive sign (+) above the parenthesis means that only positive values are counted. 
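Eq. 1 translates directly into a few lines of code. The sketch below uses a synthetic hourly temperature series, an assumption made purely for illustration, and also evaluates the sensitivity of CDH to the base temperature discussed earlier.

```python
import numpy as np

def cooling_degree_hours(t_oa, t_b=22.0):
    """CDH per Eq. 1: the sum of (t_oa - t_b) over all hours, counting positive values only."""
    excess = np.asarray(t_oa, dtype=float) - t_b
    return float(excess[excess > 0.0].sum())          # implicitly multiplied by 1 hour

# Synthetic hourly series for one year (8760 h), used only to exercise the function.
rng = np.random.default_rng(0)
h = np.arange(8760)
t_oa = (20.0 + 8.0 * np.sin(2.0 * np.pi * (h - 2190) / 8760)     # seasonal cycle
             + 4.0 * np.sin(2.0 * np.pi * h / 24)                # diurnal cycle
             + rng.normal(0.0, 1.5, h.size))                     # weather noise

for t_b in range(22, 28):
    print(t_b, round(cooling_degree_hours(t_oa, t_b)))
# The drop in CDH from one base temperature to the next illustrates the setpoint sensitivity;
# the ~20% saving quoted for 22 -> 23 degC refers to the measured station data, not this toy series.
```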
## Data availability The ambient temperatures and CDH dataset are accessible on the website of the Chair of Building Physics, ETH Zurich. The dataset includes 30 years of hourly data, from 1990 to 2021, in urban and rural areas of Hong Kong, Sydney, Zurich, Montreal, and London. The weather stations are Hong Kong Observatory Headquarters (Hong Kong urban, 22.30\({}^{\circ}\) N, 114.17\({}^{\circ}\) E), Ta Kwu Ling (Hong Kong rural, 22.53\({}^{\circ}\) N, 114.16\({}^{\circ}\) E); Bankstown (Sydney urban, -33.92\({}^{\circ}\) S, 150.98\({}^{\circ}\) E), Observatory Hill (Sydney rural, -33.86\({}^{\circ}\) S, 151.20\({}^{\circ}\) E); Trudeau (Montreal urban, 45.47\({}^{\circ}\) N, 73.74\({}^{\circ}\) W), Mirabel (Montreal rural, 45.68\({}^{\circ}\) N, 74.04\({}^{\circ}\) W); Zurich Kaserne (Zurich urban, 47.38\({}^{\circ}\) N, 8.53\({}^{\circ}\) E), Kloten (Zurich rural, 47.29\({}^{\circ}\) N, 8.32\({}^{\circ}\) E); St James' Park (London urban, 51.50\({}^{\circ}\) N, 0.23\({}^{\circ}\) W), Kenley (London rural, 51.303\({}^{\circ}\) N, -0.09\({}^{\circ}\) W).
2310.17655
Music Recommendation Based on Audio Fingerprint
This work combined different audio features to obtain a more robust fingerprint to be used in a music recommendation process. The combination of these methods resulted in a high-dimensional vector. To reduce the number of values, PCA was applied to the set of resulting fingerprints, selecting the number of principal components that corresponded to an explained variance of $95\%$. Finally, with these PCA-fingerprints, the similarity matrix of each fingerprint with the entire data set was calculated. The process was applied to 200 songs from a personal music library; the songs were tagged with the artists' corresponding genres. The recommendations (fingerprints of songs with the closest similarity) were rated successful if the recommended songs' genre matched the target songs' genre. With this procedure, it was possible to obtain an accuracy of $89\%$ (successful recommendations out of total recommendation requests).
Diego Saldaña Ulloa
2023-10-06T03:49:13Z
http://arxiv.org/abs/2310.17655v1
# Music Recommendation Based on Audio Fingerprint ###### Abstract This work combined different audio features to obtain a more robust fingerprint to be used in a music recommendation process. The combination of these methods resulted in a high-dimensional vector. To reduce the number of values, PCA was applied to the set of resulting fingerprints, selecting the number of principal components that corresponded to an explained variance of \(95\%\). Finally, with these PCA-fingerprints, the similarity matrix of each fingerprint with the entire data set was calculated. The process was applied to 200 songs from a personal music library; the songs were tagged with the artists' corresponding genres. The recommendations (fingerprints of songs with the closest similarity) were rated successful if the recommended songs' genre matched the target songs' genre. With this procedure, it was possible to obtain an accuracy of \(89\%\) (successful recommendations out of total recommendation requests). audio fingerprint; music recommendation; spectral audio features; MFCC; chroma features; tempo features ## Introduction The use of digital systems that operate with music tracks has increased considerably in recent years. Different tools exist for analyzing audio files, and applications focus on correctly identifying a musical record to provide additional services [1]. Audio identification has become an essential part of different tasks on a day-to-day basis. This process is carried out employing audio fingerprints. In this scope, a fingerprint is defined as a unique representation of an audio track formed by different descriptive audio features. Audio fingerprinting techniques are sometimes confused with audio watermarking techniques. Audio watermarking aims to embed information in an audio track, for example, general information about the track for its correct identification. In this way, audio fingerprints can have many immediate applications, such as identification of copyrighted tracks, music recommendations, or voice recognition, to name a few [2]. An audio identification process through watermarking would involve directly extracting the embedded message. However, audio watermarking could introduce noise since the audio track information must be altered [1]. An audio identification system using fingerprints must have specific characteristics that make the system robust and capable of performing correct matches [1], among which we find: the number of accurate identifications, the security of the system against noisy signals, the granularity related to fragments audio files as short as possible, the independence of the audio formats, the scalability over the number of fingerprints in the system and the complexity related to the computational cost. The audio identification system generally comprises an extraction block and a block that stores the information for later identification. A system like this can develop numerous applications; one of the most striking at present is the one related to music recommendation systems [3]. The central idea relies on obtaining the fundamental characteristics of a musical piece (fingerprint) and recommending similar pieces. In this way, the musical recommendation problem depends to a large extent on the construction of the fingerprint, that is, the set of techniques used to define the fingerprint of a song. 
The process generally involves extracting time-frequency features based on modifications to the Fourier transform [4], and there are different methods to achieve this objective [5][6][7]. In this work, we combine different methods that extract information from an audio track based on its spectral, chromatic, and tempo features to extract the most significant amount of structural information and provide robustness to the process. Subsequently, we apply this procedure to a set of predefined songs to fully characterize them; the intention is to develop a recommendation system based on an individual's musical tastes. The structure of this work is as follows: Section 1 deals with previous related works, section 2 introduces the theory for the extraction of spectral, chromatic, and tempo features given an audio signal; Section 3 presents the design proposal of the feature extraction system, and the recommendation process, Section 4 focuses on the experimental results taking into account the predefined data set; finally, the conclusions are detailed. ## 1 Related Works The first version of audio fingerprinting was used in the 90s to detect advertisements in broadcast streams [8]. This first version used the signal's energy information to match the signal waves' envelope. The objective focused on reusing transmissions and the automatic placement of other advertisements. Currently, the processes to obtain the fingerprint of an audio signal focus on the extraction of structural features based on spectral analysis. For example, it is common to use variations of the Fourier transform, such as the Short Time Fourier Transform STFT [4][9][10], which aims to extract the transformation coefficients associated with the frequencies of the signal. Unlike the standard Fourier transform, STFT operates on time windows, resulting in the final coefficients being associated with interval frequencies of the signal and not the complete signal. Another spectral technique used for fingerprinting is the Mel Frequency Cepstral Coefficients (MFCC) [11][12][13]. The MFCC is obtained by mapping the STFT coefficients to the Mel scale, defined as a perceptual scale based on pitches and frequencies perceptible by the human ear. There are also features related to the pitch classes of Western music called chroma features. These variables capture the melodic characteristics of a piece of music. Different works use this type of variable [7][14], and, like the MFCC, they work by calculating the STFT coefficients, combining them with binning techniques. On the other hand, it is common to use features related to the tempo or beats of the audio signal. This feature type has been used as a complementary process in audio analysis to obtain fingerprints related to musical aspects such as tempo or rhythm [15][16]. Generally, this approach works using the spectral parameters obtained by STFT to identify the onset of the signal corresponding to the beats' fundamental characteristics. The types of features that we have mentioned correspond to spectral, chromatic, tempo, and Mel Scale-based. Some works have focused on combining some [10][11][13][15] to obtain a more robust fingerprint capable of capturing more information about the audio signal and achieving correct identification. Despite the above, we did not find a work that is precisely dedicated to combining all the previous methods to obtain a fingerprint. 
Regarding musical recommendation systems, some works focus exclusively on the similarities of songs based on spectral features [17][18]. Other works are in charge of combining this type of feature with more contextual information about the song, for example, the name, the duration, or the genre [19][20]. The purpose is to create vector arrays that fully identify a piece of music, making it as distinctive as possible. With these arrays, songs can be compared to recommend similar items. Currently, music recommendation systems are used by large streaming music companies and use a combination of methods for constructing the features as well as different algorithms for comparing each element [3]. ## 2 Audio Features Next, we will present the theory related to different types of audio features that will be used to develop a robust fingerprint. These features represent the audio signal's spectral, chromatic, Mel scale-based, and tempo characteristics. ### Short Time Fourier Transform The Fourier transform is useful for analyzing different types of signals. This transformation aims to decompose the signal into frequency components \(w\in\mathbb{R}\) associated with coefficients \(d_{w}\in\mathbb{R}\). The coefficients indicate the degree to which a sinusoidal component with frequency \(w\) adjusts the signal. In this way, the Fourier transform represents the signal in the frequency domain compared to its original form, which represents the time domain. For example, the Fourier transform of a musical extract indicates which notes (frequencies) have been played but not when they are played [5]. In order to make use of the temporal information associated with a signal, the so-called short-time Fourier transform (STFT) is used. This transform type operates on small portions of the signal (time windows) compared to the original transform. Let \(x\in\mathbb{R}\) be a discrete signal, \(r:[0:N-1]\rightarrow\mathbb{R}\) a time window function of length \(N\in\mathbb{N}\), and a parameter \(H\in\mathbb{N}\) called hop size that indicates the intervals in which the function \(r\) moves along the signal; The STFT of the signal is given by \[\xi(m,k)=\sum_{n=0}^{N-1}x(n+mH)r(n)exp(-2\pi ikn/N), \tag{1}\] where \(m\in\mathbb{Z}\) and \(k\in[0:N/2]\). \(\xi(m,k)\) is the \(k^{\text{th}}\) fourier coefficient for the \(m^{\text{th}}\) time frame. Since the results of STFT are complex numbers, we can transform those values into a real two-dimensional representation by squaring their magnitude \[\Theta(m,k)=|\xi(m,k)|^{2}. \tag{2}\] This is called the spectrogram and is helpful for visualizing the transformation. The horizontal axis represents the time, and the vertical axis represents frequency. ### Mel Frequency Cepstral Coefficients This type of coefficient is based on the structural analysis of timbre. Initially, the coefficients were used for voice recognition tasks, and obtaining them involves mapping the frequencies to the Mel scale [5]. The Mel scale is a perceptual scale that tries to mimic the hearing tones of the human ear. It divides the frequencies into equidistant intervals on the final scale. The following function gives the frequency transformation to the Mel scale \[F_{\rm Mel}(x)=1127\,\ln(1+\frac{x}{700}). \tag{3}\] To compute the MFCC, a series of steps are used [13][21][22] that can be summarized as 1) calculate the STFT coefficients (as in the previous section); 2) make a mapping between the calculated coefficients and the Mel scale. 
A series of triangular filter banks along the Mel scale usually do this transformation. The objective is a binarization along the scale, depending on the number of filter banks used. The Mel coefficients have the following form \[S(n)=\sum_{k=0}^{N-1}|\xi(k)|^{2}J_{m}(k), \tag{4}\] where \(0\leq m\leq M-1\), \(m\) is the number of triangular filters, and \(J\) is the weight applied to the \(k^{\rm th}\) energy spectrum bin that contributes to the \(m^{\rm th}\) output band. This result is the energy spectrum of the Mel frequencies. 3) Finally, the Discrete Cosine Transform (DCT) is calculated on the previous coefficients to decorrelate them and obtain only a determined number of coefficients \[\Lambda_{m}(n)=\sum_{m=0}^{M-1}S(n)\,\cos(\frac{\pi n(m-0.5)}{M}), \tag{5}\] where \(n\in\mathbb{N}\) is the number of MFCC desired, in this work 13 coefficients were used because is a typical value according to [13][21]. ### Chroma Coefficients This type of feature considers properties related to the harmony and melody of a signal. In order to obtain these features, tempered scales are considered, mainly the one related to Western music that divides the scale into 12 different tones \(\{C,C\#,D,D\#,E,F,F\#,G,G\#,A,A\#,B\}\). To compute the chroma coefficients, we must calculate the STFT coefficients and obtain a log frequency spectrogram \[Y(n,p)=\sum_{k\in P(p)}|\xi(n,k)|^{2}, \tag{6}\] where \(p\in[0:127]\) is the MIDI note number and represents the note's pitch, and \(P(p)\) is a function that maps \(k\) to the MIDI numbers. In short, the MIDI note number encodes the musical pitches along different octaves. Once this spectrogram is obtained, the chroma coefficients or chromagram are calculated by adding all the pitch coefficients that belong to the same chroma \[\Psi(n,c)=\sum_{p\in[0:127]:pmod12=c}Y(n,p), \tag{7}\] where \(c\in[0:11]\). ### Tempo and Beat Features The beat corresponds to the pulse in an audio signal and is an additional characteristic that can be appreciated when listening to a song. The tempo is the rate at which the beats occur, corresponding to a fundamental aspect of music. Like chromatic coefficients, these features are related to the melodic aspects of an audio signal. In order to identify the beats of a piece of music, one of the approaches used is beat tracking by dynamic programming. The goal is to generate a sequence of beat times that correspond to perceived onsets in the audio signal and that have a regular rhythm [23]. In order to do this, an objective function is used that combines a part related to the locations of the beats and another function that penalizes the deviations of irregular intervals \[\Xi(t_{i})=\sum_{i=1}^{N}O(t_{i})+\alpha\sum_{i=2}^{N}F(t_{i}-t_{i-1},\tau_{p}) \tag{8}\] where \(t_{i}\) is the sequence of \(N\) beats, \(O(t)\) is an onset strength envelope of the audio signal, \(\alpha\) is a parameter to measure the importance of the penalty function \(F\), and \(\tau_{p}\) is an ideal beat spacing. To calculate the best value of \(\Xi\), we take the recursive relation \[\Xi^{*}(t)=O(t)+\max_{\tau=0,...,t}\{\alpha F(t-\tau,\tau_{p})+\Xi^{*}(\tau)\} \tag{9}\] In other words, the best value \(\Xi\) for time t is the sum of the local onset strength plus the best value of \(\Xi\) at time \(\tau\). Since this function always operates with the preceding \(\tau\) value (to get the instants of time \(t\) that mark the beats), the iteration of this procedure is from the end towards the start of the signal. 
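The four feature families described in this section (STFT power spectrogram, MFCCs, chroma, and tempo/beat) are all available in standard audio libraries. A possible extraction sketch using librosa is shown below; the sampling rate, window size, number of MFCCs, and 60-second excerpt mirror the settings reported in the experiments, but the library's internal defaults do not necessarily match the exact formulations given above.

```python
import numpy as np
import librosa

def extract_feature_matrices(path, sr=22050, n_fft=2048, n_mfcc=13,
                             offset=60.0, duration=60.0):
    """Return the STFT power spectrogram, MFCCs, chroma features and tempo of one track."""
    y, sr = librosa.load(path, sr=sr, mono=True, offset=offset, duration=duration)

    stft_power = np.abs(librosa.stft(y, n_fft=n_fft, window="hann")) ** 2   # Eq. 2
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc, n_fft=n_fft)     # 13 coefficients
    chroma = librosa.feature.chroma_stft(y=y, sr=sr, n_fft=n_fft)           # 12 pitch classes
    tempo, _beats = librosa.beat.beat_track(y=y, sr=sr)                     # dynamic-programming beat tracker

    return stft_power, mfcc, chroma, float(np.atleast_1d(tempo)[0])
```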
## 3 Audio Features and Music Recommendation In practical terms, the methods described in the previous section produce matrix coefficients that describe essential aspects of a musical composition. For example, the coefficients resulting from STFT, also called a spectrogram, give an overview of an audio signal since we obtain information about the frequencies that compose it. MFCCs provide information about timbre related to speech. Through the chroma coefficients, we can obtain properties of the harmony and melody related to the Western musical scale (chromagram). Finally, we can identify the exact moments corresponding to a song's beats (pulses) related to its tempo. Although most of the mentioned methods start initially from the STFT results, each differs in the sequence of operations to obtain more information regarding an audio signal or song. This work focuses on obtaining the previously mentioned coefficients of a set of songs and combining them to obtain a more robust fingerprint that considers particular and essential aspects of a song. To carry out this objective, we define the vector \(\Omega\) as the vector that includes the coefficients of each of the methods described above, such that, \[\Omega=\bar{\Theta}\oplus\bar{\Lambda}\oplus\bar{\Psi}\oplus\bar{\Xi}, \tag{10}\] where \(\bar{\Theta}\), \(\bar{\Lambda}\), \(\bar{\Psi}\) denotes the matrix of the mean values of the rows, \(\bar{\Xi}\) is a vector of size of \(\Omega\) with tempo values, and \(\oplus\) indicates the concatenation of these matrices. With the mean values of the rows of each matrix, we reduce the dimension encapsulating the variations of each frequency along all the time. \begin{table} \begin{tabular}{c c} \hline Genre & Number of songs \\ \hline Metal Variants & 28 \\ Rock, Alternative, Indie, New Wave & 27 \\ Ballad, Pop & 15 \\ House, Electronic, Dance, Dance, Trace & 14 \\ Synthpop, Electroprop, Technopop & 12 \\ Pop, Ballad, Rock & 12 \\ Pop, Rock, Folk, Indie & 11 \\ Jazz, Blues, Ballad & 10 \\ Pop, Punk, Rock & 10 \\ Electropop, Electronic, Hip hop & 9 \\ Pop, Dance, Electropop & 9 \\ Nortech, Electronic & 8 \\ Rock, Pop, Urbano & 7 \\ Classical, Soundtrack & 6 \\ Pop, Indie, Rock & 6 \\ Rock, Metal & 5 \\ Pop, Rap, Dance & 4 \\ Rock, Blues, Jazz & 4 \\ Hip Hop, Rap, Rock & 3 \\ \hline \end{tabular} \end{table} Table 1: Set of Songs Defining this vector or fingerprint focuses on condensing as much information as possible from an audio signal. However, due to the concatenation of the matrices, the resulting vector can have high dimensionality. Since the objective is to obtain musical recommendations, reducing the dimensionality of the vector is essential to reduce the complexity of the process. One widely used technique to reduce a data set's dimensionality is principal component analysis (PCA) [24]. This method creates linear combinations of the original variables called principal components. In this way, the principal components retain part of the original information (variance) of the original data. This reduction method aims to choose the number of principal components that retain a considerable percentage of the total variance of the original data [24]. 
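Eq. 10 then amounts to concatenating the row-wise (over time) means of the three feature matrices and appending the tempo. A sketch building on the extraction function above is shown below; appending the tempo as a single scalar is a simplifying assumption, and the exact length of the resulting vector depends on the framing parameters, so it need not match the 1062 coefficients reported later exactly.

```python
import numpy as np

def fingerprint(stft_power, mfcc, chroma, tempo):
    """Build Omega of Eq. 10: concatenated row means of each feature matrix plus the tempo."""
    return np.concatenate([
        stft_power.mean(axis=1),   # mean over time frames for every frequency bin
        mfcc.mean(axis=1),
        chroma.mean(axis=1),
        np.array([tempo]),         # tempo appended as a single scalar (a simplification)
    ])
```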
Let \(\Omega=\{\Omega^{\{1\}},\Omega^{\{2\}},...,\Omega^{\{n\}}\}\) be the matrix formed by the union of different musical fingerprints and \(\Omega^{\prime}\) the resulting matrix of the principal components of \(\Omega\), where \(\Omega^{\prime}\)=\(\{\Omega^{\prime}_{1},\Omega^{\prime}_{2},...,\Omega^{\prime}_{n}\}\) correspond to the principal components of each of the musical fingerprints; we can compute the distance between fingerprints \(i,j\) by some distance metric, like euclidean distance \[S_{e}=||\Omega^{\prime}_{i}-\Omega^{\prime}_{j}|| \tag{11}\] For this case, the \(S_{e}\) matrix corresponds to the distance matrix between each fingerprint. This matrix expresses the similarities between music tracks and is widely used in music recommendation systems [3]. A music recommendation system aims to find the top K of the closest fingerprints (smallest distance) corresponding to the musical recommendations given a set of target songs. The target songs correspond to songs presumably added to a list of favorite songs. This type of recommendation is called content-based and is used to give the most similar items the user has liked [25]. Figure 1: The number of components needed to explain the 95% of the variance ## 4 Experimental Results For the experimental part, 200 songs from a personal music library composed of different genres and artists were used. Each song was tagged with the genres corresponding to the artist, obtained from Wikipedia. Table 1 shows the number of songs corresponding to a set of genres. Additionally, the bitrate of each audio file was kept constant at 128 kbps (CD quality). In order to compute the coefficient described in the previous sections, 60 seconds of audio were used starting at second 60, i.e., from the second 60 to the 120. Each audio file was processed in a mono signal. For the methods that use STFT, the sampling rate was set to 22050 Hz (samples per second), a window size of 2048, and a cosine window (hann). The first part of the process focused on fingerprinting the sound files, as described in the previous section. In our case, each resulting vector (fingerprint) comprised 1062 coefficients. PCA was applied to each vector to reduce its dimensionality, and the number of components selected corresponded to a retained variance of 95% Figure 1. This reduced the size of the vector from 1062 to 29, that is, a reduction of 97% on the number of dimensions. Figure 2 shows a graph of the two principal components of each vector song. It can be seen that the genres of the songs tend to cluster: in the center, we find the genres of rock, pop, alternative, and indie; in the lower right part, we find the metal variants; in the upper right part, the genres related to electronic music and in the central left part classical, ballads, jazz and indie. This shows us that chromatic, tempo, and spectral coefficients are useful in describing a music track. The similarity between each pair of songs was computed using the 29 principal components for each song and the Euclidean distance, as described in the previous section. Then, the top-K of the most similar elements for each track was obtained using the similarity matrix. For this case, the top 3 most similar songs were retrieved, and a successful evaluation was set for any recommendation that corresponded to the genre of the target song. This type of evaluation system is commonly used within recommendation systems [3]. 
With this consideration, it was possible to obtain an accuracy of 89% (successful recommendations out of total recommendation requests), reaffirming the ability of the combination of chroma, tempo, MFCC, and STFT coefficients to obtain a robust fingerprint and to be able to categorize a song. ## 5 Conclusions This work combined different audio features to obtain a robust musical fingerprint of a set of songs from a personal music library of 200 songs. Each song was tagged according to the musical genres of the artist. This fingerprint was later used in the process of recommending similar songs. The features used to define the fingerprint correspond to spectral, chromatic, tempo, and Mel scale-based features. Since each of the methods described above generally results in a matrix of values, the average values of the rows were obtained to capture the variation of each frequency throughout the signal. This process enabled obtaining a simplified vector (fingerprint) that combines the audio information described above. Because the size of the resulting vector contained 1062 elements, PCA was applied to the set of fingerprints of each song, and the 29 main components corresponding to a retained variance of 95% were selected. With this information (PCA-fingerprints), the similarity matrix was calculated, defined as the matrix of values that reflect the similarity of each song with the rest of the data set. For each song, the top 3 most similar songs were obtained, and the recommendation was qualified as successful if the genre of any of the recommendations matched the genre of the target song. With this process, it was possible to obtain 89% accuracy in the recommendation process, demonstrating that the union of different audio features allows for a more robust fingerprint that captures relevant song information.
2305.13685
Causal Intervention for Abstractive Related Work Generation
Abstractive related work generation has attracted increasing attention in generating coherent related work that better helps readers grasp the background in the current research. However, most existing abstractive models ignore the inherent causality of related work generation, leading to low quality of generated related work and spurious correlations that affect the models' generalizability. In this study, we argue that causal intervention can address these limitations and improve the quality and coherence of the generated related works. To this end, we propose a novel Causal Intervention Module for Related Work Generation (CaM) to effectively capture causalities in the generation process and improve the quality and coherence of the generated related works. Specifically, we first model the relations among sentence order, document relation, and transitional content in related work generation using a causal graph. Then, to implement the causal intervention and mitigate the negative impact of spurious correlations, we use do-calculus to derive ordinary conditional probabilities and identify causal effects through CaM. Finally, we subtly fuse CaM with Transformer to obtain an end-to-end generation model. Extensive experiments on two real-world datasets show that causal interventions in CaM can effectively promote the model to learn causal relations and produce related work of higher quality and coherence.
Jiachang Liu, Qi Zhang, Chongyang Shi, Usman Naseem, Shoujin Wang, Ivor Tsang
2023-05-23T04:48:30Z
http://arxiv.org/abs/2305.13685v1
# Causal Intervention for Abstractive Related Work Generation # Causal Intervention for Abstractive Related Work Generation Jiachang Liu Beijing Institute of Technology Beijing, China [email protected] Qi Zhang Tongji University Shanghai, China [email protected] Chongyang Shi Beijing Institute of Technology Beijing, China [email protected] Usman Naseem University of Sydney Sydney, Australia Shoujin Wang University of Technology Sydney, Australia Ivor Tsang A*STAR Singapore ###### Abstract Abstractive related work generation has attracted increasing attention in generating coherent related work that better helps readers grasp the background in the current research. However, most existing abstractive models ignore the inherent causality of related work generation, leading to low quality of generated related work and spurious correlations that affect the models' generalizability. In this study, we argue that causal intervention can address these limitations and improve the quality and coherence of the generated related works. To this end, we propose a novel _Causal Intervention Module for Related Work Generation_ (CaM) to effectively capture causalities in the generation process and improve the quality and coherence of the generated related works. Specifically, we first model the relations among sentence order, document relation, and transitional content in related work generation using a causal graph. Then, to implement the causal intervention and mitigate the negative impact of spurious correlations, we use _do_-calculus to derive ordinary conditional probabilities and identify causal effects through CaM. Finally, we subtly fuse CaM with Transformer to obtain an end-to-end generation model. Extensive experiments on two real-world datasets show that causal interventions in CaM can effectively promote the model to learn causal relations and produce related work of higher quality and coherence. ## 1 Introduction A comprehensive related work necessarily covers abundant reference papers, which costs authors plenty of time in reading and summarization and even forces authors to pursue ever-updating advanced work (Hu and Wan, 2014). Fortunately, the task of related work generation emerged and attracted increasing attention from the community of text summarization and content analysis in recent years (Chen et al., 2021, 2022). Related work generation can be considered as a variant of the multi-document summarization task (Li and Ouyang, 2022). Distinct from multi-document summarization, related work generation entails comparison after the summarization of a set of references and needs to sort out the similarities and differences between these references (Agarwal et al., 2011). Recently, various abstractive text generation methods have been proposed to generate related work based on the abstracts of references. For example, Xing et al. (2020) used the context of citation and the abstract of each cited paper as the input to generate related work. Ge et al. (2021) encoded the citation network and used it as external knowledge to generate related work. Chen et al. (2022) proposed a target-aware related work generator that captures the relations between reference papers and the target paper through a target-centered attention mechanism. Equipped with well-designed encoding strategies, external knowledge, or novel training techniques, these studies have made promising progress in generating coherent related works. 
However, those models are inclined to explore and exploit spurious correlations such as high-frequency word/phrase patterns, writing habits, or presentation skills, building superficial shortcuts between reference papers and the related work of the target paper. Such spurious correlations may affect or even harm the quality of the generated related work, especially under the distribution shift between the testing set and training set. This is because spurious correlations different from genuine causal relations may not intrinsically contribute to the related work generation and easily cause the robustness problem and impair the models' generalizability (Arjovsky et al., 2019). Figure 1 illustrates the difference between causality and spurious correlation. The phrases "for example" and "later" are often used to bridge two sentences in related work. Their usage may be attributed to writers' presentation habits about organizing sentence orders or the reference docu ment relations corresponding to the sentences. Ideally, a related work generation model is expected to learn the reference relation and distinguish it from the writing habits. However, the generation model easily captures the superficial habitual sentence organization (spurious correlation) instead of learning complex semantic reference relations (causality), especially when the habitual patterns frequently occur in the training set. In this case, the transitional phrases generated mainly based on writing habits are likely to be unsuitable and subsequently affect the content generation of related work during testing when the training and testing sets are not distributed uniformly. Fortunately, causal intervention can effectively remove spurious correlations and focus on causal correlations by intervening in the learning process. It not only observes the impact of the sentence order and document relation on generating transitional content but probes the impact of each possible order on the whole generation of related work, thereby removing the spurious correlations [14]. Accordingly, causal intervention serving as an effective solution allows causal relations to exert a greater impact and instruct the model to produce the correct content. To address the aforementioned gaps in existing work for related work generation, we propose a **C**ausal Intervention **M**odule for Related Work Generation (CaM). CaM can effectively remove spurious correlations by performing the causal intervention, therefore producing related work with high quality. Specifically, we first model the relations among sentence order, document relation, and transitional content in related work generation and figure out the confounder that raises spurious correlations (see Figure 2). Then, we implement causal intervention via the proposed CaM that consists of three components: 1) _Primitive Intervention_ cuts off the connection that induces spurious correlations in the causal graph by leveraging _do_-calculus and _backdoor criterion_[14]; 2) _Context-aware Remapping_ smoothens the distribution of intervened embeddings and injects contextual information; and 3) _Optimal Intensity Learning_ learns the best intensity of overall intervention by controlling the output from different parts. Finally, we strategically fuse CaM with Transformer [15] to deliver an end-to-end causal related work generation model. Our main contributions are as follows: * To the best of our knowledge, this work is the first attempt to introduce causality theory into related work generation task. 
* We propose a novel **C**ausal Intervention **M**odule for Related Work Generation (CaM) which implements causal intervention to mitigate the impact of spurious correlations. CaM is subtly fused with Transformer to derive an end-to-end causal model, enabling the propagation of intervened information. * Extensive experiments on two related work generation datasets demonstrate that our model outperforms the state-of-the-art approaches and verifies the effectiveness and rationality of bringing causality theory into the related work generation task. ## 2 Problem Formulation Given a set of reference papers \(D=\{r_{1},...,r_{|D|}\}\), we assume the ground truth related work \(Y=(w_{1},w_{2},...,w_{M})\), where \(r_{i}=(w_{1}^{i},w_{2}^{i},...,w_{|r_{i}|}^{i})\) denotes a single cited paper, \(w_{j}^{i}\) is the \(j\)-th word in \(r_{i}\), and \(w_{j}\) is the \(j\)-th word in related work \(Y\). Generally, the related work generation task can be formulated as generating a related work section \(\hat{Y}=(\hat{w}_{1},\hat{w}_{2},...,\hat{w}_{\hat{M}})\) based on the reference input \(D\) and minimizing the difference between \(Y\) and \(\hat{Y}\). Considering that the abstract section is usually well-drafted to provide a concise paper summarization [10], we use the abstract section to represent each reference paper. ## 3 Methodology We first analyze the causalities in related work generation, identify the confounder that raises spurious correlations, and use the causal graph to model these relations. Then, we introduce how CaM is Figure 1: An illustration of the effect difference between causality (solid arrows) and spurious correlations (dashed arrows) in related work generation. designed to enhance the quality of related work through causal intervention. Finally, we describe how CaM, as an intervention module, is integrated with the Transformer to influence the entire generation process. The overall structure of our model is shown in Figure 3. ### Causal Modeling for Related Work Generation We believe that three aspects play significant roles in related work generation for better depicting the relations between different references, namely, sentence order \(c\), document relation \(x\), and transitional content \(y\) (illustrated in Figure 2). In many cases, sentence order is independent of the specified content and directly establishes relations with transitional content. For example, we tend to use _"firstly"_ at the beginning and _"finally"_ at the end while composing a paragraph, regardless of what exactly is in between. This relation corresponds to path \(c\to y\), and it should be preserved as an writing experience or habit. Meanwhile, there is a lot of transitional content that portrays the relations between referred papers based on the actual content, at this time, models need to analyze and use these relations. The corresponding path is \(x\to y\). Though ideally, sentence order and document relation can instruct the generation of transitional content based on practical writing needs, quite often, deep learning models are unable to trade off the influence of these two aspects correctly but prioritize sentence order. This can be attributed to the fact that sentence order information is easily accessible and learnable. In Figure 2, such relation corresponds to \(c\to x\to y\). In this case, sentence order \(c\) is the confounder that raises a spurious correlation with transitional content \(y\). 
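A small numerical illustration (with invented probabilities, used only for exposition) makes the role of the confounder concrete: when sentence order \(c\) also influences which document relation \(x\) is observed, the naive conditional \(p(y|x)\) absorbs the habitual order through \(p(c|x)\), whereas the backdoor-adjusted quantity \(\sum_{\mathbf{c}}p(y|x,\mathbf{c})p(\mathbf{c})\), derived in the next section, does not.

```python
# Toy illustration (made-up numbers): sentence order c confounds the link
# between document relation x and transitional content y.  Compare the naive
# observational estimate p(y|x) with the backdoor-adjusted sum_c p(y|x,c)p(c).
import numpy as np

p_c = np.array([0.7, 0.3])                     # p(c): two habitual sentence orders
p_x_given_c = np.array([[0.9, 0.1],            # p(x|c), rows indexed by c
                        [0.2, 0.8]])
p_y_given_xc = np.array([[[0.8, 0.2],          # p(y|x,c), indexed [x, c, y]
                          [0.6, 0.4]],
                         [[0.3, 0.7],
                          [0.1, 0.9]]])

x = 0                                          # condition on one relation type

# Observational: p(y|x) = sum_c p(y|x,c) p(c|x); the order c leaks in via p(c|x).
p_c_given_x = p_x_given_c[:, x] * p_c
p_c_given_x /= p_c_given_x.sum()
p_y_obs = (p_c_given_x[:, None] * p_y_given_xc[x]).sum(axis=0)

# Backdoor adjustment: p(y|do(x)) = sum_c p(y|x,c) p(c); the path c -> x is cut.
p_y_do = (p_c[:, None] * p_y_given_xc[x]).sum(axis=0)

print(p_y_obs)   # ~[0.783, 0.217]  biased towards the habitual order
print(p_y_do)    # [0.74, 0.26]     causal effect of x on y
```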
Although performing well on the training set, once a data distribution shift exists between the test set and training set where the test set focuses more on document relations, the transitional content instructed by sentence order can be quite unreliable. In order to mitigate the impact of the spurious correlation, we need to cut off the path \(c\to x\), enabling the model to generate transitional content based on the correct and reliable causality of both \(c\to y\) and \(x\to y\). ### Causal Intervention Module for Related Work Generation The proposed Causal Intervention Module for Related Work Generation (CaM) contains three parts. **Primitive Intervention** performs causal intervention and preliminarily removes the spurious correlations between sentence order and transitional content.**Context-aware Remapping** captures and fuses contextual information, facilitating the smoothing of the intervened embeddings. **Optimal Intensity Learning** learns the best intensity of holistic causal intervention. The overall structure is demonstrated in Figure 3. #### 3.2.1 Primitive Intervention Based on the causal graph G shown in Figure 2, we first perform the following derivation using _do_-calculus and _backdoor criterion_. \[\begin{split} p(y|do(x))&=\sum_{\mathbf{c}}p(y|do(x ),\mathbf{c})p(\mathbf{c}|do(x))\\ &=\sum_{\mathbf{c}}p(y|x,\mathbf{c})p(\mathbf{c}|do(x))\\ &=\sum_{\mathbf{c}}p(y|x,\mathbf{c})p(\mathbf{c})\end{split} \tag{1}\] In short, the do-calculus is a mathematical representation of an intervention, and the backdoor criterion can help identify the causal effect of \(x\) on \(y\)(Pearl, 2009). As a result, by taking into consideration the effect of each possible value of sentence order \(c\) on transitional content \(y\), \(c\) stops affecting document relation \(x\) when using \(x\) to estimate \(y\), which means path \(c\to x\) is cut off (see the arrow-pointed graph in Figure 2). Next, we will explain how to estimate separately \(p(y|x,\mathbf{c})\) and \(p(\mathbf{c})\) using deep learning models and finally obtain \(p(y|do(x))\). Let \(E^{ori}=(e_{1}^{ori},e_{2}^{ori},...,e_{M}^{ori})\) denote the input embeddings corresponding to \(\hat{M}\)-sized related work and \(E^{itv}=(e_{1}^{itv},e_{2}^{itv},...,e_{M}^{itv})\) denote the output embeddings of Primitive Intervention. We first integrate the sentence order information into the input embeddings: \[e_{i}^{\text{{o}}dr(j)}=\text{Linear}(e_{i}^{ori}\oplus o_{j}) \tag{2}\] \(O=\{o_{j}\}_{j=1}^{s}\) denotes the order information for each sentence, and \(s\) is the total number of sentences in the generated related work and can be considered as a hyper-parameter. \(e_{i}^{\text{{o}}dr(j)}\) denotes the Figure 2: Causal graph \(G\) for related work generation. By applying _do_-calculus, path \(c\to x\) is cut off and the impact of spurious correlation \(c\to x\to y\) is mitigated. order-enhanced embedding for the \(i\)-th word which corresponds to the \(j\)-th sentence in related work. We take \(o_{j}=(\lg\left(j+1\right),\cdots,\lg\left(j+1\right))\) with the same dimension as \(e^{ori}\). The linear layer (i.e., \(\mathrm{Linear}\)) further projects the concatenated embedding to \(e^{odr}\) with the same dimension as \(e^{ori}\). Accordingly, we have the estimation of \(p(y|x,\mathbf{c}):=e^{odr}\). 
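As an illustrative sketch, the order-enhanced embedding of Eq. (2) can be written as follows; the dimensions and the per-position loop are placeholders chosen for readability rather than the exact implementation.

```python
import math
import torch
import torch.nn as nn

class OrderEnhancedEmbedding(nn.Module):
    """Sketch of Eq. (2): concatenate an order vector o_j with e_i^ori and
    project back to the embedding size.  Dimensions are illustrative."""
    def __init__(self, d_model=768, num_sentences=6):
        super().__init__()
        self.num_sentences = num_sentences
        self.proj = nn.Linear(2 * d_model, d_model)

    def forward(self, e_ori):                     # e_ori: (batch, M, d_model)
        outs = []
        for j in range(1, self.num_sentences + 1):
            # o_j is a constant vector filled with lg(j + 1), same size as e_ori.
            o_j = torch.full_like(e_ori, math.log10(j + 1))
            outs.append(self.proj(torch.cat([e_ori, o_j], dim=-1)))
        # (batch, M, s, d_model): one order-enhanced embedding per position j
        return torch.stack(outs, dim=2)
```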
Then, we use a feed-forward network and the output subsequence \(E_{sub}=(e_{1}^{itv},...,e_{i-1}^{itv})\) to predict the sentence position probability of the current decoding word: \[h_{i}=\mathrm{Softmax}(\mathrm{FFN}(\mathrm{ReLU}(\sum^{i-1}E_{sub}))) \tag{3}\] each \(h_{i}^{j}\in h_{i}\) denotes the probability. Thus, we estimate the sentence position probability of each decoding word \(p(\mathbf{c}):=h\). After obtaining the estimation of \(p(y|x,\mathbf{c})\) and \(p(\mathbf{c})\), the final embedding with primitive causal intervention can be achieved: \[e_{i}^{itv}=\sum_{j=1}^{s}e_{i}^{odr(j)}\times h_{i}^{j},h_{i}^{j}\in h_{i} \tag{4}\] where \(e_{i}^{odr(j)}\times h_{i}^{j}\) multiplying sentence order probability with order-enhanced embeddings is exactly \(p(y|x,\mathbf{c})p(\mathbf{c})\) in Equation 1. The summation for each position \(j\) completes the last step of Primitive Intervention. Since most transitions are rendered by start words, our approach CaM intervenes only with these words, that is part of \(e^{itv}\in E^{itv}\) is equal to \(e^{ori}\in E^{ori}\). For simplicity, we still use \(E^{itv}\) in the following. #### 3.2.2 Context-aware Remapping Two problems may exist in Primitive Intervention: 1) The lack of trainable parts may lead to the mapping spaces of the intervened embeddings and the original ones being apart and obstructs the subsequent decoding process. 2) Intervention on individual words may damage the context along with the order-enhanced embedding. To solve these two problems, we propose the Context-aware Remapping mechanism. First, we scan \(E^{itv}\) with a context window of fixed size \(n_{w}\): \[\begin{split} B_{i}&=\mathrm{WIN}_{ii+n_{w}}([e_{ 1}^{itv},e_{2}^{itv},...,e_{\hat{M}}^{itv}])\\ &=(e_{i}^{itv},...,e_{i+n_{w}}^{itv}),i=1,...,\hat{M}-n_{w}\end{split} \tag{5}\] where \(\mathrm{WIN}(\cdot)\) returns a consecutive subsequence of \(E^{itv}\) at length \(n_{w}\). Then, we follow the process of Multi-head Attention Mechanism (Vaswani et al., 2017) to update the embeddings in \(B_{i}\): \[\begin{split} B_{i}^{rmp}&=\mathrm{MultiHead}(B_{i},B_{i},B_{i})\\ &=(e_{i}^{rmp},...,e_{i+n_{w}}^{rmp})\end{split} \tag{6}\] Even though all embeddings in \(B_{i}\) are updated, we only keep the renewed \(e_{i+(n_{w}/2)}^{rmp}\in B_{i}^{rmp}\), and leave the rest unchanged. Since \(\mathrm{WIN}(\cdot)\) scans the entire sequence step by step, every embedding will have the chance to update. The output is denoted as \(E^{rmp}=(e_{1}^{rmp},e_{2}^{rmp},...,e_{\hat{M}}^{rmp})\). #### 3.2.3 Optimal Intensity Learning In many cases, there is no guarantee that causal intervention with maximum (unaltered) intensity will Figure 3: The structure of CaM fused with the Transformer in the decoder. CaM consists of three parts: Primitive Intervention, Context-aware Remapping and Optimal Intensity Learning. necessarily improve model performance, especially when combined with pre-trained models (Brown et al., 2020; Lewis et al., 2020), as the intervention may conflict with the pre-training strategies. To guarantee performance improvement, we propose Optimal Intensity Learning. By applying Primitive Intervention and Context-aware Remapping, we have three types of embeddings, \(E^{ori}\),\(E^{itv}\), and \(E^{rmp}\). 
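Before describing how the three embeddings are combined, the two preceding steps can be made concrete. The snippet below is an illustrative PyTorch reading of the position-probability weighting (Eqs. (3)-(4)) and the windowed remapping (Eqs. (5)-(6)); the prefix-sum input and the window size are placeholders, not the released implementation.

```python
import torch
import torch.nn as nn

class PrimitiveIntervention(nn.Module):
    """Sketch of Eqs. (3)-(4): weight each order-enhanced embedding by the
    predicted sentence-position probability and sum over positions."""
    def __init__(self, d_model=768, num_sentences=6):
        super().__init__()
        self.pos_head = nn.Sequential(nn.ReLU(), nn.Linear(d_model, num_sentences))

    def forward(self, e_odr, e_itv_prefix):
        # e_odr: (batch, M, s, d); e_itv_prefix: (batch, M, d) running sum of the
        # previously intervened embeddings (a stand-in for the sum over E_sub).
        h = torch.softmax(self.pos_head(e_itv_prefix), dim=-1)   # Eq. (3)
        return (h.unsqueeze(-1) * e_odr).sum(dim=2)              # Eq. (4)

class ContextAwareRemapping(nn.Module):
    """Sketch of Eqs. (5)-(6): self-attention over a sliding window, keeping
    only the updated centre embedding of each window."""
    def __init__(self, d_model=768, n_heads=12, window=8):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, e_itv):                                    # (batch, M, d)
        out = e_itv.clone()
        for i in range(e_itv.size(1) - self.window):
            block = e_itv[:, i:i + self.window]                  # WIN_{i:i+n_w}
            updated, _ = self.attn(block, block, block)
            out[:, i + self.window // 2] = updated[:, self.window // 2]
        return out
```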
To figure out their respective importance to the final output, we derive the output intensity corresponding to each of them: \[g_{i}^{ori} =\sigma(W_{ori}\cdot e_{i}^{ori}) \tag{7}\] \[g_{i}^{itv} =\sigma(W_{itv}\cdot e_{i}^{ori})\] (8) \[g_{i}^{rmp} =\sigma(W_{rmp}\cdot e_{i}^{ori})\] (9) \[c_{i}^{ori},c_{i}^{itv},c_{i}^{rmp} =f_{s}([g_{i}^{ori},g_{i}^{itv},g_{i}^{rmp}]) \tag{10}\] \(\sigma(\cdot)\) is the \(\mathrm{sigmoid}\) function, \(f_{s}(\cdot)\) is the \(\mathrm{softmax}\) function. Combining \(c_{i}^{ori},c_{i}^{itv},c_{i}^{rmp}\), we can obtain the optimal intervention intensity and the final word embedding set \(E^{opm}=(e_{1}^{opm},...,e_{\hat{M}}^{opm})\) with causal intervention: \[e_{i}^{opm}=c_{i}^{ori}e_{i}^{ori}+c_{i}^{itv}e_{i}^{itv}+c_{i}^{rmp}e_{i}^{rmp} \tag{11}\] ### Fusing CaM with Transformer To derive an end-to-end causal generation model and ensure that the intervened information can be propagated, we choose to integrate CaM with Transformer (Vaswani et al., 2017).However, unlike the RNN-based models that generate words recurrently (Nallapati et al., 2016), the attention mechanism computes the embeddings of all words in parallel, while the intervention is performed on the sentence start words. To tackle this challenge, we perform vocabulary mapping on word embeddings before intervention and compare the result with sentence start token \([\mathrm{CLS}]\) to obtain \(Mask\): \[I =\mathrm{argmax}[\mathrm{Linear}_{vocab}(E^{ori})] \tag{12}\] \[Mask=\delta(I,ID_{CLS}) \tag{13}\] \(I\) contains the vocabulary index of each word. \(\delta(\cdot)\) compares the values of the two parameters, and returns \(1\) if the same, \(0\) otherwise. \(Mask\) indicates whether the word is a sentence start word. Therefore, \(E^{opm}\) can be calculated as: \[E^{opm}=E^{opm}\odot Mask+E^{ori}\odot(\sim Mask) \tag{14}\] The \(\odot\) operation multiplies each embedding with the corresponding \(\{0,1\}\) values, and \(\sim\) denotes the inverse operation. Note that we omit \(Mask\) for conciseness in Section 3.2.3. \(Mask\) helps restore the non-sentence-start word embeddings and preserve the intervened sentence-start ones. As illustrated in Figure 3, we put CaM between the Transformer layers in the decoder. The analysis of the amount and location settings will be discussed in detail in Section 4.7. The model is trained to minimize the cross-entropy loss between the predicted \(\hat{Y}\) and the ground-truth \(Y\), \(v\) is the vocabulary index for \(w_{i}\in Y\): \[\mathcal{L}=-{\sum_{i}^{\hat{M}}}\log p_{i}^{v}(\hat{Y}) \tag{15}\] ## 4 Experiments ### Datasets Following the settings in Chen et al. (2021, 2022), we adopt two publicly available datasets derived from the scholar corpora S2ORC (Lo et al., 2020) and Delve (Akujuobi and Zhang, 2017) respectively to evaluate our proposed method in related work generation. S2ORC consists of scientific papers from multiple domains, and Delve focuses on the computer domain. The datasets are summarized in Table 1, where the corresponding ratios of the training/validation/test pairs are detailed. ### Settings We implement our model with PyTorch on NVIDIA 3080Ti GPU. In our model, the maximum reference paper number is set to 5, i.e., \(|D|=5\). We select the first \(440/|D|\) words in each reference paper abstract and concatenate them to obtain the model input sequence. The total number of sentences in target related work is set to 6, i.e., \(s=6\). We use beam search for decoding, with a beam size of 4 and a maximum decoding step of 200. 
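Complementing the description in Sections 3.2.3 and 3.3, the snippet below gives an illustrative PyTorch reading of the intensity gating (Eqs. (7)-(11)) and the sentence-start mask fusion (Eqs. (12)-(14)); names and dimensions are placeholders, and the surrounding Transformer layers are omitted.

```python
import torch
import torch.nn as nn

class OptimalIntensityFusion(nn.Module):
    """Sketch of Eqs. (7)-(11): gate the original, intervened and remapped
    embeddings, then mix them with softmax-normalised intensities."""
    def __init__(self, d_model=768):
        super().__init__()
        self.w_ori = nn.Linear(d_model, 1)
        self.w_itv = nn.Linear(d_model, 1)
        self.w_rmp = nn.Linear(d_model, 1)

    def forward(self, e_ori, e_itv, e_rmp):
        # As in Eqs. (7)-(9), all three gates are computed from e_ori.
        gates = torch.cat([torch.sigmoid(self.w_ori(e_ori)),
                           torch.sigmoid(self.w_itv(e_ori)),
                           torch.sigmoid(self.w_rmp(e_ori))], dim=-1)
        c = torch.softmax(gates, dim=-1)                      # Eq. (10)
        return (c[..., 0:1] * e_ori + c[..., 1:2] * e_itv
                + c[..., 2:3] * e_rmp)                        # Eq. (11)

def fuse_with_mask(e_opm, e_ori, vocab_logits, cls_id):
    """Sketch of Eqs. (12)-(14): keep the intervened embedding only for
    sentence-start positions (those mapped to the [CLS] token)."""
    ids = vocab_logits.argmax(dim=-1)                         # Eq. (12)
    mask = (ids == cls_id).unsqueeze(-1).float()              # Eq. (13)
    return e_opm * mask + e_ori * (1.0 - mask)                # Eq. (14)
```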
When fusing CaM with Transformer, the dimension of word embedding is set to 768, both attention heads number and layer number is set to 12, and the intermediate size is set to 3072. We use Stochastic Gradient Descent(SGD) as the optimizer with a learning rate \begin{table} \begin{tabular}{l l l} \hline \hline **Statistic** & **S2ORC** & **Delve** \\ \hline Pairs \# & 126k/5k/5k & 72k/3k/3k \\ source \# & 5.02 & 3.69 \\ words/sent(doc) \# & 1079/45 & 626/26 \\ words/sent(sum) \# & 148/6.69 & 181/7.88 \\ vocab size \# & 377,431 & 190,381 \\ \hline \hline \end{tabular} \end{table} Table 1: Statistics of the datasets 1e-2. To ensure desirable performance and save training costs, we utilized pretrained BERT Devlin et al. (2019). We use ROUGE-1, ROUGE-2 and ROUGE-L on F1 as the evaluation metrics Lin (2004). Since we adopt exactly the same datasets (including the dataset settings) as RRG Chen et al. (2021) used, we directly use the results in the RRG paper for baseline comparison. ### Compared Methods We compare our CaM with the following eight state-of-the-art baselines, including both extractive and abstractive methods. #### 4.3.1 Extractive Methods (1) **TextRank**Mihalcea and Tarau (2004): A graph-based text ranking model that can be used in multi-document sentence extraction. (2) **BertSumEXT**Liu and Lapata (2019): An extractive document summarization model that extends BERT by inserting multiple [CLS] tokens.(3) **MGSumext**Jin et al. (2020): A multi-granularity interaction network that jointly learns different semantic representations. #### 4.3.2 Abstractive Methods (1) **TransformerABS**Vaswani et al. (2017): An abstractive summarization model based on Transformer with attention mechanism. (2) **BertSumABS**Liu and Lapata (2019): An abstractive model based on BERT with a designed two-stage fine-tuning approach. (3) **MGSum-abs**Jin et al. (2020): A multi-granularity interaction network that can be utilized for abstractive document summarization.(4) **GS**Li et al. (2020): An abstractive summarization model that utilizes special graphs to encode documents to capture cross-document relations. (5) **T5-base**Raffel et al. (2020): A text-to-text generative language model that leverages transfer learning techniques. (6) **BART-base**Lewis et al. (2020): A powerful sequence-to-sequence model that combines the benefits of autoregressive and denoising pretraining objectives. (7) **Longformer**Beltagy et al. (2020): A transformer-based model that can efficiently process long-range dependencies in text. (8) **RGG**Chen et al. (2021): An encoder-decoder model specifically tailored for related work generation, which constructs and refines the relation graph of reference papers. ### Overall Performance It can be found in Table 2 that abstractive models have attracted more attention in recent years and usually outperform extractive ones. Among the generative models, pretrained model T5 and BART achieve promising results in our task without additional design. Meanwhile, Longformer, which is good at handling long text input, also achieves favorable results. However, the performance of these models is limited by the complexity of the academic content in the dataset. Our proposed CaM achieves the best performance on both datasets. Due to fusing CaM with Transformer, its large scale ensures that our model can still effectively capture document relations without additional modeling. 
Accordingly, CaM enables the model to obviate the impact of spurious correlations through causal intervention and promotes the model to learn more robust causalities to \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{3}{c}{**S2ORC**} & \multicolumn{3}{c}{**Delve**} \\ & ROUGE-1 & ROUGE-2 & ROUGE-L & ROUGE-1 & ROUGE-2 & ROUGE-L \\ \hline \multicolumn{5}{l}{_Extractive Methods_} \\ TextRank & 22.36 & 2.65 & 19.73 & 25.25 & 3.04 & 22.14 \\ BertSumEXT & 24.62 & 3.62 & 21.88 & 28.43 & 3.98 & 24.71 \\ MGSum-ext & 24.10 & 3.19 & 20.87 & 27.85 & 3.95 & 24.28 \\ \hline \multicolumn{5}{l}{_Abstractive Methods_} \\ TransformerABS & 21.65 & 3.64 & 20.43 & 26.89 & 3.92 & 23.64 \\ BertSumABS & 23.63 & 4.17 & 21.69 & 28.02 & 3.50 & 24.74 \\ MGSum-abs & 23.94 & 4.58 & 21.57 & 28.13 & 4.12 & 24.95 \\ GS & 23.92 & 4.51 & 22.05 & 28.27 & 4.36 & 25.08 \\ T5-base & 23.20 & 4.01 & 21.41 & 26.38 & 5.69 & 24.35 \\ BART-base & 23.36 & 4.13 & 21.08 & 26.96 & 5.33 & 24.42 \\ longformer & 26.00 & 4.96 & 23.20 & 28.05 & 5.20 & 25.65 \\ RRG & 25.46 & 4.93 & 22.97 & 29.10 & 4.94 & 26.29 \\ **CaM (ours)** & **26.65** & **5.40** & **24.62** & **29.31** & **6.17** & **26.61** \\ \hline \hline \end{tabular} \end{table} Table 2: ROUGE scores comparison between our CaM and the baselines. achieve the best performance. ### Ablation Study To analyze the contribution of the different components of CaM, we separately control the use of Primitive Intervention (PI), Context-aware Remapping (RMP) and Optimal Intensity Learning (OPT). Figure 4 and Figure 5 show the performance comparison between different variants of CaM. First, it can be observed that the basic Transformer model already guarantees a desirable base performance. When only PI is used, the model generally shows a slight performance drop. PI+RMP outperforms RMP, showing the necessity of the PI and the effectiveness of RMP. PI+RMP+OPT achieves optimal results, indicating that OPT can effectively exploit the information across different representations. ### Human Evaluation We evaluate the quality of related works generated by the CaM, RRG, and BERT from three perspectives (informativeness, coherence, and succinctness) by randomly selecting forty samples from S2ORC and rating the generated results by 15 master and doctoral students on the three metrics (from 0 to 3, with higher scores indicating better results). As table 3 shows, our method achieves the best in informativeness and coherence, and the causal intervention makes coherence the most superior. However, succinctness is slightly lower than RRG, probably due to the output length limit. We will complete the human evaluation by more participants using the MTurk platform and report the evaluation results in the final version. ### Fusing Strategy Comparison In our setting, the base Transformer model consists of \(12\) layers, so there are multiple locations to fuse a different number of CaMs. For each scenario, CaMs are placed evenly among the Transformer layers, and one will always be placed at the end of the entire model. The results of all cases are shown in Figure 6. It can be observed that the model performs best when the number of CaM is 4 both on S2ORC and Delve. With a small number of CaMs, the model may underperform the benchmark model and fail to achieve optimal performance due to the lack of sufficient continuous intervention. 
If there are too many CaMs, the distance between different CaMs will be too short, leaving an insufficient learning process for the Transformer layers, and this might cause the CaMs to bring the noise. \begin{table} \begin{tabular}{l c c c} \hline \hline & **inf** & **coh** & **suc** \\ \hline **CaM** & 2.21 & 2.38 & 2.01 \\ **RRG** & 2.07 & 2.10 & 2.05 \\ **BERT** & 2.11 & 1.97 & 1.92 \\ \hline \hline \end{tabular} \end{table} Table 3: Human evaluation result Figure 4: Ablation result on S2ORC. Figure 5: Ablation result on Delve. Figure 6: Performance analysis on the number of CaMs fused with Transformer. ### Robustness Analysis To verify the robustness of knowledge learned by causal intervention, we designed two experiments on CaM and the base Transformer (TF). #### 4.8.1 Testing with Reordered Samples We randomly select 50 samples (15 from S2ORC and 35 from Delve) and manually rearrange the order of the cited papers in each of them, as well as the order of their corresponding sentences.Transitional content in related works is also removed since the reordering damages the original logical relations. It can be observed from Figure 7 that CaM has better performance regardless of whether the samples have been reordered or not. By switching to the reordered samples, the performance of Transformer decreases on all three metrics, but CaM only decreases on ROUGE-1 and ROUGE-2 at a much lower rate. Particularly, compared to the Transformer, CaM makes improvement on ROUGE-L when tested with reordered samples. The result indicates that CaM is able to tackle the noise disturbance caused by reordering, and the generated content maintains better coherence. #### 4.8.2 Testing with Migrated Test Set We train the models on Delve and test them on S2ORC, which is a challenging task and significant for robustness analysis. As expected, the performances of all models drop, but we can still obtain credible conclusions. Since CaM outperforms Transformer initially, simply comparing the ROUGE scores after migrating the test set is not informative. To this end, we use _Relative Outperformance Rate_ (ROR) for evaluation: \[\mathrm{ROR}=(\mathrm{S_{CaM}}-\mathrm{S_{TF}})/\mathrm{S_{TF}} \tag{16}\] \(\mathrm{S_{CaM}}\) and \(\mathrm{S_{TF}}\) are the ROUGE scores of CaM and Transformer, respectively. ROR computes the advantage of CaM over Transformer. Figure 8 reports that CaM outperforms Transformer regardless of migrating from Delve to S2ORC for testing. In addition, comparing the change of ROR, we observe that although migration brings performance drop, CaM not only maintains its advantage over Transformer but also enlarges it. The above two experiments demonstrate that the CaM effectively learns causalities to improve model robustness. ### Causality Visualization To visualize how causal intervention worked in the generation process, we compare the related works generated by the base Transformer and CaM with a case study (full results in Table 4). Specifically, we map their cross attention corresponding to "however" and "the" to the input content using Figure 8: The result of migrating test set from Delve to S2ORC (trained on Delve). Figure 7: Comparison between Transformer and CaM on original and reordered samples. Figure 9: Visualization of the generating process within CaM and Transformer(TF). different color shades (see Figure 10) to explore what information of these two words rely on. More details of the above two experiments can be found in Appendix B. 
We picked out the words that "however" and "the" focused on the most and analyzed the implications of these words in the context of the input. The results are shown in Figure 9. It can be found that the words highlighted by CaM have their respective effects in the cited papers. When generating "however", the model aggregates this information, comparing the relations between the documents and producing the correct result. However, there is no obvious connection between the words focused on by Transformer, hence there is no clear decision process after combining the information, and the generated word "the" is simply a result obtained from learned experience and preference. Through causality visualization, it can be observed very concretely how CaM improves model performance by conducting causal intervention. ## 5 Conclusions In this paper, we propose a Causal Intervention Module for Related Work Generation (CaM) to capture causalities in related work generation. We first model the relations in related work generation using a causal graph. The proposed CaM implements causal intervention and enables the model to capture causality. We subtly fuse CaM with Transformer to obtain an end-to-end model to integrate the intervened information throughout the generation process. Extensive experiments show the superiority of CaM over the latest models and demonstrate our method's effectiveness. ## Limitations Although extensive experiments have demonstrated that CaM can effectively improve the performance of the base model, as mentioned above, since the intervention occurs on the sentence start words, it is inconclusive that CaM can bring improvement if the generation of sentence start words is inaccurate. That is, CaM can improve the effect of large-scale models or pre-trained models very well, but if it is to be combined with small-scale models and trained from scratch, then the effectiveness of the model might not be ensured. This will also be a direction of improvement for our future work.
2302.03431
Exploration and Regularization of the Latent Action Space in Recommendation
In recommender systems, reinforcement learning solutions have effectively boosted recommendation performance because of their ability to capture long-term user-system interaction. However, the action space of the recommendation policy is a list of items, which could be extremely large with a dynamic candidate item pool. To overcome this challenge, we propose a hyper-actor and critic learning framework where the policy decomposes the item list generation process into a hyper-action inference step and an effect-action selection step. The first step maps the given state space into a vectorized hyper-action space, and the second step selects the item list based on the hyper-action. In order to regulate the discrepancy between the two action spaces, we design an alignment module along with a kernel mapping function for items to ensure inference accuracy and include a supervision module to stabilize the learning process. We build simulated environments on public datasets and empirically show that our framework is superior in recommendation compared to standard RL baselines.
Shuchang Liu, Qingpeng Cai, Bowen Sun, Yuhao Wang, Ji Jiang, Dong Zheng, Kun Gai, Peng Jiang, Xiangyu Zhao, Yongfeng Zhang
2023-02-07T12:34:31Z
http://arxiv.org/abs/2302.03431v2
# Exploration and Regularization of the Latent Action Space in Recommendation ###### Abstract. In recommender systems, reinforcement learning solutions have effectively boosted recommendation performance because of their ability to capture long-term user-system interaction. However, the action space of the recommendation policy is a list of items, which could be extremely large with a dynamic candidate item pool. To overcome this challenge, we propose a hyper-actor and critic learning framework where the policy decomposes the item list generation process into a hyper-action inference step and an effect-action selection step. The first step maps the given state space into a vectorized hyper-action space, and the second step selects the item list based on the hyper-action. In order to regulate the discrepancy between the two action spaces, we design an alignment module along with a kernel mapping function for items to ensure inference accuracy and include a supervision module to stabilize the learning process. We build simulated environments on public datasets and empirically show that our framework is superior in recommendation compared to standard RL baselines. Recommender Systems, Reinforcement Learning, Representation Learning + Footnote †: ccs: Proceedings of the ACM Web Conference 2023 (WWW '23), May 1–5, 2023, Austin, TX, USA 0 2023 Copyright held by the owner/author(s). Publication rights licensed to ACM. ICSB 978-1-4503-9416-12/30/4.. $15.00 [https://doi.org/10.1145/3543507.3583244](https://doi.org/10.1145/3543507.3583244) + ## 1. Introduction Recommender Systems (RS) serve as one of the fundamental components for a wide range of web services including e-commerce, social media, news, and advertising. In recent years, studies have shown that the long-term interactions between users and the RS formulate a Markov Decision Process (MDP), where Reinforcement Learning (RL) methods can be used to further improve the predictive performances (Beng et al., 2016) compared to traditional learning-to-rank solutions (Zhu et al., 2017). Rather than optimizing the immediate user response, the key insight behind RL is to maximize the cumulative reward over all the interactions over time. When adopting this formulation in practice, the recommendation problem distinguishes itself from other RL tasks by the challenge of _large dynamic discrete_ action space. For example, in an e-commerce system or video recommendation system, it may need to select for each request a recommendation list from a pool of millions of products, and the candidate pool grows every day. This means that it would be unrealistic for tabular-based methods (e.g. Q-Learning, SARSA, Policy Iteration) that favor small action spaces and methods designed for fixed action spaces. Though efforts have been made to alleviate this issue by decomposition of the recommendation list (Krishnan et al., 2017) into item-wise sub actions, learning a policy gradient over the entire candidate item pool is still an challenging task. Fortunately, this challenge has already been solved in early-age non-RL models like latent factor models (Zhou et al., 2017) and two-tower collaborative filtering method (Zhu et al., 2018). They learn a common latent space that can represent both the user requests and items (or item lists), so that the learned latent space can accommodate arbitrary number of items and is agnostic to the dynamic changes of the item pool. In this work, we combine this insight into the RL-methods and focus on the list recommendation problem. 
The latent representations of recommendation lists are generalized as hyper-actions as shown in Figure 1. In a forward inference, the policy first propose a vectorized _hyper-action_, then this latent vector will serve as a deterministic function that rank, and finally select a list of items, denoted as _effect-action_, from the candidate pool. Note that this extra implicit inference step also induces new challenges to the RL solution: Most importantly, it introduces inconsistency between the two actions spaces. Specifically, we want to apply efficient end-to-end training on the hyper-actions but it is the effect-actions that actually interact with the users. On the other hand, the most accurate latent representation of the discrete effect-action may not be exactly the same as that of the proposed hyper-action that selects it, so the inference accuracy is not guaranteed. Additionally, it also introduces an extra exploration stage and it is unknown whether one should explore hyper-actions on the latent space or explore the discrete effect-action space. All of these add up to the instability and uncertainty of the learning, so we need a solution that can regulate the two action spaces and stabilize the RL process. To solve the aforementioned challenges, we propose a general Hyper-Actor Critic (HAC) learning framework that contains four components: the user state and hyper-action generator; the scoring function that maps hyper-actions into effect-actions (i.e. recommendation lists); the critic network that evaluates both the hyper-action space and the effect-action space; and an inverse mapping module that infers the hyper-action back based on the effect action. During training, the backbone actor-critic learning paradigm is augmented with an alignment module that ensures consistency between two action spaces and a supervision module that improves the stability and effectiveness of RL. The resulting framework generalizes many existing solutions to recommendation tasks like DDPG and Online/Offline Supervise Learning (SL). We summarize our contribution as follows: * We propose a practical and efficient RL framework that learns to recommend a list of items from a large item pool through a latent hyper-action. * We build online simulators based on public datasets and empirically show that the resulting framework achieves better performance compared to standard RL and SL solutions. * We also point out that providing supervision and regulating the consistency between the hyper-action space and the effect-action space are helpful for improving the sampling efficiency and inference accuracy. ## 2. Method ### Problem Formulation Here we consider the session-based recommendation scenario where the system considers an item pool of size \(N\), denoted as \(\mathcal{I}\), and for each user we observe the static user features \(\mathbf{u}\) and the interaction history of a session \(x_{1:t}=\{x_{1},\dots,x_{t}\}\) where \(x_{i}\in\mathcal{I}(1\leq i\leq t)\). And the goal is to interactively recommend lists of items to users that maximizes user cumulative reward over the session of interactions. As we have described in section 1, we emphasize the existence of the latent action space and formulate the recommendation task as a modified Markov Decision Process (MDP) as captured in Figure 1. Then, the MDP components with the latent action space become: * \(\mathcal{S}\): the continuous representation space of user state. 
* \(\mathcal{A}\): the final effect-action space corresponds to the possible recommendation lists. Without loss of generality, we consider the list of fixed size \(k\) so the action space is \(\mathcal{A}=\mathcal{I}^{k}\). * \(\mathcal{Z}\): the latent hyper-action space which is a continuous vector space that encodes how the effect-action will be selected. We assume a many-to-one mapping function \(f:\mathcal{Z}\rightarrow\mathcal{A}\). * \(\mathcal{R}\): The cumulative reward function that estimates the user feedback in the user session, and the immediate reward \(r(s,a)\) captures the single step reward when taking an action \(a\in\mathcal{A}\) on state \(s\in\mathcal{S}\). Note that there is an implicit transition model \(\mathcal{P}(s_{t}^{r}|s_{t},a_{t})\) that describes the probability of reaching a certain new state \(s_{t}^{r}\) after taking action \(a_{t}\) on state \(s_{t}\). In RS, this transition function is integrated into the user state encoder and is usually modeled by a sequential model that takes the user interaction sequence as input. At each interaction step in a user session, given a user's current state \(s_{t}\in\mathcal{S}\) (e.g. portrait and history interactions), the recommendation policy \(\pi_{\theta}(a_{t}|s_{t})\) first infer a hyper-action representation \(Z_{t}\) and then generates a list of items as the effect action \(a_{t}\). The user's feedback along with the updated user state is returned from the user environment and the reward function \(r(s_{t},a_{t})\) is assumed as given so is regarded as part of the environment. The goal is to find an optimal recommendation policy \(\pi^{*}(a_{t}|s_{t}):\mathcal{S}\rightarrow\mathcal{A}\) that maximizes the expected cumulative reward throughout the session: \[\mathbb{E}_{\tau\rightarrow\pi}[\mathcal{R}(\tau)]=\mathbb{E}_{\tau \rightarrow\pi}\Big{[}\sum_{t=0}^{|\tau|}\gamma^{t}r(s_{t},a_{t})\Big{]} \tag{1}\] Figure 1. MDP formulation with latent hyper-action where \(\tau=[(s_{0},a_{0},r_{0}),(s_{1},a_{1},r_{1}),\ldots]\) denotes the sampled trajectories, and \(\gamma\in[0,1)\) denotes the discount factor for future reward. ### Overall Framework We present our framework as Figure 2, and denote it as the Hyper-Actor Critic (HAC) learning method. The recommendation policy \(\pi(a_{t}|s_{t})\) is decomposed into an hyper-actor network \(P(Z_{t}|s_{t})\) that generates a vectorized hyper-action and a ranking scorer \(P(a_{t}|Z_{t})\) that select the final recommendation list based on the hyper-action. Then we propose to share the critic network between action spaces so that it can evaluate either the hyper-action or the final effect-action (i.e. the recommendation list). Our framework uses DDPG (Srivastava et al., 2017) as foundation, but differently, we address the importance of using different action spaces for actor learning and critic learning. Specifically, we optimize the critic based on effect-actions to guarantee the accuracy of action/state evaluation, and use hyper-actions to optimize the actor so that efficient end-to-end training and exploration can be applied. To ensure consistency between the two different action spaces, we also learn an inverse pooling module with item kernel functions to infer the hyper-action back from the effect-action. This means that the evaluation of the two action spaces will share the same critic, and the knowledge learned from the effect-action can be transferred to hyper-actions. 
To stabilize the learning process, we also include supervision on the effect-action using immediate user responses. ### User State and Hyper-Actor In the case of RS, the observable information of a user usually includes the up-to-date user interaction history \(\mathbf{x}_{1:t}\) and the static demographic features \(\mathbf{u}\). Then one can use any sequential model to encode this information and infer the dynamic user state: \[\mathbf{s}_{t}=\text{StateEnc}(\Phi(x_{1}),\ldots,\Phi(x_{t}),\mathbf{u}) \tag{2}\] where the item kernel function \(\Phi\) maps the items' raw features into a dense embedding in the kernel space. We also use a user kernel function to map the user features \(\mathbf{u}\) into the same kernel space of items, then concatenate the sequence of history items with the user embedding as the new sequence. To encode user features and histories, there exist several feasible sequential models such as (Zhou et al., 2017; Wang et al., 2018; Wang et al., 2018), and we use the state encoder in SASRec (Wang et al., 2018) as our backbone since it can capture both the sequential patterns in the dynamic history and the user-item correlations through the self-attention mechanism. With the state encoded, a vectorized representation (i.e. the hyper-action) is inferred by an Multi-Layer Perceptron (MLP) module: \[Z_{t}=\text{MLP}(\mathbf{s}_{t}) \tag{3}\] We assume that the distribution of this hyper-action follows the standard Gaussian \(\mathcal{N}(Z_{t},\sigma_{Z}^{2})\) and we can use the reparameterization trick to engage the end-to-end training. ### Scoring Functions and Effect Action Given \(Z_{t}\) that contains sufficient information of user preferences, we can assume that the selection of items only depends on this latent vector. In other words, we have conditional independence: \[P(a_{t}|\mathbf{s}_{t},Z_{t})=P(a_{t}|Z_{t}) \tag{4}\] Note that this setting also indicates that the inferred \(Z_{t}\) is a _hyper-action_ that can be considered as the parameters of the later item selection module. As a result, the overall recommendation policy follows \(\pi(a_{t}|s_{t})\sim P(a_{t}|Z_{t})\). When generating the effect-action, we select items from the candidate pool according to a scoring function parameterized by \(Z_{t}\). The scoring function provides a ranking score for each of the candidate items in \(\mathcal{I}\), and the final recommendation list is generated through either top-\(k\) selection or categorical sampling. Taking the SASRec model as an example, the scoring function is a dot product between the kernel item embedding and the encoded user state: \[\text{score}(i|Z_{t})=\Phi(i)^{\top}Z_{t} \tag{5}\] Note that the parameters of the item kernel function \(\Phi\) are not considered as part of the hyper-action \(Z_{t}\) since it is independent of the given user state. In order to engage efficient learning and inference, one can assume that selection of each item is conditionally independent of each other: \(P(a_{t}|Z_{t})=\prod_{i\in a_{t}}P(i|Z_{t})\), similar to the slate decomposition in (Kang et al., 2018). Then, we can define the selection or sampling probability of an item as \(P(i|Z_{t})=\text{softmax}_{\mathcal{I}}(\text{score}(i|Z_{t}))\). ### Shared Critic and The Inverse Module The purpose of the critic is to accurately evaluate the long-term quality of the state-action pair (e.g. Q function in DDPG) or the expected value of the given state (e.g. V function in A2C) so that it can effectively guide the actor learning and the action exploration. 
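Before continuing with the critic, the actor path described above (Eqs. (2)-(5)) can be sketched as follows: the history is encoded with a stand-in GRU in place of the SASRec encoder, the hyper-action \(Z_t\) is sampled with the reparameterization trick, and the top-\(k\) effect-action is selected by dot-product scoring. All module names and dimensions are placeholders rather than the exact architecture.

```python
import torch
import torch.nn as nn

class HyperActor(nn.Module):
    """Sketch of Eqs. (2)-(5): encode the user state, sample a hyper-action Z_t,
    score all items in the kernel space, and pick the top-k recommendation list."""
    def __init__(self, n_items, d_kernel=32, sigma_z=0.1):
        super().__init__()
        self.item_kernel = nn.Embedding(n_items, d_kernel)                # Phi(i)
        self.state_enc = nn.GRU(d_kernel, d_kernel, batch_first=True)    # stand-in encoder
        self.to_hyper = nn.Sequential(nn.Linear(d_kernel, d_kernel), nn.ReLU(),
                                      nn.Linear(d_kernel, d_kernel))
        self.sigma_z = sigma_z

    def forward(self, history_items, k=10, explore=True):
        # history_items: (batch, t) item ids; user features omitted for brevity.
        emb = self.item_kernel(history_items)
        _, s_t = self.state_enc(emb)                           # Eq. (2), stand-in
        s_t = s_t.squeeze(0)
        z_t = self.to_hyper(s_t)                               # Eq. (3)
        if explore:                                            # sample from N(Z_t, sigma_Z^2)
            z_t = z_t + self.sigma_z * torch.randn_like(z_t)
        scores = z_t @ self.item_kernel.weight.T               # Eq. (5)
        probs = torch.softmax(scores, dim=-1)                  # P(i | Z_t)
        effect_action = probs.topk(k, dim=-1).indices          # recommendation list
        return z_t, effect_action, probs
```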
Compared to the standard RL framework, the new problem setting allows us to evaluate either the hyper-action with \(Q(s_{t},Z_{t})\), the effect-action with \(Q(s_{t},a_{t})\), or both. In order to ensure consistent evaluation of actions from the two spaces, we propose to transfer knowledge between \(Q(s_{t},a_{t})\) and \(Q(s_{t},Z_{t})\) through a shared critic network. As shown in Figure 2, this shared critic is a mapping function \(g:\mathcal{S}\times\mathcal{Z}\rightarrow\mathbb{R}\) that takes the user state \(s_{t}\) and the action embedding \(Z_{t}\) in the kernel space; equivalently, \(Q(s_{t},Z_{t})=g(s_{t},Z_{t})\). To evaluate the effect-action, an inverse module \(h\) is introduced to infer the hyper-action back: \[\hat{Z}_{t}=h(a_{t})=\text{pooling}(\Phi(i)|i\in a_{t}) \tag{6}\] and the evaluation becomes \(Q(s_{t},a_{t})=g(s_{t},\hat{Z}_{t})\). In practice, we found that average pooling of the item embeddings in the kernel space produces the most stable result, even though infinitely many latent \(Z\) can generate the same list. Compared to existing solutions that infer the latent action from adjacent states, like PG-RA (Bang et al., 2018), we believe that the effect-action in the recommendation task carries sufficient information to recover the hyper-action. In addition, we use a reconstruction loss to further regulate the consistency between the hyper-action and the effect-action through an alignment loss function. As we will describe in the next section, it ensures that the generated hyper-action \(Z_{t}\) lies in the valid region close to the candidate items in the kernel space. ### Overall Learning Framework The overall optimization process is a modified actor-critic learning framework that consists of a critic loss, an actor loss, a hyper-actor loss, and a supervised loss. An experience replay buffer \(\mathcal{D}\) collects sample records of the form \((s_{t},a_{t},r(s_{t},a_{t}),s_{t+1},d)\), where \(d\) is the termination signal indicating whether the user has left. The critic loss aims to train an accurate evaluator that captures the patterns in the quality of actions: \[\mathcal{L}_{\text{TD}}=\mathbb{E}_{\mathcal{D}}\left[(r(s_{t},a_{t})+\gamma(1-d)Q(s_{t+1},a_{t+1})-Q(s_{t},a_{t}))^{2}\right] \tag{7}\] where \(a_{t+1}\) is generated by the recommendation policy using a greedy method (equivalent to finding \(\arg\max_{a_{t+1}}Q(s_{t+1},a_{t+1})\) when the Q function is accurate). This is a standard TD error, and we only calculate Q for the effect-action when learning the critic to ensure the accuracy of evaluation. Note that in DDPG-based methods, each actor and critic is paired with a target network which is used to estimate future state-action pairs \(Q(s_{t+1},a_{t+1})\). These target networks adopt elastic updates from the actor and critic so that their slow optimization can stabilize the learning. Then, with an accurate critic as the evaluator, we can efficiently learn the actor by maximizing the Q-value of the inferred action, which is equivalent to minimizing the following actor loss: \[\mathcal{L}_{\text{QMax}}=\mathbb{E}_{\mathcal{D}}\left[-Q(s_{t},Z_{t})\right] \tag{8}\] where \(Z_{t}\) is inferred by the hyper-actor as described in section 2.3. Note that the learning of the critic and the actor uses different action spaces, so we need to align the two spaces to avoid mode collapse (Sutskever et al., 2017) of the generated hyper-action.
In our solution, we use the L2 regularizer to ensure this consistency: \[\mathcal{L}_{\text{Hyper}}=\mathbb{E}_{\mathcal{D}}\left[\|Z_{t}-\hat{Z}_{t} \|^{2}\right] \tag{9}\] where \(Z_{t}\) is produced by the hyper-action based on state \(s_{t}\), and \(\hat{Z}_{t}\) is generated by first greedily select the effect action using \(Z_{t}\) as described in section 2.4 and then reconstruct the hyper-action back using the inverse module as described in section 2.5. Additionally, to better stabilize the RL and exploit the detailed user response signals on each item, we also include a supervised learning objective based on the effect-action. \[\mathcal{L}_{\text{BCE}}=\mathbb{E}\Big{[}\sum_{i\in a_{t}}y_{t,i}\log P(i|Z_{ t})+(1-y_{t,i})\log(1-P(i|Z_{t}))\Big{]} \tag{10}\] which is a binary cross-entropy loss where \(y_{t,i}\) is the ground truth user response on the exposed item \(i\). We remind readers that there are other advanced supervision and regularization methods that can accommodate RL-based models and potentially extend our framework. For example, (Kang et al., 2019) could supervise the effect-action space for off-policy training, and (Sutskever et al., 2017) is well-suited for the distance control on the hyper-action space. As a summary, we present the resulting learning paradigm as algorithm 1. And note that the parameters of the inverse module come from the item kernel, so both Eq.(9) and Eq.(10) update them as in line 8-9. ### Exploration in Hyper-Action Space and Effect-Action Space Readers may notice that the inclusion of latent hyper-action also introduces an extra action sampling step as shown in Figure 2, so the resulting framework allows both the sampling on the hyper-actions space (e.g. by adding Gaussian noise) and the sampling on the effect-action space (e.g. categorical sampling of items based Figure 2. Hyper-Actor Critic (HAC) learning framework. \(\odot\) represents the scoring function that selects items from \(\mathcal{I}\), on ranking scores). Theoretically, this indicates that the sampling probability of effect-actions should be described as the following: \[P(a_{t}|\mathbf{s}_{t})=\int_{Z_{t}}P(a_{t}|\mathbf{Z}_{t})P(\mathbf{Z}_{t}|\mathbf{s}_{t}) \tag{11}\] When the effect-action generation is deterministic, the exploration only depends on the sampling of \(\mathbf{Z}_{t}\), similar to that in (Beng et al., 2017); and if the hyper-actor is deterministic, the exploration only depends on the effect-action sampling as in standard policy gradient methods. Note that the variance \(\sigma_{Z}^{2}\) of the hyper-action controls the uncertainty of the inferred latent action embedding, and it is critical to find an adequate value that can improve the exploration effectiveness in RL. On one hand, giving a variance that is too small will limit the exploration of new actions resulting in sub-optimal results; On the other hand, making the variance too large will induce unstable action exploration that hardly converges. In general, we would like to take advantage of the efficient learning and exploration of the hyper-action space, so it becomes critical to align the distribution of \(\mathbf{Z}_{t}\) and the embedded item in the kernel space, as we mentioned in section 2.6. As we showcase in Figure 3, the item kernel function helps increase the expressiveness of the policy by folding the action space where exploration could be more efficient. 
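For concreteness, the four objectives of the learning framework (Eqs. (7)-(10)) together with the inverse pooling of Eq. (6) can be sketched as below; `critic`, `actor`, and `item_kernel` are placeholder modules (the actor is assumed to return the hyper-action and the per-item scores), so this is an illustrative reading of the equations rather than the exact implementation.

```python
import torch
import torch.nn.functional as F

def hac_losses(critic, actor, item_kernel, batch, gamma=0.9):
    """Illustrative sketch of Eqs. (6)-(10).  `critic(s, z)` returns a Q value
    per example, `actor(s)` returns (Z_t, item scores), `item_kernel(a)` embeds
    the items of an effect-action as a (batch, k, d) tensor."""
    s, a, r, s_next, done, y = batch                 # y: item-wise user feedback

    # Inverse module (Eq. 6): average-pool the item kernel embeddings of a list.
    def inverse(items):                              # items: (batch, k) item ids
        return item_kernel(items).mean(dim=1)

    # Critic loss (Eq. 7), computed on the logged effect-action a.
    with torch.no_grad():
        z_next, _ = actor(s_next)                    # greedy next hyper-action
        target = r + gamma * (1.0 - done) * critic(s_next, z_next)
    loss_td = F.mse_loss(critic(s, inverse(a)), target)

    # Actor loss (Eq. 8): push the inferred hyper-action towards high Q values.
    z, scores = actor(s)
    loss_qmax = -critic(s, z).mean()

    # Alignment (Eq. 9): pull Z_t towards the pooled embedding of the list it selects.
    greedy = scores.topk(a.size(1), dim=-1).indices
    loss_hyper = ((z - inverse(greedy)) ** 2).sum(dim=-1).mean()

    # Supervision (Eq. 10): item-wise BCE on the exposed items and responses y.
    p = torch.softmax(scores, dim=-1).gather(1, a).clamp(1e-6, 1 - 1e-6)
    loss_bce = F.binary_cross_entropy(p, y)

    return loss_td, loss_qmax, loss_hyper, loss_bce
```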
Though we are skeptical whether there is a guarantee for all RL solutions that explores the latent action space, we will empirically show the effectiveness of encoding both users and items into the same kernel space and regulating the action with the inverse pooling module using Eq.(9) in section 3.4. ## 3. Experiments ### Experimental Settings #### 3.1.1. Datasets We include three public datasets in our experiments: **RL4RS1** is a session-based dataset (Zhu et al., 2017) that is first introduced in the BigData Cup 2010 to boost RL research; **ML1M2** is the MovieLens data (Kang et al., 2017) with 1 million records which consists of user's ratings of movies; **KuaRanId1K3** is a recent dataset for sequential short-video recommendation, and we use the 1K version (Kumar et al., 2018) which has irrelevant videos removed. We preprocess all these datasets into a unified format where each record consists of (user features, user history, exposed items, user feedback, and timestamp) in sequential order. The details of this process are summarized in Appendix A.1, and the resulting dataset statistics is provided in Table 1. Footnote 1: [https://github.com/fruai/flak/RL4RS](https://github.com/fruai/flak/RL4RS) Footnote 2: [https://grouplens.org/datasets/movielens/1m/](https://grouplens.org/datasets/movielens/1m/) Footnote 3: [https://kuarand.com/](https://kuarand.com/) #### 3.1.2. Online Environment Simulator To simulate the online user interaction we train a user response model \(\Psi:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}^{k}\) for each of the dataset. The user state is derived based on the static user features and the dynamic history interactions. \(\Psi\) outputs the probabilities of the user's positive feedback on each item in recommended \(a_{t}\), and the final response \(y_{t}\in\{0,1\}^{k}\) (e.g. click) is uniformly sampled according to the probabilities. We design the reward \(r(s_{t},a_{t})\in[-0.2,1.0]\) as the average of the item-wise reward. We provide details of the environment in Appendix A.2. #### 3.1.3. Models and Baselines We use SASRec (Kumar et al., 2018) as our backbone actor as described in section 2.3, it consists of a Transformer-based state encoder and hyper-action generator, and the dot-product scorer. We also implemented the following RL baselines using our proposed HAC framework to better showcase how our HAC generalizes existing methods: * **Online SL**: the SASRec actor directly learns from immediate user feedback instead of the long-term commutative reward. * **A2C**: the synchronized version of the A3C (Zhu et al., 2017) that applies the policy gradient on the effect-action space. * **DDPG**: a Deep DPG framework using the hyper-action space for both the actors and critics (Zhu et al., 2017). This method is equivalent to our HAC model without supervision. * **TD3**: improve the DDPG with double Q-learning so the training of critic becomes more stable (Kumar et al., 2018). * **DDPG-RA**: the DDPG framework with the action representation learning as in (Beng et al., 2017). This method is closest to our work and it regulates the effect-actions while our HAC model aligns the hyper-actions. To better compare RL and SL methods, we also include the **Offline SL** that optimizes the policy using Eq.(10) based on the offline data instead of the online environment. The model architectures and specifications of these models are provided in Appendix A.3. #### 3.1.4. 
#### 3.1.4. Evaluation

For all datasets, we split them into the first 80% for training and the last 20% for evaluation according to record timestamps. We then pretrain the online environment on the training set, and pretrain another online environment on the entire dataset for later evaluation. We train our recommendation policies in the first environment and evaluate them in the second. During training, we set the discount of reward as \(\gamma=0.9\) and limit the interaction depth to \(\leq 20\) for all experiments. We find that most RL-based methods converge and stabilize within 50,000 iterations. For long-term evaluation metrics, we consider the **Total Reward**, which represents the summation of the rewards in an entire user session, and the **Depth**, which represents how many interactions the user makes in a session; each session is observed using the simulated online environment that interacts with the learned policies. For both metrics, a higher value means better performance. To evaluate the stability of the learned policies, we include a **Reward Variance** metric that estimates how inconsistently a policy deals with different user states, so a lower value indicates a more stable policy. Note that this metric describes the variance across states, not the variance across random seeds. In each experiment, we evaluate all aforementioned metrics and report the average values across different user sessions.

\begin{table} \begin{tabular}{c|c c c c} \hline Dataset & \(|\mathcal{U}|\) & \(|\mathcal{I}|\) & \#record & \(k\) \\ \hline RL4RS & - & 283 & 781,367 & 9 \\ MovieLens-1M & 6400 & 3706 & 1,000,208 & 10 \\ KuaiRand & 986 & 11,643 & 969,831 & 10 \\ \hline \end{tabular} \end{table} Table 1. Dataset Summary. \(k\) represents the size of the recommendation list (i.e., the effect-action size). The RL4RS dataset provides user profile features instead of user IDs, so it does not have a count for the user set.

Figure 3. Exploration in Different Action Spaces.

### Effectiveness

For all tasks, the goal of the recommender system is to maximize the long-term satisfaction represented by total reward and average depth. For each model, we grid-search the hyper-parameters and pick the setting with the best results to report in Table 2.

**Main result:** We can see that the proposed HAC framework consistently achieves the best performances across all datasets on both long-term metrics: 6% improvement on RL4RS, 1% on ML1M, and 3% on KuaiRand over the best baselines. This indicates the expressiveness of the proposed hyper-actions and the effectiveness of the learning method. Note that all other RL methods can only achieve better results than offline supervised learning in the RL4RS task, but become worse in ML1M and KuaiRand with larger effect-action spaces. This indicates that our HAC framework can better capture the patterns for larger action spaces.

**RL-baselines:** Among RL solutions, A2C always has the worst performance and appears to be the most unstable learning framework, but the gap between A2C and DDPG tends to be smaller in the datasets with larger action spaces (ML1M and KuaiRand), and A2C even achieves better performance than DDPG in KuaiRand, which has the largest action space. Since A2C directly optimizes the effect-action and DDPG uses the hyper-action, this reduced gap may indicate that it becomes harder to learn consistent and accurate hyper-action representations in larger effect-action spaces. This may also prove that ensuring consistency between the two action spaces is critical to achieving effective RL.
TD3 slightly improves the performance over DDPG but still behaves in a similar way.

**Action space regularization:** In addition to our method, the DDPG-RA method also addresses the alignment of action spaces and has the behavior closest to ours. In contrast to HAC, it does not regulate the hyper-actions; instead, it aligns the effect-action space, which is not directly used in guiding the actor. Additionally, DDPG-RA uses the hyper-action rather than the effect-action when learning critics, so it achieves better results than A2C and DDPG, but does not surpass our method or even the supervised methods.

\begin{table} \begin{tabular}{c|c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{RL4RS} & \multicolumn{2}{c}{ML1M} & \multicolumn{2}{c}{KuaiRand} \\ & Total Reward & Depth & Total Reward & Depth & Total Reward & Depth \\ \hline Offline SL & 6.721 & 8.163 & 18.559 & 18.717 & 14.394 & 14.982 \\ \hline Online SL & 9.502 & 10.571 & 18.629 & 18.780 & 13.456 & 14.147 \\ A2C & 7.789 & 9.140 & 16.158 & 16.556 & 12.460 & 13.250 \\ DDPG & 8.337 & 9.588 & 17.205 & 17.508 & 11.394 & 12.313 \\ TD3 & 8.553 & 9.791 & 17.545 & 17.814 & 11.777 & 12.664 \\ PG-RA & 8.561 & 9.728 & 18.466 & 18.633 & 10.859 & 11.814 \\ HAC & **10.059** & **11.102** & **18.863** & **18.988** & **14.789** & **15.335** \\ \hline \hline \end{tabular} \end{table} Table 2. Online Performance. The best performances are in bold and the second best are underlined.

### Learning HAC

**Supervision and stable learning:** To further illustrate the training behavior of the HAC model, we plot the learning curves of HAC in Figure 4, where we compare a "HAC w/o supervision" variant that receives no gradient from the supervised loss Eq.(10). All methods saturate on the performance metric, as shown in the panel "Total Reward", and can successfully reduce all four loss functions mentioned in section 2.6. Note that "HAC w/o supervision" can still successfully reduce the BCE loss on each item, indicating the effectiveness of RL based on the reward that aggregates the item-wise signals. We can also see that including the supervision helps boost the model performance and reduces the actor loss \(\mathcal{L}_{\text{QMax}}\), which helps explore better actions. Note that HAC has a lower critic loss \(\mathcal{L}_{\text{TD}}\) than "HAC w/o supervision", which indicates a more stable learning process. We can also verify this by observing the variance of the total reward across users, since a higher reward variance indicates that the learned policy is less capable of providing good actions under different user states. As shown in Figure 5 for the KuaiRand environment, increasing the importance of the supervised loss helps improve the recommendation performance and reduce the variance. Yet, assigning the supervision module a learning rate that is too large (0.001 in KuaiRand) may over-exploit the user's intention and harm the performance. We observe similar patterns in the other two datasets and provide the results in Appendix B.

Figure 4. Training curves of HAC without supervision on RL4RS. X-axis corresponds to the number of iterations. The four losses are presented in log scales.

Figure 5. Effect of supervision loss on KuaiRand. X-axis represents the learning rate of Eq.(10).
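To make the stability discussion concrete, here is a small sketch of how the session-level metrics could be aggregated over evaluation users, with Reward Variance computed across sessions (i.e., across user states) rather than across random seeds; `env.rollout` refers to the simulator sketch given earlier, and the function is an assumed protocol rather than the paper's exact evaluation code.

```python
import numpy as np

def evaluate_policy(policy, env, initial_states):
    # Roll out one session per evaluation user and aggregate the
    # long-term metrics: Total Reward, Depth, and Reward Variance.
    totals, depths = [], []
    for s0 in initial_states:
        total_reward, depth = env.rollout(policy, s0)
        totals.append(total_reward)
        depths.append(depth)
    return {
        "total_reward": float(np.mean(totals)),
        "depth": float(np.mean(depths)),
        "reward_variance": float(np.var(totals)),  # across user sessions, not seeds
    }
```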
**Hyper-action regulation:** We conduct the same ablation experiments when comparing different magnitudes of hyper-action alignment for the loss Eq.(9) and plot the learning curves of HAC in Figure 6. In general, including the hyper-action alignment (\(\lambda_{h}=0.1\) and \(\lambda_{h}=1\)) causes the HAC framework to learn more slowly than the HAC that only uses supervision (\(\lambda_{h}=0\)) in terms of the convergence of total reward and reward variance. In contrast, the more consistent action space helps the model to learn and explore better action policies. Besides, increasing the importance of this alignment module results in a worse \(\mathcal{L}_{\text{QMax}}\) and a better \(\mathcal{L}_{\text{TD}}\), indicating that the critic is more accurate in capturing the quality of actions. Note that \(\lambda_{h}=1\) is more stable than \(\lambda_{h}=0.1\) but may be less effective in exploration. To better verify this, we illustrate the evaluation result in Figure 7, where the results exhibit a best point at \(\lambda_{h}=0.1\), for which the recommendation reward is higher and more stable.

### Ablation Study

**Model components:** To better investigate how the different learning modules in our framework work, we compare several alternatives of our method: 1) DDPG: HAC without supervision and action alignment, which uses the hyper-action space for both actor learning and critic learning; 2) HAC w/o \(\mathcal{L}_{\text{BCE}}\): excluding the supervision of HAC; 3) HAC w/o \(\mathcal{L}_{\text{Hyper}}\): excluding the hyper-action alignment module. We summarize the results in Figure 8 for the ML1M dataset. We can see that excluding either the supervision or the alignment module reduces the performance and increases the reward variance. This indicates that both modules help improve the model's accuracy and learning stability. Similar results are also observed in the other datasets, as presented in Appendix B. Note that DDPG achieves roughly the same performance as HAC w/o \(\mathcal{L}_{\text{BCE}}\); this indicates that using separate action spaces for actor learning and critic learning as in HAC may reduce the performance, and simply including an action alignment module would not fill the gap. In this sense, HAC needs both hyper-action alignment and supervision. This also means that the inconsistency between the two action spaces is smaller than the inconsistency between the aggregated reward and the item-wise user response signals.

**Exploration on different action spaces:** In terms of the exploration of the HAC model, we can apply exploration on both the hyper-action space and the effect-action space. To compare the effect of different magnitudes of hyper-action exploration, we change the variance of the Gaussian noise during learning, fix all other hyperparameters of HAC, and present the results in Figure 9. The comparison of recommendation performance shows an optimal point in the middle of the search space, indicating that one should carefully design the exploration so that the sampling variance is neither too small nor too large. As we have discussed in section 2.7, a small variance may limit the exploration of new actions and a large variance may induce unstable exploration that hardly converges. And empirically, we find that sampling on effect-actions is less effective than exploration on hyper-actions. As the example in Figure 10 shows, applying top-\(k\) greedy selection achieves the best result and adding categorical sampling would make the learned policy sub-optimal.

Figure 6. Training curves of HAC on RL4RS. The reward variance corresponds to the variance of total reward across different users. The four losses are presented in log scales. \(\lambda_{h}\) represents the magnitude of hyper-action alignment. \(\lambda_{h}=0\) does not include this alignment loss so has \(\mathcal{L}_{\text{Hyper}}=0\).

Figure 7. Effect of hyper-action alignment on ML1M. X-axis represents the magnitude of Eq.(9), i.e., \(\lambda_{h}\) in Figure 6.

Figure 8. Effect of model components on ML1M.
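The two exploration routes compared in Figures 9 and 10 can be sketched as follows; the noise scale, the softmax-style categorical sampling, and the greedy-rate mixing parameter are illustrative assumptions rather than the exact sampling scheme used in the experiments.

```python
import numpy as np

def explore_hyper_action(z, sigma=0.1):
    # Hyper-action exploration: perturb the latent action with Gaussian noise
    # whose variance sigma**2 plays the role of sigma_Z^2 in section 2.7.
    return z + np.random.normal(0.0, sigma, size=z.shape)

def select_slate(scores, k, greedy_rate=1.0):
    # Effect-action exploration: with probability greedy_rate take the top-k
    # items by ranking score, otherwise sample k items categorically.
    if np.random.rand() < greedy_rate:
        return np.argsort(-scores)[:k]
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return np.random.choice(len(scores), size=k, replace=False, p=probs)
```

Setting `greedy_rate=1.0` with a moderate `sigma` corresponds to the configuration the text identifies as most effective: exploration happens only in the hyper-action space, while the slate itself is always the greedy top-\(k\) under the perturbed scores.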
## 4. Related Work

### Sequential Recommendation and Session-based Recommendation

The session-based recommendation (SBR) problem is closely related to the sequential recommendation (SR) task (Wang et al., 2017). An SR task aims to learn a policy that can infer the next recommendation (item or list) based on the given user's historical interactions. In comparison, SBR considers the existence of the beginning and termination of an interaction session. We see our setting as one that intersects the two notions: by setting the list size to 1, it would be almost identical to SR except for the reward setup; yet, our goal is to optimize the entire future reward of the user session, which is closer to the next-partial SBR as defined in (Wang et al., 2017). Under both problem settings, the most adopted solution uses the Markov Chain assumption to model the dynamic transitions of the user-system interactions. The main challenge of this line of work is how to construct a representative user state based on the long-term history. Early solutions to the recommendation problem adopted collaborative filtering techniques (Kang et al., 2016; Wang et al., 2017; Wang et al., 2018; Wang et al., 2018), and later approaches embrace deep neural networks like Recurrent Networks (Wang et al., 2017), Convolution Networks (Wang et al., 2018), Memory Networks (Kang et al., 2016), Self-Attention (Wang et al., 2017; Wang et al., 2018), GCNs (Chen et al., 2018; Wang et al., 2018), Machine Reasoning (Chen et al., 2018; Wang et al., 2018; Wang et al., 2018) and Foundation Models (Kang et al., 2016) to improve the model expressiveness, so that it can better capture the abundant and complex information from user/item features and interaction histories. The key insight behind all these methods is to accurately encode the long-range histories, but this paradigm does not optimize the long-term user rewards.

### Reinforcement Learning in Recommendation

The RL-based RS (Chen et al., 2018; Wang et al., 2018; Wang et al., 2018) also follows the MDP formulation, and it emphasizes the importance of optimizing the cumulative reward that represents long-term user satisfaction. In the simplest setting, tabular-based methods (Wang et al., 2018) are used to optimize an evaluation table but only work for a fixed set of state-action pairs. Then value-based methods (Wang et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018) and policy gradient methods (Chen et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018) were proposed to learn to evaluate and optimize the action policy based on the sampled long-term reward. The actor-critic paradigm (Wang et al., 2017; Wang et al., 2018; Wang et al., 2018) integrates these two methods by simultaneously learning an action evaluator and an action generator.
The main challenges of RL-based RS consist of the large combinatorial state/action spaces (Kang et al., 2016; Wang et al., 2018; Wang et al., 2018), regulating the unstable learning behavior (Chen et al., 2018; Wang et al., 2018), and finding an optimal reward function for heterogeneous user behaviors (Chen et al., 2018; Wang et al., 2018). Our work focuses on action space representation learning and the stability of RL, and we consider PG-RA (Chen et al., 2018) the closest work that also emphasizes the learning of latent action representations. As we have mentioned in section 3.1, PG-RA aims to transfer knowledge through a shared scoring function and applies action alignment on the effect-action space, which is not well suited for the latent-factor decomposition of users and items. We have empirically verified its inferior performance in section 3.2. Additionally, we have illustrated the effectiveness of our method through evaluation on different simulated environments, but we remind readers that there is still a chance that the actual online environment is a more complex and dynamic mechanism. In this sense, there are works focusing on building more realistic online user environments for RS (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018), which could complement our work in practice.

## 5. Conclusion

We proposed a practical actor-critic learning framework that optimizes a recommendation policy in both the discrete effect-action space and the vectorized hyper-action space. Empirical results verified the effectiveness and stability of our proposed method in online environments built on public datasets. The main purpose of our design is its generalization ability, which can accommodate various existing solution frameworks including Actor-Critic, DDPG, as well as supervised learning. We consider the notion of hyper-action a preliminary attempt at applying more sophisticated scoring functions than a simple dot product.

###### Acknowledgements. This research was partially supported by APRC\(-\)CityU New Research Initiatives (No.9610565, Start-up Grant for New Faculty of City University of Hong Kong), SIR\(-\)CityU Strategic Interdisciplinary Research Grant (No.7020046, No.7020074), and CityU\(-\)HKIDS Early Career Research Grant (No.9360163).

Figure 9. Effect of HAC's hyper-action exploration magnitude on RL4RS dataset. X-axis corresponds to the Gaussian noise variance for hyper-actions in the HAC model.

Figure 10. HAC's effect-action exploration magnitude on RL4RS dataset. X-axis corresponds to the rate of greedy top-k selection rather than categorical sampling for effect-actions in the HAC model, and we set hyper-action noise to 0.
2309.01154
A Survey on What Developers Think About Testing
Software is infamous for its poor quality and frequent occurrence of bugs. While there is no doubt that thorough testing is an appropriate answer to ensure sufficient quality, the poor state of software generally suggests that developers may not always engage as thoroughly with testing as they should. This observation aligns with the prevailing belief that developers simply do not like writing tests. In order to determine the truth of this belief, we conducted a comprehensive survey with 21 questions aimed at (1) assessing developers' current engagement with testing and (2) identifying factors influencing their inclination toward testing; that is, whether they would actually like to test more but are inhibited by their work environment, or whether they would really prefer to test even less if given the choice. Drawing on 284 responses from professional software developers, we uncover reasons that positively and negatively impact developers' motivation to test. Notably, reasons for motivation to write more tests encompass not only a general pursuit of software quality but also personal satisfaction. However, developers nevertheless perceive testing as mundane and tend to prioritize other tasks. One approach emerging from the responses to mitigate these negative factors is by providing better recognition for developers' testing efforts.
Philipp Straubinger, Gordon Fraser
2023-09-03T12:18:41Z
http://arxiv.org/abs/2309.01154v1
# A Survey on What Developers Think About Testing ###### Abstract Software is infamous for its poor quality and frequent occurrence of bugs. While there is no doubt that thorough testing is an appropriate answer to ensure sufficient quality, the poor state of software generally suggests that developers may not always engage as thoroughly with testing as they should. This observation aligns with the prevailing belief that developers simply do not like writing tests. In order to determine the truth of this belief, we conducted a comprehensive survey with 21 questions aimed at (1) assessing developers' current engagement with testing and (2) identifying factors influencing their inclination toward testing; that is, whether they would actually like to test more but are inhibited by their work environment, or whether they would really prefer to test welless if given the choice. Drawing on 284 responses from professional software developers, we uncover reasons that positively and negatively impact developers' motivation to test. Notably, reasons for motivation to write more tests encompass not only a general pursuit of software quality but also personal satisfaction. However, developers nevertheless perceive testing as mundane and tend to prioritize other tasks. One approach emerging from the responses to mitigate these negative factors is by providing better recognition for developers' testing efforts. Motivation, Survey, Software Testing, Software Engineering, Empirical Study ## I Introduction Testing plays a pivotal role in the software development process, serving as the primary technique to ensure or improve software quality. If testing is not given due consideration from the outset, the resulting software applications can suffer from fundamental faults and failures, leading to a bad user experience, crashes, incorrect computations, flawed data, and even project failure. This concern is especially pertinent given the sheer scale of today's large software projects with hundreds of thousands of lines of code in companies and open-source. Regrettably, the existence of software quality problems [1] suggests that testing is inadequately applied in practice. There has been much research and speculation about the reasons for this. Notably, it has been observed that the role of a "software tester" is not always viewed as desirable, as testers often receive limited recognition within companies, and testing tasks are commonly regarded as challenging, with outcomes that are much less tangible than those achieved when writing code [2, 3, 4]. Testing should thus not only be the responsibility of dedicated testers, but it would be preferable for it to also be an integral part of the work of software developers, who should not only implement but also test features [5, 6]. Unfortunately, evidence suggests that developers tend to engage much less with testing than one might hope for [7]. The lack of developer testing raises the question: Are developers hindered by their work environment, such as time constraints or inadequate testability, do they deliberately choose not to test because they do not enjoy doing it, or do they maybe not have the skills and training to perform better testing? To shed light on these questions we conducted a survey targeting developers in practice, aiming to understand their testing habits and aspirations. 
Using data collected from 284 respondents to a comprehensive survey comprising 21 questions, we first aim to understand how developers perceive their engagement with testing, the nature of their testing efforts, and whether they feel these efforts receive appropriate recognition. Consequently, our first research question is as follows: _RQ 1: How do developers engage with testing in practice?_ Our findings indicate that developers often feel they have insufficient time for testing, resulting in inadequate testing efforts. Furthermore, testability emerges as a practical challenge, leading to intricate tests that are cumbersome to create. Despite these factors, developers express satisfaction with the adequacy of their testing efforts in over half of the projects. To gain a deeper understanding of the role of developers' motivation in their testing practices, allowing us to differentiate whether they would actually like to test more but are inhibited by their work environment, or whether they would prefer to test even less, our second research question is: _RQ 2: How would developers like to engage with testing?_ The responses reveal that developers are split regarding their desire to test more or less: On one hand, the majority express either a reluctance to increase testing or a preference to invest less time in it, often citing testing as a mundane task or deeming it less important than other responsibilities. On the other hand, a substantial number of developers express a desire to engage in more testing if it were technically easier or, notably, if their efforts received greater recognition from management and peers. Some developers are satisfied with their current testing, either having reached a quality goal or believing that further effort would provide no benefits. In summary, the contributions of this paper are as follows: * We present a comprehensive survey aimed at assessing the prevailing and desired testing behaviors of developers. * We quantitatively and qualitatively evaluate the data, providing insights into developers' thoughts and aspirations. * With the evaluated data, we can strengthen existing research and provide new insights into current problems and misbehaviors in software testing. Overall, our results suggest that, while technical and organizational issues pose inhibitive factors, motivation significantly influences developers' testing practices. On the one hand, this reinforces ongoing research on automated testing and test generation to relieve developers of some of the challenges but on the other hand this suggests that future research needs to look into increasing the motivation of developers for testing. ## II Background ### _Developer Testing_ Testing is usually done either by external testers (e.g., quality assurance departments or companies), or by developers while coding. Both are established practices in companies [8], but in this paper, we focus on developers. Depending on the current state of development of an application, developers may apply different approaches for testing, and these may lead to different challenges, perceptions, and motivations. During the early stages of the product or of individual features, unit tests target the smallest components of the software (e.g., methods or classes). Developers may also write automated integration tests, checking the interactions of integrated components and their interfaces. 
Finally, automated end-to-end tests are a common means to check the functionality and satisfaction of requirements at the system level [9]. Independently of the type of tests, a substantial gap between writing production code and test code has been observed [7, 10], with developers spending about 75 % of their time for writing code and only about 25 % on testing. Interestingly, developers tend to overestimate the time used for writing tests, often claiming a 50 to 50 ratio, while in truth some developers do not test at all [7, 10], sometimes resulting in entire projects without automatically executed tests. In this paper, we aim to understand whether this is caused by technical or organizational challenges, or rather just a lack of motivation. ### _Motivation and Engagement in Software Engineering_ We aim to investigate factors of developer motivation while testing, but the term "motivation" is overloaded, and we thus need to consider several different dimensions in our analysis. In everyday language, the terms motivation and satisfaction are used synonymously, but according to the new theory of work motivation and job satisfaction of software engineers [11, 12] they are not the same: Motivation needs to be awakened before the work starts, while satisfaction is caused by results. Motivation and satisfaction are connected because being satisfied by the previous task can motivate the developer for the next task. Motivation can be intrinsic, referring to the inner willingness to do an activity for personal satisfaction, or extrinsic, referring to a separable outcome that comes with or after completing a task, such as recognition for a person's work [13]. The term engagement in the context of software engineering is defined as commitment, hard-working, and interest in the person's current work, which might go beyond the simple motivation to satisfy a task [14] and instead invest extra effort exceeding what is required [15, 16]. In our study, we focus on motivation as the initial factor for developers, as both satisfaction and engagement can only be achieved once they are motivated to test. While factors influencing motivation have been observed for software engineering in general [11, 12], it is not known yet whether these also apply to software testing. ## III Survey Design To answer RQ1 on the current and RQ2 on the desired engagement with testing (see Section I), we designed a survey based on the guidelines by Linaker et al. [17]. ### _Questions_ The survey consists of a total of 21 mandatory questions and one optional one, divided into four categories. The questions are either single or multiple-choice questions, with or without an 'others' free-text option, Likert scale choices, and stand-alone free-text questions. Designing a survey involves a trade-off between asking many questions and the resulting difficulty in acquiring survey responses and their costs. We chose and refined questions through an iterative process, where we based them on our research questions (RQs) and refined them through multiple steps within our research group and a pilot study involving different researchers. We tried to keep textual explanations brief and clear to ensure valid responses. This resulted in the revised questions shown in Table I and their answer options included in the artifacts (Section VI). The survey starts with demographic questions. Since we use Prolific (cf. 
Section III-B1), an established provider of survey respondents, we do not require questions about information already provided by Prolific, in particular age, country of residence, employment status, sex, and student status. Beyond this, we ask the questions with the ID "UD" listed in Table I regarding the participant's degree (UD1), experience (UD2), number of employees (UD3), and role in the company (UD4). The second category of the questionnaire consists of questions about the software projects the respondents work in (Table I, IDs with PD). In particular, we query context information such as the project size (PD2, PD3), the working domain (PD1), and quality metrics used in the participant's projects (PD4-6). Together, the demographic and project questions provide context for the testing-related questions. The third category asks about the current state and efforts for testing in the respondents' projects to answer RQ1. In particular, we ask developers about their daily test behavior (CS1-3), recognition (CS4), and struggles while testing (CS5-7 in Table I). These questions are of special importance for RQ2 likewise, since they serve as a baseline to see the deviation between desires and current testing practices in companies. In order to learn how the participants would like to test (RQ2) and what they want to change (DS1-4), questions with ID "DS" in Table I enable direct comparison to the previous set of questions (IDs with UD). An additional optional question at the end asks whether participants have any other information to share about testing in their company that has not been sufficiently covered by the previous questions (AE1). ### _Survey Tools_ We implemented the survey using _Profilic_ to recruit participants and _SoSci Survey_ to host the survey itself. #### Iv-B1 Prolific Prolific1 is an online platform to recruit participants for different kinds of studies like interviews and surveys. Together with Mechanical Turk (MTurk)2, Prolific is one of the biggest recruitment platforms for participants. Prolific has some clear advantages to MTurk, since there are clear rules for both researchers and participants. All involved parties know about payments, obligations, and rights, and researchers also have better insights into the pool of possible participants [18]. MTurk provides more participants (over 250,000), but most of them are located in the US [19] while the more than 150,000 participants of Prolific are better distributed globally (Section IV-A). There is also empirical evidence that Prolific provides data of higher quality with less cheating and higher attention rates than MTurk [20]. Footnote 1: [https://www.prolific.co/](https://www.prolific.co/) Recruiting participants with Prolific is not free, since both the participants and Prolific itself require payment. Our survey is set for an estimated completion time of ten minutes with an hourly rate of 10.50 \(\upmu\), which means every respondent received 1.75 \(\upmu\) for completing the survey. The advantages of participants acquired by Prolific are that only preselected participants are permitted to take part in the survey and that they are motivated by their payment and approval score, which influences their future commissioning. Participants receive payment only after approval of their answers by the client. #### Iv-B2 SoSci Survey SoSci Survey3 is a powerful online platform to compose questionnaires with flexibility and individual design. 
SoSci Survey was designed for university research in 2003, has been under constant development ever since, and is free to use for researchers. The platform provides an easy, yet powerful editor for different kinds of questions and the collected data can be exported in various ways. Footnote 2: [https://www.mturk.com/](https://www.mturk.com/) ### _Participants_ The target population of our survey consists of software developers since we want to understand their current and desired testing behavior. Prolific allows to pre-screen users based on demographic information as well as their self-declared expertise. We excluded users who do not work full- or part-time, and selected 17 terms related to software engineering (e.g., debugging, version control) out of the hundreds of possible terms provided by Prolific to filter by relevant expertise. To increase trust in the participants' answers, we only accepted participants with an approval rate of 100 %. This rate is maintained by Prolific to keep track of how satisfied study conductors are with their respondents. After pre-screening, a pool of 9,156 eligible participants who had been active on Prolific in the last 90 days remained. Since we are only interested in (1) professional software developers who (2) currently work on a software project and (3) write code or tests, we used a pre-study (Table I, IDs with PS) to filter the possible participants further. As users also receive payment for completing the pre-study, we requested 600 responses to the pre-study from Prolific, of which 284 answered all three pre-study questions in the affirmative. The final data is based on the responses of these 284 participants to the main survey, all of whom answered it completely. ### _Analysis of Responses_ To analyze open-ended questions (PD6, CS5, DS2, AE1), we used _qualitative content analysis_[21]. For each free-text question, we defined an empty set of codes/categories. While going through the answers manually, we added new categories whenever we encountered a new idea or perspective. If more than one participant mentioned the same idea, we used the same code for both (Table II). Each code means that the participants mention this category in their answers. The coding was independently done by two researchers, who discussed and resolved all points of disagreement. A summary of the analysis is given in Sections IV-A2 (Requirement for quality), IV-B (Tested well enough), and IV-C (More or less time). The questions with ID "CS" (Table I) are used to answer RQ1, while the one with ID "DS" is for RQ2. Closed-ended questions are analyzed by visualizing and bringing them into context with the research questions. We also measured the Spearman rank correlation matrix [22] as well as the multiple linear regression [23] and ordinal logistic regression [24] matrices for all our variables to find dependent and significant variables. ### _Threats to Validity_ Threats to _internal validity_ arise since the participants are distributed across several countries, the questions asked may be misunderstood or misinterpreted because of local differences in the language or because English is not their first language. This risk is reduced by our pilot study through which ambiguities in the questions were removed. The participants received the remuneration regardless of the time needed to finish the questionnaire, which may impact the quality of responses; however, respondents receive payment only if their answers are approved. 
Possible inconsistencies in the categorization of free-text responses were addressed by two researchers independently coding and resolving disagreements. There may also be threats to _external validity_ as the participants may not be distributed globally well enough to generalize the results, and the sampling algorithm of Prolific may be biased. Answers may not be from the perspective of a developer but from, e.g., a manager or consultant, since they may take different roles within their teams. However, since all participants stated they are currently coding or testing in a project, we assume they have insights of a developer. Threats to _construct validity_ arise from the design of the survey. The questions may not be specific enough to measure relationships between them and answer options may be missing. Despite the evaluation of the questionnaire in a pilot study, some questions and possible answers can nevertheless be misinterpreted by the participants because of missing explanations about tools and metrics asked. There may also be topics related to the research questions (e.g., whether the participants formal training or use test automation tools) that were not asked in the survey but could have given more insights into the subject, which is why an optional answer in the end was added (AE1). ## IV Survey Results ### _Demographics_ The survey questions provide demographic information at the level of individual participants as well as their projects. #### Iv-A1 Participants The participants in our study exhibit a diverse range of ages, spanning from 18 to 52 years old. The majority of participants fall within the age range of 22 to 31, with a decreasing number of participants as age increases. A plausible explanation for this trend could be that as individuals progress in their software development careers, they may transition into managerial roles with reduced involvement in coding activities. This trend aligns with the Developer Survey conducted by Stack Overflow [25], where the majority of developers were between 18 and 34 years old, with few participants exceeding the age of 44. Approximately 31 % of our participants are students, which is expected considering a significant portion of our participants are under 30 years old. Interestingly, only 58 % of the students reported working part-time in a company, while the remaining 42 % work full-time while simultaneously pursuing their studies. The majority of our participants have already graduated with a university degree, with only 42 individuals reporting employment in a company with a High School diploma or equivalent (Table III). With 80 % of the participants possessing a Bachelor's or Master's degree, the respondents slightly exceed the average qualification reported by the Developer Survey, where about 75 % reported a similar degree [25]. The diverse range of experience levels from less than one year (9.5 %) to more than ten years (21.1 %) allows for a comprehensive exploration of different perspectives on testing. Additionally, representatives from various companies and company sizes (Table III) providing insights into a wide array of development and testing processes. Current research [26, 27, 28, 29] and practice [30, 31] suggest that testing already needs to be done during the development phase, which implies that developers should at least perform unit testing to ensure the software's quality [32]. 
Consequently, we targeted developers specifically and not testers in this work, and indeed only 15 of our participants stated in their answers that there is a dedicated QA department rather than developer testing [33] in their company. While the majority of participants (62 %) identify their current role as developers, our dataset also includes consultants, testers, and managers. While these are not strictly software developers, we accepted them nevertheless with our pre-study criteria because of their active involvement in software development. The participants come from diverse geographical locations worldwide, with a particular emphasis on Europe and North America. The distribution of participants is detailed in Table III, which also includes additional demographic information. Interestingly, over 25 % of the participants are from Portugal, but the study encompasses participants from countries outside of Europe, such as South Africa, New Zealand, and Chile, contributing to a globally representative sample. Our study includes data from 20 % female participants. While this figure falls below the previously reported average of 25 % women in computer science [34], the same study found a general decline in female computer science graduates by 8 % to 17 %. This suggests that our study achieves an average or potentially higher representation of female participants following the reported decline [34].

#### Iv-A2 Projects

The global distribution of participants results in a wide variety of software projects spanning different domains. Table IV provides an overview of project demographics and highlights that the majority of projects are concentrated in the communication and finance domains. However, numerous other domains, such as education, robotics, fashion, and real estate, are also represented in the dataset. A significant proportion of projects (70 %) are realized by teams with fewer than ten members, while only 6 % of the projects involve more than 50 staff members. This suggests that many projects might have adopted agile methodologies using small teams [35].

### _RQ 1: How do developers engage with testing in practice?_

#### Iv-B1 How much time do developers invest in testing?

The time invested in testing activities among participants (CS1) reveals that a majority (more than 80 %) dedicate relatively limited time to testing (less than 40 %) during their work hours (Fig. 1). Additionally, a large portion (more than 44 %) allocates even less time to testing (less than 20 %), which will be further analyzed and investigated in Section IV-C.

#### Iii-B2 What kind of tests do developers write?

The respondents apply a variety of testing approaches (CS2). Table V illustrates that both unit and manual testing are carried out by over half of the participants (51.8 %), making them the primary test types, followed by integration tests with 42.6 %. Other types of tests, such as smoke or sanity tests, are less commonly applied, each accounting for less than 10 %. The reasons for this disparity could be multifaceted, such as a lack of training or the perception that functional testing is more important.

#### Iii-B3 How much effort does it take to write tests?

The level of effort required for writing tests in participants' projects is illustrated in Fig. 2 (CS6). The data reveals that only 13 % of participants consider testing to be easy, while more than half find testing to be challenging, indicating that moderate effort is involved.
On the other hand, 35 % face significant issues in testing, as it demands great effort to test their project. The complexity of tests is influenced by the effort required (CS7). Among the participants, 44 % primarily write simpler tests, maybe aiming to achieve quality goals with minimal effort (Fig. 3). Roughly one-third write both complex and simple tests, while only 21 % develop complex tests to ensure quality and address edge cases (based on the answers to CS5). Figure 4 highlights the relationship between the complexity of tests and the effort expended to write them, indicating that as the complexity of tests increases, the effort to write them increases, too. An intuitive explanation would be that poor design amplifies the effort required for writing tests, resulting in more intricate tests and discouraged developers.

Fig. 1: Proportion of the working time that is used for testing with full- and part-time jobs by the participants (CS1)

Fig. 2: Effort needed for testing (CS6)

Fig. 3: Complexity of the tests written by participants (CS7)

Fig. 4: Complexity and effort combined from questions CS6 and CS7

#### Iii-B4 Do developers follow certain quality metrics and requirements?

Code coverage is a widely utilized metric for quality [36] and a first indicator of a project's testing state. Nevertheless, only 45 % of the participants' projects use code coverage (PD4). In certain cases, answers indicate that measuring coverage may be infeasible, such as with embedded software, or too computationally expensive [37]. Overall, developers may lack awareness of the true quality of their projects due to insufficient or inadequate metrics, which we believe can influence their motivation to test. Furthermore, only 27 % of participants' projects have specific quality requirements or goals, as indicated by question PD6 (Table I): 31 participants mention code coverage as their quality goal, while others emphasize the need for minimal defects or the basic functionality of the software, which may not be easily measurable. Some participants rely on management approval or feedback from quality assurance departments, while others prioritize customer satisfaction or peer reviews. Two projects even use mutation scores as their testing goal. Different forms of reviews, evaluations by superiors, or input from customers are used, too. In certain projects, timely completion plays a crucial role in meeting deadlines, even at the expense of quality. Projects that undergo requirements engineering in the first place rely on the fulfillment of those requirements or adherence to a test plan as their indicators of quality. Moreover, the lack of specific quality goals may result in misguided assumptions about code quality and the overall necessity of testing, since the developers' perception of quality is then based on personal feeling rather than a quality metric.

#### Iii-B5 Do developers believe their projects are sufficiently tested?

Approximately 60 % of the developers expressed confidence in the level of testing conducted in their projects (CS5). This percentage is notably high considering the limited amount of time dedicated to testing by the participants. However, many developers also highlighted various factors that contribute to perceived inadequacies in testing. One common concern raised by developers in the free-text field of CS5 is the undervaluation of testing within their companies. Developers often assign higher priority to other tasks and allocate their time accordingly, neglecting comprehensive testing.
Another factor mentioned to influence the perceived lack of testing is the absence of well-defined test processes, which is thought to lead to issues such as poor documentation, insufficient communication between developers and testers, and overall inadequate testing practices. Nominal client involvement was cited as a reason for minimal engagement with testing or a focus on new features because there is no requirement to write tests. Additionally, some developers believe their software lacks proper testing because evidence shows that the software still contains bugs, unresolved defects, or uses artificial data for testing. Projects with untested components or pending evaluation of critical edge cases are also cited as requiring more testing. The lack of time and resources emerges from CS5 as a recurring constraint that hampers proper testing. Many participants express a desire to engage in testing activities but are constrained by factors beyond their control. Overall, developers' perceptions of sufficient testing vary and are influenced by factors such as prioritization, development process maturity, client engagement, and resource limitations. #### Iii-B6 How is testing recognized? Approximately one-third of developers receive an acknowledgment from both management and colleagues (Fig. 6), demonstrating that testing is valued in their respective companies and projects while notable 25 % of developers receive monetary rewards for their testing efforts in addition (CS4). Furthermore, different types of external recognition, or the lack thereof, are stated by the participants in question AE1. One participant stated that more code coverage in the project proved to lead to fewer bugs in the remaining project and that a "reputation of writing 'bug-free' software (or as close as possible)" (P65) was given to the participant for it. Enough testing does not only improve quality but also shows that the software is working as intended efficiently and the risk of failing the project is minimized (P193). Some describe that their company understands the importance of software testing (e.g., P33), but there are many more who do not (e.g., P95). Nearly half of the participants (around 50 %) claim to be intrinsically motivated and test for personal satisfaction, even though only a minority of the developers see testing as a crucial part of the development process and are engaged in testing and enjoy it. For example, one participant compared testing with "a fun puzzle to figure out" (P65) and another is satisfied with the little effort required for testing because their software is designed for it (P11). Discovering hidden bugs and issues that would have gone into production without sufficient testing are also mentioned as motivating (P66), as well as the resulting time savings (P48). Thoroughly testing the participant's software gives the developers peace of mind when they are sure "they did not mess things up" (P61). A good test suite also improves the ability to refactor the code base when the application slowly evolves over the years, because bugs introduced during maintenance can be found by the existing test suite (P65). It also gives the developers personal satisfaction when their code works smoothly and users can work without frustration (P65). We believe that this intrinsic motivation serves as a driving force for writing tests and investing effort beyond what is formally required. 
On the other hand, it is concerning that approximately one-third of developers do not receive any form of external recognition or have no intrinsic motivation for testing (e.g., P95). This lack of acknowledgment and motivation can potentially result in a reduced inclination to write tests, ultimately impacting the overall software quality. In addition, the majority of developers (58 %, Fig. 5) prefer writing code over tests (CS3). We applied both correlation and regression analysis to the demographic variables about the participants (Table I, IDs with UD) and the projects (IDs with PD), and the variables of the current state of testing in their project (IDs with CS). Unfortunately, no significant or dependent variables could be found to explain our findings. Consequently, we cannot report whether the current state of testing is dependent on any of the participant or project demographics we considered.

Fig. 5: The preference to write code or tests (CS3)

Fig. 6: Recognition for testing (CS4)

**Summary (RQ 1):** Testing is not the favorite task of developers. The time invested in testing is limited and sometimes takes great effort. Even though many believe their projects are tested well enough, common demotivating factors are higher prioritized tasks, bad communication, and missing recognition.

### _RQ 2: How would developers actually like to engage with testing?_

The participants are split (DS1) between those who want to test more, those who do not, and those who are satisfied
The answers to question AE1 provide further insights into why developers have a desire to test more: They appreciate the positive outcomes of testing, such as improved quality (P239), as well as the opportunity to solve bugs (P67). In one particular company, the quality assurance team is larger than the development team (P17), which leads to management valuing the time, resources, and effort dedicated to testing (P17). Many participants are motivated to test because they have access to a testing environment that closely resembles the production environment (P11), there is effective communication between developers and testers (P215), and they receive feedback from end-users (P268). Some participants have already completed software testing courses provided by their company and see the benefits of their involvement in the project they are working on (P45). These participants express a need and desire for training in writing software tests and managing test suites and infrastructure, as they recognize the potential benefits (e.g., P45). One participant describes that they have a very good policy for testing, where everyone gets trained to write tests (P33). Some participants see testing as an integral task inside the development cycle (P27) where all code should be accompanied by unit tests (P194), part of a review system (P194), and all tests should run continuously (P218). #### V-A2 Why are developers satisfied with their current testing efforts? Almost one-quarter of the participants are satisfied with the amount of time currently invested in testing (Fig. 7). Of these 66 developers, only eight do not write tests and also do not want to. Participants mention a well-set-up and well-handled testing environment or that they are already in maintenance without developing new features. Three participants also point out that there is a dedicated tester in their team or even a separate quality assurance department for testing. There are also projects which currently meet the given quality goal or the developers can choose their used time for testing. Those answers are reflected in the free-text answers to DS2: * Amount of testing is adequate (33) * Another person/department responsible for testing (3) * Quality goal reached (2) * Well set-up testing process (1) * Project in maintenance (1) #### V-A3 Why do developers want to test less? Out of all respondents, 34 % want to test less than currently, with almost the same number in the categories -10 %, -25 %, and participants who do not want to test at all (DS1). Especially the percentage of participants who do not want to test at all with almost 10 % is high compared to the participants who want to test full-time at about 1 %, which means there is a great contrast between Fig. 8: More or less testing if better recognized (DS4) Fig. 7: Additional testing time the participants want to use (DS1) engagement to write tests and code. Many developers do not mind testing in general but do not want to test the code of their colleagues, because it is not well-written or hard to understand. **Suggestion 2.** Developers should always write unit tests for their own code to support other developers working with the code, who are unlikely to write these tests. In comparison with the currently performed tests in the participants' projects (Table V), the number of tests the participants want to perform decreases (DS3). 
Unit testing is least affected (3 %) which confirms that unit tests are the most popular kind of tests performed by developers, while manual testing is the least popular one decreasing more than 50 %. In the free-text responses, we identified several reasons why developers want to engage less with testing (DS2): * Like coding better and dislike testing (70) * No working testing process and infrastructure (30) * Do more important things and meet deadlines (21) * Testing as boring, frustrating, and repetitive activity (19) * Lack of testing skills (15) * Missing resources to test well enough (13) * No proper training in testing (4) * Project is tested well enough (1) * Project is too small or old to be tested (1) * Software will be replaced soon (1) * There are other developers in charge of testing (1) * Find manual testing exhausting and unnecessary (1) * Lack of communication (1) During the analysis of the answers to question AE1, we discovered more detailed insights into why developers want to test less. For example, manual tests are considered boring and time-wasting by the respondents (e.g., P49), especially when they have to be done over and over again when versions change (P27). Since many participants consider testing as boring, they think "it takes time away from more important issues" (P125) like implementing new features (P137). Because the participants do not want to test these new features either, more bugs may be introduced (P218). The development process in general is depicted as not thought through (P68) which causes the developers to de-prioritize testing (P140) and dislike the current process (P121). Another demotivating factor is a lack of infrastructure like continuous integration (P49), and that tests are badly implemented by colleagues and predecessors (P196), which causes disengagement from testing. **Suggestion 3.** Whenever feasible, software tests should be designed to be automatically executed and supported by a continuous integration infrastructure. Furthermore, managers are mentioned to be interested in testing only when "a major or critical bug that is disruptive to the business is found" (P12), only to get back to old habits when the crisis is averted. Everything the customer does not recognize or value with money will be neglected or de-prioritized (P252). Especially in start-ups or small companies, each employee has more than one task in the company (P86), which can cause the testing to be forgotten (P86). Others have to use "in-house developed tools" (P104) which have low usability or simply do not work as needed. This prevents developers from engaging with testing as they want to (P90). **Suggestion 4.** Awareness at the management level needs to be raised about the importance of testing activities to increase and maintain the quality of software. Lack of test data and scenarios is mentioned as a source of problems, for example when there is insufficient test data that is as close as possible to production data (P164). Since many of the participants only use fake, sample, or mocked data, their tests are not considered "completely trustworthy" (P4) to them. The testers also need a variance of test data to include edge cases in their test suites, but most of the projects and customers only provide generic test data without covering special cases (P164). 
Since the developers do not know the interactions of the end-users with the application, either because the product is not released yet (P42) or because user data is not collected (P73), they cannot create test scenarios that are as close as possible to reality. The lack of these scenarios can lead to users breaking the system even if it has been tested well, and developers then receive the blame from management (P42). In addition, the lack of important information and skills (P26) makes it hard to write good and sustainable test suites (P26), and developers get frustrated by all the failures and changes because of missing data and communication (P138). **Suggestion 5.** Requirements, common scenarios and problems should be communicated among all stakeholders, including developers, to decrease the time for troubleshooting. Software in some domains seems particularly difficult to test, like games (P74), simulators (P74), embedded devices (P200), or software with hardware requirements (in particular when the required hardware is not available, P128). There are also old projects that contain many bugs and lack a testing infrastructure (P78), which are mostly replaced with new ones including an automated test suite and documentation (P78). **Suggestion 6.** Developers require sufficient test data and robust infrastructure for automated test executions to write tests and avoid demotivational factors. In addition to identifying various blockers, we also observed different attitudes among the developers. For instance, one participant believes that others who work fewer hours should be responsible for writing tests ("There are people working less than me so they should be doing the testing", P69). Some participants consider their projects too small to warrant testing at all(P3). On the other hand, one of the participants stated to prioritize simplicity and reliability in their code rather than creating a complex and robust product (P121). **Suggestion 7.** Developers should be made aware of the significance of testing, ensuring they understand the advantages of testing as well as the drawbacks of neglecting it. We also noticed that many developers are frustrated by the lack of time (P140) and training (P38) they are given. Not only are lack of testing skills in general mentioned, but some think it is especially hard to write tests for edge cases (P185). Some participants would like to have more or better training in writing good tests (e.g., P38) which may result in better test suites and an increase in the quality of their products. Another important issue is the lack of time for writing tests given by the management or the customer (P65). Some participants consider this as the main problem when the quality of the software does not meet the excepted one (P44). Others have enough time for testing and a test suite, but it takes too long to execute all of them (P187). **Suggestion 8.** Developers would benefit from software testing training programs to gain the necessary skills, knowledge and mindset to execute effective testing practices. We applied both correlation and regression analysis to the demographic variables about the participants (Table I, IDs with UD) and the projects (IDs with PD), and the variables of the current (IDs with CS) and desired state of testing in their project (IDs with DS). Unfortunately, also in this case no significant or dependent variables could be found to explain our findings. 
This means that we cannot make any statements about how the desired state of testing depends on the current state of testing or any of the participant or project demographics we asked for. **Summary (_RQ 2_):** Developers claim they would write more tests to ensure quality and increase personal satisfaction, but would like to receive better recognition for this. On the other hand, many developers want to test less than they currently do since they perceive testing as boring compared to other tasks. ## V Related Work Most of the time developers spend in their IDEs, they are reading, writing, and modifying their code. Only about 9 % of their time is used for writing and executing tests, which was found in a study with 40 students [26]. Since the study contained students, it may not generalize to companies, which is why almost 2,500 developers were monitored in companies for 2.5 years in a follow-up study, showing that developers spend a quarter of their work for testing--while believing they test half of their time [7]. Our respondents appear to be slightly more realistic, estimating their testing at 40 % or less. Developers would like to execute their tests more frequently but are handicapped by difficult testing frameworks as well as too little time given by management. This insight and that there is only a weak correlation between writing code and executing tests were found during a study with subsequent interviews [38]. While this study focused on executing rather than writing tests, their conclusions about testing frameworks, conditions, and lack of time are confirmed by our survey. A recent survey on unit testing practices [27] found that the developers are primarily driven by their conviction to test and management requirements, which matches our results. In addition, these survey results show that developers focus on writing, refactoring, and fixing code instead of writing tests because they do not enjoy testing, which can also be seen in the answers of our participants. In contrast to this work, we do not focus on unit tests, but on testing in general. According to a survey about thoughts on the career of a software tester [3], both students and professional testers think that testing is an important part of the software development cycle, but also tedious, frustrating and that they are missing developing software. Another study [4] also came to the same conclusion that missing recognition is one of the main reasons why testers are not satisfied with their job. In this work, we set our focus on the aspect of developer testing, but our findings are in agreement with these findings on software testers. Developers and testers in Brazilian companies have been reported to lack both training and knowledge in fundamental testing concepts [39]. Moreover, testing itself apparently is generally not viewed as an important and prioritized activity in the Brazilian companies involved in this study, which aligns with our global findings. ## VI Conclusions Insufficient testing is known to affect software quality. In order to better understand whether developers do not engage with testing because of technical, organizational, or motivational challenges, we conducted a survey with 284 participants. We find evidence that all three factors inhibit testing, and the details of how these factors affect testing can inform future research on how to improve testing practices. Of the many reasons that inhibit effective testing, we found evidence of a lack of intrinsic and extrinsic motivation in developers. 
Consequently, one potential opportunity to motivate developers to write more tests would be the application of external recognition systems, such as gamification, i.e., the inclusion of game elements to non-game related tools and contexts [40]. Such tools and approaches have been shown to provide benefits towards the motivation of developers in practice and educational scenarios to better engage with software testing [41]. To increase the generalization of the finding of this paper, it would be useful to replicate the survey with different audiences. To support replication, we provide a replication package containing all data and information: [https://doi.org/10.6084/m9.figshare.23212562](https://doi.org/10.6084/m9.figshare.23212562) ## Acknowledgments We would like to thank Marco Kuhrmann for his support while finding the right questions for our survey as well as all colleagues at the Chair of Software Engineering II at the University of Passau for their valuable input. This work is supported by the DFG under grant FR 2955/2-1, "QuestWare: Gamifying the Quest for Software Tests".
2307.06397
When Edge Meets FaaS: Opportunities and Challenges
The proliferation of edge devices and the rapid growth of IoT data have called forth the edge computing paradigm. Function-as-a-service (FaaS) is a promising computing paradigm to realize edge computing. This paper explores the feasibility and advantages of FaaS-based edge computing. It also studies the research challenges that should be addressed in the design of such systems, which are 1) the quick decomposing and recomposing of applications, 2) the trade-off between performance and isolation of sandbox mechanisms, and 3) distributed scheduling. The challenges are illustrated by evaluating existing FaaS-based edge platforms, AWS IoT Greengrass, and OpenFaaS.
Runyu Jin, Qirui Yang, Ming Zhao
2023-06-29T19:20:40Z
http://arxiv.org/abs/2307.06397v1
# When Edge Meets FaaS: Opportunities and Challenges ###### Abstract The proliferation of edge devices and the rapid growth of IoT data have called forth the edge computing paradigm. Function-as-a-service (FaaS) is a promising computing paradigm to realize edge computing. This paper explores the feasibility and advantages of FaaS-based edge computing. It also studies the research challenges that should be addressed in the design of such systems, which are 1) the quick decomposing and recomposing of applications, 2) the trade-off between performance and isolation of sandbox mechanisms, and 3) distributed scheduling. The challenges are illustrated by evaluating existing FaaS-based edge platforms, AWS IoT Greengrass, and OpenFaaS- Function-as-a-Service, Edge Computing, Cloud Computing, function ## I Introduction The proliferation of edge devices and the rapid growth of edge data challenged the traditional cloud computing. In early settings, edge devices produced data and sent it to cloud for computation and storage. The high costs of cloud services and the network latency caused by data transportation outweigh the high-performance cloud can provide. Edge computing emerged to solve the problem. In edge computing, edge devices and edge servers are considered as important resources that can help with computation and data storage; data generated on edge devices is processed locally to avoid network latency. This can greatly reduce application response time which is especially important for time-sensitive edge applications. Also, many edge devices generate data concerning user privacy. The data is more secure to be stored on local devices than on a shared cloud. A proper abstraction is key to enabling computing across the heterogeneous resources on the edge and across the multipletiers of resources from the edge to the cloud. In this paper, we argue that Function-as-a-Service (FaaS), an emerging cloud computing model can provide the much needed abstraction for edge computing. In FaaS, the unit of computation is a function. When a request is received, the platform starts an ephemeral execution sandbox for the function. When load increases, it quickly and dynamically increases the number of execution units. As soon as the function finishes, the sandboxes are terminated. To investigate the opportunities and challenges, we considered the existing commercial FaaS platforms for edge computing such as AWS IoT Greengrass [5]. It enables the edge workloads to be computed locally using AWS Lambda [6]. We also developed our own prototype based on OpenFaaS [4], which is an open-source FaaS platform that supports edge devices. Using these platforms we study the challenges of FaaS for edge computing. We conducted the experiments on local edge devices and a edge server. ## II Motivations We advocate FaaS as the abstraction for edge computing because it can greatly help reap the benefits of edge computing and address its challenges by providing the following. **Faster Responses:** First, FaaS improves edge applications' response time by doing more computation closer to data sources. Compared to complex applications, functions are small in size and resource-conserving. They can better fit into the limited resources on edge devices. Second, FaaS can better exploit the heterogeneous resources available at the edge to deliver faster responses. Applications often contain functions that have different workloads. Functions with I/O intensive workloads can gain optimal performance running on flash storage. 
Functions with computation intensive workloads can boost the performance using hardware accelerators such as GPUs and FPGAs. Third, FaaS can reuse functions for different applications to save function initialization time. Various applications contain the same function. For example, many personal virtual assistant applications contain the text-to-speech function to interact with users. The frequently used functions can be reused by different applications to avoid the cold start. **Better Privacy:** FaaS isolates each function to its own address space. A function generally involves less user data compared to an application. This facilitates better privacy since FaaS prevents data from being leaked or modified at a large scale on edge devices. **Higher Productivity:** Edge devices are highly heterogeneous with diverse software and hardware. This prevents the utilization of distributed edge resources. FaaS provides a nice abstraction to hide the heterogeneity and enables the productive use of the diverse edge resources. **More Reliability:** One key benefit of FaaS is to provide more reliability by employing function-based sandboxes. On edge devices, due to the continuous interaction with the physical world, they are more prone to failures that can bring severe consequences (think of a camera failure on an autonomous driving car). FaaS restraints the failure within a function sandbox to improve the reliability. **Lower Cost:** FaaS can lower the cost of users by utilizing more edge resources. As discussed above, FaaS can better fit functions in edge resources to reduce the demand of cloud servers. More requests are served locally and the cost is reduced. ## III Challenges To deliver effective FaaS-based edge computing, we identify the following key challenges. **From Applications to Functions:** As illustrated in Section II, FaaS edge computing benefits largely from the finer-grained execution unit. However, benefits arise with challenges. Many functions need to be chained to work in a pipelined manner according to the application logic. The overhead of executing sequential functions to get final results can be high if the system is not cautiously designed. In Section IV, we show that function chaining overhead is not significant for functions deployed within an edge device. However, when the functions are distributed across devices or across edge and cloud, both computation and communication overhead needs to be carefully considered. **Sandbox Mechanisms:** Sandbox mechanisms is a key component of FaaS, which includes virtual machines (VM), containers [7, 2] and other lightweight virtualization systems [1] that isolate functions from each other to provide performance and security guarantees. While providing necessary isolation, sandboxes unavoidably add extra overhead to run functions. There are two aspects to look at the sandbox mechanisms: performance and isolation. Performance represents how fast a function runs and responds. Isolation refers to performance isolation which maintains performance when other functions run together with the functions inside the sandbox, failure isolation which restraints the failure within the sandbox, and security isolation that secures the data of a sandbox. In edge computing, time-sensitive edge applications demand both the small performance overhead and strong performance isolation. The balance point of the two aspects needs to be figured out to achieve the best trade off for the target application and scenario. 
We evaluate existing popular sandboxes for edge computing in Section IV and study the trade-off between performance and isolation. **Distributed Scheduling:** FaaS-based edge computing increases scheduling flexibility by providing smaller scheduling units, which leads to the complexity of distributed scheduling. To achieve distributed scheduling, at least two factors need to be considered: scheduling horizontally within edge devices and scheduling vertically across the edge and cloud. The various constraints that scheduling must consider for resources (e.g., power/battery capacity, availability) and applications (e.g., privacy) on the edge further add to its complexity. The challenge in horizontal scheduling is how to efficiently utilize the edge resources. Applying FaaS decouples the application logic from the hardware which executes the program. How to effectively map functions to edge devices equipped with heterogeneous hardware resources to accelerate applications remains a question. The challenge in vertical scheduling is how to fully utilize every tier bottom up from the edge to the cloud. Given that devices on the path from the edge to the cloud are usually less powerful than cloud servers, scheduling functions vertically to the cloud can speed up the response time for some functions. ## IV Evaluation ### _Sandbox Mechanisms_ AWS Greengrass supports three sandbox mechanisms to isolate function executions: Greengrass container (GGC), a lightweight container that makes use of cgroups and namespaces for isolation, Docker container (DOCK), and Greengrass no container (GNC). We first evaluate the run time overhead of each sandbox mechanism. We used a Python function, image recognition, as an example for evaluation. It recognizes objects in the image using the SqueezeNet deep learning model [8] executed on top of the MXNet framework [3]. We recorded the timestamps of sending the function trigger and the start and end of the computation to get the function's run time and compute time. We used the default MQTT topic pub/sub system for function triggers. The experiments were done on a Raspberry Pi 3B+ single board computer which is equipped with 4 cores and 1GB memory. Figure 1 shows the results. The error bars describe the standard deviation. We compare the sandbox results with the result of the same Python code running on bare metal (BASE). We can see that for GNC and GGC, the run time overhead is quite small, at 3.8% and 4.2%, respectively. This is because these two sandboxes are lightweight with limited isolation. However, we found that GNC and GGC do not provide good isolation.
Fig. 1: Performance overhead of different sandbox mechanisms. Fig. 2: Performance isolation of different sandbox mechanisms.
For GNC, functions can read and write to any files that belong to the user as long as the user's uid and gid is provided to AWS Greengrass. For GGC, functions cannot write to local files but can still read any files belonging to this user. DOCK, which is heavier weight than GGC and GNC, has an overhead of 78.3%. However, it is more secure than the previous two sandboxes since functions do not have access to host files. We then evaluate the isolation of sandbox mechanisms. We use a clone of the function which runs within a busy loop as the stress test. We compare the run time of running the function alone to running the function together with the stress test. Figure 2 shows the result. From the result, we can see that GGC fails to isolate the interference. 
The response time is slowed down twice that of running the function alone. For DOCK, which has a better isolation mechanism, the overhead is much smaller at 40%. However, the interference is still big and for some time-sensitive functions, the overhead is unacceptable. Finally, we evaluate the cold start overhead of functions and the communication overhead of function chaining. We sequentially ran two functions using the default AWS Greengrass container. The first function uses the on-board camera to take a picture, stores it to the local storage and triggers the second function, which then loads the picture and conducts image recognition. We ran the functions using both cold containers and warm containers and compared the total run time of the two functions to the sum of two functions' compute time. We used the default MQTT topic pub/sub system for function communications. From Figure 3, we can see that for cold containers, the total run time is significantly more (5.3 times) than the actual time required for function executions. The cold start of containers has a huge impact on the application's run time which needs to be further optimized. For warm containers, the overhead is negligible. But we cannot say the current mechanism used for function communication is good considering the weak isolation that Greengrass container provides. ### _Distributed Scheduling_ We implemented a prototype on OpenFaaS to evaluate distributed scheduling. The evaluation involves three platforms: edge only platform, edge and cloud cooperate platform, and cloud only platform. For edge only platform, the scheduling is provided by OpenFaaS which dynamically scales the number of functions to all the local devices when load increases. For edge and cloud together platform, we modified OpenFaaS to offload the requests to a server following a preset proportion. For cloud only platform, to make a fair comparison, we also used OpenFaaS on the server instead of a commercial FaaS platform and offloaded all the requests to the edge server. During the experiment, each edge device issued 3 requests per second for 30 seconds. Each request contains a sentence of 20 words to be transferred to audio speech. The maximum concurrent running functions are set to be 12 and 6 for the edge cluster and server, respectively. We recorded the run time of each request. Figure 4 shows the CDF results of function run time. We set the proportion of offloading to cloud at 25%, 50%, and 75%, respectively. Edge only platform has an average run time latency of 0.86s, whereas a cloud only platform on average takes 0.44s. With the increased amount of offloaded requests, the run time latency decreases. In this example, the naive scheduling policy is by no means showing the best performance of distributed scheduling but it can still confirm the benefits of vertical scaling to nearby servers. In the meanwhile, the reduced performance saves the costs of the cloud service. With a fully-functional scheduling policy, we believe the performance can be further improved and the costs can be further reduced.
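To make the preset-proportion policy concrete, the sketch below shows one way such vertical offloading could be expressed in Python. It is an illustration only, not the modified OpenFaaS gateway used in the experiments; the endpoint URLs and the function name are assumptions.

```python
import random
import urllib.request

# Hypothetical endpoints; a real deployment would use the OpenFaaS gateway
# addresses of the local edge cluster and of the remote (cloud) server.
EDGE_URL = "http://edge-gateway:8080/function/text-to-speech"
CLOUD_URL = "http://cloud-server:8080/function/text-to-speech"

CLOUD_FRACTION = 0.25  # preset proportion of requests offloaded vertically


def invoke(sentence: str) -> bytes:
    """Send one request either to the edge tier or to the cloud tier."""
    url = CLOUD_URL if random.random() < CLOUD_FRACTION else EDGE_URL
    req = urllib.request.Request(url, data=sentence.encode("utf-8"), method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.read()  # audio produced by the text-to-speech function
```

A fully-functional scheduler would replace the random draw with decisions based on queue lengths, device load, or latency estimates, which is the direction pointed to above.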
2310.03620
PeaTMOSS: Mining Pre-Trained Models in Open-Source Software
Developing and training deep learning models is expensive, so software engineers have begun to reuse pre-trained deep learning models (PTMs) and fine-tune them for downstream tasks. Despite the wide-spread use of PTMs, we know little about the corresponding software engineering behaviors and challenges. To enable the study of software engineering with PTMs, we present the PeaTMOSS dataset: Pre-Trained Models in Open-Source Software. PeaTMOSS has three parts: a snapshot of (1) 281,638 PTMs, (2) 27,270 open-source software repositories that use PTMs, and (3) a mapping between PTMs and the projects that use them. We challenge PeaTMOSS miners to discover software engineering practices around PTMs. A demo and link to the full dataset are available at: https://github.com/PurdueDualityLab/PeaTMOSS-Demos.
Wenxin Jiang, Jason Jones, Jerin Yasmin, Nicholas Synovic, Rajeev Sashti, Sophie Chen, George K. Thiruvathukal, Yuan Tian, James C. Davis
2023-10-05T15:58:45Z
http://arxiv.org/abs/2310.03620v1
# PeaTMOSS: Mining Pre-Trained Models in Open-Source Software ###### Abstract Developing and training deep learning models is expensive, so software engineers have begun to reuse pre-trained deep learning models (PTMs) and fine-tune them for downstream tasks. Despite the wide-spread use of PTMs, we know little about the corresponding software engineering behaviors and challenges. To enable the study of software engineering with PTMs, we present the PeaTMOSS dataset: Pre-Trained Models in Open-Source Software. _PeaTMOSS_ has three parts: a snapshot of (1) 281,638 PTMs, (2) 27,270 open-source software repositories that use PTMs, and (3) a mapping between PTMs and the projects that use them. We challenge _PeaTMOSS_ miners to discover software engineering practices around PTMs. A demo and link to the full dataset are available at: [https://github.com/PurdueDualityLab/PeaTMOSS-Demos](https://github.com/PurdueDualityLab/PeaTMOSS-Demos). ## I High-Level Overview **Motivation:** Deep Neural Networks (DNNs) have become a common component in software systems over the past decade. While some software engineers develop DNNs from scratch, many others integrate DNNs into their software following a typical re-use pattern: (1) pre-trained DNN models are published to registries such as Hugging Face, similar to traditional software package registries (_e.g.,_ NPM, PyPI); and (2) other software depends on these Pre-Trained Models (PTMs), accessed via libraries or web APIs. _Despite wide-spread adoption of PTMs, we know relatively little about how PTMs are integrated into software systems._ **Challenge:** We propose the _PeaTMOSS_ challenge to learn more about **P**re-**T**rained **M**odels in **O**pen-**S**ource **S**oftware (Figure 1). The _PeaTMOSS_ dataset comprises snapshots of PTMs and open-source repositories utilizing PTMs, as well as a mapping of PTMs to projects. _PeaTMOSS_ contains 281,638 PTM packages, 27,270 GitHub projects that use PTMs as dependencies, and 44,337 links from these GitHub repositories to the PTMs they depend on. For both PTMs and GitHub projects, _PeaTMOSS_ contains metadata (commits, issues, pull requests) and data (_e.g.,_ model architecture and weights; git repositories). A uniform schema for retrieving PTM and project metadata is provided to assist in mining efforts. Most information is indexed; some is stored as blobs. The dataset can be accessed in two formats. The "metadata" version of _PeaTMOSS_ is 7.12 GB and contains only the metadata of the PTM packages and a subset of the GitHub project metadata. The "full" version is 48.2 TB, adding (1) the PTM package contents in each published version, and (2) git history of the main branches of the GitHub projects. ## II PeaTMOSS Dataset Structure The metadata version of _PeaTMOSS_ is stored in a SQLite database. The tables include hyperlinks to tarred copies of the PTM package or GitHub repository. Dataset schemas are described in SSA. The mapping between GitHub projects and PTMs are cases where a GitHub repository is known to depend on a particular PTM. Additional detail is given in SSB-B. ## III Accessing and Working with PeaTMOSS **Access:** The metadata and full versions of _PeaTMOSS_ are available through a Globus share hosted at Purdue University. _PeaTMOSS_ can be downloaded via the official Globus Connect application, which is available for all major operating systems, _e.g.,_ Linux, MacOS, and Windows. We include a Python script to download and configure an SQLite instance of the metadata version. 
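Once that script has produced a local copy, a miner could open the SQLite file with Python's built-in sqlite3 module and run exploratory queries such as the one sketched below. The table and column names used here (model, github_repository, model_to_repository) are illustrative placeholders, not necessarily the released schema; consult the SQL definition that ships with the dataset.

```python
import sqlite3

# Open the metadata snapshot; table and column names below are placeholders.
conn = sqlite3.connect("PeaTMOSS.db")
conn.row_factory = sqlite3.Row

query = """
SELECT m.name AS ptm, COUNT(r.id) AS dependent_repos
FROM model AS m
JOIN model_to_repository AS mr ON mr.model_id = m.id
JOIN github_repository AS r ON r.id = mr.repository_id
GROUP BY m.name
ORDER BY dependent_repos DESC
LIMIT 10;
"""

for row in conn.execute(query):
    print(row["ptm"], row["dependent_repos"])

conn.close()
```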
For more instructions, see [https://github.com/PurdueDualityLab/PeaTMOSS-Demos](https://github.com/PurdueDualityLab/PeaTMOSS-Demos). **Working with PeaTMOSS:** To interact with _PeaTMOSS_, we recommend ORM or SQL. Examples are provided in §C. **Required Skills:** _PeaTMOSS_ is accessible to miners with a range of skills and interests. _PeaTMOSS_ includes both standard mining artifacts from GitHub (_e.g.,_ git repositories, issues, PRs) and unusual artifacts from PTMs (_e.g.,_ neural networks, weights, model cards). Miners interested in program analysis, software repository mining, natural language processing, etc. may apply these techniques to GitHub data, PTM data, or both.
Fig. 1: Data for **P**re-**T**rained **M**odels in **O**pen-**S**ource **S**oftware.
_Neither expertise in deep learning nor access to hardware such as GPUs is necessary for use of_ PeaTMOSS. Of course, miners with deep learning expertise or hardware resources could explore more advanced questions about PTM usage on GitHub or delve deeper into the PTM data. **Data Samples:** We offer a subset of samples from _PeaTMOSS_ at: [https://github.com/PurdueDualityLab/PeaTMOSS-Demos](https://github.com/PurdueDualityLab/PeaTMOSS-Demos). ## IV Possible Research Questions Table I presents sample research questions for miners to investigate. This table includes questions focused on the GitHub portion of the dataset, on the PTM portion of the dataset, and on both parts. It notes some prior work as starting points for miners. **GH1:** What kinds of defects are opened related to PTM use in the GitHub projects? How do these defects differ from defects opened on other aspects of the GitHub projects? **GH2:** What do developers on GitHub discuss related to PTM use, _e.g.,_ in issues and pull requests? What are developers' sentiments regarding PTM use? Do the people who open pull requests involving PTMs have the right expertise? **GH3:** How commonly do developers change the specific PTM they use to implement a feature? What factors influence these changes? **GH4:** Sometimes a PTM is introduced into a GitHub repository as part of the initial implementation of a software feature, but other times a PTM is introduced to replace part or all of an existing feature implementation. How common are these two modes of PTM adoption? In the second kind, how do the feature's defect types or defect rates change after the PTM is introduced? **PTM1:** What factors predict the popularity of a PTM, and what is their relative importance? Intuition suggests that performance aspects such as accuracy and latency may dominate; what is the role played by other factors such as engineering quality? **PTM2:** Recent qualitative work determined that software engineers struggle to re-use PTMs because of their limited documentation. What are the typical characteristics of this documentation? Can natural-language model cards be automatically parsed into a structured schema? **PTM3:** One aspect of re-use is finding a candidate model. What naming conventions do PTMs follow? Are they consistent enough (within an architecture family? across families?) to support engineers looking for similar models? 
When do PTM maintainers release a model under a new name, and when do they simply bump the version number? **PTM4:** PTM authors may reuse each others' work, _e.g.,_ building off of model checkpoints or incorporating architectural building blocks. This might be viewed as an extreme form of "forking" from open-source software, but it may also reflect a novel form of software exchange. What is the phylogeny, or perhaps the "supply chain", of the major families of PTMs? **PTM5:** Many research papers describe techniques for identifying DNNs with unexpected behavior, _e.g.,_ hidden malicious behaviors. How common are such DNNs in the PTM dataset? **I1:** It can be difficult to interpret model popularity numbers by download rates. To what extent do a PTM's download rates correlate with the number of GitHub projects that rely on it, or the popularity of the GitHub projects? **I2:** What are the code smells related to PTMs in the downstream projects, and how do they affect these projects? **I3:** When PTMs are used in GitHub repositories, what are engineers' testing practices for the PTMs they add as dependencies? Is there any correlation between the tests of the PTM by its maintainers, and the tests of the PTM by the downstream users? Do practices vary based on the purpose of the PTM, _e.g.,_ computer vision vs. natural language processing? How do PTM downstream users deal with flakiness when testing a PTM? **I4:** Updating dependencies is a core software engineering activity. Suppose a GitHub repository depends on a PTM. How often does the GitHub repository update the dependency when that PTM is changed, _e.g.,_ due to (1) PTM deprecation, (2) PTM improvement via a new version, or (3) PTM being made outdated by the release of a new model? What is the typical lag time for such updates? **I5:** Software engineers often communicate through filing issue reports. What are the characteristics of issue reports on the PTM packages, _e.g.,_ in terms of the kinds of questions asked, responsiveness of maintainers, issue density, and issue staleness? How often does the topic of reproducibility come up (cf. the "ML reproducibility crisis")? How do these attributes differ from the characteristics of issue reports in GitHub repositories? **I6:** When engineers deploy software applications that make use of PTMs, they may prefer to use a deployment framework, _e.g.,_ the ONNX Runtime, rather than a development framework such as PyTorch. Which of the several competing deployment frameworks (ONNX Runtime, MM-DNN, NNET, TFLite, etc.) is the most popular, and is there any evidence of why? Do GitHub users make the transformation to deployment themselves or do the PTM authors provide the deployment-ready version? 
## Appendix A Dataset schema The detailed schema is shown in Figure 2. The definition in SQL is available to miners. ## Appendix B Data collection ### _PTMs_ #### B-A1 What is a PTM and PTM package? Pre-trained deep learning models (PTMs) are often shared through deep learning model registries, such as Hugging Face. Engineers can directly reuse these PTMs, or fine-tune them for specific downstream tasks. A PTM package typically includes a model card (a form of README), as well as the model's architecture and weights [10]. In recent years, the popularity of PTMs has been steadily rising [32, 33]. As illustrated in Figure 3, the total number of Hugging Face models has seen a consistent increase on a monthly basis. Recent work shows increasing interest from the software engineering mining community in PTMs [10, 11, 34, 35]. These works have identified the potential mining data that the community can take advantage of. In the past year the first mining efforts of PTMs and software engineering practices have begun [32, 33]. #### B-A2 PTM Collection Our PTM data collection includes three parts: (1) We saved 15,250 PTM snapshots. 
This included the most popular PTM packages (_i.e.,_ with over 10K downloads) on Hugging Face, which resemble a "git-like" structure, and all PTMs on PyTorch. This part of the data can provide a comprehensive view of PTM packages. (2) Among these "full" metadata, 44,337 links from the PTMs to the downstream GitHub repositories have been identified. This part of the data can be connected to downstream GitHub data and allow miners to analyze the relationship between them. (3) For all PTMs hosted on Hugging Face and PyTorch, we retrieved their metadata, resulting in a total number of 281,638 PTM package metadata being included in _PeaTMOSS_. The miners can answer research questions based on metadata only, such as analyzing PTM naming conventions. #### B-A3 Soundness and Completeness _PeaTMOSS_ is comprehensive in terms of popular PTM packages, as it includes snapshots of those with over 10,000 downloads on Hugging Face. This provides a full view of widely-used PTMs and their connections to downstream GitHub projects, facilitating in-depth analysis. Additionally, the dataset includes metadata from all other PTMs on Hugging Face, which can be used for metadata-based analyses. _PeaTMOSS_ enhances the diversity of PTM data by incorporating PTM packages from PyTorch Hub, including all available model repositories and their associated pull requests and issues. #### B-A4 Implementation Metadata is collected using an _Extract-Translate-Load (ETL)_ pipeline for each model hub. The ETL pipeline can be generalized to the following steps: * **Extract**: Obtain metadata that is available from each model hub's API. * **Transform**: Use metadata to collect more information about PTMs (_i.e.,_ examine Github Metadata for a linked repository) as well as download PTM package of GitHub repository snapshot. Transform the data into intermediate representation to simplify Loading. * **Load**: Load the transformed data into a database for long-term storage and retrieval Each model hub has a unique implementation of the Extract stage, but the functionality is the same. * **Hugging Face**: PTM package metadata is downloaded using the HuggingFace_hub Python library. * **PyTorch**: The markdown files in the root of the PyTorch GitHub repository, which correspond to each PTM package's repository, are downloaded and subsequently scraped for metadata. During the Transform stage, data that matches the PeaTMOSS.db metadata schema is transformed into an intermediate representation, while data that doesn't match the schema is transformed into a JSONB blob. This is to allow for both consistency across model hubs, as well as maintaining hub specific metadata. ### _Mapping GitHub projects to PTMs_ To evaluate PTM usage within projects, a PTM must be mapped back to a GitHub project. While it is possible to use Hugging Face and PyTorch Hub hosted projects with libraries outside of the Python ecosystem, we filtered on GitHub projects that utilize Python libraries to interface with PTMs as the vast majority of projects we found utilized Python libraries. As Hugging Face and PyTorch hub do not provide metadata or a method to retrieve PTM usage within projects, by analyzing the source code of a GitHub project, it is possible to map the two. The whole methodology has been described below: #### B-A1 Signature Collection For leveraging Hugging Face PTMs, dedicated functions/methods specific to each library are available. These functions/methods allow loading PTMs by inputting their names as arguments. 
In this step, we manually retrieve the libraries, along with their corresponding import identifiers and the necessary function/method names essential for PTM loading. This compilation process is guided by Hugging Face's Fig. 3: Evolution of the total number of Hugging Face models per month. This figure is reused from [33]. official documentation.* The culmination of import identifiers and the essential function/method names is referred to as a Signature. For example, when accessing the PTMs provided by the Diffusers library, the corresponding signature encompasses diffusers and from_pretrained. Note that some libraries offer multiple methods to load PTMs, and we have taken all of these methods into account. For instance, the transformers library presents two distinct approaches: from_pretrained and pipeline, both of which can be utilized to load PTMs. Figure 4 shows an example code snippet to load the PTMs provided by Transformers library. Footnote *: [https://github.com/HuggingFace/hub-docs/blob/main/js/src/lib/interfaces/Libaries.Its](https://github.com/HuggingFace/hub-docs/blob/main/js/src/lib/interfaces/Libaries.Its) Throughout this process, we exclude libraries not visible on the website (such as k2, doctr, mindspore), as well as those unsuitable for downstream projects. An example of such incompatibility is when PTMs can only be used via command line or download, or when they lack the "use in library" feature+. After filtering, a total of 23 libraries remain in the compilation. Footnote *: [https://HuggingFace.co/facebook/musicgen-large](https://HuggingFace.co/facebook/musicgen-large) To use PyTorch PTMs, there are two approaches: (1) torch.hub.load function that provides a way to load PTMs by specifying their names as arguments. and (2) Alternatively, one can utilize library-specific classes provided by torchvision or instances provided by torchaudio and torchtext. These classes and instances are tailored to represent individual PTMs. The first approach and alternative approach provided by the torchvision library contain keyword based parameters i.e., pretrained/weights to control the model usage with pretrained weights or random initialization. The above mentioned two approaches give us a total of 163 signatures. Figure 5 shows an example code snippet to load the PTMs provided by the PyTorch library. #### Iv-A2 Preliminary repository collection based on Sourcegraph Search The subsequent task involves locating these signatures within the code. Although we attempted to use GitHub search, it does not facilitate a comprehensive search as it provides the top 1000 results. Instead, we utilize the Sourcegraph command line interface, src++, to detect projects containing pertinent files that employ the gathered signatures to interact with Hugging Face and PyTorch Hub. Footnote *: [https://docs.sourcegraph.com/cli](https://docs.sourcegraph.com/cli) Our search pattern incorporates the signatures gathered earlier, and we match them against the content of files within GitHub repositories. Our search criteria encompass repositories that are not archived (default behavior), not forked (default behavior), and publicly accessible, specifically focusing on Python files. For example, a query for Diffusers is structured as "src select:file visibility:public count:all lang:Python content:'from diffusers' AND from_pretrained(". Our search query accommodates both 'from import' and 'import' statements. Our search results include the corresponding code snippets, file names, and repository names. 
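Since the code-snippet figures referenced here (Figs. 4 and 5) are not reproduced in this text, the sketch below illustrates the kind of calls these signatures are designed to match: library-specific loading functions whose string arguments name the PTM. It is an illustration only, not the authors' figures.

```python
# Hugging Face signatures such as (transformers, from_pretrained) and
# (transformers, pipeline) match calls like the following.
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
classifier = pipeline("sentiment-analysis")

# The PyTorch Hub signature (torch.hub, load) matches calls like this one,
# with the pretrained/weights keyword controlling use of pretrained weights.
import torch

resnet = torch.hub.load("pytorch/vision:v0.10.0", "resnet18", pretrained=True)
```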
The projects were identified during July 5-12. Based on the count of the repositories, we select the top 5 Hugging Face libraries for our data collection including Transformers, Spacy, Sentence-Transformers, Diffusers, and Timm. For PyTorch, we consider all of the corresponding signatures. Our dataset comprises well-recognized GitHub repositories with an average star count of 201. We have obtained local copies of the GitHub repositories by using the git clone command. This process took approximately 12 days to complete and resulted in the download of 36,251 repositories, collectively amounting to a total size of 3.5 TB. #### Iv-A3 Extracting PTMs via Static Analysis As Sourcegraph's search feature relies on text-based patterns, the possibility of encountering false positive results exists. To mitigate this concern, we perform static analysis on GitHub repositories with the Scalpel framework [36]. For each relevant source code file associated with a specific function signature, we construct an abstract syntax tree and extract the function calls contained within the file. Subsequently, we retrieve the complete and qualified names of each identified function call and cross-reference them with our predefined signatures which gives us a total of 28,575 repositories. Additionally, we go a step further by extracting both the positional and keyword arguments that are associated with the function calls that match our target signatures. Our analysis is equipped to capture any argument that possesses a static value. We then utilize the list of PTMs from PTM Torrent V2 to identify the repositories that statically call PTMs which gives us a total of 15,129 repositories. We store the corresponding repositories and files for each of the matched PTMs. It is important to note that a single repository can utilize multiple PTMs, and similarly, a single PTM can be employed across multiple repositories. #### Iv-A4 Soundness and Completeness of Collected Repositories For the PTMs hosted on the Hugging Face hub, our dataset provides usage considering the five libraries, i.e., Transformers, Spacy, Timm, Sentence-Transformers, and Diffusers. These libraries were chosen because they comprise the top five libraries used in GitHub projects as shown in Figure 6. For the PTMs from the PyTorch hub, we did not filter by the library. Our dataset comprises torchvision, torchaudio, torchtext, along with PyTorch hub. Fig. 4: An Example Code Snippet to Use PTMs from Hugging Face Hub Fig. 5: An Example Code Snippet to Use PTMs from PyTorch Hub Hub Static analysis was carried out due to the limitations of the text search conducted using Sourcegraph. We resolve the fully qualified names for each function call to accurately identify True Positive results. This results in a total of 28,575 repositories that genuinely contain the practical utilization of the PTMs. Our dataset encompasses projects created up until July 10, 2023. ### _Extracting GitHub Issues and Pull Requests_ By analyzing the discussions that community members have about the project within the project's issue tracker, it is possible to identify not only PTMs of interest w.r.t the project, but the potential future direction of a project w.r.t the techniques implemented by the PTM. To collect the issues and pull requests associated with the GitHub repositories, we use GitHub's official command line interface gh. We consider all states (_i.e.,_ open and closed) while collecting the issues and pull requests associated with each repository. 
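As an illustration of this collection step, the sketch below drives the gh CLI from Python and parses its JSON output. The repository name, page limit, and field list are examples only; the exact commands and fields used for the dataset are documented in the replication package.

```python
import json
import subprocess


def fetch_issues(repo: str) -> list[dict]:
    """Retrieve issue metadata (open and closed) for one repository via gh."""
    cmd = [
        "gh", "issue", "list",
        "--repo", repo,
        "--state", "all",
        "--limit", "1000",
        "--json", "number,title,state,author,createdAt,closedAt,labels,url",
    ]
    result = subprocess.run(cmd, check=True, capture_output=True, text=True)
    return json.loads(result.stdout)


issues = fetch_issues("huggingface/transformers")  # example repository
print(f"{len(issues)} issues retrieved")
```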
Each of the issue and pull request metadata responses contain all available relevant fields provided by GitHub CLI. Specific response fields are listed in Table II. Example commands to retrieve relevant data can be found in _PeaTMOSS_ GitHub repository. Targeting 28,575 repositories, we retrieve the issues or pull requests resulting in 19,507 repositories with issues and 12,159 repositories with pull requests. Altogether, our dataset encompasses a total of 27,270 repositories, which involve occurrences of issues, pull requests or static utilization of PTMs. ## Appendix C Data Access Examples To answer several of the proposed research questions in Table I, we have released examples on how to interface with _PeaTMOSS_. ORM methods and SQL examples for interfacing with the PeaTMOSS.db database are provided. Code snippets for these examples are made available via the /Examples/ filepath in our GitHub repository. \begin{table} \begin{tabular}{l l} \hline \hline **Request Type** & **Response Fields** \\ \hline Issue Metadata & assignes, author, body, closedAt, comments, createdAt, id, labels, milestone, number, projectCards, reactionGroups, state, title, updatedAt, url \\ \hline Pull Request Metadata & additions, assignees, author, baseRefName, body, changedFiles, closedAt, comments, comnits, createdAt, deletions, files, headRefName, headRepository, headRepository/Owner, id, isCrossRepository, isDraft, labels, maintaineMon/dridge, mergeCommit, mergeStateStatus, mergeable, mergeIdAt, mergedBy, milestone, number, potentialMergeCommit, projectCards, reactionGroups, reviewDecision, reviewRequests, reviews, state, statusCheckRollup, title, updatedAt, url \\ \hline \hline \end{tabular} \end{table} TABLE II: JSON response fields captured when collecting issues and pull requests for 28,575 GitHub repositories. Fig. 6: Number of projects that use a specific library as captured via Sourcegraph code search
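In the same spirit as the Appendix C examples, the static-analysis step described in Appendix B can be approximated with Python's standard ast module. The sketch below is a simplified stand-in for the Scalpel-based pipeline: it flags calls whose name matches a signature and collects string literals passed as arguments, but unlike the real pipeline it does not resolve fully qualified names.

```python
import ast

SIGNATURE_NAMES = {"from_pretrained", "pipeline", "load"}  # simplified signature set


def find_ptm_calls(source: str) -> list[tuple[str, list[str]]]:
    """Return (call name, string arguments) pairs for signature-matching calls."""
    matches = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            name = func.attr if isinstance(func, ast.Attribute) else getattr(func, "id", None)
            if name in SIGNATURE_NAMES:
                args = [a.value for a in node.args
                        if isinstance(a, ast.Constant) and isinstance(a.value, str)]
                matches.append((name, args))
    return matches


snippet = 'from transformers import AutoModel\nm = AutoModel.from_pretrained("bert-base-uncased")\n'
print(find_ptm_calls(snippet))  # [('from_pretrained', ['bert-base-uncased'])]
```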
2303.04077
Meta-Explore: Exploratory Hierarchical Vision-and-Language Navigation Using Scene Object Spectrum Grounding
The main challenge in vision-and-language navigation (VLN) is how to understand natural-language instructions in an unseen environment. The main limitation of conventional VLN algorithms is that if an action is mistaken, the agent fails to follow the instructions or explores unnecessary regions, leading the agent to an irrecoverable path. To tackle this problem, we propose Meta-Explore, a hierarchical navigation method deploying an exploitation policy to correct misled recent actions. We show that an exploitation policy, which moves the agent toward a well-chosen local goal among unvisited but observable states, outperforms a method which moves the agent to a previously visited state. We also highlight the demand for imagining regretful explorations with semantically meaningful clues. The key to our approach is understanding the object placements around the agent in spectral-domain. Specifically, we present a novel visual representation, called scene object spectrum (SOS), which performs category-wise 2D Fourier transform of detected objects. Combining exploitation policy and SOS features, the agent can correct its path by choosing a promising local goal. We evaluate our method in three VLN benchmarks: R2R, SOON, and REVERIE. Meta-Explore outperforms other baselines and shows significant generalization performance. In addition, local goal search using the proposed spectral-domain SOS features significantly improves the success rate by 17.1% and SPL by 20.6% for the SOON benchmark.
Minyoung Hwang, Jaeyeon Jeong, Minsoo Kim, Yoonseon Oh, Songhwai Oh
2023-03-07T17:39:53Z
http://arxiv.org/abs/2303.04077v1
# Meta-Explore: Exploratory Hierarchical Vision-and-Language Navigation ###### Abstract The main challenge in vision-and-language navigation (VLN) is how to understand natural-language instructions in an unseen environment. The main limitation of conventional VLN algorithms is that if an action is mistaken, the agent fails to follow the instructions or explores unnecessary regions, leading the agent to an irrecoverable path. To tackle this problem, we propose Meta-Explore, a hierarchical navigation method deploying an exploitation policy to correct misled recent actions. We show that an exploitation policy, which moves the agent toward a well-chosen local goal among unvisited but observable states, outperforms a method which moves the agent to a previously visited state. We also highlight the demand for imagining regretful explorations with semantically meaningful clues. The key to our approach is understanding the object placements around the agent in spectral-domain. Specifically, we present a novel visual representation, called scene object spectrum (SOS), which performs category-wise 2D Fourier transform of detected objects. Combining exploitation policy and SOS features, the agent can correct its path by choosing a promising local goal. We evaluate our method in three VLN benchmarks: R2R, SOON, and REVERIE. Meta-Explore outperforms other baselines and shows significant generalization performance. In addition, local goal search using the proposed spectral-domain SOS features significantly improves the success rate by 17.1% and SPL.by 20.6% against the state-of-the-art method of the SOON benchmark. Project page: [https://rllab-snu.github.io/projects/Meta-Explore/doc.html](https://rllab-snu.github.io/projects/Meta-Explore/doc.html) + Footnote †: This work was supported by Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2019-04-01190,[SWT.Jan]RobotLearningEfficient, Safe, and Socialy-Acquelle Machine Learning). This work was partly supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government(MSIT) (No. 2022-0-00907, Development of AI Bots Collaboration Platform and Self-organizing.AI)/_Corresponding authors: Yoonseon Oh and Songhwai Oh._ ## 1 Introduction Visual navigation in indoor environments has been studied widely and shown that an agent can navigate in unexplored environments [1]. By recognizing the visual context and constructing a map, an agent can explore the environment and solve tasks such as moving towards a goal or following a desired trajectory. With the increasing development in human language understanding, vision-and-language navigation (VLN) [2] has enabled robots to communicate with humans using natural languages. The high degree of freedom in natural language instructions allows VLN to expand to various tasks, including (1) following fine-grained step-by-step instructions [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13] and (2) reaching a target location described by goal-oriented language instructions [14, 15, 16, 17, 18, 19, 20]. A challenging issue in VLN is the case when an action is mistaken with respect to the given language instruction [21, 22, 23, 24, 25, 26]. For instance, if the agent is asked to turn right at the end of the hallway but turns left, the agent may end up in irrecoverable paths. 
Several existing studies solve this issue via hierarchical exploration, where the high-level planner decides when to explore and the low-level planner chooses what actions to take. If the high-level planner chooses to explore, the agent searches unexplored regions, and if it chooses to exploit, the agent executes the best action based on the previous exploration. Prior work [21, 22, 23] returns the agent to the last successful state and resumes exploration. Figure 1: **Hierarchical Exploration.** At each episode, a natural-language instruction is given to the agent to navigate to a goal location. The agent explores the environment and constructs a topological map by recording visited nodes \(\blacktriangledown\) and next step reachable nodes \(\blacktriangledown\). Each node consists of the position of the agent and visual features. \(o_{\text{r}}\) denotes the observation at time \(t\). The agent chooses an unvisited local goal to solve the regretful exploration problem. However, such methods take a heuristic approach because the agent only backtracks to a recently visited location. The agent does not take advantage of the constructed map and instead naively uses its recent trajectory for backtracking. Another recent work [26] suggests graph-based exploitation, which uses a topological map to expand the action space in global planning. Still, this method assumes that the agent can directly jump to a previously visited node. Since this method can perform a jump action at every timestep, there is no trigger that explicitly decides when to explore and when to exploit. Therefore, we address the importance of time scheduling for exploration-exploitation and efficient global planning using a topological map to avoid reexploring visited regions. We expand the notion of hierarchical exploration by proposing Meta-Explore, which not only allows the high-level planner to choose when to correct misled local movements but also finds an unvisited state inferred to be close to the global goal. We illustrate the overview of hierarchical exploration in Figure 1. Instead of backtracking, we present an exploitation method called local goal search. We show that it is more efficient to plan a path to a local goal, which is the most promising node from the unvisited but reachable nodes. We illustrate the difference between conventional backtracking and local goal search in Figure 2. Based on our method, we show that exploration and exploitation are not independent and can complement each other: (1) to overtake regretful explorations, the agent can perform exploitation and (2) the agent can utilize the constructed topological map for local goal search. We also highlight the demand for imagining regretful explorations with semantically meaningful clues. Most VLN tasks require a level of understanding objects nearby the agent, but previous studies simply encode observed panoramic or object images [2, 3, 16, 17, 18, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35]. In this paper, we present a novel semantic representation of the scene called _scene object spectrum_ (SOS), which is a matrix containing the arrangements and frequencies of objects from the visual observation at each location. Using SOS features, we can sufficiently estimate the context of the environment. We show that the proposed spectral-domain SOS features manifest better linguistic interpretability than conventional spatial-domain visual features. 
Combining exploitation policy and SOS features, we design a navigation score that measures the alignment between a given language instruction and a corrected trajectory toward a local goal. The agent compares local goal candidates and selects a near-optimal candidate with the highest navigation score from corrected trajectories. This involves high-level reasoning related to the landmarks (e.g., bedroom and kitchen) and objects (e.g., table and window) that appear in the instructions. The main contributions of this paper are as follows: * We propose a hierarchical navigation method called Meta-Explore, deploying an exploitation policy to correct misled recent actions. The agent searches for an appropriate local goal instead of reversing the recent action sequence. * In the exploitation mode, the agent uses a novel scene representation called _scene object spectrum_ (SOS), which contains the spectral information of the object placements in the scene. SOS features provide semantically meaningful clues to choose a near-optimal local goal and help the agent to solve the regretful exploration problem. * We evaluate our method on three VLN benchmarks: R2R [2], SOON [16], and REVERIE [17]. The experimental results show that the proposed method, Meta-Explore, improves the success rate and SPL in test splits of R2R, SOON and val split of REVERIE. The proposed method shows better generalization results compared to all baselines. ## 2 Related Work ### Vision-and-Language Navigation In VLN, an agent encodes the natural language instructions and follows the instructions, which can be either (1) a fine-grained step-by-step instruction the agent can follow [2, 3, 4], (2) a description of the target object and location [16, 17], or (3) additional guidance given to the agent [18, 27]. These tasks require the agent to recognize its current location using some words in the natural-language instructions. Prior work [2, 28, 29, 30, 31] show that an agent can align visual features to language instructions via neural networks and use the multimodal output embeddings to generate a suitable action at each timestep. Most VLN methods utilize cross-modal attention, either with recurrent neural networks [2, 28] or with transformer-based architectures [29, 30, 31]. For sequential action prediction, Hong _et al._[32] further use recurrent units inside transformer architectures, while Pashevich _et al._[33] and Chen _et al._[34] use additional transformers to embed past observations and actions. Figure 2: **Local Goal Search for Exploitation.** The local goal is likely to be chosen as the closest node to the global goal. Existing methods only backtrack to a visited node (left). We expand the searchable area by including unvisited but reachable nodes (right). ### Exploration-Exploitation In an unseen environment, the agent must maximize the return without knowing the true value functions. One of the solutions to this problem is to switch back and forth between exploration and exploitation [36]. In the exploration mode, the agent gathers more information about the environment. On the other hand, the agent uses information collected during exploration and chooses the best action for exploitation. Ecoffet _et al_. [37] reduced the exploration step by archiving the states and exploring again from the successful states. Pislar _et al_. [38] addressed the various scheduling policies and demonstrated their method on Atari games. 
Recent work [39, 40] successfully demonstrates the effectiveness of hierarchical exploration in image-goal navigation. Like commonly used greedy navigation policies, VLN tasks also deal with the problem of maximizing the chance to reach the goal without knowing the ground truth map. Several VLN methods employ the concept of exploitation to tackle this problem. Ke _et al_. [35] look forward to several possible future trajectories and decide whether to backtrack or not and where to backtrack. Others [21, 22, 23] estimate the progress to tell whether the agent becomes lost and make the agent backtrack to a previously visited location to restart exploration. However, previous studies do not take into account what should be done in the exploitation mode. In order to handle this problem, we propose a hierarchical navigation method which determines the scheduling between exploration and exploitation. ### Visual Representations Popular visual encoding methods via ResNet [41] and ViT [42] can be trained to learn rotation-invariant visual features. Both methods learn to extract visual features with high information gain for global and local spatial information. The high complexity of the features leads to low interpretability of the scene and therefore requires the agent to use additional neural networks or complex processing to utilize them. On the other hand, traditional visual representation methods such as Fourier transform use spectral analysis, which is highly interpretable and computationally efficient. One drawback of the traditional methods is that they fail to maximize the information gain. Nonetheless, an appropriate use of essential information can be helpful for high-level decision making and enables more straightforward interpretation and prediction of the visual features. One traditional navigation method, Sturz _et al_. [43] used Fourier transform to generate rotation-invariant visual features. However, no research has transformed the spectral information of the detected objects to represent high-level semantics from visual observations. Focusing on the fact that 2D Fourier transform can extract morphological properties of images [44], we can find out the shape or structure of detected objects through 2D Fourier transform. In this paper, we decompose the object mask into binary masks by object categories and perform a 2D Fourier transform on each binary mask. ## 3 Method ### Problem Formulation We deal with VLN in discrete environments, where the environment is given as an undirected graph \(G_{e}=\{V,E\}\). \(V\) denotes a set of \(N\) navigable nodes, \(\{V_{i}\}_{i=1}^{N}\), and \(E\) is the adjacency matrix describing connectivity among the nodes in \(V\). We denote the observation at node \(V_{i}\) as \(O_{i}\). The agent uses a panoramic RGB image observation \(o_{t}\) and current node \(v_{t}\), which are collected at time \(t\). The agent either moves to a neighboring node or executes a stop action. \(a_{t}\) denotes the action at time \(t\). The objectives of VLN are categorized as follows: (1) to follow language instructions [2] and (2) to find a target object described by language instructions in a fixed time \(T\)[16, 17]. We present a general hierarchical exploration method that can be applied to both tasks. We also enhance the navigation policy by extracting cross-domain visual representations from the environments, i.e., spatial-domain and spectral-domain representations. 
To balance the information loss and interpretability of the visual feature, we adopt multi-channel fast Fourier transform (FFT) to encode semantic masks of the detected objects into category-wise spectral-domain features. ### Meta-Explore We design a learnable hierarchical exploration method for VLN called Meta-Explore, which decides (1) when to _explore_ or _exploit_ and (2) a new imagined local goal to seek during exploitation. The overall network architecture of the proposed Meta-Explore is shown in Figure 3. Given a language instruction \(L\), the agent navigates in the environment until it finds the target described in \(L\). Meta-Explore consists of a mode selector and two navigation modules corresponding to two modes: exploration and exploitation. At each timestep, the mode selector chooses to explore or exploit. At \(t=0\), the mode is initialized to exploration. In the _exploration mode_, the agent outputs an action toward a neighboring node to move the agent toward the goal. When the mode selector recognizes that the agent is not following the instruction successfully, the mode is switched to exploitation. In the _exploitation mode_, the agent seeks a new _local goal_ with the highest correspondence against the language instructions from the previously unvisited candidate nodes using spectral-domain visual features. The agent moves toward the local goal by planning a path. After the agent arrives at the local goal, the mode is reset to exploration. The explore-exploit switching decision occurs through the mode selector by estimating the probability to explore. The agent repeats this explore-exploit behavior until it determines that the target is found and decides to stop. #### 3.2.1 Mode Selector At time \(t\), the agent observes visual features about the current node \(v_{t}\) and several reachable nodes. We call the nodes reachable at the current timestep as candidate nodes. denotes the number of candidate nodes. We use a cross-modal transformer with \(n_{L}\) layers to relate visual observations to language instructions. The cross-modal transformer takes the visual features of nodes in the constructed topological map at time \(t\), \(G_{t}\), and outputs cross-modal embedding \(H_{t}\) to encode visual observations with \(L\). We concatenate location encoding and history encoding [24] to the visual features as node features to consider the relative pose from \(v_{t}\) and the last visited timestep of each node, respectively. Each word is encoded via a pretrained language encoder [45], which is used for general vision-language tasks. The cross-modal transformer consists of cross-attention layer L2V_Attn\((\hat{W},\hat{V})=\text{Softmax}(\hat{W}\Theta_{q}^{\dagger}(\hat{V} \Theta_{k}^{\dagger})^{T}/\sqrt{d})\hat{V}\Theta_{v}^{\dagger}\) and self-attention layer \(\text{SelfAttn}(X)=\text{Softmax}~{}((X\Theta_{q}(X\Theta_{k})^{T}+A\Theta_{e}+b _{c})/\sqrt{d})X\Theta_{v}\), where \(\hat{W}\), \(\hat{V}\), \(X\), and \(A\) denote word, visual, node representations and adjacency matrix of \(G_{t}\), respectively. The (query, key, value) weight matrices of self-attention and cross-attention layers are denoted as \((\Theta_{q},\Theta_{k},\Theta_{v})\) and \((\Theta_{q}^{\dagger},\Theta_{k}^{\dagger},\Theta_{v}^{\dagger})\), respectively. The final cross-modal embedding at time \(t\) after passing through \(n_{L}\) transformer layers is denoted as \(H_{t}\). 
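A minimal numpy sketch of these two attention operations is given below; it is an illustration only, with placeholder dimensions and randomly initialized weights, and it treats the adjacency bias \(A\Theta_{e}\) as a single scalar weight rather than a full learned linear map.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

d = 64  # hidden size (placeholder)

def l2v_attention(W_words, V_nodes, Tq, Tk, Tv):
    """Language-to-vision cross-attention: word embeddings attend over node features."""
    scores = (W_words @ Tq) @ (V_nodes @ Tk).T / np.sqrt(d)
    return softmax(scores) @ (V_nodes @ Tv)

def graph_self_attention(X_nodes, A, Tq, Tk, Tv, theta_e=1.0, b_c=0.0):
    """Self-attention over map nodes, biased by the graph adjacency matrix A."""
    scores = ((X_nodes @ Tq) @ (X_nodes @ Tk).T + A * theta_e + b_c) / np.sqrt(d)
    return softmax(scores) @ (X_nodes @ Tv)

rng = np.random.default_rng(0)
W, V = rng.normal(size=(12, d)), rng.normal(size=(8, d))
Tq, Tk, Tv = (rng.normal(size=(d, d)) for _ in range(3))
print(l2v_attention(W, V, Tq, Tk, Tv).shape)  # (12, 64)
```

Stacking \(n_{L}\) such cross- and self-attention layers yields the cross-modal embedding \(H_{t}\) consumed by the mode selector.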
To encourage the monotonic increasing relationship between language and visual attentions at each timestep, we define a correlation loss \(L_{corr}=\sum_{t=1}^{T}\|\text{L2V\_Attn}-I_{n_{t}}\|_{1}\) for training the cross-modal transformer, where \(n_{x}\) denotes the dimension of the \(H_{t}\) and \(I_{n_{x}}\) denotes an identity matrix of size \(n_{x}\times n_{x}\). As illustrated in Figure 4, the mode selector estimates the probability to explore \(P_{explore}\) given the cross-modal hidden state \(H_{t}\). We denote the mode selector as \(S_{mode}\) and use a two-layer feed-forward neural network. Given \(H_{t}\), \(S_{mode}\) outputs the exploration probability as \(P_{explore}=1-S_{mode}(H_{t})\). If \(P_{explore}\geq 0.5\), the exploration policy outputs a probability distribution for reachable nodes at the next step. At time \(t+1\), the agent moves to the node with the highest probability. If \(P_{explore}<0.5\), the agent determines that the current trajectory is regretful, so the agent should traverse to find a local goal, which is the most likely to be the closest node to the global goal. The exploitation policy mainly utilizes object-level features to search for the local goal with high-level reasoning. After the local goal is chosen, the path planning module outputs an action following the shortest path to the local goal. To train the mode selector, we require additional demonstration data other than the ground truth trajectory, such that it switches between exploration and exploitation. We generate the demonstration data from the ground truth trajectories, with additional detours. For the detours, we stochastically select candidate nodes other than the ground truth paths and add the trajectory that returns to the current viewpoint. The imitation learning loss for training the mode selector is defined as \(L_{mode}=\sum_{t=1}^{T}\mathbbm{1}(m_{t}=\text{gt}_{t})\), where \(m_{t}\) is the mode of the agent, 0 for exploitation and 1 for exploration. \(\text{gt}_{t}\) is 1 if the current node is in the shortest ground truth trajectory and \(\text{gt}_{t}=0\), otherwise. #### 3.2.2 Exploration Module In the exploration mode, the agent follows the following sequential operations: topological map construction, self-monitoring, and an exploration policy. To improve the exploration, we adopt self-monitoring [21] to predict the current progress of exploration to enhance the exploration policy itself. Prior work [21, 22] has shown that auxiliary loss using self-monitoring can regularize the exploration policy. Topological Map Construction.The agent constructs graph \(G_{t}\) by classifying nodes into two types: (1) visited nodes and (2) unvisited but observable nodes. At current Figure 3: **Network Architecture.** Three types of visual features: panoramic (yellow), object image (aquamarine), and object spectrum (red) are encoded. The color in each parenthesis denotes the color describing the corresponding feature. The cross-modal transformer encodes language and spatial visual features as hidden state \(H_{t}\). A mode selector gives _explore_ or _exploit_ command to the agent by predicting the explore probability \(P_{explore}\). The selected navigation module outputs an action \(a_{t}\) from the possible \(n_{cand}\) candidate nodes. time \(t\), the agent at node \(v_{t}\in\{V_{i}\}_{i=1}^{N}\) observes \(N(v_{t})\) neighbor nodes as next step candidates at time \(t+1\). 
The visited nodes consist of visual features of their own and the neighboring nodes from panoramic RGB observations. The unvisited nodes can be observed only if they are connected to at least one visited node. The topological map records the positions and visual features of observed nodes at each timestep. By knowing the positions of nodes in \(G_{t}\), the agent can plan the shortest path trajectory between two nodes. **Self-Monitoring.** We use a progress monitor to estimate the current navigation progress at each episode. Self-monitoring via estimating current progress helps the agent choose the next action that can increase the progress. The estimated progress \(\hat{p}_{t}=F_{progress}(H_{t})\) is the output of a feed-forward neural network, given \(H_{t}\) as input. We measure the ground truth progress \(p_{t}\) as the ratio between the current distance to the goal and the shortest path length of the episode subtracted from \(1\), described as \(1-\frac{d_{geo}(v_{t},v_{goal})}{d_{geo}(v_{0},v_{goal})}\), where \(d_{geo}(a,b)\) is the geodesic distance between \(a\) and \(b\). \(v_{0},v_{t}\), and \(v_{goal}\) denote initial, current, and goal positions, respectively. We add progress loss \(L_{progress}=\sum_{t=1}^{T}(\hat{p}_{t}-p_{t})^{2}\) to train the progress monitor while training the exploration policy. **Exploration Policy.** The exploration policy \(F_{explore}\) estimates the probability of moving to the candidate nodes at the next step. The agent chooses the action \(a_{t}\) at time \(t\) based on the estimated probability distribution among candidate nodes, described as \(a_{t}=\operatorname*{arg\,max}_{V_{i}}(F_{explore}([H_{t}]_{i}))\). \(F_{explore}\) is implemented via a two-layer feed-forward network with the cross-modal hidden state \(H_{t}\) given as input. The output of \(F_{explore}\) becomes a probability distribution over possible actions. To only consider unvisited nodes, we mask out the output for visited nodes. For training, we sample the next action from the probability distribution instead of choosing a node with the highest probability. We describe the training details in Section 3.3. #### 3.2.3 Exploitation Module In the exploitation mode, the agent requires high-level reasoning with identifiable environmental clues to imagine regretful exploration cases. To find clues in an object-level manner, we present a novel visual representation by capturing object information in the spectral-domain. The novel representation is more easily predictable than spatial features such as RGB image embeddings. The agent can take advantage of the predictability by expanding the searchable area to find a local goal. We choose the local goal as the closest node to the global goal in the feature space. **Spectral-Domain Visual Representations**. Common navigation policies can lead the agent toward the node with the highest similarity to the target. However, even with a good learned policy, the agent can act in a novice manner in unseen environments. In this paper, we seek extra information from the environment for generalizable high-level reasoning to resolve the issue. As illustrated in Figure 5, _scene object spectrum_ (SOS) incorporates semantic information observed in a single panoramic image by generating a semantic mask for each object category and applying Fourier transform to each semantic mask. The semantic mask for object class \(k\) at time \(t\) is calculated as a binary mask \([m_{t}^{k}]_{ij}\) that detects the object at pixel \((i,\ j)\). 
Suppose there are a total of \(K\) object categories. When multiple objects are detected for one object category, the binary mask appears as a union of the bounding boxes of the detected objects. We define \(\mathbf{FFT}\) as a channel-wise 2D fast Fourier transform that receives \(K\) binary semantic masks and outputs \(K\) spectral-domain features, where \(K\) is the number of object classes. Then, the SOS feature \(\vec{\mathcal{S}}_{t}=[s_{t}^{1},...,s_{t}^{K}]^{T}\) can be defined as \(s_{t}^{k}=\log|\mathbf{FFT}(m_{t}^{k})|\). For simplicity, we perform mean pooling on the vertical spectral axis and normalize the output. The final SOS feature has shape \((K,\eta)\), where \(\eta\) is the maximum horizontal frequency.

Figure 4: **Navigation Modules.** Mode selector estimates \(P_{explore}\), i.e., the probability to explore, and chooses between exploration and exploitation modules. The selected navigation module outputs the next action \(a_{t}\).

Figure 5: **Scene Object Spectrum (SOS).** The agent calculates _scene object spectrum_ (SOS) features for efficient exploitation. SOS features incorporate semantic information observed in a single panoramic image by performing category-wise 2D FFT.

**Local Goal Search Using Semantic Clues.** We argue that returning to a previously visited node does not guarantee that the agent escapes from local optima. Instead of backtracking to a previously visited node, the agent searches for a local goal to move towards. If the agent plans a path and moves towards the local goal, the agent does not need to repeat unnecessary actions in visited regions after the exploitation ends. Additionally, searching for a local goal takes full advantage of the topological map by utilizing the connections among the observed nodes. To expand the searchable area further, we let the agent choose the local goal from previously unvisited and unchosen candidate nodes. To choose a local goal, we first score the corrected trajectories to measure the alignment with the language instruction \(L\). We use SOS features as semantic environmental clues to estimate the navigation score \(S_{nav}\) of the corrected trajectory, which is the shortest path trajectory from the initial node to the local goal in the constructed topological map. To simplify, we convert the language instruction into a list of objects \(W^{o}=[w^{o}_{1},...,w^{o}_{B}]\) consisting of \(B\) (\(\leq K\)) object categories (e.g., desk, cabinet, and microwave). We approximate the corresponding reference SOS features as \([\hat{\delta}(w^{o}_{1}),...,\hat{\delta}(w^{o}_{B})]\), where the \(i^{th}\) row of \(\hat{\delta}(w^{o}_{k})\) is defined as \([\hat{\delta}(w^{o}_{k})]_{i}=\mathbb{1}(k=i)\lambda(\delta(w^{o}_{k}))\,\text{sinc}(\frac{j}{2}-\frac{n}{4})\). Here, \(\lambda(\delta(w^{o}_{k}))\) denotes the average width of the detected bounding boxes of object \(w^{o}_{k}\) in the environment. A detailed approximation process is explained in the supplementary material. To simulate a corrected trajectory \(\mathcal{T}^{\prime}=(v^{\prime}_{1},...,v^{\prime}_{t^{\prime}})\), we calculate the SOS features \([\vec{S}^{\prime}_{1},...,\vec{S}^{\prime}_{t^{\prime}}]\) corresponding to the nodes in \(\mathcal{T}^{\prime}\). We measure the similarity between two object spectrum features via the cosine similarity of the flattened vectors.
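A minimal numpy sketch of this SOS construction and similarity is shown below; the per-category normalization, the small epsilon, and the cutoff \(\eta\) are illustrative assumptions, since those implementation details are not fixed in the text.

```python
import numpy as np

def scene_object_spectrum(masks, eta=64):
    """Compute SOS features from K binary semantic masks of shape (H, W).

    Follows the construction above: channel-wise 2D FFT, log-magnitude,
    mean pooling over the vertical spectral axis, then normalization.
    """
    sos = []
    for m in masks:                                   # m: (H, W) binary mask
        spec = np.log(np.abs(np.fft.fft2(m)) + 1e-8)  # log|FFT(m)|, eps avoids log(0)
        row = spec.mean(axis=0)[:eta]                 # pool vertical axis, keep eta freqs
        sos.append(row / (np.linalg.norm(row) + 1e-8))
    return np.stack(sos)                              # shape (K, eta)

def sos_similarity(a, b):
    """Cosine similarity between two flattened SOS features."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

K, H, W = 3, 32, 128
masks = np.zeros((K, H, W))
masks[0, 10:20, 40:70] = 1.0      # e.g., one detected bounding box for category 0
print(scene_object_spectrum(masks).shape)   # (3, 64)
```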
Finally, the navigation score \(S_{nav}\) of \(\mathcal{T}^{\prime}\) is computed as: \[S_{nav}(\mathcal{T}^{\prime})=\frac{\sum\limits_{i=1}^{B}\sum\limits_{j=1}^{t^ {\prime}}(\frac{\hat{\delta}(w^{o}_{j})}{|\delta(w^{o}_{j})|}\cdot\frac{\vec{S }^{\prime}_{j}}{|\vec{S}^{\prime}_{j}|})((\hat{\delta}(w^{o}_{j})-\hat{\delta}( \overline{w}^{o}))\cdot(\vec{S}^{\prime}_{j}-\overline{\hat{S}^{\prime}}))}{ \sqrt{\frac{t^{\prime}}{\mathcal{T}}\cdot\sum\limits_{i=1}^{B}(\hat{\delta}(w^{ o}_{i})-\hat{\delta}(\overline{w}^{o}))^{2}\sum\limits_{j=1}^{t^{\prime}}(\vec{S}^{ \prime}_{j}-\overline{\hat{S}^{\prime}})^{2}}}, \tag{1}\] where \(\hat{\delta}(\overline{w}^{o})\) and \(\overline{\hat{S}^{\prime}}\) denote the average values of SOS features \(\hat{\delta}(w^{o}_{i})\) and \(\overline{\hat{S}^{\prime}}_{j}\), respectively. This equation can also be interpreted as a pseudo correlation-coefficient function between object list \(W^{o}\) and trajectory \(\mathcal{T}^{\prime}\). The exploitation policy selects the node with the highest navigation score as the local goal from the previously unvisited candidates. Figure 6 illustrates a simple scenario of entering a room. Suppose \(W^{o}=[\text{sculpture},\text{door},\text{bed}]\) and the agent has to compare two trajectories \(\mathcal{T}_{1}=(v_{1},v_{2},A)\) and \(\mathcal{T}_{1}=(v_{1},v_{2},B)\). Each similarity matrix in Figure 6 has the \((t,j)\) element as the similarity between the SOS feature of \(V_{t}\) and \(\hat{\delta}(w^{o}_{j})\), which is calculated as \(\hat{\delta}(w^{o}_{j})\cdot\vec{S}^{\prime}_{t}\). Notably, the similarity matrix shows monotonic alignment and the navigation score is higher when the next action is chosen correctly. ### Training Details We use [24] for pretraining the visual encoder with panoramic RGB observations. We use the DAgger algorithm [46] to pretrain the navigation policy and the mode selector. To prevent overfitting, we iteratively perform teacher forcing and student forcing to choose the action from the exploration policy. Imitation learning loss is calculated as \(L_{IL}=\sum_{t=1}^{T}-\log p(a^{*}_{t}|a_{t})\) and object grounding loss is calculated as \(L_{OG}=-\log p(\text{obj}^{*}|\text{obj}_{pred})\), where \(\text{obj}^{*}\) denotes the ground truth and \(\text{obj}_{pred}\) denotes the predicted object location. The total loss function is defined as \(L_{total}=L_{mode}+L_{progress}+L_{corr}+L_{IL}+L_{OG}\). We further finetune the agent via A2C [47]. The exploration policy selects the action \(a_{t}\) with probability \(p^{a}_{t}\). Reinforcement learning loss is defined as \(L_{RL}=-\sum_{t}a^{*}_{t}\log(p^{a}_{t})A_{t}-\lambda\sum_{t}a^{*}_{t}\log(p^{ a}_{t})\). To train the mode selector, progress monitor, and exploration policy in an end-to-end manner, we use the total loss function as \(L_{fine}=L_{mode}+L_{progress}+L_{RL}\). The exploitation policy searches the path toward the local goal from the constructed navigation graph. Thus, the exploitation policy is not learned. ## 4 Navigation Experiments ### Experiment Settings We evaluate our method on three VLN benchmarks, Room-to-Room(R2R) [2], SOON [16], and REVERIE [17]. **R2R** evaluates the visually-grounded natural navigation performance of the agent. The agent must navigate to the predefined goal point given image observations and language instructions in an unseen environment. **SOON** is also a goal-oriented VLN benchmark. Natural language instructions in SOON have an average length of 47 words. 
The agent should locate the target location and detect the location of an object to find the target object.

Figure 6: **Toy Example.** Monotonic alignment between language instruction and visual observation is desirable. Yellow dots in the nodes describe the ground truth trajectory. Based on the node at \(t=3\), the similarity matrix can show either monotonic or non-monotonic alignment between object tokens and SOS features. The green circles describe the possible candidates \(A,B\) for the next action.

**REVERIE** is a goal-oriented VLN benchmark that provides natural language instructions about target locations and objects. In REVERIE, the agent is given an instruction referring to a remote object with an average length of 21 words. With this instruction and a panoramic observation from the environment, the agent should navigate to the location the instruction describes and find the correct object
Our method outperforms other exploration-only baselines over all types of validation and test splits in success rate and SPL. Compared with hierarchical baselines SMNA [21], Regretful-Agent [22], FAST [35], and SSM [26], Meta-Explore improves success rate and SPL by at least 16.4% and 8.9%, respectively. The main difference is that Meta-Explore constructs a topological map during exploration and uses the map for local goal search in exploitation. On the contrary, homing exploitation policies in SMNA, Regretful-Agent, and FAST only rely on the current trajectory, instead of taking advantage of the constructed memory. Jump exploitation in SSM uses a topological map to search a successful previous node, but it makes an unrealistic assumption that the agent can directly jump to a previously visited distant node and unfairly saves time. In our approach, we plan a path to the local goal based on the topological map. The experiment results reveal that even if we design a hierarchical navigation framework, exploration and exploitation are not entirely separate but they can complement each other. **SOON, REVERIE.** Table 2 compares Meta-Explore with baselines in the SOON navigation task. While the proposed method does not improve performance in val seen split, Meta-Explore outperforms other baselines in the test unseen split of SOON for success rate by 17.1% and SPL by 20.6%. The result implies that for the goal-oriented VLN task, high performance in train or val seen splits can be the overfitted result. Because the agent can be easily overfitted to the training data, making a generalizable model or providing a deterministic error-correction module for inference is essential. Meta-Explore chooses the latter approach by correcting the trajectory via exploitation in regretful cases. The evaluation results in the REVERIE navigation task are described in the supplementary material. Meta-Explore shows improvement in the val split of REVERIE for success rate and SPL, but the improvement in the test split is lower than the results in R2R and SOON. We found 252 meaningless object categories (e.g., verbs, adjectives, and prepositions) and 418 replaceable object categories (e.g., typographical errors and synonyms) in the REVERIE dataset. Because our exploitation method utilizes object-based parsing of the given instruction to match with the detected object categories, the effectiveness of the proposed method is lessened due to inaccuracies and inconsistencies in the dataset. We expect to have higher performance if the mistakes in the dataset are fixed. ### Local Goal Search using SOS Features To discuss the significance of modeling exploitation policy, we conduct specific experiments about choosing the local goal for R2R and SOON. We evaluate our method using different types of local goal search, as shown in Table 3 and 4. Oracle denotes a method which selects a local goal using the ground truth trajectory. The performance of the oracle provides the achievable performance for each dataset. The results imply that local goal search using either spatial or spectral visual representations is more effective than random local goal search. The results show that local goal search using spectral visual representations, i.e., SOS features, lead the agent to desirable nodes the most. We also compare local goal search with homing and the difference between the performance of the two methods is most noticeable in the test split of the SOON navigation task. 
As shown in Table 4, choosing the local goal with only spatial-domain features, the navigation performance does not improve compared to homing. On the contrary, spectral-domain local goal search shows significant improvement against homing by 10.4% in success rate, 34.5% on SPL, and 27.4% on FSPL. The results imply that using spectral-domain SOS features helps high-level decision making, thereby enhancing the navigation performance. To further show the effectiveness of SOS features, we provide sample local goal search scenarios in the supplementary material. ### Ablation Study We conduct an ablation study to compare the proposed method against language-triggered hierarchical exploration. Results in the supplementary material show that among the three representation domains, spatial, spectral, and language, the spectral-domain features enhance navigation performance the most. Additionally, to implicate further applications of Meta-Explore in continuous environments, we evaluate our method on the photo-realistic Habitat [50] simulator to solve image-goal navigation and vision-and-language navigation tasks. Implementation details and results are included in the supplementary material. Results show that our method outperforms baselines in both tasks. ## 5 Conclusion We have proposed Meta-Explore, a hierarchical navigation method for VLN, by correcting mistaken short-term actions via efficient exploitation. In the exploitation mode, the agent is directed to a local goal which is inferred to be the closest to the target. A topological map constructed during exploration helps the agent to search and plan the shortest path toward the local goal. To further search beyond the frontier of the map, we present a novel visual representation called _scene object spectrum_ (SOS), which compactly encodes the arrangements and frequencies of nearby objects. Meta-Explore achieves the highest generalization performance for test splits of R2R, SOON, and val split of REVERIE navigation tasks by showing less overfitting and high success rates. We plan to apply Meta-Explore for VLN tasks in continuous environments in our future work. \begin{table} \begin{tabular}{c|c|c c c c|c c c c|c c c c} \hline \multirow{2}{*}{**Methods**} & \multirow{2}{*}{**Memory**} & \multirow{2}{*}{**Exploit**} & \multicolumn{4}{c|}{**Val Seen Innuc**} & \multicolumn{4}{c|}{**Val Seen House**} & \multicolumn{4}{c}{**Test Unseen**} \\ & & SR\(\uparrow\) & SPL\(\uparrow\) & OSR\(\uparrow\) & FSPL\(\downarrow\) & SR\(\uparrow\) & SPL\(\uparrow\) & OSR\(\uparrow\) & FSPL\(\downarrow\) & SR\(\uparrow\) & SPL\(\uparrow\) & OSR\(\uparrow\) & FSPL\(\downarrow\) \\ \hline \hline Human & - & - & - & - & - & - & - & - & - & - & 90.4 & 59.2 & 91.4 & 51.1 \\ \hline Random & Rec & ✗ & 0.0 & 1.5 & 0.1 & 1.4 & 0.1 & 0.0 & 0.4 & 0.9 & 2.1 & 0.4 & 2.7 & 0.0 \\ Speaker-Follower [28] & Rec & ✗ & 97.9 & 97.7 & 97.8 & 24.5 & 61.2 & 60.4 & 69.4 & **9.1** & 7.0 & 6.1 & 9.8 & 0.6 \\ RCM [49] & Rec & ✗ & 84.0 & 82.6 & 89.1 & 10.9 & 62.4 & 60.9 & 72.7 & 7.8 & 7.4 & 6.2 & 12.4 & 0.7 \\ AuxRN [23] & Rec & ✗ & 98.4 & 97.4 & **98.7** & 13.7 & 68.8 & **67.3** & **78.5** & 8.3 & 8.1 & 6.7 & 11.0 & 0.5 \\ GBE w/o GE & Top. Map & ✗ & 89.5 & 88.3 & 91.8 & 24.2 & 62.5 & 60.8 & 73.0 & 6.7 & 11.4 & 8.7 & 18.8 & 0.8 \\ GBE [16] & Top. Map & ✗ & 98.4 & 97.9 & 98.6 & **44.2** & **76.3** & 62.5 & 64.1 & 7.3 & 11.9 & 10.2 & 19.5 & 1.4 \\ GBE\({}^{\dagger}\) & Top. Map & ✗ & & - & - & 19.5 & 13.3 & 28.5 & 1.2 & 12.9 & 9.2 & 21.5 & 0.5 \\ DUET [24] & Top. 
Map & ✗ & 94.0 & 91.6 & 90.0 & 31.1 & 36.3 & 22.6 & 50.9 & 3.8 & 33.4 & 21.4 & 43.0 & **4.2** \\ \hline **Meta-Explore (Ours)** & Top. Map & **local goal** & **100.0** & **99.1** & 96.0 & 33.9 & 44.7 & 34.8 & 52.7 & 8.9 & **39.1** & **25.8** & **48.7** & 4.0 \\ \hline \end{tabular} \end{table} Table 2: Comparison and evaluation results of the baselines and our model in the SOON Navigation Task. \begin{table} \begin{tabular}{c|c|c c c c|c c c|c c c|c c} \hline \multirow{2}{*}{**Methods**} & \multirow{2}{*}{**Memory**} & \multirow{2}{*}{**Exploit**} & \multicolumn{4}{c|}{**Val Seen**} & \multicolumn{4}{c|}{**Val Unseen**} & \multicolumn{4}{c}{**Test Unseen**} \\ & & & SR\(\uparrow\) & SPL\(\uparrow\) & OSR\(\uparrow\) & FSPL\(\downarrow\) & SR\(\uparrow\) & SPL\(\uparrow\) & T\(\downarrow\) & NE\(\downarrow\) & SR\(\uparrow\) & SPL\(\uparrow\) & T\(\downarrow\) & NE\(\downarrow\) \\ \hline \hline Random & - & - & 16 & - & 9.58 & 9.45 & 16 & - & 9.77 & 9.23 & 13 & 12 & 9.89 & 9.79 \\ Human & - & - & - & - & - & - & - & - & 11.85 & 1.61 & 86 & 76 \\ \hline Seq2Seq [2] & Rec & ✗ & 6.0 & 39 & 11.33 & - & 22 & - & **8.39** & 7.84 & 20 & 18 & **8.13** & 7.85 \\ VLN-BERT [32] & Rec & ✗ & 72 & 68 & **11.13** & 2.90 & 63 & 57 & 12.01 & 3.93 & 63 & 57 & 12.35 & 4.09 \\ SMNA\({}^{\dagger}\)[21] & Rec & **homing** & 69 & 63 & 11.69 & 3.31 & 47 & 41 & 12.61 & 5.48 & 61 & 56 & - & 4.48 \\ Regerful-Agent [22] & Rec & **homing** & 69 & 63 & - & 3.23 & 50 & 41 & - & 5.32 & 48 & 40 & - & 5.69 \\ FAST (short) [35] & Rec & **homing** & - & - & - & - & 56 & 43 & 21.17 & 4.97 & 54 & 41 & 22.08 & 5.14 \\ FAST (long) [35] & Rec & **homing** & 70 & 04 & 188.06 & 3.13 & 63 & **0** & 224.42 & 4.03 & 61 & 03 & 196.53 & 4.29 \\ HAMT+e2c [34] & Seq & ✗ & 76 & 72 & 11.15 & 2.51 & 66 & 61 & 11.46 & **2.29** & 65 & 60 & 12.27 & 3.93 \\ DUET [24] & Top. Map & ✗ & 79 & 73 & 12.32 & 2.28 & 72 & 60 & 13.94 & 3.31 & 69 & 59 & 14.73 & 3.65 \\ SSM [26] & Top. Map & **jump** & 71 & 62 & 14.7 & 3.10 & 62 & 45 & 20.7 & 4.32 & 61 & 46 & 20.44 & 4.57 \\ \hline **Meta-Explore (Ours)** & Top. Map & **local goal** & **81** & **75** & 11.95 & **2.11** & **72** & **62** & 13.09 & 3.22 & **71** & **61** & 14.25 & **3.57** \\ \hline \end{tabular} \end{table} Table 1: Comparison and evaluation results of the baselines and our model in the R2R Navigation Task. \begin{table} \begin{tabular}{c|c c c c|c c c|c c c c} \hline \multirow{2}{*}{**Local**} & \multicolumn{4}{c|}{**Val Seen House**} & \multicolumn{4}{c}{**Val Seen House**} & \multicolumn{4}{c}{**Test Unseen**} \\ **Goal** & SR\(\uparrow\) & SPL\(\uparrow\) & OSR\(\uparrow\) & FSPL\(\downarrow\) & SR\(\uparrow\) & SPL\(\uparrow\) & OSR\(\uparrow\) & FSPL\(\downarrow\) & SR\(\uparrow\) & FSPL\(\downarrow\) & NE\(\downarrow\) \\ \hline \hline Human & - & - & - & - & - & - & - & - & - & - & 90.4 & 59.2 & 91.4 & 51.1 \\ \hline Random & Rec & ✗ & 0.0 & 1.5 & 0.1 & 1.4 & 0.1 & 0.0 & 0.4 & 0.9 & 2.1 & 0.4
2302.13622
Evolution of topological states in magnetic materials by symmetry-breaking
The magnetism-controllable band topology renders magnetic topological materials as one of the most promising candidates for next-generation electronic devices. Here we first construct three datasets allowing ascertaining the evolutions of topological states (with all enforced band crossings identified) using symmetry-indicators in the 1651 magnetic space groups. We then perform high-throughput investigations based on 1267 stoichiometric magnetic materials ever-experimentally synthesized and the three datasets to reveal a hierarchy of topological states by symmetry-breaking, along all ergodic and continuous paths of symmetry-breaking (preserving the translation symmetry) from the parent magnetic space group to the trivial group. The results are expected to aid experimentalists in selecting feasible and appropriate means to tune topological states towards realistic applications in new paradigm of memory device by topological state switching.
Feng Tang, Xiangang Wan
2023-02-27T09:46:03Z
http://arxiv.org/abs/2302.13622v1
# Evolution of topological states in magnetic materials by symmetry-breaking ###### Abstract The magnetism-controllable band topology renders magnetic topological materials as one of the most promising candidates for next-generation electronic devices. Here we first construct three datasets allowing ascertaining the evolutions of topological states (with all enforced band crossings identified) using symmetry-indicators in the 1651 magnetic space groups. We then perform high-throughput investigations based on 1267 stoichiometric magnetic materials ever-experimentally synthesized and the three datasets to reveal a hierarchy of topological states by symmetry-breaking, along all ergodic and continuous paths of symmetry-breaking (preserving the translation symmetry) from the parent magnetic space group to the trivial group. The results are expected to aid experimentalists in selecting feasible and appropriate means to tune topological states towards realistic applications in new paradigm of memory device by topological state switching. ## Introduction The past nearly two decades have witnessed tremendous advancements of symmetry-protected topological phases and topological materials [1, 2, 3, 4, 5] reshaping our fundamental understanding on the electronic properties in solids. In the well-established paradigm of studying topological phases in condensed matter, one usually utilizes the symmetry/symmetries to classify the protected topological phases. Very recently, topological quantum chemistry (TQC) [6, 7] or symmetry-indicators (SIs) [8, 9], have been applied in discovering topological nonmagnetic [10, 11, 12, 13]/magnetic [14]/superconducting [15] materials routinely and efficiently using the first-principles calculated high-symmetry point (HSP) symmetry-data with respect to the space groups/magnetic space groups (MSGs) of the pristine materials, revealing the ubiquitous band topology, protected by MSG symmetry. Following the conventional paradigm, finding or designing more symmetries beyond MSG symmetry, such as spin-space group symmetry [16], dual symmetry [17] and gauge-field induced projective symmetry [18], is anticipated to unveil more topological phases, as an on-going research theme in the field of topological materials. On the other hand, symmetry-breaking can also be utilized in creating new topological phase. For example, a higher-order band topology is induced by applying strain on SnTe to break the mirror symmetry which originally protects a mirror-Chern topological insulator phase [19]. Quantum anomalous Hall effect was realized through gapping the time-reversal symmetry protected Dirac fermion by magnetic dopants [20, 21]. Unconventional fermions can emerge in crystals which break the Poincare symmetry [22], and so on. Hence, symmetry-breaking could introduce fruitful topological phases and can act as a versatile knob to control topological states since the symmetry-breaking can be easily-manipulated. Furthermore, the advanced techniques on ultrafast light and high magnetic fields have assisted experimentalists recently to stimulate the pristine crystals to tune the topological states. 
The topological state can be drastically modified leading to giant quantum responses which have been observed in realistic materials, such as the colossal angular magnetoresistance in ferrimagnetic Mn\({}_{3}\)Si\({}_{2}\)Te\({}_{6}\)[23, 24, 25] and the giant anomalous Hall conductivity in Heusler Co\({}_{2}\)MnAl [26], where the topological states are controlled by rotating the magnetic moments. Noticing that the magnetic structure can be manipulated by magnetic fields to lower the symmetry [27, 28], here we focus on the magnetic materials listed in MAGNDATA [29], which have been experimentally synthesized and whose magnetic structures have already been deduced from neutron-scattering experiments, and aim at revealing all topological states by symmetry-breaking using SIs [8, 9, 30] in these materials. The results are expected to guide experimentalists to realize more magnetic topological materials with unprecedentedly fascinating properties driven by the change of topological phase. Note that though concrete symmetry-breakings are not exhaustible, we can classify symmetry-breakings by the resulting subgroups so that various meticulous concrete symmetry-breakings corresponding to one subgroup, are anticipated to share a common topological diagnosing prediction, since only the transformation properties of HSP wavefunctions account [8, 9, 30]. In this work, we first construct three datasets based on the 1651 MSGs and their subgroups, suitable to incorporate the symmetry-breaking to the conventional diagnostic scheme of band topology based on SIs, and then apply them to 1267 magnetic materials in MAGNDATA [29] combined with first-principles calculations. The three datasets will be described along with the description of the work flow below. ## Work flow The work flow is schematically shown in Fig. 1A. The first step is on the first-principles calculations for the materials structures in the database [29] and the second step is to consider the symmetry-breaking. These two steps utilize two datasets we constructed: dataset 1 named: "All_t-subgroups" and dataset 2 named: "All_atomic_insulator_basis,sets". In dataset 1, we exhaust all translationengleiche subgroups (t-subgroups) for each of the 1651 MSGs, which characterize symmetry-breaking in a general sense. Here, t-subgroup means that the translation symmetry is still preserved and only the point group symmetry is broken. To realize the t-subgroups, rotating magnetic moments for magnetic materials is one feasible way, a strategy by which is described in SM using Mn\({}_{3}\)Si\({}_{2}\)Te\({}_{6}\)[23; 24; 25] as an example. The exhaustive list of all t-subgroups is expected to be applied in various fields, not limited to the topological materials here. For example, by dataset 1, we can find supergroups of a given MSG. To realize a material with a small and nonvanishing band splitting which could induce giant Berry curvatures in some MSGs, we can first target materials with a higher MSG symmetry (namely, a supergroup) which enforces band degeneracy (for which the band splitting is vanishing) and then apply symmetry-breaking to materials crystallized in such supergroup to introduce a gap on the originally degenerate bands with the gap proportional to the strength of the symmetry-breaking which can be controllable. 
By dataset 1, all subgroups with a given translation symmetry-breaking can also be identified once the maximal subgroup for the translation symmetry-breaking is identified [31; 32] (see Supplementary Materials [33] (SM) for details). It is also worth mentioning that, in the materials database, such as the nonmagnetic materials database, Inorganic Crystal Structure Database (ICSD) [34] and the magnetic materials database, MAGNDATA [29], the positions of atoms per unit cell are given based on Wyckoff positions [35] classified by the MSG. Then more crystallographic symmetries might appear for some material, once the tolerance of judging whether the structure is invariant by a candidate symmetry operation, is relaxed. Hence, the corresponding identified MSG can be a supergroup of the original one. The symmetries of the complement of the original MSG in the supergroup can be regarded as one type of hidden symmetry, which has attracted intensive interest very recently [16; 17; 18]. The t-subgroups provided here can be applied in finding such hidden symmetry by directly testing the possible supergroups on the given structure. After the convergence for some magnetic material is reached in the first-principles calculation based on density functional theory in the first step (also requiring that the resulting MSG reproduces that in MAGNDATA [29], called parent MSG hereafter), the related HSP symmetry data can be computed based on the little groups for HSPs. All related data on the HSPs and the little groups are all provided in dataset 2. From the calculated HSP symmetry-data for the parent MSG, the HSP symmetry-data with respect to each t-subgroup can be found using the compatibility relations between the HSPs of the t-subgroup and the parent MSG, provided in dataset 1. In this way, additional realistic calculations considering symmetry-breaking need not to be performed. The topological classification for some t-subgroup can then be obtained by the HSP symmetry-data with respect to the t-subgroup [8; 30], which can be applicable to any concrete symmetry-breaking as along as it leads to the same t-subgroup. Note that we require that the HSP symmetry-data for the t-subgroup can be induced from those for the parent MSG, which is a reasonable approximation (see SM). In dataset 2, we also provide the atomic insulator basis set [8; 9; 30] of each MSG classifying any material into one of three cases [30], case I, II or III, of which case II/case III definitely corresponds to a topological material (see SM for details). Note that the t-subgroup is still an MSG, and thus we need not to construct the atomic insulator basis set for the t-subgroup as long as we fix the convention of the t-subgroup as that adopted in dataset 2. In addition, to characterize the evolution of topological states, we identify all continuous paths for the t-subgroups. We assign an integer to each symmetry-breaking pattern (corresponding to a subgroup of the point group of the parent MSG, see SM for details). We require that 0 represents that all point operations in the parent MSG are preserved while 1 represents that only identity operation (\(E\)) is preserved. The the continuous path can be written in the form of \(0\to i\to j\rightarrow\ldots\to 1\) where for all adjacent pair \(i\to j\), the point operations for \(i\) contain those for \(j\), and there exists no other symmetry-breaking pattern between \(i\) and \(j\). 
Note that each symmetry-breaking pattern uniquely corresponds to one t-subgroup, while there might exist several symmetry-breaking patterns corresponding to a given t-subgroup. We find that the maximal number of such continuous paths from 0 to 1 can be as large as 8476. See an example in Fig. 1A for MSG 12.62 (in the Belov-Neronova-Smirnova notation [36]), where all t-subgroups and the corresponding symmetry-breaking patterns are shown in the red circles of a tree-like plot. There are in total 3 continuous paths from 0 to 1: \(0\to 2\to 1\), \(0\to 3\to 1\) and \(0\to 4\to 1\) in terms of symmetry-breaking patterns, and \(12.62\to 2.4\to 1.1\), \(12.62\to 5.15\to 1.1\) and \(12.62\to 8.34\to 1.1\) in terms of t-subgroups. As listed in dataset 1, the point operations for the symmetry-breaking patterns 0, 1, 2, 3, 4 are \(\{E,I,\Theta C_{2z},\Theta\sigma_{z}\}\), \(\{E\}\), \(\{E,I\}\), \(\{E,\Theta C_{2z}\}\) and \(\{E,\Theta\sigma_{z}\}\) (\(I\) is spatial inversion, \(\Theta\) is time-reversal, \(C_{2z}\) is the 2-fold rotation around the \(z\)-axis and \(\sigma_{z}=IC_{2z}\) is a mirror operation), respectively. Once the topological states are identified for all t-subgroups, we can then obtain the evolution of topological states along each continuous path. The last step is devoted to the identification of all symmetry-enforced band crossings (BCs) in case III. Note that in the earlier applications of SIs/TQC to the high-throughput discovery of topological materials [10, 11, 12, 13, 14], the characteristics of the enforced BCs were not all identified. Very recently, there has been much effort toward the complete classification of all the BCs based on \(k\cdot p\) models [37], or combining \(k\cdot p\) models and compatibility relations [38], for the 230 space groups and the 1651 MSGs, respectively. Interestingly, the expansion order of the \(k\cdot p\) model around a BC may need to be rather large (up to 6) [38] to capture the nodal structure required by the compatibility relations. Varying the expansion order then introduces a hierarchy of nodal structures, around which tiny energy gaps could lead to large Berry curvatures, as has been verified experimentally in CoSi [39]. To this end, we provide dataset 3 (named "Enforced_band_crossings"), where we list, for the 1651 MSGs, all possible enforced BCs, the nodal structures required by the compatibility relations, the \(k\cdot p\) models, and how to detect the enforced BCs in high-symmetry lines (HSLs)/high-symmetry planes (HSPLs) from the HSP symmetry-data of the HSPs residing in the HSLs/HSPLs. Using dataset 3, the enforced BCs implied by the HSP symmetry-data and their evolutions in the investigated magnetic materials are all detected in this work.

### High-throughput investigations on 1267 magnetic materials

Next we discuss the high-throughput first-principles investigations of the magnetic materials listed in MAGNDATA [29], all of which have been synthesized experimentally and whose magnetic structures have been measured. Of the 1721 magnetic materials collected in total at the time of initiating this work, we first filter out materials with incommensurate magnetic ordering or without a definite MSG; further requiring that the atoms fully occupy their sites, that the lattice parameters are compatible with the MSG and that the numbers of ions are compatible with the chemical formula, we finally obtain 1267 "high-quality" magnetic materials as our starting point for the subsequent first-principles calculations and topological classification following the work flow shown in Fig. 1A. The experimentally identified MSGs [29] are set as the parent MSGs.
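A schematic version of this screening step is sketched below over hypothetical record dictionaries; the field names and the second example record are our own placeholders and do not reflect MAGNDATA's actual schema.

```python
# Sketch of the "high-quality" screening described above: keep only commensurate
# entries with a definite MSG, fully occupied sites, lattice parameters compatible
# with the MSG, and ion counts compatible with the chemical formula.
def is_high_quality(entry):
    checks = (
        entry["commensurate"],
        entry["msg_label"] is not None,
        all(abs(occ - 1.0) < 1e-3 for occ in entry["site_occupancies"]),
        entry["lattice_compatible_with_msg"],
        entry["ion_counts_match_formula"],
    )
    return all(checks)

entries = [
    {"id": "0.616", "commensurate": True, "msg_label": "12.62",
     "site_occupancies": [1.0, 1.0, 1.0],
     "lattice_compatible_with_msg": True, "ion_counts_match_formula": True},
    {"id": "toy-reject", "commensurate": False, "msg_label": None,
     "site_occupancies": [0.5],
     "lattice_compatible_with_msg": True, "ion_counts_match_formula": True},
]
print([e["id"] for e in entries if is_high_quality(e)])   # ['0.616']
```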
The electron correlation is considered by choosing various values of the Hubbard \(U\), and thus we have 5883 jobs in total for the 1267 materials (one material with one value of \(U\) is regarded as one job). Finally, 5062 jobs reach convergence in the first-principles calculations. However, the converged magnetic structure might break the original MSG symmetry. We then have 4012 jobs with successfully computed HSP symmetry-data, corresponding to 295 MSGs and 1013 magnetic materials. We list all these 295 MSGs in Table S18 of the SM, where the number of t-subgroups, the number of materials in each MSG and the number of jobs are also listed. For these jobs, we compute the representations of all the energy levels at the HSPs of the parent MSG from the first-principles wavefunctions. In order to detect all topological bands around the Fermi level, we vary the electron filling over \(\nu=\nu_{0},\nu_{0}\pm 1,\nu_{0}\pm 2,\nu_{0}\pm 3,\nu_{0}\pm 4\) (\(\nu_{0}\) is the intrinsic filling, i.e., the number of valence electrons per primitive unit cell). Tuning the filling might also lead to different topological phases, as has been realized in a topological phase-change transistor by electrostatic gating [41]. In total, for each filling, we collect 150882 sets of HSP symmetry-data considering all the 4012 jobs and all t-subgroups, all listed in the SM. These HSP symmetry-data are then exploited to conduct all subsequent topological classifications, including the identification of all enforced BCs.

### Magnetic topological materials statistics

The statistics of the magnetic topological materials identified in this work are shown in Fig. 2; the numbers increase monotonically and almost saturate as the range of fillings is enlarged. Considering the intrinsic filling first, 392 magnetic materials are identified to be topological, belonging to case II or III, and 236 are predicted to be topological for at least 3 different values of the Hubbard \(U\); their IDs in MAGNDATA [29] are listed in Table 1. The proportion of magnetic topological materials is consistent with that revealed in Ref. [14]. The number of topological materials gradually increases to 945 when fillings satisfying \(|\Delta\nu|\leq 4\) (\(\Delta\nu=\nu-\nu_{0}\)) are included, indicating that more than 90% of the investigated magnetic materials can be topological. For all considered fillings, we verify that, when the topmost levels are fully occupied at all HSPs in the parent MSG, the evolution of topological states in all these materials obeys the intuitive picture: case I changes to case I, case II changes to case I/II and case III changes to case I/II/III for any adjacent pair \(i\to j\) in each continuous path under symmetry-breaking. To what extent the band topology is preserved under symmetry-breaking could be meaningful for the optimization of topological materials, which might itself break some symmetry. In Table 1, we therefore also show the number of t-subgroups in which the nontrivial topology is preserved for at least three values of \(U\) for the listed magnetic materials. Interestingly, for other materials it might occur that case I changes to case II/III and case II changes to case III: when the topmost level at some HSP is not fully occupied and contains more than one (co-)irrep (which might be caused by some hidden symmetry), one needs to choose one or several (co-)irreps to be occupied, while a lowered symmetry can allow different (co-)irreps to be chosen, as shown in Fig. 1B, leading to the counterintuitive result.
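To make the bookkeeping behind these statistics concrete, the sketch below assembles the HSP symmetry-data at a chosen filling and tests it against an atomic-insulator basis. All labels and numbers are hypothetical, a unique (full-column-rank) expansion is assumed, and the case I/II/III rule applied here (integer expansion, fractional expansion, or no expansion at all) is our paraphrase of the SI-based criteria of refs. [8, 9, 30], not a reproduction of their code.

```python
# Toy diagnosis pipeline: (i) build the HSP symmetry-data vector at filling nu by
# counting the (co-)irreps of the occupied bands at each HSP; (ii) expand it in the
# atomic-insulator (AI) basis: no rational expansion -> case III (enforced BCs),
# fractional expansion -> case II (nonzero SI), integer expansion -> case I.
from sympy import Matrix

def symmetry_data(levels, nu, irrep_order):
    """levels[k] = [(irrep_label, dim), ...] sorted by energy at HSP k."""
    counts = {}
    for k, lv in levels.items():
        filled, occ = 0, dict.fromkeys(irrep_order, 0)
        for irrep, dim in lv:
            if filled >= nu:
                break
            occ[irrep] += 1
            filled += dim
        if filled != nu:          # filling cuts through a degenerate multiplet at k
            return None
        counts[k] = occ
    # flatten into one integer vector, one entry per (HSP, irrep) pair
    return [counts[k][ir] for k in sorted(counts) for ir in irrep_order]

def classify(ai_columns, b):
    A, b = Matrix(ai_columns).T, Matrix(b)
    try:
        x, free = A.gauss_jordan_solve(b)
    except ValueError:
        return "case III"                      # not expandable: enforced band crossings
    if free.rows:                              # non-unique expansion: reduce the basis first
        return "undetermined in this toy"
    return "case I" if all(c.is_integer for c in x) else "case II"

levels = {"G": [("G1", 1), ("G2", 1)], "Z": [("G1", 2)]}    # hypothetical two-HSP example
b = symmetry_data(levels, nu=2, irrep_order=["G1", "G2"])   # -> [1, 1, 1, 0]
print(b, classify([[1, 1, 1, 0], [0, 2, 1, 0]], b))         # -> case I
```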
Note that in Fig. 1C we also show the possibility that a normal-metal phase, in which the filling is an odd number while the bands are subject to a Kramers degeneracy in the whole Brillouin zone, changes to a Kramers-Weyl semimetal phase [40] under symmetry-breaking. Considering all t-subgroups, there are 1003 magnetic materials which _can_ be topological at some filling with respect to some t-subgroup and for some value of \(U\). We list all these 1003 materials in Tables S19-S94 of the SM, where the corresponding fillings for case II, case III, and for case II/III arising only when considering symmetry-breaking are listed for each of these materials. We also assemble the results for all 5883 jobs in Tables S95-S287 of the SM, where the calculated magnetic moments, the detailed topological properties, and the band-structure and density-of-states (DOS) plots can be found. We manually select 9 magnetic topological materials, whose electronic band plots are depicted in Fig. 3, and describe the effects of symmetry-breaking in these materials. These materials exhibit either a noticeable band gap or an enforced BC almost exactly at the Fermi level. Consider the intrinsic filling. The materials in Fig. 3 are topological for at least four values of \(U\), except Sr\({}_{2}\)TbIrO\({}_{6}\), for which only two jobs (\(U=0\) and 4 eV) reach convergence. The topological classifications for Sr\({}_{2}\)TbIrO\({}_{6}\) with \(U=0\) and 4 eV are identical with respect to all t-subgroups (including the parent MSG): for the t-subgroup MSG 2.4, SI \(=(0,0,0,2)\in\mathbb{Z}_{2}\times\mathbb{Z}_{2}\times\mathbb{Z}_{2}\times\mathbb{Z}_{4}\) [9], while for the other t-subgroups the classifications are case I. Note that different values of \(U\) might correspond to different topological predictions (case II/III) in UAsS and Ba\({}_{3}\)CoIr\({}_{2}\)O\({}_{9}\). For the others, different values of \(U\) with a topological prediction give either case II or case III. Under symmetry-breaking, the nontrivial band topology in these materials can be maintained, transformed or trivialized. For example, 21 of the 35 t-subgroups for UAsS (\(U=0\), 2, 4, 6 eV) give a case II/case III prediction, indicating the robustness of the nontrivial band topology against symmetry-breaking. For CaMnSi (\(U=0\), 1, 2, 3 eV), 5 of the 35 t-subgroups give case II while the rest give case I. For CsMnF\({}_{4}\) with \(U=0,1,2,3\) eV, the topological classifications with respect to all t-subgroups are identical for the different values of \(U\). For this material, 8 of the 16 t-subgroups give a case III prediction and the rest give case I, and the enforced BCs include two pairs of oppositely-charged isoenergetic Weyl points and one Weyl nodal line for the parent MSG. The Weyl points of each pair are related by inversion symmetry. Under symmetry-breaking, the original Weyl points/nodal line can be gapped: for example, with respect to the t-subgroup MSG 6.18, the Weyl points are all gapped while the Weyl nodal line still exists; with respect to the t-subgroup 18.19, the originally inversion-related Weyl points are allowed to have different energies. In the following, we take HoB\({}_{2}\) and CeMn\({}_{2}\)Si\({}_{2}\) as examples to show the detailed evolution of topological states along the continuous paths.
Figure 1: **Work flow of investigating the evolution of topological states.** (A) The first step is performing the first-principles calculations on each selected magnetic material; the HSP symmetry-data with respect to the parent MSG are then computed based on dataset 2, provided the converged magnetic structure fulfills the parent MSG. Then, by dataset 1, the HSP symmetry-data for all t-subgroups can be obtained. Combining the HSP symmetry-data and the atomic insulator basis set (given in dataset 2) for each t-subgroup, we then quickly classify the material in that t-subgroup into case I, II or III [30]. For case III, we obtain all enforced BCs by scanning all HSPs/HSLs/HSPLs in the Brillouin zone using dataset 3. (B) shows a case where case I can evolve to case II/III: the topmost level (indicated by the dashed ellipse) participating in the computation of the HSP symmetry-data might be protected by some hidden symmetry and then carries two identical two-dimensional irreps (represented by two gray dots) at some HSP. Under symmetry-breaking, the two-dimensional irrep splits into two different one-dimensional irreps, represented by red and blue dots, respectively. Considering that two bands are filled, there is only one possibility for the HSP symmetry-data before symmetry-breaking (one gray dot). However, after symmetry-breaking, there are three possibilities for the irreps chosen to be occupied. As shown in the two panels, the original phase in case I can evolve to case I and to case II/III, respectively. (C) demonstrates that a normal metal can be tuned into a Kramers-Weyl semimetal [40] by symmetry-breaking: in the normal metal, the energy bands are subject to a two-fold Kramers degeneracy protected by the combination of time-reversal and spatial inversion (\(\Theta I\)), with the filling an odd integer. Breaking the \(\Theta I\) symmetry, the energy bands split and remain degenerate only at the HSPs, where the Weyl points reside.

### Materials example: HoB\({}_{2}\)

We first show an example of a topological magnetic material, the rare-earth diboride HoB\({}_{2}\), classified as case II. Its ID in MAGNDATA [29] is 0.616. HoB\({}_{2}\) in the paramagnetic state crystallizes in a hexagonal lattice (space group \(P6/mmm\)) with a very simple crystal structure containing only one chemical formula unit per primitive unit cell, while the ferromagnetic state breaks the three-fold rotation symmetry, resulting in MSG 12.62; the material is reported to display a gigantic magnetocaloric effect [42, 43]. For MSG 12.62, all energy bands should be nondegenerate, so BCs are not allowed to exist stably at the special \(k\) points, as shown in dataset 3, while this MSG has a nontrivial SI group [9], \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\times\mathbb{Z}_{4}\). Indeed, for all t-subgroups, HoB\({}_{2}\) can only be case I or case II, as shown for different fillings in the lower panel of Fig. 4A (\(\nu=\nu_{0},\nu_{0}-1,\nu_{0}+1,\nu_{0}+2\) and \(\nu_{0}+3\)), which characterizes the evolution of topological states by the tree-like plots. The fillings correspond to the gaps shown in the upper panel of Fig. 4A, indicated by different colors. It is interesting to note that there are considerable gaps for the filling \(\nu_{0}+1\) (the region in orange in the upper panel of Fig. 4A), whose nontrivial topology is expected to bring about noticeable protected boundary states. Such gaps share the same evolution of topological states as those for another three fillings, as shown in the lower panel of Fig.
4A, where the colors shown in the inset correspond to the gaps in the respective colors. Furthermore, the quasi-flat bands (whose band indices are \(\nu_{0}+2\) and \(\nu_{0}+3\)) are found to be topological by comparing the SIs in the two tree-like plots of the lower panel of Fig. 4A. The coexistence of magnetic order, topological quasi-flat bands and multiple topological gaps around the Fermi level makes this material a fascinating platform to study possible exotic quantum excitations.

Table 1: **The 236 magnetic topological materials predicted to be topological for at least 3 values of \(U\) at the intrinsic filling.** ID denotes the ID or label in MAGNDATA [29] and N denotes the number of t-subgroups with respect to which the corresponding material is topological for the different choices of values of \(U\). For example, ID = 1.308 in the last column uniquely corresponds to the magnetic topological insulator MnBi\({}_{2}\)Te\({}_{4}\) [28].

Figure 2: **Statistics of identified magnetic topological materials in this work.**

### Materials example: CeMn\({}_{2}\)Si\({}_{2}\)

We then show another example, CeMn\({}_{2}\)Si\({}_{2}\), whose ID in MAGNDATA [29] is 1.490 and whose MSG is 126.386. The Ce ion is found to have a vanishing magnetic moment, while the magnitude of the magnetic moment on each Mn ion is 1.9 \(\mu\)B experimentally [44]. We choose the results of the calculation in which the values of \(U\) on the Ce and Mn ions are both 0 eV, for which the calculated magnetic moments reasonably reproduce the experimental ones. It is noted that the results for the other values of \(U\) share the topological properties discussed here. Consider the intrinsic filling. In the parent MSG, CeMn\({}_{2}\)Si\({}_{2}\) is predicted to be in case III and, furthermore, the HSP symmetry-data guarantee, through dataset 3, that enforced BCs appear in the HSLs \((0,w,\frac{1}{2})\) and \((w,w,\frac{1}{2})\), as shown in Fig. 4B, where these BCs are also verified from the representations at all the \(k\) points of a dense \(k\) mesh along the HSLs. It is also found from dataset 3 that the BCs in these HSLs actually lie on a nodal line in an HSPL (\((u,v,\frac{1}{2})\)) with degeneracy 4, denoted a Dirac nodal line. Then consider the effect of symmetry-breaking. These Dirac nodal lines can be gapped in some symmetry-breaking patterns, resulting in a possibly-trivial insulator (case I) or a topological phase in case II. In other symmetry-breaking patterns, the resulting topological phases are all in case III and, furthermore, the guaranteed BC lies on either a Dirac nodal line or a Weyl nodal line (whose degeneracy is two). We show the evolution of topological states for this material in Fig. 4C. It is worth mentioning that the tree-like plot in Fig. 4C only contains the symmetry-breaking patterns resulting in case II/III, and the symmetry-breaking patterns following those at the end of the tree-like plot (15 and 57) all result in case I, a possibly trivial topological phase.

Figure 3: **Electronic band plots of nine manually selected magnetic topological materials identified in this work.** In all the band plots in this work, the Fermi level is set to 0 eV. The chemical formula, the ID in MAGNDATA [29], the MSG and the Hubbard \(U\) are all provided in the header of each band plot.
Considering the intrinsic filling, other than CsMnF\({}_{4}\) in (B) and CeMn\({}_{2}\)Si\({}_{2}\), which are classified in case III, the rest are in case II. Sr\({}_{2}\)TbIrO\({}_{6}\), UAsS, CaMnSi, Ba\({}_{3}\)CoIr\({}_{2}\)O\({}_{9}\), EuMg\({}_{2}\)Bi\({}_{2}\), Mn\({}_{2}\)AlB\({}_{2}\) and HoB\({}_{2}\) have SI \(=(1)\in\mathbb{Z}_{2},(0,1)\in\mathbb{Z}_{2}\times\mathbb{Z}_{4},(1)\in\mathbb{Z}_{2},(1,0)\in\mathbb{Z}_{2}\times\mathbb{Z}_{2},(1)\in\mathbb{Z}_{2},(1)\in\mathbb{Z}_{2},(0,0,2)\in\mathbb{Z}_{2}\times\mathbb{Z}_{2}\times\mathbb{Z}_{4}\), respectively. For CsMnF\({}_{4}\), the enforced BCs appear in \((-w,0,0),(-w,0,\frac{1}{2}),(0,w,0),(0,w,\frac{1}{2}),(u,0,v)\). For CeMn\({}_{2}\)Si\({}_{2}\), the enforced BCs appear in \((0,w,\frac{1}{2}),(w,w,\frac{1}{2}),(u,v,\frac{1}{2})\). The effects of symmetry-breaking on the topological properties are described in the main text. The topological properties with respect to all t-subgroups (including the parent MSG) can be found in the SM.

### Conclusion and perspective

To conclude, a complete dataset of all t-subgroups for the 1651 MSGs and of the compatibility relations between the HSPs of each pair of parent MSG and t-subgroup was constructed. This dataset can also be applied to studying the symmetry-breaking of superconducting pairings belonging to higher-dimensional irreps of point groups [15, 45]. The 1651 atomic insulator basis sets were also provided, which can be applied to any magnetic/nonmagnetic material with an identified MSG, whether determined experimentally [29] or theoretically [46]. The HSPs, their little groups and the characters of the (co-)irreps were explicitly provided for the 1651 MSGs in both the Bradley-Cracknell [47] and Bilbao [35] conventions, two frequently adopted conventions. The topological classification results for more than one thousand magnetic materials obtained in this work can guide experimentalists in choosing materials of interest and ways of breaking symmetry, which might pave the avenue to the realization of highly-sensitive magnetism-controlled band topology and of topological magnets with high operational temperature.

Figure 4: **Evolution of topological states in HoB\({}_{2}\) and CeMn\({}_{2}\)Si\({}_{2}\).** The upper panel of (A) shows the band structure, in which all the bands are nondegenerate, and the DOS plot. The continuous gaps corresponding to the intrinsic filling \(\nu_{0}\) and to the fillings \(\nu_{0}-1,\nu_{0}+1,\nu_{0}+2,\nu_{0}+3\) are indicated by the regions in yellow, cyan, orange, green and magenta, respectively. The lower panel of (A) shows the tree-like plots for the topological-state evolution corresponding to the gaps indicated by the colors shown in the insets. The symmetry-breaking patterns are shown: 0, 1, 2, 3 and 4 correspond to the t-subgroups 12.62, 1.1, 2.4, 5.15 and 8.34, respectively. Note that the peaks in the DOS plot correspond to the two bands (with band indices \(\nu_{0}+2\) and \(\nu_{0}+3\)) indicated by the white double arrow, and such quasi-flat bands are topological (with SI \(=(1,1,3)\)). (B) shows the band structures along two HSLs, \((0,w,1/2)\) and \((w,w,1/2)\), where the orange and blue colors encode two different co-irreps of the corresponding HSLs. Various BCs composed of these two co-irreps can be found, and the BCs indicated by the red arrows are those deduced from the HSP symmetry-data: along these two HSLs the HSPs are Z \((0,0,\frac{1}{2})\)-R \((0,\frac{1}{2},\frac{1}{2})\)-Z \((0,1,\frac{1}{2})\) and Z \((0,0,\frac{1}{2})\)-A \((-\frac{1}{2},-\frac{1}{2},\frac{1}{2})\)-Z \((-1,-1,\frac{1}{2})\), respectively.
Note that in each of these HSLs there exists another BC related to the one indicated by the red arrow by an MSG operation. (C) shows the tree-like plot for the evolution of the original Dirac nodal lines in CeMn\({}_{2}\)Si\({}_{2}\). In each circle, the integer denotes the symmetry-breaking pattern, II stands for case II, and WNL/DNL stands for a Weyl nodal line/Dirac nodal line in case III.

These classification results can also be used as the training dataset to develop a simple-to-use heuristic chemical rule for diagnosing band topology, such as the topogivity [48], by machine learning. The enforced BCs are identified from the HSP symmetry-data, already producing a large number of BCs with various topological characters: we have collected 179 and 2006 independent BCs at HSPs and lying in HSLs, respectively, considering the intrinsic fillings. It should be noted that, to identify all BCs, a poor man's strategy needs to be adopted: a dense \(k\)-point mesh should be chosen in the HSLs/HSPLs and all the symmetry-data at the \(k\) points in the mesh should be evaluated, which is left to future work. Note that the topological classifications for the t-subgroups obtained here for magnetic materials are independent of the origin of the symmetry-breaking, so that the results are widely applicable in a general sense. Since we identify all continuous paths of t-subgroups under symmetry-breaking, the corresponding evolution of topological states along a continuous path guarantees that no topological state that could be detected in a concrete symmetry-breaking process is omitted. Conversely, the evolution of the topological states might indicate the type of symmetry-breaking. Besides guiding the control of topological states, the results are expected to aid in assessing the robustness of the nontrivial band topology against symmetry-breaking, which could be useful for the optimization of topological materials or for transport measurements that require the application of external stimuli. The interplay of magnetism and nontrivial band topology with other orders, such as ferroelectric order [49], and with novel crystal structures (e.g., the kagome lattice) merits future study, for which suitable candidates can be chosen from our predicted magnetic topological materials. Lastly, a more generic symmetry-breaking could include translation symmetry-breaking that enlarges the primitive unit cell and leads to Brillouin-zone folding (e.g., a charge-density-wave order [31, 32]); such breakings cannot be exhausted, but the t-subgroups obtained here can still be applied after first identifying the subgroup(s) obtained by considering only the translation symmetry-breaking, the details of which are described in the SM. Another direction based on the t-subgroups exhaustively listed here is to explore the splitting of each Wyckoff position of each MSG in its t-subgroups, which might guide the design of approximate MSG symmetry simply from the atomic positions once the Wyckoff-position splitting can be approximately ignored. The t-subgroups of the 1651 MSGs could also be utilized to tabulate novel groups beyond the MSGs via the homomorphism theorem. Besides, one can use the t-subgroups (or supergroups) of the MSGs of the kagome lattice, the Lieb lattice or other interesting lattices showing exotic physical properties to find materials realizations [50].

###### Acknowledgements.
We are very grateful for earlier collaborations on related topics with Ashvin Vishwanath, Hoi Chun Po, Haruki Watanabe and Seishiro Ono. F.T. appreciates insightful discussions with Wei Chen, Qun-Li Lei, Kai Li and Yang-Yang Lv. We also thank Ge Yao for very helpful suggestions on high-performance computing. **Funding:** F.T. was supported by the National Natural Science Foundation of China (NSFC) under Grant No. 12104215 and by the Young Elite Scientists Sponsorship Program of the China Association for Science and Technology. F.T. and X.W. were supported by NSFC Grants No. 12188101, 11834006, 51721001 and 11790311, and by the excellent program at Nanjing University. X.W. also acknowledges support from the Tencent Foundation through the XPLORER PRIZE. **Author contributions:** F.T. conceived and designed the project and performed all calculations. All authors contributed to the writing and editing of the manuscript. **Competing interests:** None declared. **Data and materials availability:** All data are available in the manuscript or the supplementary materials. The three datasets constructed in this work can be found at: [https://box.nju.edu.cn/published/three-datasets/](https://box.nju.edu.cn/published/three-datasets/).
2308.06512
HyperFormer: Enhancing Entity and Relation Interaction for Hyper-Relational Knowledge Graph Completion
Hyper-relational knowledge graphs (HKGs) extend standard knowledge graphs by associating attribute-value qualifiers to triples, which effectively represent additional fine-grained information about the associated triple. Hyper-relational knowledge graph completion (HKGC) aims at inferring unknown triples while considering their qualifiers. Most existing approaches to HKGC exploit a global-level graph structure to encode hyper-relational knowledge into the graph convolution message passing process. However, the addition of multi-hop information might bring noise into the triple prediction process. To address this problem, we propose HyperFormer, a model that considers local-level sequential information, which encodes the content of the entities, relations and qualifiers of a triple. More precisely, HyperFormer is composed of three different modules: an entity neighbor aggregator module allowing to integrate the information of the neighbors of an entity to capture different perspectives of it; a relation qualifier aggregator module to integrate hyper-relational knowledge into the corresponding relation to refine the representation of relational content; and a convolution-based bidirectional interaction module, based on a convolutional operation, capturing pairwise bidirectional interactions of entity-relation, entity-qualifier, and relation-qualifier to realize a deeper perception of the content related to the current statement. Furthermore, we introduce a Mixture-of-Experts strategy into the feed-forward layers of HyperFormer to strengthen its representation capabilities while reducing the amount of model parameters and computation. Extensive experiments on three well-known datasets with four different conditions demonstrate HyperFormer's effectiveness. Datasets and code are available at https://github.com/zhiweihu1103/HKGC-HyperFormer.
Zhiwei Hu, Víctor Gutiérrez-Basulto, Zhiliang Xiang, Ru Li, Jeff Z. Pan
2023-08-12T09:31:43Z
http://arxiv.org/abs/2308.06512v1
HyperFormer: Enhancing Entity and Relation Interaction for Hyper-Relational Knowledge Graph Completion ###### Abstract. Hyper-relational knowledge graphs (HKGs) extend standard knowledge graphs by associating attribute-value qualifiers to triples, which effectively represent additional fine-grained information about its associated triple. Hyper-relational knowledge graph completion (HKGC) aims at inferring unknown triples while considering its qualifiers. Most existing approaches to HKGC exploit a global-level graph structure to encode hyper-relational knowledge into the graph convolution message passing process. However, the addition of multi-hop information might bring noise into the triple prediction process. To address this problem, we propose HyperFormer, a model that considers local-level sequential information, which encodes the content of the entities, relations and qualifiers of a triple. More precisely, HyperFormer is composed of three different modules: an _entity neighbor aggregator_ module allowing to integrate the information of the neighbors of an entity to capture different perspectives of it; a _relation qualifier aggregator_ module to integrate hyper-relational knowledge into the corresponding relation to refine the representation of relational content; a _convolution-based bidirectional interaction_ module based on a convolutional operation, capturing pairwise bidirectional interactions of entity-relation, entity-qualifier, and relation-qualifier. Furthermore, we introduce a Mixture-of-Experts strategy into the feed-forward layers of HyperFormer to strengthen its representation capabilities while reducing the amount of model parameters and computation. Extensive experiments on three well-known datasets with four different conditions demonstrate HyperFormer's effectiveness. Datasets and code are available at [https://github.com/zhiweihu1103/HKGC-HyperFormer](https://github.com/zhiweihu1103/HKGC-HyperFormer). 2019 Knowledge graphs, hyper-relational knowledge graphs, knowledge graph completion ## 1. Introduction Knowledge Graphs (KGs) (Krishnan et al., 2016) store and organize factual knowledge of the world using triples of the form \((h,r,t)\)(Krishnan et al., 2016), capturing that entities \(h,t\) are connected via relation \(r\). Popular KGs, such as WordNet (Krishnan et al., 2016), Freebase (Bengio et al., 2016), and Wikidata (Wikidata, 2017), are widely used in several tasks, ranging from question answering (Krishnan et al., 2016; Krishnan et al., 2016; Krishnan et al., 2016; Krishnan et al., 2016) to recommendation systems (Krishnan et al., 2016; Krishnan et al., 2016). However, relational KGs have no means for representing additional information of facts. For example, for the triple (_Joe Biden_, _educated at_, _University of Delaware_) represented in Figure 1, it is non-trivial representing the major studied by Joe Biden at the _University of Delaware_. To address this shortcoming, _hyper-relational KGs_ have been proposed, extending binary relational KGs by associating with each triple additional attributes in the form of relation-entity pairs, known as _qualifiers_. Thus in this case a relational fact is composed by the main triple and its qualifiers. For example, for the triple in Figure 1, the qualifier pairs (_academic major, political science_), (_academic degree, Bachelor of Arts_), (_start time, 1961_), and (_end time, 1965_) describe the major and degree information of _Joe Biden_ education at _the University of Delaware_ from _1961 to 1965_. 
Like standard KGs, hyper-relational KGs are inevitably incomplete. To tackle this problem, several hyper-relational knowledge graph completion (HKGC) approaches have been recently proposed (Krishnan et al., 2016; Krishnan et al., 2016; Krishnan et al., 2016), to examine the impact of the addition of qualifier pairs on the knowledge graph completion task. Most existing methods for HKGC (Krishnan et al., 2016; Krishnan et al., 2016; Krishnan et al., 2016) employ graph convolutional networks (GCNs) to incorporate qualifier pairs information into entity and relation embeddings. In particular, when encoding the content of the graph structure, these approaches use multiple layers of graph convolution operations to incorporate multi-hop information into the representation of entities. Although these methods enrich the representation of entities, they inevitably introduce additional noise by considering information that might not be relevant for an entity. More precisely, the first source of noise comes from entities in the standard KG, i.e., entities occurring in main triples. For instance, in Figure 1 in the _Global-level Graph-based Representation_ part, when trying to predict (_Joe Biden_, _educated at_, _?_) using two layers of graph convolution operations, information about the entities _Columbia University_, _Barack Obama_, _Widener University_, _university teacher_ will be incorporated into the representation of _Joe Biden_. However, the confidence of the true answer _University of Delaware_ will be affected by the information from the entities _Columbia University_ and _Widener University_, as the three schools share a high degree of similarity. A second source of noise comes from the introduction of hyper-relational knowledge. Going back to the previous example, to predict the triple (_Joe Biden_, _educated at_, _?_) with qualifiers (_start time_, _1961_), and (_end time_, _1965_), the neighbor (_Barack Obama_, _educated at_, _Columbia University_) includes the qualifiers (_start time_, _1981_) and (_end time_, _1983_), which will affect the representation of the relation _educated at_ and the prediction of where _Joe Biden_ was educated at. The main objective of this paper is to introduce an alternative to global graph operations for the HKGC task. Note that, as shown in Figure 1 in the _Local-level Sequence-based Representation_ part, the local sequential content does not introduce redundant entity and relation content present in the global graph structure, because the entities and relations involved in a local sequence are directly related to the content to be predicted. With this in mind, we introduce **HyperFormer**, a framework for HKGC that considers local-level sequential information and abandons the global-level structural content. Specifically HyperFormer integrates hyper-relational information into the entity and relation embeddings of a fact by using three modules: an _entity neighbor aggregator_ module allowing to integrate the information of one-hop local neighbors of an entity into its representation to capture different perspectives of it; a _relation qualifier aggregator_ module to integrate hyper-relational knowledge into the corresponding relation representation, so that relations occurring in different facts are contextualized by the qualifier pairs information; a _convolution-based bidirectional interaction_ module based on a convolutional operation, capturing pairwise bidirectional interactions of entity-relation, entity-qualifier, and relation-qualifier. 
Furthermore, to increase HyperFormer's capacity while reducing the amount of parameters and calculations, we introduce a _Mixture-of-Experts_ (_MoE_) strategy to leverage the sparse activation nature in the feed-forward layers of transformers. Our contributions can be summarized as follows: * We propose a framework for HKGC that fully exploits local-level sequential information, while preserving the structural information of qualifiers. Further, we integrate the information of one-hop neighbors of an entity to capture different perspectives of it. In addition, the adoption of a bidirectional interaction mechanism strengthens the awareness between entities, relations, and qualifiers. * We introduce a MoE strategy to enhance the representation capabilities of HyperFormer, while reducing the number of parameters and calculations of the model. * We conduct extensive experiments with four different conditions: mixed-percentage mixed-qualifier, fixed-percentage mixed-qualifier, fixed-percentage fixed-qualifier, and different numbers of entity's neighbors. Our results show that HyperFormer achieves SoTA performance for HKGC. We also conducted various ablation studies. ## 2. Related Work Knowledge Graph CompletionThere are mainly two kinds of existing KGC methods: structure-based methods and description-based methods. Depending on the type of embedding space, structure-based methods can be divided into three categories: (i) point-wise space methods, e.g., TransE (Chen et al., 2016), TransR (Chen et al., 2016), HAKE (Zhou et al., 2017); (ii) complex vector space methods, e.g. ComplEx (Zhou et al., 2017), RotatE (Zhou et al., 2017), QuatE (Zhou et al., 2017); Figure 1. The global-level graph-based representation and local-level sequence-based representation based on two triples with qualifier pairs. (iii) manifold space methods, e.g., MuRP (Bartner et al., 2015), AttH (Bartner et al., 2015). Description-based methods leverage text descriptions of entities and relations, e.g., KG-BERT (Krishnan et al., 2017), StAR (Srivastava et al., 2017), CoLE (Krishnan et al., 2018). There are also KGC approaches (Krishnan et al., 2018; Kochelev et al., 2019) considering schema information. Hyper-relational Knowledge Graph CompletionEarlier works on HKGC proposed embedding-based methods to learn and reason with hyper-relational knowledge (Krishnan et al., 2018; Kochelev et al., 2019; Kochelev et al., 2019; Kochelev et al., 2019). However, they assume that hyper-relational knowledge is equally important for all relations, which is often not the case. To encode the contribution of different qualifier pairs, HINGE (Krishnan et al., 2018) adopts a convolutional framework to iteratively convolve every qualifier pair into the main triple, which can naturally discriminate the importance of different hyper-relational facts. StarE (Bartner et al., 2015) extends CompGCN (Zhou et al., 2017) by encoding the qualifier pairs of a triple and further combining it with the relation representation, and then uses a transformer (Zhou et al., 2017) encoder to model the interaction between qualifiers and the main triple. Hy-Transformer (Krishnan et al., 2018) replaces the computation-heavy GCN aggregation module with a layer normalization operation (Bartner et al., 2015), significantly improving the computational efficiency. QUAD (Krishnan et al., 2018), based on the StarE model, proposes a framework that utilizes multiple aggregators to learn better representations for hyper-relational facts. 
However, the global-level graph structure adopted by both StarE and QUAD integrates the multi-hop neighbor content into the corresponding entity through the graph convolution process, which inevitably introduces noise, because the node content far away from an entity will affect the real representation of such entity. HAHE (Krishnan et al., 2018) introduces the global-level and local-level attention to model the graphical structure and sequential structure, however, the introduction of graph structure information brings a large burden to model computation. GRAN (Srivastava et al., 2017) represents hyper-relational facts as a heterogeneous graph, representing it with edge-aware attentive biases to capture both local and global dependencies within the given facts. In particular, GRAN also considers a local sequential representation structure and captures the semantic information inside hyper-relation facts by using a transformer encoder (Zhou et al., 2017). However, GRAN has three shortcomings. First, it only considers the knowledge directly related to the current statement, fully ignoring any type of information from the neighbors of an entity. Second, the constraining process of the hyper-relational knowledge onto the main triple is simply handed over to a transformer, without capturing the pair-like structure of qualifiers. Third, the transformer uses full connection attention to realize the interaction between each token in the sequence, ignoring the explicit interaction between entities, relations and hyper-relational knowledge. ## 3. Method In this section, we describe the architecture of HyperFormer (cf. Figure 2). We start by introducing necessary background (SS 3.1), and then present in detail its modules. (SS 3.2). ### Background Hyper-relational Knowledge GraphLet \(\mathcal{V}\) and \(\mathcal{R}\) be finite sets of _entities_ and _relation types_, respectively. Furthermore, let \(\mathcal{Q}=2^{(\mathcal{R}\times\mathcal{V})}\). A _hyper-relational knowledge graph_\(\mathcal{G}\) is a tuple \((\mathcal{V},\mathcal{R},\mathcal{T})\), where \(\mathcal{T}\) is a finite set of (_qualified_) _relational facts_. The relational facts in \(\mathcal{T}\) are of the form \((h,r,t,qp)\) where \(h,t\in\mathcal{V}\) are the _head_ and _tail_ entities, \(r\in\mathcal{R}\) is the _relation_ connecting \(h\) and \(t\), and \(qp=\{(q_{r_{1}},q_{e_{1}}),\ldots,(q_{r_{n}},q_{e_{n}})\}\in\mathcal{Q}\) is a set of _qualifier pairs_, with _qualifier relations_\(q_{r_{i}}\in\mathcal{R}\) and _qualifier entities_\(q_{e_{i}}\in\mathcal{V}\). We will refer to \((h,r,t)\) as the _main triple_ of the relational fact. Under this representation regime, we can enrich the main triple (_Joe Biden_, _educated at_, _University of Delaware_) in Figure 1 with the additional semantic information provided by the qualifiers as follows: (_Joe Biden_, _educated at_, _University of Delaware_, (_academic degree_, _Bachelor of Arts_), (_academic major_, _political science_), (_start time_, _1961_), (_end time_, _1965_)). Crucial to our approach is the information provided by the neighbors of an entity: For an entity \(h\), we define its _neighbors_\(\mathcal{N}_{h}=\{(r,t,qp)\mid(h,r,t,qp)\in\mathcal{G}\}\). Hyper-relational Knowledge Graph CompletionFollowing previous work, similar to the KG completion task, the HKGC task aims at predicting the correct head or tail entity in a fact. 
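Before turning to the model, the hyper-relational structures defined in this section can be mirrored by a small data container. The class and field names below are our own illustrative choices, not identifiers from the released code.

```python
# Sketch of a qualified relational fact (h, r, t, qp) and of the neighbor set
# N_h = {(r, t, qp) | (h, r, t, qp) in G} used later by the entity neighbor aggregator.
from dataclasses import dataclass
from collections import defaultdict
from typing import Tuple

@dataclass(frozen=True)
class Fact:
    head: str
    rel: str
    tail: str
    qualifiers: Tuple[Tuple[str, str], ...] = ()   # ((q_rel, q_ent), ...)

class HKG:
    def __init__(self, facts):
        self.facts = list(facts)
        self.neighbors = defaultdict(list)          # N_h, indexed by head entity
        for f in self.facts:
            self.neighbors[f.head].append((f.rel, f.tail, f.qualifiers))

g = HKG([Fact("Joe Biden", "educated at", "University of Delaware",
              (("academic major", "political science"), ("start time", "1961")))])
print(g.neighbors["Joe Biden"])
```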
More precisely, given a relational fact \((h,r,?,qp)\) or \((?,r,t,qp)\) with the tail or head entity of the main triple missing, the aim is to infer the missing entity ? from \(\mathcal{V}\).

### Model Architecture

We present in this section the three modules of HyperFormer: (_i_) _Entity Neighbor Aggregator_, § 3.2.1; (_ii_) _Relation Qualifier Aggregator_, § 3.2.2; (_iii_) _Convolution-based Bidirectional Interaction_, § 3.2.3. Furthermore, we introduce a Mixture-of-Experts strategy, § 3.2.4.

#### 3.2.1. ENA: Entity Neighbor Aggregator

We explore a transformer mechanism (previously used for other KG-related tasks (Bartner et al., 2015; Kochelev et al., 2019; Kochelev et al., 2019)) for the HKGC task. Given a relational fact \((h,r,t,qp)\), for predicting the tail entity \(t\), similar to the input representation of BERT (Bartner et al., 2015), we build an input sequence \(S=(h,r,\texttt{[MASK]},qp)\), where \(\texttt{[MASK]}\) is a special token used in place of entity \(t\). We randomly initialize each token input vector, feed the sequence into a transformer and get the output representation \(\mathbf{E}^{[mask]}\) of the \(\texttt{[MASK]}\) token: \[(\mathbf{E}^{h},\mathbf{E}^{r},\mathbf{E}^{[mask]},\mathbf{E}^{qp})=\texttt{Trm}(h,r,\texttt{[MASK]},qp) \tag{1}\] The representation \(\mathbf{E}^{[mask]}\) captures the information interaction between \(h,r\) and \(qp\). We will then use \(\mathbf{E}^{[mask]}\) to score all candidate entities and infer the most likely tail entities.

Enhanced Representation of Entity Neighbors. The one-hop neighbors of an entity help to describe it from different perspectives. In many cases, simultaneously considering the information from multiple neighbors of an entity is necessary to correctly infer the entities that are related to it. For instance, in Figure 2, using the connections (_position held_, _President_) and (_work location_, _Washington, D.C._) from _Joe Biden_, we can infer that _Joe Biden_ is the president of the _United States_, but not the prime minister of the _United Kingdom_. To adequately encode the content of the neighbors of an entity, we introduce the **E**ntity **N**eighbor **A**ggregator (**ENA**) module. Given a masked tuple \((h,r,\texttt{[MASK]},qp)\), besides considering the embeddings of \(h,r\) and \(qp\), we also introduce information about the neighbors of \(h\) as follows: 1. For all relational facts in \(\mathcal{G}\) having \(h\) as the head, we generate masked 4-tuples by using the placeholder \(\texttt{[MASK]}\) in place of \(h\). More precisely, fixing entity \(h\), we define the following set of neighbors of \(h\): \(\mathcal{N}_{h}=\{(\texttt{[MASK]},r^{\prime},t^{\prime},qp^{\prime})\mid(h,r^{\prime},t^{\prime},qp^{\prime})\in\mathcal{G}\}\). Note that for different tuples of the form \((h,r^{\prime},t^{\prime},qp^{\prime})\) with \(h\) as the head, different \([\texttt{MASK}]\) representations will be obtained. For each tuple \(N\in\mathcal{N}_{h}\), the masked token representation of \(N\) is denoted as \(\mathbf{N}_{h}^{[mask]}\). 2. We then sum up all \([\texttt{MASK}]\) representations of the elements in \(\mathcal{N}_{h}\) and further average the result to obtain the aggregated representation of the neighbors of \(h\) as \(\mathbf{E}_{net}^{h}=mean(\sum\limits_{N\in\mathcal{N}_{h}}\mathbf{N}_{h}^{[mask]})\).
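A minimal PyTorch sketch of this neighbor aggregation is given below; it follows our own simplified layout (shared token vocabulary, no positional or type embeddings) rather than the released implementation. Each neighbor tuple is encoded by a shared transformer encoder and the [MASK] outputs are averaged into the neighbor representation.

```python
# Sketch: encode every masked neighbor tuple with one shared transformer encoder and
# average the [MASK] outputs into E^h_net.
import torch
import torch.nn as nn

class NeighborAggregator(nn.Module):
    def __init__(self, n_tokens, dim=200, n_heads=4, n_layers=2):
        super().__init__()
        self.emb = nn.Embedding(n_tokens, dim)   # entities, relations and [MASK] share one vocab here
        layer = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, neighbor_tokens, mask_pos=0):
        # neighbor_tokens: (num_neighbors, seq_len) token ids, with [MASK] at mask_pos
        h = self.encoder(self.emb(neighbor_tokens))   # (num_neighbors, seq_len, dim)
        return h[:, mask_pos, :].mean(dim=0)          # aggregated neighbor vector, shape (dim,)

agg = NeighborAggregator(n_tokens=1000)
e_h_net = agg(torch.randint(0, 1000, (3, 6)))         # 3 neighbor tuples, 6 tokens each
print(e_h_net.shape)                                  # torch.Size([200])
```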
#### 3.2.2. RQA: Relation Qualifier Aggregator

As discussed, qualifiers allow describing relational content in a fine-grained manner. For example, in Figure 2 the qualifier pairs (_academic major_, _political science_) and (_academic degree_, _Bachelor of Arts_) can respectively describe the major and degree information of the relation _educated at_ in the main triple (_Joe Biden_, _educated at_, _University of Delaware_). Directly serializing the qualifier content \(qp\) into the main triple \((h,r,[\texttt{MASK}])\) can form a tuple representation \((h,r,[\texttt{MASK}],qp)\), but this representation destroys the structural information of the qualifier content. For example, _political science_ can limit the major that _Joe Biden_ obtained at the _University of Delaware_ only when it appears together with _academic major_. To incorporate the sequence \((h,r,[\texttt{MASK}],qp)\) along with the hyper-relational knowledge into the message passing process without damaging the qualifiers' structural knowledge, we introduce the **R**elation **Q**ualifier **A**ggregator (**RQA**) module. Specifically, we use the following three steps to obtain the aggregated representation of the qualifier pairs for the relation occurring in a main triple: 1. For a relational fact \((h,r,[\texttt{MASK}],qp)\) with \(qp=\{(q_{r_{1}},q_{e_{1}}),\ldots,(q_{r_{n}},q_{e_{n}})\}\), we randomly initialize the embeddings of the qualifier relations and qualifier entities of each qualifier pair, obtaining the input embeddings \(\mathbf{qp}=\{(\mathbf{q}_{r_{1}},\mathbf{q}_{e_{1}}),\ldots,(\mathbf{q}_{r_{n}},\mathbf{q}_{e_{n}})\}\). 2. Qualifier pairs provide additional complementary relational knowledge, each of them capturing different aspects of it. So, we aim to acquire a representation of \(r\) given the knowledge of a qualifier pair \(q_{r_{i}}\) and \(q_{e_{i}}\). To this end, we consider a relation \(r\) together with a qualifier pair \((q_{r_{i}},q_{e_{i}})\) as a form of pseudo-triple \((r,q_{r_{i}},q_{e_{i}})\). We can then use several knowledge representation functions based on embedding methods to get a representation of \(r\): we compose the representations of the qualifier relations \(\mathbf{q}_{r_{i}}\) and qualifier entities \(\mathbf{q}_{e_{i}}\) using an entity-relation function \(\theta\), such as TransE (Bordes and Zemel, 2017), DistMult (Mult, 2018), ComplEx (Mult, 2018) or RotatE (Mult, 2018). We denote this composition as \(\Theta_{i}=\theta(\mathbf{q}_{r_{i}},\mathbf{q}_{e_{i}})\). 3. The representations of the different qualifier pairs are aggregated via a position-invariant summation function and then averaged: \(\mathbf{E}_{qual}^{r}=mean(\sum_{i=1}^{n}\Theta_{i})\).
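As a small illustration, choosing DistMult as the composition function \(\theta\) (our choice for the sketch; the module equally admits TransE, ComplEx or RotatE) reduces the aggregation to an element-wise product followed by averaging over the qualifier pairs.

```python
# Sketch of the relation qualifier aggregator with theta(q_r, q_e) = q_r * q_e (DistMult).
import torch

def rqa(qual_rel_emb, qual_ent_emb):
    # qual_rel_emb, qual_ent_emb: (num_qualifiers, dim)
    theta = qual_rel_emb * qual_ent_emb   # one pseudo-triple (r, q_r, q_e) per qualifier pair
    return theta.mean(dim=0)              # aggregated qualifier view of the relation, shape (dim,)

e_r_qual = rqa(torch.randn(4, 200), torch.randn(4, 200))
print(e_r_qual.shape)                      # torch.Size([200])
```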
#### 3.2.3. CBI: Convolution-based Bidirectional Interaction

To obtain enhanced entity and relation representations, we could directly combine \(\mathbf{E}^{h}\) and \(\mathbf{E}_{net}^{h}\), \(\mathbf{E}^{r}\) and \(\mathbf{E}_{qual}^{r}\) using the addition operation. However, this simple fusion method cannot fully realize the deep interaction between entity, relation and qualifier pairs. Indeed, it is not enough to simply pass the message between the three of them to the transformer, because its fully-connected attention layer only captures universal inter-token associations. To address this shortcoming, we propose a novel **C**onvolution-based **B**idirectional **I**nteraction (**CBI**) module to explicitly integrate each of the pairwise representation pairs: _entity-relation_, _entity-qualifier_ and _relation-qualifier_. For instance, to obtain the new relational embedding, the information of the entity neighbor embedding \(\mathbf{E}_{net}^{h}\) and of the qualifier embedding \(\mathbf{E}_{qual}^{r}\) can be integrated into the relation embedding \(\mathbf{E}^{r}\) using the following four steps: 1. We combine \(\mathbf{E}^{r}\) and \(\mathbf{E}_{net}^{h}\) based on the convolution operation \(\mathbf{Conv}(\circ,\circ)\), and then pass the obtained joint representation to a _perceptron interaction layer_ \(\mathbf{PInt}(\circ)\) to obtain \(\mathbf{O}_{r\leftarrow nei}^{r}\) and \(\mathbf{O}_{nei\leftarrow r}^{h}\) as follows: \[[\mathbf{O}_{r\leftarrow nei}^{r};\mathbf{O}_{nei\leftarrow r}^{h}]=\mathbf{PInt}(\mathbf{Conv}(\mathbf{E}^{r},\mathbf{E}_{net}^{h})) \tag{2}\] We analogously fuse \(\mathbf{E}^{r}\) with \(\mathbf{E}_{qual}^{r}\), and \(\mathbf{E}_{net}^{h}\) with \(\mathbf{E}_{qual}^{r}\), to respectively get \(\mathbf{O}_{r\leftarrow qual}^{r}\) and \(\mathbf{O}_{qual\leftarrow r}^{r}\), \(\mathbf{O}_{nei\leftarrow qual}^{h}\) and \(\mathbf{O}_{qual\leftarrow nei}^{r}\): \[[\mathbf{O}_{r\leftarrow qual}^{r};\mathbf{O}_{qual\leftarrow r}^{r}]=\mathbf{PInt}(\mathbf{Conv}(\mathbf{E}^{r},\mathbf{E}_{qual}^{r})) \tag{3}\]

Figure 2. An overview of our HyperFormer model, containing three modules: Entity Neighbor Aggregator (§ 3.2.1), Relation Qualifier Aggregator (§ 3.2.2), and Convolution-based Bidirectional Interaction (§ 3.2.3).

\[[\mathbf{O}_{nei\leftarrow qual}^{h};\mathbf{O}_{qual\leftarrow nei}^{r}]=\mathbf{PInt}(\mathbf{Conv}(\mathbf{E}_{net}^{h},\mathbf{E}_{qual}^{r})) \tag{4}\] The \(\mathbf{Conv}(\circ,\circ)\) operation used for the initial fusion of two vectors is similar to the one proposed by InteractE (Tran et al., 2017), but in principle any other vector-fusion technique could be used instead. We use a one-layer MLP as our \(\mathbf{PInt}(\circ)\) operation. Note that the result of \(\mathbf{PInt}\) is divided into two parts to obtain two enhanced vector representations. For example, for the integration of \(\mathbf{E}^{r}\) and \(\mathbf{E}_{qual}^{r}\), after the two above operations are performed we obtain the qualifier-aware relation representation \(\mathbf{O}_{r\leftarrow qual}^{r}\) and the relation-aware qualifier representation \(\mathbf{O}_{qual\leftarrow r}^{r}\). These representations are defined in a bidirectional way, so each of them contributes to the definition of the other. 2. We then employ a gating mechanism to combine both the entity's neighbor-aware relation representation \(\mathbf{O}_{r\leftarrow nei}^{r}\) and the qualifier-aware relation representation \(\mathbf{O}_{r\leftarrow qual}^{r}\). The final representation of relation \(r\) is denoted as \(\mathbf{O}_{r\leftarrow(nei,qual)}^{r}\): \[\mathbf{O}_{r\leftarrow(nei,qual)}^{r}=\alpha\odot\mathbf{O}_{r\leftarrow nei}^{r}+(1-\alpha)\odot\mathbf{O}_{r\leftarrow qual}^{r} \tag{6}\] \[\alpha=\sigma(W_{1}\mathbf{O}_{r\leftarrow nei}^{r}+W_{2}\mathbf{O}_{r\leftarrow qual}^{r}+b_{1}+b_{2}) \tag{7}\] where \(\alpha\) is the reset gate that controls the flow of information between \(\mathbf{O}_{r\leftarrow nei}^{r}\) and \(\mathbf{O}_{r\leftarrow qual}^{r}\), \(\sigma\) is the sigmoid function and \(W_{1},W_{2},b_{1},b_{2}\) are parameters to be learned. 3.
Similarly, we get the relation-aware and qualifier-aware entity's neighbor representation \(\mathbf{O}^{h}_{\textit{nei}\leftarrow(r,qual)}\), and the relation-aware and entity's neighbor-aware qualifier representation \(\mathbf{O}^{r}_{\textit{qual}\leftarrow(r,nei)}\). Then we add up \(\mathbf{O}^{r}_{\textit{re}\leftarrow(\textit{nei},qual)}\) and \(\mathbf{O}^{r}_{\textit{qual}\leftarrow(r,nei)}\) to obtain the final relational representation \(\mathbf{M}^{r}\). We use \(\mathbf{O}^{h}_{\textit{nei}\leftarrow(r,qual)}\) as the final entity representation, denoted as \(\mathbf{M}^{h}\), which is the result of combining \(\mathbf{O}^{h}_{\textit{nei}\leftarrow r}\) and \(\mathbf{O}^{h}_{\textit{nei}\leftarrow\textit{qual}}\) using Step (2). 4. For the input masked fact \((h,r,\textbf{[MASK]},qp)\), after performing the above three steps, we get enhanced representations \(\mathbf{M}^{h},\mathbf{M}^{r}\) of the entity \(h\) and relation \(r\), which can respectively be used as the input initialization (together with the randomly initialized \(\textbf{[MASK]}\) and \(qp\)) to the transformer encoder. The output \(\mathbf{E}^{[mask]}\) is then used to score all candidate entities. More precisely, we use a standard softmax classification layer to predict the target entity, and use cross-entropy between the one-hot label and the prediction as training loss, defined as: \[p^{[mask]}=\textit{softmax}(\mathbf{E}^{ent}\mathbf{E}^{[mask]})\] (8) \[\mathbf{L}=-\sum_{mask}y^{[mask]}\text{log}\,p^{[mask]}\] \(\mathbf{E}^{ent}\) represents the embedding matrix of all entities in the dataset, \(p^{[mask]}\) is the predicted distribution of the \(\textbf{[MASK]}\) position over all entities, \(y^{[mask]}\) is the corresponding one-hot label of the \(\textbf{[MASK]}\) position. #### 3.2.4. Transformer with Mixture-of-Experts Although transformers can achieve good results in many fields, a recognized challenge for them is that the model parameters grow quadratically as the embedding dimension increases. However, it has been noted that two-thirds of the parameters of a transformer are concentrated in the feed-forward layers (FFN), and that not all of them are necessary (Kip from Freebase. These datasets have the following two characteristics: i) only certain percentage of main triples contain qualifiers, 13.6% in WD50K, 2.6% in WikiPeople and 45.9% in JF17K; ii) each triple contains a different number of qualifiers, 0-7 for WikiPeople, and 0-4 for JF17K, where the qualifier number means that the main triple does not contain hyper-relational knowledge. We refine these datasets from two perspectives, based on the percentage of triples containing hyper-relational knowledge and on the number of qualifiers associated to triples. So, we construct three datasets with different conditions: _Mixed-percentage Mixed-qualifier_, _Fixed-percentage Mixed-qualifier_, _Fixed-percentage Fixed-qualifier_, where Mixed-percentage and Mixed-qualifier respectively indicate that the number of triples with qualifiers is arbitrary (not fixed) and that the number of qualifiers per triple is not fixed. The Fixed condition is defined as expected, and clearly, there is no Mixed-percentage in the Fixed qualifier scenario. In addition, we also construct the datasets in which all entities have low degree. The four scenarios are specifically described as follows: 1. **Mixed-percentage Mixed-qualifier.** These datasets are directly taken from (Bordes et al., 2017). 
They aim at verifying the generalization performance in the scenario where the percentage of triples with qualifiers and the number of qualifiers associated with each triple are arbitrary. 2. **Fixed-percentage Mixed-qualifier.** We construct subsets of existing datasets in which the percentage of triples with qualifiers is fixed. For example, for WD50K we construct: WD50K (33), WD50K (66) and WD50K (100), with the number in parentheses representing the percentage of triples with qualifiers. We construct similar subsets for the WikiPeople and JF17K datasets. The corresponding dataset statistics are presented in Table 1. 3. **Fixed-percentage Fixed-qualifier.** Due to the sparsity of higher-qualifier facts in the WikiPeople and JF17K datasets, we follow GETD (Kang et al., 2018) to filter out the triples with 3 and 4 associated qualifiers, obtaining WikiPeople-3, WikiPeople-4, JF17K-3, and JF17K-4, respectively. The corresponding dataset statistics are presented in Table 2. 4. **Entities with Low Degree.** To evaluate the performance of the tested models depending on the node degrees (number of neighbors), we construct subsets of existing datasets in which all nodes have a low degree. In this case, we select as the basic datasets those in which _all_ triples have qualifiers. For example, for WD50K (100), we construct four subsets with node degrees from one to four, denoted as WD50K (100) #1, WD50K (100) #2, WD50K (100) #3, and WD50K (100) #4. The corresponding dataset statistics are presented in Table 3.

#### 4.1.2. Baselines

We compare HyperFormer with various state-of-the-art methods for hyper-relational knowledge graph completion: m-TransH (Wang et al., 2017), RAE (Wang et al., 2017), NaLP-Fix (Wang et al., 2017), HINGE (Wang et al., 2017), StarE (Bordes et al., 2017), Hy-Transformer (Wang et al., 2017), GRAN (Wang et al., 2017), and QUAD (Wang et al., 2017). Note that GRAN has three variants, i.e., GRAN-hete, GRAN-homo and GRAN-complete. If there is no special suffix, GRAN denotes GRAN-hete. There are two variants of QUAD: QUAD and QUAD (Parallel). If there is no special suffix, QUAD denotes QUAD (Parallel).

#### 4.1.3. Evaluation Protocol

We evaluate the model performance using two common metrics: MRR and Hits@_N_ (abbreviated as H@_N_). MRR is the mean of the reciprocal ranks, and Hits@_N_ is the proportion of correct entities ranked within the top \(N\) (we use _N_=[1,3,10]). For both metrics, the larger the value, the better the performance of a model.

#### 4.1.4. Implementation Details

All experiments are conducted on six 32G Tesla V100 GPUs. Our method is implemented with PyTorch. We employ AdamW (Kingmare et al., 2015) as the optimizer, and a cosine decay scheduler with linear warm-up is used for optimization. We determine the hyperparameter values by using a grid search based on the MRR performance on the validation dataset.
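For reference, the two metrics defined in § 4.1.3 can be computed from the rank assigned to each gold entity as in the small helper below; this is a sketch of ours, not the evaluation script of the released code, and the example ranks are made up.

```python
# MRR and Hits@N from the ranks of the gold entities (rank 1 = ranked first).
def mrr_and_hits(ranks, ns=(1, 3, 10)):
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    hits = {n: sum(r <= n for r in ranks) / len(ranks) for n in ns}
    return mrr, hits

print(mrr_and_hits([1, 4, 2, 15]))
# (0.4541666666666667, {1: 0.25, 3: 0.5, 10: 0.75})
```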
We select the neighbors of an entity in {2, 3, 4}, the qualifier pairs of a relation in {5, 6, 7}, the learning rate in {3e-4, 4e-4, 5e-4, **6e-4**, 7e-4}, the label smoothing factor in {0.3, 0.5, 0.7, **0.9**}, the number of layer in a Transformer in {2, 4, 8, 16}, the head number in {1, 2, 4, 8}, the Transformer input dropout rate in {0.6, **0.7**, 0.8}, the Transformer hidden dropout rate in {0.**0.1**, 0.2, 0.3}, the dimensions of the embedding size in [80, 200, 320, 400], the number of convolution channels in [64, 96, 128], the convolutional kernel size is 9, the convolutional input dropout rate in {0.1, **0.2**, 0.3}, the convolutional hidden dropout rate in {0.4, **0.5**, 0.6}, the number of experts in the MoE module in {8, 16, 32, 64}, the number of top experts in the MoE module in {2, 4, 6, 8}. ### Main Results **Mixed-percentage Mixed-qualifier.** Table 4 reports the results on the Hyper-relational KGC task with mixed-percentage mixed-qualifier on the WD50K, WikiPeople, and JF17K datasets. We can \begin{table} \begin{tabular}{c c c c c c} \hline \hline **Datasets** & **Train** & **Valid** & **Test** & **Entity** & **Relation** \\ \hline WikiPeople-3 & 20656 & 2582 & 2582 & 12270 & 66 \\ WikiPeople-4 & 12150 & 1519 & 1519 & 9528 & 50 \\ JF17K-3 & 27635 & 3454 & 3455 & 11541 & 104 \\ JF17K-4 & 7607 & 951 & 951 & 6536 & 23 \\ \hline \hline \end{tabular} \end{table} Table 2. Statistics of datasets under fixed-percentage fixed-qualifier scenarios. \begin{table} \begin{tabular}{c c c c c} \hline \hline **Datasets** & **Train** & **Valid** & **Test** & **Entity** & **Relation** \\ \hline WD50K (100) \#1 & 2191 & 3279 & 5297 & 10375 & 189 \\ WD50K (100) \#2 & 4382 & 3279 & 5297 & 11241 & 200 \\ WD50K (100) \#3 & 6547 & 3279 & 5297 & 11985 & 207 \\ WD50K (100) \#4 & 8506 & 3279 & 5297 & 12649 & 210 \\ \hline WikiPeople (100) \#1 & 1253 & 1181 & 1173 & 4212 & 83 \\ WikiPeople (100) \#2 & 2498 & 1181 & 1173 & 4711 & 85 \\ WikiPeople (100) \#3 & 3647 & 1181 & 1173 & 5040 & 87 \\ WikiPeople (100) \#4 & 4515 & 1181 & 1173 & 5338 & 89 \\ \hline JF17K (100) \#1 & 2492 & 3152 & 4142 & 7320 & 253 \\ JF17K (100) \#2 & 4984 & 3152 & 4142 & 7930 & 255 \\ JF17K (100) \#3 & 7294 & 3152 & 4142 & 8367 & 257 \\ JF17K (100) \#4 & 9219 & 3152 & 4142 & 8688 & 259 \\ \hline \hline \end{tabular} \end{table} Table 3. Statistics of datasets with different number of node degrees. The value behind # indicates that the entity in the training set only contains the number of neighbors with the corresponding value. observe that HyperFormer significantly outperforms all baselines on the WD50K and JF17K datasets. Specifically, HyperFormer respectively achieves performance improvements of 1.0% / 0.7% / 1.6% in MRR / Hits@1 / Hits@10 on WD50K, compared to the best performing baseline, Hy-Transformer. It gets analogous improvements of 4.7% / 6.2% / 1.7% on JF17K. On WikiPeople its performance is slightly below the SoTA. These results can be explained by the fact that both WD50K and JF17K contain a relatively high percentage of triples with qualifier pairs: 13.6% and 45.9%, respectively. However, WikiPeople has a much lower percentage of triples with qualifiers, 2.6%, so the triple-only facts dominate the overall score. Hyperformer successfully exploits the interaction between entities, relations and qualifiers to improve the performance on the HKGC task, especially on datasets with a rich amount of hyper-relational knowledge. 
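For concreteness, the entity-scoring head of Eq. (8) and the MRR / Hits@N metrics reported in Tables 4-7 can be summarized in a short PyTorch sketch. The tensor names and shapes are illustrative assumptions, and the filtered-ranking details used in standard KGC evaluation are omitted; this is not the released implementation.

```python
import torch
import torch.nn.functional as F

def masked_entity_loss(e_mask, entity_emb, target_ids, label_smoothing=0.0):
    """Eq. (8): score every entity against the [MASK] output and apply the
    (optionally label-smoothed) cross-entropy loss.

    e_mask:     (batch, dim)          transformer output at the [MASK] position
    entity_emb: (num_entities, dim)   entity embedding matrix E^ent
    target_ids: (batch,)              index of the gold entity
    """
    logits = e_mask @ entity_emb.t()                       # (batch, num_entities)
    loss = F.cross_entropy(logits, target_ids, label_smoothing=label_smoothing)
    return loss, logits

def mrr_and_hits(logits, target_ids, ks=(1, 3, 10)):
    """MRR and Hits@N computed from the predicted scores (higher is better)."""
    gold = logits.gather(1, target_ids.unsqueeze(1))       # score of the gold entity
    ranks = 1 + (logits > gold).sum(dim=1).float()         # optimistic rank under ties
    metrics = {"MRR": (1.0 / ranks).mean().item()}
    for k in ks:
        metrics[f"Hits@{k}"] = (ranks <= k).float().mean().item()
    return metrics
```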
**Fixed-percentage Mixed-qualifier.** We also investigate the effectiveness of HyperFormer and the baselines under different ratios of relational facts with qualifiers. For each of the used datasets, we obtained three subsets (as described in Point 2 in Section 4.1) containing approximately 33%, 66%, and 100% of facts with qualifiers. Table 5 presents an overview of the obtained results. We observe that HyperFormer gets larger improvements over the baselines when the percentage of available facts with qualifiers is higher. Specifically, on WikiPeople, HyperFormer respectively achieves improvements over QUAD of 0.9% / 1.6% / 4.1% in the 33% / 66% / 100% variants. This shows that an important reason of why Hyperformer could not surpass GRAN-het in the mixed-percentage mixed-qualifier HKGC task on the WikiPeople dataset (cf. Table 4) is that it only contains a very small amount of triples with hyper-relational knowledge. Indeed, the main strength of Hyperformer is in the integration of hyper-relational knowledge by capturing the interaction of entities, relations and qualifiers. **Fixed-percentage Fixed-qualifier.** We investigate the performance on hyper-relational data with fixed number of qualifiers in Table 6. We observe that HyperFormer consistently achieves state-of-the-art performance on all datasets in Table 6. At the same time, we find out that the performance of all models is significantly lower in the mixed-percentage mixed-qualifier datasets than in scenarios with a fixed number of hyper-relational knowledge, which is consistent for both WikiPeople and JF17K. This might be explained by the fact that uneven distributions among different quantities of hyper-relational facts may affect the stability of model training. **Different Numbers of Neighbors.** We also investigate the performance of the models on entities with few neighbors in Table 7. \begin{table} \begin{tabular}{c c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Methods**} & \multicolumn{3}{c}{**WD50K (13.6)**} & \multicolumn{3}{c}{**WikiPeople (2.6)**} & \multicolumn{3}{c}{**JF17K (45.9)**} \\ \cline{2-11} & **MRR** & **H@1** & **H@10** & **MRR** & **H@1** & **H@10** & **MRR** & **H@1** & **H@1** & **H@10** \\ \hline m-TransH [39] & – & – & – & 0.063 & 0.063 & 0.300 & 0.206 & 0.206 & 0.463 \\ RAE [44] & – & – & – & 0.059 & 0.059 & 0.306 & 0.215 & 0.215 & 0.469 \\ NaLP-Fix [26] & 0.177 & 0.131 & 0.264 & 0.420 & 0.343 & 0.556 & 0.245 & 0.185 & 0.358 \\ HINGE [26] & 0.243 & 0.176 & 0.377 & 0.476 & 0.415 & 0.585 & 0.449 & 0.361 & 0.624 \\ StarE [9] & 0.349 & 0.271 & 0.496 & 0.491 & 0.398 & **0.648** & 0.574 & 0.496 & 0.725 \\ Hy-Transformer [43] & 0.356 & 0.281 & 0.498 & 0.501 & 0.426 & 0.634 & 0.582 & 0.501 & 0.742 \\ GRAN-homo [37] & – & – & – & 0.487 & 0.410 & 0.618 & 0.611 & 0.533 & 0.767 \\ GRAN-complete [37] & – & – & – & 0.489 & 0.413 & 0.617 & 0.591 & 0.510 & 0.753 \\ GRAN-hete [37] & – & – & – & **0.503** & **0.438** & 0.620 & 0.617 & 0.539 & 0.770 \\ QUAD [29] & 0.348 & 0.270 & 0.497 & 0.466 & 0.365 & 0.624 & 0.582 & 0.502 & 0.740 \\ QUAD (Parallel) [29] & 0.349 & 0.275 & 0.489 & 0.497 & 0.431 & 0.617 & 0.596 & 0.519 & 0.751 \\ HyperFormer & **0.366** & **0.288** & **0.514** & 0.473 & 0.361 & 0.646 & **0.664** & **0.601** & **0.787** \\ \hline \hline \end{tabular} \end{table} Table 4. Evaluation of different models with mixed-percentage mixed-qualifier on the WD50K, WikiPeople and JF17K datasets. All baseline results are collected from the original literature. 
Best scores are highlighted in bold, the second best scores are underlined, and ’-’ indicates the results are not reported in previous work. \begin{table} \begin{tabular}{c c c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Methods**} & \multicolumn{6}{c}{**WD50K**} & \multicolumn{6}{c}{**WikiPeople**} & \multicolumn{6}{c}{**JF17K**} \\ \cline{2-11} & **33\%** & **66\%** & **100\%** & **33\%** & **66\%** & **100\%** & **33\%** & **66\%** & **100\%** & **33\%** & **66\%** & **100\%** \\ \cline{2-11} & **MRR** & **H@1** & **MRR** & **H@1** & **MRR** & **H@1** & **MRR** & **H@1** & **MRR** & **H@1** & **MRR** & **H@1** & **MRR** & **H@1** & **MRR** & **H@1** \\ \hline StarE [9] & 0.308 & 0.247 & 0.449 & 0.388 & 0.610 & 0.543 & 0.192 & 0.143 & 0.259 & 0.205 & 0.343 & 0.279 & 0.290 & 0.197 & 0.302 & 0.214 & 0.321 & 0.223 \\ Hy-Transformer [43] & 0.313 & 0.255 & 0.458 & 0.397 & 0.621 & 0.557 & 0.192 & 0.140 & 0.268 & 0.215 & 0.372 & 0.316 & 0.298 & 0.204 & 0.325 & 0.234 & 0.361 & 0.266 \\ GRAN [37] & 0.322 & 0.269 & 0.472 & 0.419 & 0.647 & 0.593 & 0.201 & 0.156 & 0.287 & 0.244 & 0.403 & 0.349 & 0.307 & 0.212 & 0.326 & 0.237 & 0.382 & 0.290 \\ QUAD [29] & 0.329 & 0.266 & 0.479 & 0.416 & 0.646 & 0.572 & 0.204 & 0.155 & 0.282 & 0.228 & 0.385 & 0.318 & 0.307 & 0.210 & 0.334 & 0.241 & 0.379 & 0.277 \\ HyperFormer & **0.338** & **0.280** & **0.492** & **0.434** & **0.666** & **0.611** & **0.213** & **0.161** & **0.298** & **0.255** & **0.426** & **0.373** & **0.352** & **0.254** & **0.411** & **0.325** & **0.478** & **0.396** \\ \hline \hline \end{tabular} \end{table} Table 5. Evaluation of different models with fixed-percentage mixed-qualifier on the WD50K, WikiPeople and JF17K datasets. Best scores are highlighted in bold. In this case we look at training datasets in which all entities have one, two, three or four neighbors (see Point 4 in Section 4.1), while the validation and test sets remain unchanged. We found that the baseline models perform very poorly on these subsets. For example, in WD50K_100 (#1), StarE, and QUAD can respectively obtain an MRR metric of 10.4% and 6.5%, while HyperFormer can achieve 19.3%. This is explained by the way that these two models use the global-level structure to encode qualifier knowledge into the relation representation, which is suitable for scenarios in which all nodes have several neighbors. GRAN achieves 12.5%, since it does encode qualifiers into the main triple in a local-level fashion like HyperFormer, but it ignores the structural content of qualifier pairs. Differently, HyperFormer proposes a new integration method that realizes the interaction between entities, relations and qualifiers. ### Ablation Studies We verify the contribution of each component of HyperFormer and the effect of different hyperparameters on the performance. First, we explore the impact of different translation operations on the performance, cf. Table 8. Then, we look at different variants of the model, and different hidden sizes, number of experts, and values of label smoothing, cf. Figure 3. Finally, we show in Table 9 the amount of parameters and calculations with and without MoE. **Different Translation Methods.** Table 8 shows detailed results of selecting different translation methods to compose the qualifier entity and qualifier relation. Specifically, we adopt four translation methods, i.e., TransE (Chen et al., 2017), DistMult (Wang et al., 2018), ComplEx (Wang et al., 2018), and RotatE (Wang et al., 2018). 
We find that the selection of translation methods has little impact on the performance. This may be because different translation methods are only used to convert entities and relations in the qualifier pairs into single vector, while the subsequent CBI module is used to determine the combined representation of qualifier pairs and its impact on entities and relations. **Different Variants of HyperFormer.** Figure 3(a) presents the results on the impact of each component of HyperFormer. We consider four variants with/without MoE in the transformer part: (i) without any modules, denoted _None_; (ii) with the ENA module as the only component, denoted _w_/_ENA_; (iii) with the RQA module as the only component, denoted _w_/_RQA_; (iv) with both ENA, RQA, and CBI, denoted as HyperFormer. We observe that the introduction of the ENA or RQA module in some cases can bring an improvement in the performance compared to _None_. The addition of the CBI module brings a consistent improvement, because CBI realizes the bidirectional interaction between entities, relations, and hyper-relational knowledge. In addition, after adding the MoE mechanism to the transformer, a further improvement is obtained. **Different Hidden Sizes.** Figure 3(b) presents results showing the influence of the hidden sizes. We observe that the increase of the hidden size helps to capture a large amount of messages, which can improve the model performance. However, after the hidden size is set to 320, the results show a stable trend. This indicates that it is not the case that the higher the embedding dimension, the better the performance of the model, which is consistent with the finding by (Han et al., 2017). Note that setting a larger hidden size requires more video memory, so in practice, after the performance is stable the minimum value is set as the final one. **Number of Experts.** We also investigate the impact of the number of experts on the performance of HyperFormer, cf. Figure 3(c). The selection of the number of experts is an important factor as it directly affects the size of the model. We observe that by selecting a larger number of experts a slight improvement is obtained. 
In practice, it is necessary to balance the number of video memory and experts, while ensuring that the number of experts remains as \begin{table} \begin{tabular}{c c c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Methods**} & \multicolumn{4}{c}{**WikiPeople-3**} & \multicolumn{4}{c}{**WikiPeople-4**} & \multicolumn{4}{c}{**JF17K-3**} & \multicolumn{4}{c}{**JF17K-4**} \\ \cline{2-13} & **MRR** & **H@1** & **H@3** & **H@10** & **MRR** & **H@1** & **H@3** & **H@10** & **MRR** & **H@1** & **H@3** & **H@10** & **MRR** & **H@1** & **H@3** & **H@10** \\ \hline StarE (Chen et al., 2017) & 0.401 & 0.310 & 0.434 & 0.592 & 0.243 & 0.156 & 0.269 & 0.430 & 0.707 & 0.635 & 0.744 & 0.847 & 0.723 & 0.669 & 0.753 & 0.839 \\ Hy-Transformer (Wang et al., 2018) & 0.403 & 0.323 & 0.436 & 0.569 & 0.248 & 0.165 & 0.275 & 0.422 & 0.690 & 0.617 & 0.725 & 0.837 & 0.773 & 0.717 & 0.806 & 0.875 \\ GRAN (Wang et al., 2018) & 0.397 & 0.328 & 0.429 & 0.533 & 0.239 & 0.178 & 0.261 & 0.364 & 0.779 & 0.724 & 0.811 & 0.893 & 0.798 & 0.744 & 0.830 & 0.904 \\ QUAD (Wang et al., 2018) & 0.403 & 0.321 & 0.438 & 0.563 & 0.251 & 0.167 & 0.280 & 0.425 & 0.730 & 0.660 & 0.767 & 0.870 & 0.787 & 0.730 & 0.823 & 0.895 \\ HyperFormer & **0.573** & **0.511** & **0.603** & **0.693** & **0.393** & **0.336** & **0.415** & **0.496** & **0.832** & **0.790** & **0.855** & **0.914** & **0.857** & **0.811** & **0.884** & **0.937** \\ \hline \hline \end{tabular} \end{table} Table 6. Evaluation of different models with fixed-percentage fixed-qualifier on WikiPeople and JF17K datasets. Best scores are highlighted in bold. \begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Methods**} & \multicolumn{4}{c}{**WD50K (100)**} & \multicolumn{4}{c}{**WikiPeople (100)**} & \multicolumn{4}{c}{**JF17K (100)**} \\ \cline{2-13} & **\#1** & **\#2** & **\#3** & **\#4** & **\#1** & **\#2** & **\#3** & **\#4** & **\#1** & **\#2** & **\#3** & **\#4** \\ \hline StarE (Chen et al., 2017) & 0.104 & 0.208 & 0.313 & 0.369 & 0.121 & 0.112 & 0.193 & 0.255 & 0.169 & 0.249 & 0.275 & 0.286 \\ Hy-Transformer (Wang et al., 2018) & 0.071 & 0.167 & 0.315 & 0.374 & 0.091 & 0.148 & 0.186 & 0.233 & 0.137 & 0.241 & 0.299 & 0.318 \\ GRAN (Wang et al., 2018) & 0.125 & 0.235 & 0.327 & 0.374 & 0.119 & 0.186 & 0.242 & 0.273 & 0.203 & 0.267 & 0.284 & 0.301 \\ QUAD (Wang et al., 2018) & 0.065 & 0.134 & 0.284 & 0.371 & 0.075 & 0.140 & 0.186 & 0.255 & 0.228 & 0.241 & 0.280 & 0.306 \\ HyperFormer & **0.193** & **0.303** & **0.374** & **0.410** & **0.194** & **0.252** & **0.303** & **0.328** & **0.305** & **0.338** & **0.350** & **0.374** \\ \hline Absolute improvement (\%) & 6.8\% & 6.8\% & 4.7\% & 3.6\% & 7.3\% & 6.6\% & 6.1\% & 5.5\% & 7.7\% & 7.1\% & 5.1\% & 5.6\% \\ \hline \hline \end{tabular} \end{table} Table 7. MRR results of different node degrees on the WD50K(100), WikiPeople(100) and JF17K(100) datasets. The last line shows the difference between best scores and the second best scores. small as possible without affecting the performance. **Label Smoothing.** The label smoothing strategy has been successfully used in the KGC task (Koo et al., 2017; Wang et al., 2018). It mitigates the bias of the pre-trained data due to random sampling. In Figure 3(d), we observe that setting different label smoothing values brings subtle performance differences, showing that HyperFormer is robust to the label smoothing strategy, and therefore not causing performance gaps due to improper value selection. 
In addition, we note that there is no single label smoothing value that works best for all datasets.

**Parameters and Computational Complexity.** The number of parameters measures the trainable size of a model, while the computational cost refers to the number of floating-point operations (FLOPs) required during training or inference. Table 9 shows that introducing the MoE mechanism reduces the number of parameters and the computational cost simultaneously, with a more significant reduction in computational cost. This is intuitively explained by the fact that MoE replaces the feed-forward layers in the original transformer: when obtaining the final result, only the predictions from experts with higher confidence are considered, suppressing the involvement of irrelevant neurons in the computation. As a result, both the number of parameters and the computational cost are reduced.

## 5. Conclusion and Future Work

In this paper, we proposed HyperFormer, a framework for the HKGC task which strengthens the bidirectional interaction between entities, relations, and qualifiers, while retaining the structural information of qualifiers in a local-level sequence. Experiments under different conditions on the WD50K, WikiPeople, and JF17K datasets show that HyperFormer achieves better results than existing models in most cases. The ablation results demonstrate the effectiveness of each module of HyperFormer. For future work, we will integrate other types of data in a KG, e.g., entities' textual descriptions or literals, for better entity representation, and apply HyperFormer to larger-scale HKGs such as the full Wikidata.

###### Acknowledgements.

This work has been supported by the National Natural Science Foundation of China (No.61936012, No.62076155), by the Key Research and Development Program of Shanxi Province (No.202102020101008), by the Science and Technology Cooperation and Exchange Special Program of Shanxi Province (No.202204041101016), by the Chang Jiang Scholars Program (J2019032) and by a Leverhulme Trust Research Project Grant (RPG-2021-140).
2301.05510
Application of Causal Inference Techniques to the Maximum Weight Independent Set Problem
A powerful technique for solving combinatorial optimization problems is to reduce the search space without compromising the solution quality by exploring intrinsic mathematical properties of the problems. For the maximum weight independent set (MWIS) problem, using an upper bound lemma which says the weight of any independent set not contained in the MWIS is bounded from above by the weight of the intersection of its closed neighbor set and the MWIS, we give two extension theorems -- independent set extension theorem and vertex cover extension theorem. With them at our disposal, two types of causal inference techniques (CITs) are proposed on the assumption that a vertex is strongly reducible (included or not included in all MWISs) or reducible (contained or not contained in a MWIS). One is a strongly reducible state-preserving technique, which extends a strongly reducible vertex into a vertex set where all vertices have the same strong reducibility. The other, as a reducible state-preserving technique, extends a reducible vertex into a vertex set with the same reducibility as that vertex and creates some weighted packing constraints to narrow the search space. Numerical experiments show that our CITs can help reduction algorithms find much smaller remaining graphs, improve the ability of exact algorithms to find the optimal solutions and help heuristic algorithms produce approximate solutions of better quality. In particular, detailed tests on $12$ representative graphs generated from datasets in Network Data Repository demonstrate that, compared to the state-of-the-art algorithms, the size of remaining graphs is further reduced by more than 32.6%, and the number of solvable instances is increased from 1 to 5.
Jianfeng Liu, Sihong Shao, Chaorui Zhang
2023-01-13T12:24:37Z
http://arxiv.org/abs/2301.05510v1
# Application of Causal Inference Techniques to the Maximum Weight Independent Set Problem ###### Abstract A powerful technique for solving combinatorial optimization problems is to reduce the search space without compromising the solution quality by exploring intrinsic mathematical properties of the problems. For the maximum weight independent set (MWIS) problem, using an upper bound lemma which says the weight of any independent set not contained in the MWIS is bounded from above by the weight of the intersection of its closed neighbor set and the MWIS, we give two extension theorems -- independent set extension theorem and vertex cover extension theorem. With them at our disposal, two types of causal inference techniques (CITs) are proposed on the assumption that a vertex is strongly reducible (included or not included in all MWISs) or reducible (contained or not contained in a MWIS). One is a strongly reducible state-preserving technique, which extends a strongly reducible vertex into a vertex set where all vertices have the same strong reducibility. The other, as a reducible state-preserving technique, extends a reducible vertex into a vertex set with the same reducibility as that vertex and creates some weighted packing constraints to narrow the search space. Numerical experiments show that our CITs can help reduction algorithms find much smaller remaining graphs, improve the ability of exact algorithms to find the optimal solutions and help heuristic algorithms produce approximate solutions of better quality. In particular, detailed tests on 12 representative graphs generated from datasets in Network Data Repository demonstrate that, compared to the state-of-the-art algorithms, the size of remaining graphs is further reduced by more than 32.6%, and the number of solvable instances is increased from 1 to 5. **AMS subject classifications:** 05C69; 68W40; 90C06; 90C27; 90C57 **Keywords:** maximum weight independent set; independent set extension; vertex cover extension; causal inference techniques; reduction algorithm; exact algorithm; heuristic algorithm; Network Data Repository. ## 1 Introduction Let \(G=(V,E,w)\) be an undirected vertex-weighted graph, where each vertex \(v\in V\) is associated with a weight \(w(v)\in\mathbb{R}^{+}\). A subset \(I\subseteq V\) is called an independent set if its vertices are pairwise non-adjacent, and the vertex cover of graph \(G\) is a subset of vertices \(VC\subseteq V\) such that every edge \(e\in E\) is incident to at least one vertex in subset \(VC\). Independent set and vertex cover are two complementary concepts in graph and can be transformed into each other on demand [29]. The maximum weight independent set (MWIS) problem is to find the independent set of largest weight among all possible independent sets and the weight of a MWIS of graph \(G\) is denoted by \(\alpha_{w}(G)\), while the minimum weight vertex cover (MWVC) problem asks for the vertex cover with the minimum weight. Furthermore, if subset \(I\subseteq V\) is a MWIS, then subset \(VC=V\backslash I\) is a MWVC, and vice versa [6, 29]. The MWIS problem is an extension of the maximum independent set (MIS) problem, which is a classic NP-hard problem [13, 9]. It can be applied to various real-world problems, such as information retrieval [4], computer vision [12], combinatorial auction problem [29] and dynamic map labeling problem [17]. Due to its wide range of practical applications, the research on efficient algorithms for computing the MWIS is of great significance. 
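As a concrete companion to these definitions, the following small Python sketch builds a toy vertex-weighted graph, checks the complementarity between an independent set and a vertex cover, and computes \(\alpha_{w}(G)\) by brute force; the toy graph, names, and the exponential enumeration are illustrative only and are unrelated to the benchmark instances used later.

```python
from itertools import combinations

# toy vertex-weighted path graph: a - b - c - d
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
w = {"a": 2, "b": 3, "c": 1, "d": 2}

def is_independent(S, adj):
    """No two vertices of S are adjacent."""
    return all(v not in adj[u] for u, v in combinations(S, 2))

def is_vertex_cover(C, adj):
    """Every edge has at least one endpoint in C."""
    return all(u in C or v in C for u in adj for v in adj[u])

def mwis_weight(vertices, adj, w):
    """Brute-force alpha_w of the subgraph induced by `vertices` (tiny graphs only)."""
    best = 0
    vs = list(vertices)
    for r in range(len(vs) + 1):
        for S in combinations(vs, r):
            if is_independent(S, adj):
                best = max(best, sum(w[x] for x in S))
    return best

I = {"a", "c"}                               # an independent set of weight 3
assert is_independent(I, adj)
assert is_vertex_cover(set(adj) - I, adj)    # its complement is a vertex cover
print(mwis_weight(set(adj), adj, w))         # alpha_w(G) = 5, attained by {b, d}
```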
Most previous work are focused on heuristic algorithms to find near-optimal solutions in reasonable time [24, 20, 6, 18], while exact algorithms, usually referring to Branch-and-Bound (B&B) methods [3, 26, 2, 22], become infeasible when the size of problem increases. Recently, it has been well demonstrated that reduction rules (a.k.a. kernelization) are very effective in practice for solving the MIS problem [25]. These rules mine the structural properties of underlying graph and reduce the search space by such as removing vertices, contracting subgraphs, restricting the set of independent sets, etc., to produce a smaller kernel graph such that the MIS of the original graph can be recovered from the MIS of the kernel. After integrating them, some state-of-the-art exact solvers are able to solve the MIS problem on many large real networks [11]. These solvers can be usually divided into two types: One performs the kernelization only once and runs the B&B algorithm [23; 16] on the kernelized instance, while the other joins hands with the Branch-and-Reduce (B&R) algorithm [19] and performs reduction in every branch of the search tree. As for those instances that can't be solved exactly, high-quality solutions can be found by combining kernelization with local search [8; 10]. Moreover, when a vertex is selected for branching in the branching process of the B&R algorithm, if it is assumed to be in all MISs, then its satellite set will also be in all MISs [14], while its mirror set will be removed directly from the graph, if it is assumed not to be in all MISs [13]. Further, a conflict analysis on the assumption that a vertex is in all MISs can be also plugged in to find some contradictions and the concept of "unconfined/confined vertices" was introduced [28]. Later, an auxiliary constraint called packing constraint was proposed to accelerate the B&R algorithm by simply exploring branches that satisfy all packing constraints [1]. The central idea behind all these attempts for the MIS problem involves a state-preserving technique which starts from a vertex, named the starting vertex for convenience, and then finds a vertex set with the same state as the starting vertex to reduce the search space, thereby implying that some subsequent operations can be implemented on the resulting vertex set instead of only on the starting vertex. For the MWIS problem, similar state-preserving techniques are rarely used except for a recent work using unconfined/confined vertices [27], though some simple and fast reduction rules have been used in B&R algorithms [15; 27]. To this end, we devote ourselves into developing state-preserving techniques for the MWIS problem in this work. The state of the starting vertex we consider can be * strongly reducible, meaning that the vertex is included in all MWISs/MWVCs; or * reducible, meaning that the vertex is contained in a MWIS/MWVC. Considering that the assumed state of the starting vertex must be used to analyze its local structure to obtain inference results, these targeted state-preserving techniques are called causal inference techniques (CITs). Inspired by their success in solving the MIS problem, we will systematically develop CITs to solve the MWIS problem by analyzing intrinsic mathematical properties of underlying graph. More specifically, our main contributions are in three aspects as follows. 
First, by virtue of the upper bound lemma, i.e., the weight of any independent set not contained in the MWIS is bounded from above by the weight of the intersection of its closed neighbor set with the MWIS, two extension theorems are developed. With them, we propose a series of CITs which have been rarely used previously in the MWIS problem. According to the state of the starting vertex, our CITs can be divided into two categories. The first type is a strongly reducible state-preserving technique. We first assume that the starting vertex is strongly reducible, and then try to extend this vertex to obtain a vertex set with the same strong reducibility. If the upper bound lemma is not satisfied in this process, then this contradicts the assumption, and the starting vertex can be removed from the graph directly. Otherwise, combined with the state-preserving result obtained from the previous process, we continue to search for a set called the simultaneous set, which is either included in a MWIS or contained in a MWVC. The second type is a reducible state-preserving technique. Under the assumption that the starting vertex is reducible, a vertex set with the same reducibility can be obtained by extending from this vertex. Moreover, if this vertex is selected for branching in the B&R algorithm, with the upper bound lemma, an inequality constraint called weight packing constraint will be created to restrict subsequent searches. Next, according to the characteristics of the proposed CITs, we integrate them into the existing algorithmic framework. The first type of CIT can be used to design reduction rules to simplify graph. These reduction rules are integrated into the existing reduction algorithm. In the B&R algorithm, when a vertex is selected to branch, a vertex set and a weight packing constraint depending on the assumed state of the vertex can be obtained from state-preserving results of two types of CITs. The vertex set is used to further simplify the corresponding branch, while we can prune branches that violate constraints and simplify the graph by maintaining all created weight packing constraints. During the local search process of the heuristic algorithm, when the state of a vertex needs to be changed, all vertex states in the vertex set obtained by the second type of CIT will also be modified to be the same as that vertex, which expands the area of local search and improves the ability of local search to find better local optima. Numerical experiments on 12 representative graphs generated from datasets in Network Data Repository show that the performance of various algorithms is greatly improved after integrating our CITs. The size of the kernel obtained by the resulting reduction algorithm is greatly reduced. In addition, compared to the state-of-the-art exact algorithm, the number of solvable instances have been increased from 1 to 5. And the ability of the heuristic algorithm to find better local optimal solutions is significantly improved. These experimental results form the third major contribution of this paper. Relevant notations used in this work are given in Table 1 and the rest of the paper is organized as follows. We present two extension theorems in Section 2 and detail CITs in Section 3. How the CITs are combined with existing algorithmic frameworks is described in Section 4. Extensive numerical tests are carried out in Section 5 to verify the performance improvement of integrating our CITs into existing algorithmic frameworks in terms of efficiency and accuracy. 
The paper is concluded in Section 6 with a few remarks. ## 2 Two Extension Theorems The theoretical cornerstones of CITs in this paper are two extension theorems: independent set extension theorem and vertex cover extension theorem. Before delineating them, we need to have a deep understanding of the local structure of the MWIS and first give the upper bound lemma. **Lemma 2.1** (upper bound lemma).: _Let set \(I_{C}\) be an independent set in the graph._ * _Suppose there is an_ \(I_{w}\in A_{I}\) _such that_ \(I_{C}\not\subseteq I_{w}\)_, then_ \(w(I_{w}\cap N[I_{C}])\geqslant w(I_{C})\) _holds._ * _Assume that_ \(I_{C}\not\subseteq I,\forall I\in A_{I}\) _holds, then it satisfies:_ \(w(I_{C})<w(I\cap N[I_{C}]),\forall I\in A_{I}\)_._ Proof.: Proof We first prove \((a)\) by contradiction. If not, we can obtain an independent set \(I^{\prime}_{w}=(I_{w}\backslash N[I_{C}])\cup(I_{C})\) such that \(w(I^{\prime}_{w})=w(I_{w})+w(I_{C})-w(I_{w}\cap N[I_{C}])>w(I_{w})\), a contradiction. Next, we consider \((b)\). If there is an \(I_{1}\in A_{I}\) such that \(w(I_{C})\geqslant w(I_{1}\cap N[I_{C}]))\), we can construct an independent set \(I^{\prime}_{1}=(I_{1}\backslash N[I_{C}])\cup I_{C}\) satisfying \(w(I^{\prime}_{1})=w(I_{1})+w(I_{C})-w(I_{1}\cap N[I_{C}])\geqslant w(I_{1})\). Then \(I^{\prime}_{1}\in A_{I}\) and \(I_{C}\subseteq I^{\prime}_{1}\), which leads to a contradiction. \begin{table} \begin{tabular}{|c|c|c|c|} \hline \(G-\langle\Gamma,E,w\rangle\) & an undirected vertex-weight graph \(G\) with vertex set \(V\), edge set \(E\) and vertex weight function \(w:V\to\mathbb{R}^{2}\) \\ \hline \(N[v]-\{u\in V[u],G\in E\}\) & the weight set \(d\) vertex \(v\) & \(N[v]-N[v]\cup[e]\) & the closed weight set \(d\) vertex \(v\) \\ \hline \(N[S]-\{N[v]\}/N[S]\) & the open neighbor set of set \(S\) & \(N[S]-N[S]\cup[S]\) & the closed neighbor set \(d\) set \(S\) \\ \hline \(S[\) & the set set set \(S\) & \(w(S)=\frac{N[v]}{N[v]}\) & the weight of all vertices in set \(S\) \\ \hline \(d[v]\) & the degree \(d\) vertex \(v\) & \(d[w]\), & the minimum number of edges in the path from vertex \(v\) to vertex \(v\) \\ \hline \(N[v]=[u]\) & the set of vertices at distance \(f\) from vertex \(v\), \(G[S]-S_{E},w\) & the subgraph induced by a non-empty vertex subset \(S\) of \(V\) \\ \hline \(a[G]\) & the size of a MIS of unequal graph \(G\) & \(a_{v}[G]\) & the weight of a MWIS of graph \(G\) \\ \hline \(A_{I}\) & the set of all MWIS in graph \(G\) & \(A_{C}\) & the set of all MWIS in graph \(G\) \\ \hline \(S+A_{I}\) & set \(S\) is an independent set and is included in all MWISs & \(C+A_{C}\) & set \(C\) is contained in all MWISs \\ \hline vertex \(v\) is strongly reachable & writes \(v\) is included in all MWISs & vertex \(v\) is reducible & writes \(v\) is contained in a MWIS/MWC \\ \hline vertex \(v\) is strongly reachable & writes \(v\) is included in all MWISs & vertex \(v\) is strongly identified & write \(v\) is contained in all MWCs \\ \hline vertex \(v\) is inactive & writer \(v\) is included in a MWIS & writer \(v\) is handled & writer \(v\) is contained in a MWVC \\ \hline set \(S\) is strongly reachable & set \(S\) is an independent set and is included in all MWISs & set \(S\) is strongly handled & set \(S\) is centralized in all MWCs \\ \hline set \(S\) is inclusive & set \(S\) is an independent set and is included in a MWIS & set \(S\) is bounded & set \(S\) is contained in a MWVC \\ \hline Independent set \(S\) is strongly exclusive & independent set \(S\) is not contained in all MWIS & 
independent set \(S\) is exclusive & independent set \(S\) is not contained in a MWIS \\ \hline a set \(S\) called a simultaneous set & & set \(S\) is either included in a MWIS or contained in a MWVC \\ \hline \end{tabular} \end{table} Table 1: Notations used throughout the paper. The upper bound lemma describes such a property: For any independent set that is (strongly) exclusive, the weight of the intersection of its closed neighbor set with the MWIS is the upper bound on its weight. With it, the independent set extension theorem can be introduced as follows. **Theorem 2.2** (Independent Set Extension Theorem).: _Let sets \(IS\) and \(S\) be two independent sets in the graph._ * _Assume that there exists an_ \(I_{w}\in A_{I}\) _such that_ \(IS\subseteq I_{w}\)_. If there is an independent set_ \(IS^{\prime}\subseteq N(IS)\) _such that_ \(w(IS^{\prime})>w(IS\cap N(IS^{\prime}))\)_, then there exists an independent set_ \(IS^{\prime\prime}\subseteq N(IS^{\prime})\backslash N[IS]\) _satisfying the inequality:_ \(w(IS^{\prime})\leqslant w(IS\cap N(IS^{\prime}))+w(IS^{\prime\prime})\)_. In addition,_ \(IS\cup IS^{\prime\prime}\subseteq I_{w}\) _if such_ \(IS^{\prime\prime}\) _is unique._ * _Suppose_ \(S\blacktriangleleft A_{I}\)_, then for any independent set_ \(S^{\prime}\subseteq N(S)\)_, there is an independent set_ \(S^{\prime\prime}\subseteq N(S^{\prime})\backslash N[S]\) _such that_ \(w(S^{\prime})<w(S\cap N(S^{\prime}))+w(S^{\prime\prime})\)_. Besides, if such_ \(S^{\prime\prime}\) _is unique, then_ \(S\cup S^{\prime\prime}\blacktriangleleft A_{I}\)_._ Proof.: Proof We first consider the proof of \((a)\), and it is obvious that \(IS^{\prime}\not\subseteq I_{w}\). In view of the fact that the relationship between \(I_{w}\) and \(N[IS^{\prime}]\) satisfies: \(I_{w}\cap N[IS^{\prime}]=I_{w}\cap N(IS^{\prime})=(IS\cap N(IS^{\prime})) \cup(I_{w}\cap(N(IS^{\prime})\backslash N[IS]))\) and by the upper bound lemma, we can get: \(w(IS\cap N(IS^{\prime}))+w(I_{w}\cap(N(IS^{\prime})\backslash N[IS]))=w(I_{w} \cap N(IS^{\prime}))=w(I_{w}\cap N[IS^{\prime}])\geqslant w(IS^{\prime}).\) Thus, the existence of such \(IS^{\prime\prime}\) is proved. Furthermore, assuming that such \(IS^{\prime\prime}\) is unique, then \(IS^{\prime\prime}=I_{w}\cap(N(IS^{\prime})\backslash N[IS])\) and \(IS\cup IS^{\prime\prime}\subseteq I_{w}\). Similar ideas can be used to prove \((b)\). Obviously \(S^{\prime}\not\subseteq I,\forall I\in A_{I}\) holds, so from the upper bound lemma, it can be directly obtained: \(\forall I\in A_{I},w(I\cap N[S^{\prime}])>w(S^{\prime})\). Further, by considering that the relationship between \(I\) and \(N[S^{\prime}]\) satisfies: \(I\cap N[S^{\prime}]=I\cap N(S^{\prime})=(S\cap N(S^{\prime}))\cup(I\cap(N(S^{ \prime})\backslash N[S]))\), we prove the existence of such \(S^{\prime\prime}\). Also, if such \(S^{\prime\prime}\) is unique, the following result holds: \(S^{\prime\prime}=I\cap(N(S^{\prime})\backslash N[S]),\forall I\in A_{I}\), and then \(S\cup S^{\prime\prime}\blacktriangleleft A_{I}\). The independent set extension theorem gives a method for extending independent set that is (strongly) inclusive: Try to find an independent set to add to the extended independent set, and that independent set is the only one that guarantees that the upper bound lemma is satisfied in the local structure of the extended independent set. Next, with the help of the upper bound lemma, the vertex cover extension theorem is given below. 
**Theorem 2.3** (Vertex Cover Extension Theorem).: _Let sets \(IC\) and \(C\) be two vertex subsets in the graph._ * _Suppose set_ \(IC\subseteq VC_{w}\)_, then the vertices in_ \(IC\) _have the property:_ \(\forall p\in IC\)_,_ \(w(p)\leqslant\alpha_{w}(G[N(p)\backslash IC])\)_. Also, for a vertex_ \(v\in IC\) _and a vertex_ \(u\in N^{2}(v)\)_,_ \(IC\cup\{u\}\subseteq VC_{w}\) _holds if the inequality_ \(w(v)>\alpha_{w}(G[N(v)\backslash(IC\cup N(u))])\) _is satisfied._ * _Assume that set_ \(C\lhd A_{C}\)_, then_ \(\forall p\in C,w(p)<\alpha_{w}(G[N(p)\backslash C])\) _is always satisfied. In addition, if there exists a vertex_ \(v\in C\) _and a vertex_ \(u\in N^{2}(v)\) _such that_ \(w(v)\geqslant\alpha_{w}(G[N(v)\backslash(C\cup N(u))])\)_, then_ \(C\cup\{u\}\lhd A_{C}\)_._ Proof.: Proof We first consider \((a)\) and let set \(I_{w}=V\backslash VC_{w}\). From the upper bound lemma, these results can be directly obtained: \(\forall p\in IC,w(p)\leqslant w(I_{w}\cap N[p])=w(I_{w}\cap N(p))\leqslant \alpha_{w}(G[N(p)\backslash IC])\). Also, based on the assumption about \(u\) in \((a)\), if \(u\in I_{w}\), then \(w(v)\leqslant w(I_{w}\cap N[v])=w(I_{w}\cap N(v))\leqslant\alpha_{w}(G[N(v) \backslash(IC\cup N(u))])\), which leads to a contradiction. Similar methods can be used to prove \((b)\). First, \(\forall p\in C,\forall I\in A_{I},w(p)<w(I\cap N[p])=w(I\cap N(p))\leqslant \alpha_{w}(G[N(p)\backslash C])\) can be obtained from the upper bound lemma. Besides, under given conditions about \(u\) in \((b)\), if there is an \(I^{*}\in A_{I}\) such that \(u\in I^{*}\), a contradiction is deduced from \(w(p)<w(I^{*}\cap N[p])=w(I^{*}\cap N(p))\leqslant\alpha_{w}(G[N(p)\backslash( C\cup N(u))])\). The vertex cover extension theorem describes how to expand a set that is (strongly) sheathed: Attempt to find a vertex that satisfies the condition that after removing its neighbor set, the upper bound lemma is not satisfied in the local structure of the expanded set. If such a vertex is found, it is directly added to the expanded set. ## 3 Causal Inference Techniques In this section, with the help of the upper bound lemma and two extension theorems, we give the CITs used in this paper. Our CITs can be divided into two types: The first type is a strongly reducible state-preserving technique introduced in Section 3.1, while the second type is a reducible state-preserving technique shown in Section 3.2. ### Strongly reducible state-preserving technique The strongly reducible state-preserving technique exploits the assumption that a vertex is strongly reducible, and the assumed state of the vertex can be divided into two cases: The vertex is assumed to be strongly inclusive or is assumed to be strongly sheathed. We first consider the assumption that a vertex is strongly inclusive and give the following definition. **Definition 3.1**.: _Let set \(S\) be an independent set in the graph. If a vertex \(u\in N(S)\) such that \(w(u)\geqslant w(S\cap N(u))\), we call it a child of set \(S\). A child \(u\) is called an extending child if and only if there exists a unique independent set \(S^{*}\subseteq N(u)\backslash N[S]\) such that \(w(u)<w(S\cap N(u))+w(S^{*})\) and vertex set \(S^{*}\) is called a satellite set of set \(S\)._ On the basis of Definition 3.1, with the assumption that a vertex is strongly inclusive, the concept of 'confined/unconfined vertices' is given by the following conflict analysis process: **Definition 3.2**.: _Let \(v\) be a vertex in the graph. 
Suppose set \(S:=\{v\}\blacktriangleleft A_{I}\), and repeat \((i)\) until \((ii)\) or \((iii)\) holds:_

\((i)\) _As long as set \(S\) has an extending child in \(N(S)\), set \(S\) is extended by including the corresponding satellite set into set \(S\)._

\((ii)\) _If a child \(u\) such that \(w(u)\geqslant w(S\cap N(u))+\alpha_{w}(G[N(u)\backslash N[S]])\) can be found, that is, the upper bound lemma is not satisfied in the local structure of set \(S\), then halt; vertex \(v\) is called an unconfined vertex._

\((iii)\) _If no child of set \(S\) is an extending child, then halt and return set \(S_{v}=S\). In this case, vertex \(v\) is called a confined vertex and the set \(S_{v}\) is called the confining set of vertex \(v\)._

Some examples of unconfined vertices are given in Figure 1.

Figure 1: Some examples of unconfined vertices; a MWIS in this graph is \(\{b,d,g,i\}\). Let set \(S:=\{a\}\); from Definition 3.1, vertex \(b\) is an extending child of set \(S\) and set \(\{c\}\) is a satellite set of set \(S\). Thus, set \(S\) can be extended as \(\{a,c\}\). At this time, it can be found that a child \(d\) satisfies \(w(d)\geqslant w(S\cap N(d))+\alpha_{w}(G[N(d)\backslash N[S]])\); then halt and conclude that vertex \(a\) is an unconfined vertex. Similarly, let set \(S:=\{h\}\); then it can be found that vertex \(g\) is an extending child of set \(S\) and set \(\{f,l\}\) is a satellite set of set \(S\). So set \(S\) can be further expanded as \(\{h,l,f\}\). After that, the child \(i\) satisfies \(w(i)\geqslant w(S\cap N(i))+\alpha_{w}(G[N(i)\backslash N[S]])\); hence, vertex \(h\) is an unconfined vertex.

By means of the conflict analysis process in Definition 3.2, vertices \(a\) and \(h\) in Figure 1 can be found to be unconfined vertices. It is also worth noting that, by the definition of unconfined vertex given in [27], only vertex \(a\) in Figure 1 can be found to be an unconfined vertex. The reason for this is that we further generalize the concept of confined/unconfined vertices in this work. Compared with the definition of extending child \(u\) in [27], which requires \(|N(u)\backslash N[S]|=1\) and \(w(u)<w(N(u)\backslash N(S))\), we consider the more general case where \(N(u)\backslash N[S]\) is an independent set rather than a single vertex, helping us find more unconfined vertices.

Next, we explore the properties of confined/unconfined vertices. By the conflict analysis process in Definition 3.2 and the independent set extension theorem, set \(S\) can be extended under the assumption set \(S:=\{v\}\blacktriangleleft A_{I}\), and set \(S\blacktriangleleft A_{I}\) is always maintained. If vertex \(v\) is an unconfined vertex, then the upper bound lemma is not satisfied in the local structure of set \(S\), which contradicts set \(S\blacktriangleleft A_{I}\); thus, vertex \(v\) is sheathed. Otherwise, there is a state-preserving result, i.e., the corresponding confining set \(S_{v}\blacktriangleleft A_{I}\) holds. Furthermore, suppose two confined vertices \(u\), \(v\) and the corresponding confining sets \(S_{u},S_{v}\) are such that \(u\in S_{v}\) and \(v\in S_{u}\). If \(\{v\}\blacktriangleleft A_{I}\), then obviously \(\{u\}\blacktriangleleft A_{I}\) holds. If not, vertex \(v\) is sheathed in graph \(G\). Since \(v\in S_{u}\), vertex \(v\) is included in the satellite set of an intermediate state set \(S^{\prime}\) of \(S_{u}\), which means that in graph \(G[V\backslash\{v\}]\), the upper bound lemma is not satisfied in the local structure of set \(S^{\prime}\). Thus, by Definition 3.2, vertex \(u\) is an unconfined vertex of graph \(G[V\backslash\{v\}]\) and is sheathed in this graph. From these analysis results and the symmetry of the relationship between vertex \(v\) and vertex \(u\), we can conclude that vertex set \(\{u,v\}\) is a simultaneous set. Therefore, the following properties can be obtained:

**Corollary 3.3**.: _Let \(v\) be a vertex in the graph._

* _If vertex \(v\) is an unconfined vertex, then it is sheathed, and after deleting it from the graph, the weight of the MWIS in the remaining graph remains unchanged._
* _Suppose vertex \(v\) is a confined vertex; then either it is sheathed or the corresponding confining set \(S_{v}\blacktriangleleft A_{I}\). Moreover, if a vertex \(u\in S_{v}\) is also a confined vertex with the corresponding confining set \(S_{u}\) and \(v\in S_{u}\), then vertex set \(\{u,v\}\) is a simultaneous set._
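To make the conflict analysis of Definition 3.2 concrete, the following is a minimal Python sketch of the confined/unconfined test. It is only an illustration under two simplifying assumptions that are not part of the paper: \(\alpha_{w}\) of the small sets \(N(u)\backslash N[S]\) is computed by brute force, and only single-vertex satellite sets (the special case \(|N(u)\backslash N[S]|=1\)) are used when extending \(S\); the function and variable names are illustrative.

```python
from itertools import combinations

def _alpha_w(vertices, adj, w):
    """Brute-force MWIS weight of the subgraph induced by `vertices` (tiny sets only)."""
    vs, best = list(vertices), 0
    for r in range(len(vs) + 1):
        for S in combinations(vs, r):
            if all(b not in adj[a] for a, b in combinations(S, 2)):
                best = max(best, sum(w[x] for x in S))
    return best

def find_confining_set(v, adj, w):
    """Simplified conflict analysis of Definition 3.2.

    Returns (False, None) if a contradiction is found (v is unconfined, hence
    sheathed) and (True, S_v) if v is confined with confining set S_v.
    """
    S = {v}
    while True:
        closed = S | {x for s in S for x in adj[s]}        # N[S]
        extended = False
        for u in closed - S:                               # u in N(S)
            overlap = sum(w[x] for x in S & adj[u])        # w(S ∩ N(u))
            if w[u] < overlap:
                continue                                   # u is not a child of S
            outside = adj[u] - closed                      # N(u) \ N[S]
            if w[u] >= overlap + _alpha_w(outside, adj, w):
                return False, None                         # (ii): upper bound lemma violated
            if len(outside) == 1:                          # (i): extending child, satellite = outside
                S |= outside
                extended = True
                break
        if not extended:
            return True, S                                 # (iii): v is confined

# tiny usage example: the centre of a star whose leaves are at least as heavy is unconfined
adj = {"v": {"x", "y"}, "x": {"v"}, "y": {"v"}}
w = {"v": 1, "x": 1, "y": 1}
print(find_confining_set("v", adj, w))                     # -> (False, None)
```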
From Corollary 3.3, it can be seen that the conflict analysis process in Definition 3.2 can be used to find a vertex that is sheathed or a simultaneous set. These CITs will be used to design reduction rules in Section 4.1. In addition, by the property of confined vertices, the following fact is obvious: if a confined vertex \(v\) is such that \(\{v\}\blacktriangleleft A_{I}\), then the corresponding confining set \(S_{v}\blacktriangleleft A_{I}\). We will exploit this state-preserving result in the B&R algorithm to design a branching rule to search for a solution in Section 4.2.

Next, we proceed to consider the assumption that a vertex is strongly sheathed. In the MIS problem, the notion of mirror is given by means of such an assumption and is very useful in practice [1]. We generalize the notion of mirror to the MWIS problem: for a vertex \(v\in V\), a mirror of vertex \(v\) is a vertex \(u\in N^{2}(v)\) such that \(w(v)\geqslant\alpha_{w}(G[N(v)\backslash N(u)])\).

**Remark 3.4**.: _When the weight of all vertices in the graph is \(1\), then \(\alpha(G[N(v)\backslash N(u)])=\alpha_{w}(G[N(v)\backslash N(u)])\leqslant w(v)=1\). This means that \(N(v)\backslash N(u)\) induces a clique or is an empty set, and this is exactly the definition that vertex \(u\) is the mirror of vertex \(v\) in the MIS problem._

To make the concept of mirror more practical, we further generalize it to the case of sets, which leads to the following definitions:

**Definition 3.5**.: _Let set \(C\) be a vertex subset in the graph. If a vertex \(v\in C\) satisfies the inequality \(w(v)<\alpha_{w}(G[N(v)\backslash C])\), we call it a father of set \(C\). Furthermore, if there exists a vertex \(u\in N^{2}(v)\) such that \(w(v)\geqslant\alpha_{w}(G[N(v)\backslash(C\cup N(u))])\), then the father \(v\) is called an extending father of set \(C\) and vertex \(u\) is called a mirror of vertex \(v\). We use \(M(v)\) to denote the set of mirrors of vertex \(v\)._

By means of Definition 3.5, and under the assumption that a vertex is strongly sheathed, the concept of 'covered/uncovered vertices' is given by the following conflict analysis process:

**Definition 3.6**.: _Let \(v\) be a vertex in the graph. At the beginning, suppose set \(C:=\{v\}\vartriangleleft A_{C}\), and repeat \((i)\) until \((ii)\) or \((iii)\) is met:_

\((i)\) _While set \(C\) has an extending father, extend set \(C\) by including the corresponding set of mirrors into set \(C\)._

\((ii)\) _If there is a vertex \(u\in C\) such that \(w(u)\geqslant\alpha_{w}(G[N(u)\backslash C])\), that is, the upper bound lemma is not satisfied, then halt; vertex \(v\) is called an uncovered vertex._

\((iii)\) _If set \(C\) has no extending father, then halt and return set \(C_{v}=C\). In this case, vertex \(v\) is called a covered vertex and vertex set \(C_{v}\) is called the covering set of vertex \(v\)._
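Analogously to the sketch for Definition 3.2, the following Python fragment illustrates Definitions 3.5 and 3.6. It is a sketch under the same assumptions as before (brute-force \(\alpha_{w}\) on small neighborhoods, illustrative names), not the authors' implementation.

```python
from itertools import combinations

def _alpha_w(vertices, adj, w):
    """Brute-force MWIS weight of the subgraph induced by `vertices` (tiny sets only)."""
    vs, best = list(vertices), 0
    for r in range(len(vs) + 1):
        for S in combinations(vs, r):
            if all(b not in adj[a] for a, b in combinations(S, 2)):
                best = max(best, sum(w[x] for x in S))
    return best

def mirrors(p, C, adj, w):
    """Mirrors of p w.r.t. C (Definition 3.5): vertices u at distance 2 from p
    such that w(p) >= alpha_w(G[N(p) \\ (C ∪ N(u))])."""
    dist_two = {x for q in adj[p] for x in adj[q]} - adj[p] - {p}
    return {u for u in dist_two if w[p] >= _alpha_w(adj[p] - C - adj[u], adj, w)}

def find_covering_set(v, adj, w):
    """Conflict analysis of Definition 3.6.  Returns (False, None) if v is
    uncovered (hence inclusive) and (True, C_v) if v is covered."""
    C = {v}
    while True:
        # (ii): some vertex of C violates the upper bound lemma -> v is uncovered
        if any(w[u] >= _alpha_w(adj[u] - C, adj, w) for u in C):
            return False, None
        # (i): extend C by the mirror set of an extending father
        for p in list(C):
            if w[p] < _alpha_w(adj[p] - C, adj, w):        # p is a father of C
                new_mirrors = mirrors(p, C, adj, w)
                if new_mirrors - C:
                    C |= new_mirrors
                    break
        else:
            return True, C                                 # (iii): no extending father left
```

In an actual solver, the brute-force \(\alpha_{w}\) call would typically be replaced by an exact MWIS routine restricted to the small induced neighborhood, which is what keeps such a rule affordable.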
An example of an uncovered vertex is given in Figure 2, where vertex \(a\) is found to be uncovered.

Figure 2: An example of an uncovered vertex; a MWIS of this graph is \(\{a,e,g,h,j,l\}\). Starting with set \(C:=\{a\}\), from Definition 3.5, it can be seen that vertex \(a\) is an extending father of set \(C\) and set \(\{e,g,h\}\) is the mirror set of vertex \(a\). Thus, set \(C\) can be extended to \(\{a,e,g,h\}\). Then, vertex \(h\) is also an extending father of set \(C\) and set \(\{j,k,l\}\) is the mirror set of vertex \(h\). So set \(C\) can be further expanded as \(\{a,e,g,h,j,k,l\}\). At this time, we find that \(w(l)\geqslant\alpha_{w}(G[N(l)\backslash C])\); then halt and conclude that vertex \(a\) is uncovered.

In addition, the properties of uncovered/covered vertices are worth further study. From the vertex cover extension theorem, in the conflict analysis process of Definition 3.6, for any extending father \(f\) of set \(C\) and any \(u\in M(f)\), if set \(C\lhd A_{C}\), then set \(C\cup\{u\}\lhd A_{C}\) always holds. Thus, under the assumption set \(C:=\{v\}\lhd A_{C}\), if vertex \(v\) is not an uncovered vertex, then a state-preserving result can be obtained: the corresponding covering set \(C_{v}\lhd A_{C}\). Otherwise, the upper bound lemma is not satisfied in the local structure of set \(C\), which contradicts the hypothesis set \(C\lhd A_{C}\), so vertex \(v\) is inclusive. Also, assume that two covered vertices \(u,v\) and the corresponding covering sets \(C_{u},C_{v}\) satisfy \(v\not\in N(u)\), \(u\in C_{v}\) and \(v\in C_{u}\). If vertex \(v\) is inclusive, we first remove \(N[v]\) from graph \(G\). Since \(v\in C_{u}\), vertex \(v\) is a mirror of an extending father of an intermediate state set \(C^{\prime}\) of set \(C_{u}\), and the upper bound lemma cannot be satisfied in graph \(G[V\backslash N[v]]\) at this time. Thus, vertex \(u\) is an uncovered vertex of graph \(G[V\backslash N[v]]\) and is inclusive in this graph. So there exists a MWIS in graph \(G\) containing both vertex \(v\) and vertex \(u\). Moreover, if \(\{v\}\lhd A_{C}\), then \(\{u\}\lhd A_{C}\) is clearly satisfied. Thus, from the symmetry of the relationship between vertex \(u\) and vertex \(v\), it can be known that vertex set \(\{u,v\}\) is a simultaneous set. These properties are summarized as follows.

**Corollary 3.7**.: _Let \(v\) be a vertex in the graph \(G\)._

1. _If vertex \(v\) is an uncovered vertex, then it is inclusive. After deleting \(N[v]\) from the graph, the weight of the MWIS satisfies \(\alpha_{w}(G)=\alpha_{w}(G[V\backslash N[v]])+w(v)\)._
2. _If vertex \(v\) is a covered vertex, then either vertex \(v\) is inclusive or the corresponding covering set \(C_{v}\vartriangleleft A_{C}\)._
_Also, if another covered vertex \(u\) with the corresponding covering set \(C_{u}\) satisfies \(v\not\in N(u)\), \(u\in C_{v}\) and \(v\in C_{u}\), then vertex set \(\{u,v\}\) is a simultaneous set._

Corollary 3.7 gives the following results: the conflict analysis process in Definition 3.6 can be applied to find a vertex that is inclusive or a simultaneous set. In Section 4.1, we will use these CITs to design reduction rules. Besides, by the property of covered vertices in \((b)\) of Corollary 3.7, we obtain a state-preserving result: if a covered vertex \(v\) is such that \(\{v\}\vartriangleleft A_{C}\), then the corresponding covering set \(C_{v}\vartriangleleft A_{C}\).

### Reducible state-preserving technique

Similar to the first type of CIT, the reducible state-preserving technique utilizes the assumption that a vertex is reducible, that is, it assumes that a vertex is inclusive or sheathed. With these assumptions, we can give state-preserving results similar to those of the first type of CIT. Before that, we give the following definition.

**Definition 3.8**.: _Let sets \(IS\) and \(IC\) be two vertex subsets in the graph, where set \(IS\) is an independent set._

1. _A vertex \(u\in N(IS)\) is called an inferred child of set \(IS\) if it holds that \(w(u)>w(IS\cap N(u))\). Further, if there is a unique independent set \(IS^{*}\subseteq N(u)\backslash N[IS]\) that satisfies the inequality \(w(u)\leqslant w(IS\cap N(u))+w(IS^{*})\), we call the inferred child \(u\) an inferred extending child of set \(IS\), and vertex set \(IS^{*}\) is called an inferred satellite set of set \(IS\)._
2. _A vertex \(v\in IC\) is called an inferred father of set \(IC\) if it holds that \(w(v)\leqslant\alpha_{w}(G[N(v)\backslash IC])\). An inferred father \(v\) is called an inferred extending father of set \(IC\) if there exists a vertex \(u\in N^{2}(v)\) such that \(w(v)>\alpha_{w}(G[N(v)\backslash(IC\cup N(u))])\), and vertex \(u\) is called an inferred mirror of vertex \(v\). Also, \(IM(v)\) is used to denote its set of inferred mirrors._

By virtue of Definition 3.8 and the assumption that a vertex is inclusive or sheathed, we can directly give the definitions of inferred confining set and inferred covering set accordingly.

**Definition 3.9**.: _Suppose there are no unconfined vertices in the graph. Let \(v\) be a vertex in the graph. Begin with the assumption set \(IS:=\{v\}\subseteq I_{w}\)._

* _As long as set \(IS\) has an inferred extending child in \(N(IS)\), set \(IS\) is extended by including the corresponding inferred satellite set into set \(IS\)._
* _The above process halts when set \(IS\) has no inferred extending child in \(N(IS)\) and returns set \(IS_{v}=IS\). We call vertex set \(IS_{v}\) the inferred confining set of vertex \(v\)._

**Definition 3.10**.: _Suppose there are no uncovered vertices in the graph. Let \(v\) be a vertex in the graph. Begin with the assumption set \(IC:=\{v\}\subseteq VC_{w}\)._

* _While set \(IC\) has an inferred extending father, extend set \(IC\) by including the corresponding set of inferred mirrors into set \(IC\)._
* _The above process halts when set \(IC\) has no inferred extending father and returns set \(IC_{v}=IC\). We call vertex set \(IC_{v}\) the inferred covering set of vertex \(v\)._
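As with the earlier sketches, the construction of the inferred confining set (Definition 3.9) can be illustrated in a few lines of Python. The fragment below is a simplified sketch that only handles single-vertex inferred satellite sets (\(|N(u)\backslash N[IS]|=1\)); the names are illustrative, and the symmetric procedure for Definition 3.10 follows the same pattern with inferred mirrors.

```python
def inferred_confining_set(v, adj, w):
    """Simplified sketch of Definition 3.9, assuming no unconfined vertices remain.

    Starting from IS = {v} (assumed to lie in a MWIS), repeatedly add the inferred
    satellite set of an inferred extending child; only the single-vertex case
    |N(u) \\ N[IS]| = 1 is handled here.
    """
    IS = {v}
    while True:
        closed = IS | {x for s in IS for x in adj[s]}      # N[IS]
        for u in closed - IS:                              # u in N(IS)
            overlap = sum(w[x] for x in IS & adj[u])       # w(IS ∩ N(u))
            if w[u] <= overlap:
                continue                                   # u is not an inferred child
            outside = adj[u] - closed                      # N(u) \ N[IS]
            if len(outside) == 1:
                (z,) = outside
                if w[u] <= overlap + w[z]:                 # unique inferred satellite set {z}
                    IS.add(z)
                    break
        else:
            return IS                                      # no inferred extending child left
```

By the corollary given below (cf. Corollary 3.11), whenever the local search of Section 4.3 decides to put \(v\) into the independent set, the whole returned set can be moved together with it.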
By the process in Definition 3.9, we can find the inferred confining set \(IS_{a}=\{a,c,e,j,g,h,k\}\) of vertex \(a\). Similarly, according to the process in Definition 3.10, we can find the inferred covering set \(IC_{d}=\{b,d,f,i,l\}\) of vertex \(d\). Moreover, from the independent set extension theorem and the vertex cover extension theorem, we can directly obtain the following Corollary: **Corollary 3.11**.: _Let \(v\) be a vertex in the graph._ * _If_ \(\{v\}\subseteq I_{w}\)_, then the corresponding inferred confining set_ \(IS_{v}\subseteq I_{w}\)_._ * _Suppose_ \(\{v\}\subseteq VC_{w}\)_, then the corresponding inferred covering set_ \(IC_{v}\subseteq VC_{w}\)_._ From (a) of Corollary 3.11, under the premise \(\{v\}\subseteq I_{w}\), the state-preserving result can be obtained: \(IS_{v}\subseteq I_{w}\). We will integrate this result into the local search process of heuristic algorithm in Section 4.3. In addition, (b) of Corollary 3.11 also gives a similar state-preserving result result: If \(\{v\}\subseteq VC_{w}\), then the corresponding inferred covering set \(IC_{v}\subseteq VC_{w}\). This result can be used to design a branching rule to search for a solution in Section 4.2. Furthermore, during the branching process of the B&R algorithm, it is assumed that a vertex \(v\) is selected for branching. Inspired by the successful application of packing constraints in the MIS problem, we extend them to the MWIS problem and propose the concept of "weight packing constraint". When assuming that vertex \(v\) is inclusive, if \(\exists u\in N(v)\) such that \(w(u)\geqslant w(v)\), let \(N^{+}(u)=N(u)\backslash N[v]\). To avoid obtaining another MWIS by adding vertex \(u\) to the independent set and removing vertices in \(N(u)\) from the independent set, by the upper bound lemma, the following state-preserving result needs to be guaranteed to hold: \[w(v)+\sum_{z\in N^{+}(u)}w(z)(1-x_{z})>w(u).\] The 0-1 integer variable \(x_{z}\) is used to indicate whether vertex \(z\in N^{+}(u)\) is in the independent set, and \(x_{z}=0\) means it is in the independent set, otherwise it is not. Thus, a weight packing constraint can be created as shown below: \[\sum_{z\in N^{+}(u)}w(z)x_{z}<\sum_{z\in N^{+}(u)}w(z)-(w(u)-w(v)). \tag{3.1}\] When assuming that vertex \(v\) is sheathed, to avoid that a MWIS containing it can be found by modifying its state, by means of the upper bound lemma, the following Figure 3: Examples of inferred confining set and inferred covering set. A MWIS for this graph is \(\{a,c,e,g,h,j,k\}\). We first search for the inferred confining set \(IS_{a}\) of vertex \(a\). Let set \(IS:=\{a\}\), it can be seen from \((a)\) of Definition 3.8 that vertex \(b\) is an inferred extending child of set \(IS\) and set \(\{c\}\) is an inferred satellite set of set \(IS\). Thus, set \(IS\) can be extended to: \(\{a,c\}\). Further, vertex \(d\) is also an inferred extending child of set \(IS\) and set \(\{e,j\}\) is the corresponding inferred satellite set. So set \(IS\) can be further extended to: \(\{a,c,e,j\}\). At this time, it can be found that both vertex \(f\) and vertex \(i\) are inferred extending children of set \(IS\). Then, both vertex set \(\{g,h\}\) and vertex set \(\{k\}\) are the corresponding inferred satellite sets. Finally, the inferred confining set of vertex \(a\) can be found as: \(IS_{a}=\{a,c,e,j,g,h,k\}\). Furthermore, we continue to search the inferred covering set \(IC_{d}\) of vertex \(d\). 
Let set \(IC:=\{d\}\), according to \((b)\) of Definition 3.8, vertex \(d\) is an extending father of set \(IC\) and set \(\{b,f,i\}\) is its inferred mirrors set. Then, set \(IC\) can be extended as: \(\{b,d,f,i\}\). Next, it can be found that vertex \(b\) is an extending father of set \(IC\) and set \(\{l\}\) is its inferred mirrors set. Thus, set \(IC\) can be further extended as: \(\{b,d,f,i,l\}\). Finally, the inferred covering set of vertex \(d\) can be found as: \(IC_{d}=\{b,d,f,i,l\}\). state-preserving result needs to be satisfied: \[\sum_{z\in N(v)}w(z)(1-x_{z})>w(v).\] So a weight packing constraint can also be created as follows: \[\sum_{z\in N(v)}w(z)x_{z}<\sum_{z\in N(v)}w(z)-w(v). \tag{3.2}\] These constraints will be kept and managed while the algorithm is searching for a solution, and we only need to search all branches satisfying these constraints, since no better solution exists in the remaining branches, thus narrowing the search space. Let \(\sum\limits_{z\in S}w(z)x_{z}<k\) be a weight packing constraint such that set \(S\) is non-empty. When a vertex \(z\) is found to be inclusive, for each constraint that includes variable \(x_{z}\), we delete the variable on the left side of the constraint and keep the right side of the constraint unchanged. When a vertex \(z\) is inferred to be sheathed, for each constraint that contains variable \(x_{z}\), we delete the variable on the left side of the constraint and decrease the weight of vertex \(z\) on the right side of the constraint. In the process of keeping and managing these constraints, some properties of causal inference are mined, which can be divided into the following three cases. * When there is a constraint whose right-hand term \(k\) is less than or equal to 0, then we can directly prune subsequent searches from the current branch vertex. * When there is a constraint whose right-hand term \(k\) is less than or equal to the weight of any vertex in set \(S\), if this set is not an independent set, we can prune subsequent searches from the current branch vertex. If not, the vertices in set \(S\) will be included in the independent set. In addition, some new weight packing constraints can also be introduced. Suppose there is a vertex \(p\in N(S)\) such that \(w(p)\geqslant w(N(p)\cap S)\), let \(N^{+}(p)=N(p)\backslash N[S]\), by the upper bound lemma, the following state-preserving result needs to be guaranteed: \[w(N(u)\cap S)+\sum_{z\in N^{+}(u)}w(z)(1-x_{z})>w(u).\] Therefore, we can introduce the following weight packing constraint: \[\sum_{z\in N^{+}(p)}w(z)x_{z}<\sum_{z\in N^{+}(p)}w(z)-(w(p)-w(N(p)\cap S)).\] (3.3) 3. When there is a constraint whose right-hand term \(k>0\) and there is vertex \(u\in N(S)\) such that \(\sum\limits_{z\in N(u)\cap S}w(z)\geqslant k\), it can be inferred that vertex \(u\) is sheathed to ensure that this constraint holds. In addition, in order to ensure that the current state-preserving result is valid, similar to constraint (3.2), the following constraint needs to be introduced: \[\sum\limits_{z\in N(u)}w(z)x_{z}<\sum\limits_{z\in N(u)}w(z)-w(u).\] (3.4) The above properties of causal inference provide new pruning search techniques for the B&R algorithm and can simplify the graph. We will integrate these techniques into B&R algorithm in Section 4.2. ## 4 Integrate CITs into Existing Algorithmic Frameworks We next describe in detail how CITs in Section 3 are integrated into the existing algorithmic frameworks. 
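Before turning to the individual frameworks, the following minimal C++ sketch illustrates one way a weight packing constraint from Section 3 might be represented and updated during the search; the paper's code is in C++, but the structure and member names here are our own assumptions rather than the actual implementation.

```cpp
#include <cstdint>
#include <unordered_set>

// One weight packing constraint  sum_{z in S} w(z) * x_z < k,
// where x_z = 0 means vertex z is in the independent set and x_z = 1 means it is not.
struct WeightPackingConstraint {
    std::unordered_set<int> S;  // vertices whose variables still appear on the left
    int64_t k;                  // right-hand side

    // Vertex z was found to be inclusive (x_z = 0): delete the variable,
    // keep the right-hand side unchanged.
    void fixInclusive(int z) { S.erase(z); }

    // Vertex z was inferred to be sheathed (x_z = 1): delete the variable
    // and decrease the right-hand side by w(z).
    void fixSheathed(int z, int64_t wz) {
        if (S.erase(z)) k -= wz;
    }

    // Case (1): if k <= 0 the constraint can no longer hold, so the
    // current branch can be pruned.
    bool violated() const { return k <= 0; }

    // Case (3): if the total weight of u's neighbors inside S already
    // reaches k, vertex u must be sheathed for the constraint to hold.
    bool forcesSheathed(int64_t weightOfNeighborsInS) const {
        return k > 0 && weightOfNeighborsInS >= k;
    }
};
```

Case (2), where the remaining set \(S\) collapses, and the creation of the follow-up constraints (3.3) and (3.4) are omitted from this sketch.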
Section 4.1 introduces how to apply the first type of CIT to the reduction algorithm. Further, integrating the resulting reduction algorithm and the state-preserving results of two types of CITs into B&R algorithm will be presented in Section 4.2, and Section 4.3 will introduce the application of the state-preserving results of the second type of CIT to the local search process of heuristic algorithm. ### The Causal Reduce We first introduce how to design reduction rules with the first type of CIT and how to integrate them into the existing reduction algorithm. From the property of unconfined vertex in Corollary 3.3 and the property of uncovered vertex in Corollary 3.7, the following reduction rules that can directly determine whether a vertex is reducible are given first: * **Rule I:** Check whether a vertex \(v\) is unconfined or confined by the procedure in Definition 3.2, and if it is unconfined, remove vertex \(v\) directly from the graph. * **Rule II:** Use the procedure in Definition 3.6 to check whether a vertex \(v\) is covered or uncovered, and if it is uncovered, include vertex \(v\) into the independent set and remove \(N[v]\) from the graph. Before further introducing how to utilize the first type of CIT to design reduction rules, we first give an important property about simultaneous set mentioned in [27]: A simultaneous set \(S\) can be contracted by removing all vertices in set \(S\) from the graph and introducing a vertex \(v^{*}\) such that it is adjacent to all vertices in \(N(S)\) with weight \(w(v^{*})=w(S)\), while the weight of the MWIS in the remaining graph remain unchanged. Next, we will design reduction rules on simultaneous set through the first type of CIT, and give the following definitions by the results of the simultaneous set given in \((b)\) of Corollary 3.3 and \((b)\) of Corollary 3.7. **Definition 4.1**.: _Let \(u,v\) be two vertices in the graph._ * _Suppose vertices_ \(u\) _and_ \(v\) _be two confined vertices with confining set_ \(S_{u}\) _and_ \(S_{v}\)_. If_ \(u\in S_{v}\) _and_ \(v\in S_{u}\)_, then set_ \(\{u,v\}\) _is called a confining simultaneous set._ * _Assume that vertices_ \(u\) _and_ \(v\) _be two covered vertices with covering set_ \(C_{u}\) _and_ \(C_{v}\)_. Set_ \(\{u,v\}\) _is called a covering simultaneous set if_ \(u\in C_{v}\) _and_ \(v\in C_{u}\)_._ From Definition 4.1, we have the following rules: * **Rule III:** If there are two confined vertices that constitute a confining simultaneous set, then merge them. * **Rule IV:** Merge two covered vertices \(u\) and \(v\) if they form a covering simultaneous set. Next, we will describe how to integrate our reduction rules into an existing reduction algorithm--**Reduce** proposed by [27]. **Reduce** consists of seven steps. The reduction rules used in these steps exploit the sufficient conditions that a vertex is reducible. It executes these steps incrementally, which means that the next step is only executed when all previous steps are no longer applicable. Thus, if the graph is changed, it will go back to the first step. Notably, our reduction rules **I** and **III** are further generalization of the reduction rules used in step 5 of **Reduce**. So, we can combine our reduction rules **I** and **III** into one step to replace step 5 in **Reduce** and label this step as **Remove Unconfined & Contract Confining**. Similarly, we can also integrate our reduction rules **II** and **IV** into another new step in the reduction algorithm, called **Remove Uncovered & Contract Covering**. 
* **Remove Unconfined & Contract Confining:** Check whether a vertex is unconfined or confined. If it is unconfined, apply **Rule I** to remove it; if it is confined, use **Rule III** to contract the corresponding confining simultaneous set whenever one can be found.
* **Remove Uncovered & Contract Covering:** If a vertex is checked to be uncovered, use **Rule II** to reduce it. Otherwise, if the corresponding covering simultaneous set can be found, use **Rule IV** to merge it.

Thus, a new reduction algorithm called **Causal Reduce** can be obtained by using **Remove Unconfined & Contract Confining** to replace step 5 of **Reduce** and adding **Remove Uncovered & Contract Covering** between **Remove Unconfined & Contract Confining** and step 6 of **Reduce**, as shown in Figure 4. We will use **Causal Reduce**\((G)=(K,c)\) to represent the processing of this algorithm on a given input graph \(G\). The result consists of two parts: the remaining graph, called the kernel \(K\), and the weight \(c\) of the vertex set inferred to belong to the MWIS. It is worth noting that the reduction algorithm **Causal Reduce** may not resolve all instances directly, but it can be used as a preprocessing step for heuristic and exact algorithms.

Figure 4: Causal Reduce: Given an input graph \(G\), each step of the algorithm is executed sequentially; whenever the graph changes, the algorithm immediately goes back to the first step. When all steps are completed and the graph no longer changes, the remaining graph, the **kernel**, is returned.

### The Causal B&R Solver

Before introducing how to integrate our CITs into the B&R algorithm, we briefly introduce the state-of-the-art exact algorithm **Solve** proposed by [27]. **Solve** is based on the idea of the B&R algorithm: it first applies the reduction algorithm **Reduce** to reduce the instance, then applies a branching rule based on the property of the confining set and performs **Reduce** in every branch of the search tree to find a solution. During the search, it uses a standard technique based on upper and lower bounds to prune the search tree, taking the best solution weight \(W_{b}\) found so far as the lower bound. Initially, \(W_{b}\) is set to the weight of the solution obtained by a heuristic algorithm on the kernel \(K\), and \(W_{b}\) is updated whenever a better solution is found. The heuristic algorithm, denoted by \(\textbf{Greedy}(G)\), is a greedy algorithm that iteratively selects a vertex in order of some measure and removes its closed neighbor set from the graph. In each search branch, a heuristic upper bound \(W_{ub}\) on the optimal solution weight of the current graph is computed, based on weighted clique covers and denoted by \(\textbf{UpperBound}(G)\). If the current best solution weight \(W_{b}\) is not smaller than \(W_{ub}\), then no better solution exists in this branch and it can be discarded directly. ``` 0: A vertex weight graph \(G=(V,E,w)\); 0: The weight of a MWIS of \(G\). 
1: Initialization of global variable \(W_{b}\): \(W_{b}\gets 0\); 2:if weight packing constraints have been created then 3:while True do 4:\((K,c)\leftarrow\textbf{Causal Reduce}(G)\); 5: check constraints(); 6:if existence constraints are not satisfied then 7:return\(W_{b}\); 8:else if graph is simplified then 9: continue; 10:else 11: break; 12:else 13:\((K,c)\leftarrow\textbf{Causal Reduce}(G)\); 14:\(W_{b}\leftarrow\max\{W_{b},c+\textbf{Greedy}(K)\}\); 15:if\(c+\textbf{UpperBound}(G)\leqslant W_{b}\)then 16:return\(W_{b}\); 17: Pick up a vertex \(v\) of maximum degree and compute the confining set \(S_{v}\) and the inferred covering set \(IC_{v}\); 18: create weight packing constraint (3.1) and \(W_{b}\leftarrow\max\{W_{b},c+w(S_{v})+\textbf{Causal B\&R Solver}(K-N[S_{v}])\}\); 19: create weight packing constraint (3.2) and \(W_{b}\leftarrow\max\{W_{b},c+\textbf{Causal B\&R Solver}(K-IC_{v})\}\); 20:return\(W_{b}\); ``` **Algorithm 1** The **Causal B&R Solver\((G)\)** Our CITs will be integrated into two parts of **Solve**, resulting in a new exact algorithm called **Causal B&R Solver**. The first part is that we will use our reduction algorithm **Causal Reduce** to reduce the instance to get the kernel \(K\), and perform the reduction algorithm on each branch of the search tree. The second part is that we will make use of the state-preserving results of two types of CITs during the branching process. Similar to the idea of **Solve** in [27], using property of confining set to the branching process, when choosing a vertex with the maximum degree to branch, the state-preserving results of \((b)\) of Corollary 3.11 and \((b)\) of Corollary 3.3 will be used in this part. This means that during branching, we either remove the inferred covering set of the branching vertex from the graph or include the confining set of the branching vertex into the independent set. Furthermore, we will create weight packing constraint (3.2) while removing the inferred covering set of branching vertex. Similarly, we will also create weight packing constraint (3.1) when including the confining set of branching vertex into the independent set. We will keep and manage these weight packing constraints when searching for solutions in each branch of the search tree. Specifically, another step called **check constraints** is added after the last step of **Casual Reduce**. In this step, for each weight packing constraint, we will check whether the constraint holds and whether the graph can be simplified by the causal inference properties of that constraint. If any constraint is violated, the searching branch will be skipped. If the graph can be simplified, **Causal Reduce** will continue to execute after reducing the graph. If none of the above conditions are met, the subsequent process will be performed. The main steps of **Causal B&R Solver** are listed in Algorithm 1. ### The Causal Search After taking our reduction algorithm **Causal Reduce** as a preprocessing, we apply the state-preserving result of second type of CIT to the local search process of heuristic algorithm DynWVC2 [6] to solve the complementary problem of the MWIS problem--the MWVC problem, which leads to a new algorithm called **Causal Search**. ``` 1:A vertex weight graph \(G=(V,E,w)\), the cutoff time of the running \(T\); 2:A vertex cover of \(G\). 
3:\(VC\leftarrow\)Construct(); 4:\(VC^{*}\gets VC\); 5:while\(elapsed\_time\)\(<\)\(T\)do 6:\(R\leftarrow\) RemoveVertices(VC) 7:while some edge is uncovered by \(VC\)do 8: choose a vertex \(v\) from \(N(R)\); 9:\(VC\gets VC\cup\{v\}\); 10: remove redundant vertices from \(VC\); 11:if\(w(VC)\)\(<\)\(w(VC^{*})\)then 12:\(VC^{*}\gets VC\) ``` **Algorithm 2** The basic framework of DynWVC2 algorithm. The DynWVC2 algorithm proposed by [6], is the state-of-the-art heuristic algorithm for solving MWVC problem. The basic framework of this algorithm is shown in Algorithm 2. The local search process of this algorithm mainly consists of a removing phase and an adding phase, and the specific process can be found in [6]. Our CITs will be considered in the removing phase of the algorithm -- **RemoveVertices** function. In this function, there are two scoring functions \(loss\) and \(valid\_score\) used to select the vertices to remove from the vertex cover \(VC\). The specific definition of these two scoring functions can be seen in [6]. The \(loss\) and \(valid\_score\) functions have fundamentally different effects on the behavior of the algorithm. Vertex selection using \(loss\) function is an "exploratory" selection; in other words, it is quite possible that such a chosen vertex is good for the quality of the solution, but this cannot be determined. Different from "exploratory" vertex selection, \(valid\_score\) is a "deterministic" selection, that is, we can determine whether removing a vertex will have a positive impact on the quality of the solution. For example, if a vertex has a negative \(valid\_score\) value, this means that after removing this vertex and adding its adjacent uncovering vertices, a vertex cover with lower weight than the current vertex cover can be obtained [6]. In removing phase, the vertex with the minimum loss is removed from vertex cover \(VC\) first, and then the second removed vertex is selected by a dynamic vertex selection strategy. The details of dynamic vertex selection strategy can be learned in [6]. After removing the two vertices, if the total degree of the removed vertices does not reach a predetermined value (which is set to 2 times average degree of the graph), another vertex to be selected with the BMS strategy [5], which samples \(t\) (\(t=50\)) vertices from vertex cover \(VC\) and chooses the one with the minimum \(loss\), will be removed to expand the search region. In this way, it solves the problem that when removing two vertices the resulting search area is too small and limits the ability of the adding phase to find better local optima. If the search area obtained by removing two vertices is large enough, in order to balance the search time and search quality, the third vertex will not be selected for removing. The state-preserving result of second type of CIT will be applied to the dynamic vertex selection strategy for selecting the second vertex to be removed. The dynamic vertex selection strategy consists of a primary vertex scoring function \(valid\_score\) and a secondary scoring function \(loss\). When the removed vertex \(v\) is selected by \(valid\_score\) function, it can be seen from the nature of the \(valid\_score\) function: There is a high probability that there exists a MWIS \(I\) containing it. If the vertex \(v\) is indeed included in \(I\), by \((a)\) of Corollary 3.11, the corresponding inferred confining set \(IS_{v}\) also contained in \(I\). 
Inspired by this result, when selecting the second removed vertex \(v\) by scoring function \(valid\_score\), we will remove the vertices in the inferred confining set \(IS_{v}\) from the vertex cover \(VC\). In this way, the search region can be expanded and the number of times to continue to use the third removed vertex to expand the search area is reduced, which means that the ability of local search to find better local optima is improved. An example of our CITs applied to the vertex removing process is presented in Figure 5. In addition, it can be seen from the calculation process of Definition 3.9 about the inferred confining set: the computational complexity of \(IS_{v}\) for each vertex \(v\) is \(O(|N(IS_{v})||IS_{v}|)\). This means that in the actual application process, since the size of the generally obtained inferred confining set is relatively small, its computational cost is very small. Thus, our CITs is helpful for improving the performance of local search process. ## 5 Experiments We will conduct four experiments to verify the effect of integrating our CITs into current algorithmic frameworks. The first experiment is used to analyze the impact of our CITs for the reduction algorithm. The examination of the performance gain of our CITs in the B&R algorithm is shown in the second experiment. The third experiment is used to test the ability of our **Causal Reduce** as a preprocessing to improve the performance of the heuristic algorithm. The last experiment is conducted to verify the effect of adding our CITs to the local search process of heuristic algorithm. **Experiment environment Setup.** All of our algorithms are implemented in C++, and compiled by g++ with '-O3' option. All experiments are run on a platform with 128G RAM and one Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz. **Compared Algorithms.** In previous studies, most of them only use some simple rules as preprocessing to reduce problem instances, and do not pay attention to the performance of preprocessing. Two recent papers [15, 27] have studied in depth the reduction rules for the MWIS and analyzed their performance. Since the algorithm **Reduce** in [27] outperforms the algorithm in [15] and our **Causal Reduce** Figure 5: Example of our CITs applied to the vertex removing process: When we utilize \(valid\_score\) to select the removed vertex \(c\) from the vertex cover \(VC=\{a,c,d\}\), we can compute the corresponding inferred confining set \(IS_{c}=\{a,c,d\}\) of vertex \(c\) and remove set \(IS_{c}\) from vertex cover \(VC\). is obtained by integrating our CITs into **Reduce**, in this paper, we only use it as a baseline to analyze the impact of our CITs for the reduction algorithm. Additionally, in order to fully understand the role of different CITs on the reduction algorithm, we control the application of CITs in **Reduce** and conduct comparative experiments. Similar to the Causal Reduce shown in Figure 4, we use **Re-Confin** to represent the algorithm obtained after replacing the step 5 of **Reduce** with **Remove Unconfined & Contract Confining** and **Re-Cover** to denote the algorithm obtained by adding **Remove Uncovered & Contract Covering** between step 5 and 6 of **Reduce**. 
On the basis of the reduction algorithm **Reduce**, the authors of [27] also developed a fast exact algorithm **Solve**, which is the state-of-art exact algorithm in previous work, and our **Causal B&R Solver** is obtained by applying our CITs into it, so it will be used as a baseline to verify the performance improvement of our CITs for the B&R algorithm. Furthermore, we use **Solve-CR** to identify the algorithm obtained by replacing **Reduce** with **Causal Reduce** in **Solve**, **Solve-CR-IC** refers to the algorithm obtained by further simplifying the branch by using the inferred covering set of the branching vertex in the branching process on the basis of **Solve-CR**, and **Solve-Packing** to represent the algorithm obtained by applying our weight packing constraints to the branching process of **Solve**. We will conduct comparative experiments on these algorithms to clarify the impact of different CITs on the B&R algorithm. Two state-of-the-art heuristic algorithms FastWVC (**Fast**) [7] and DynWVC2 (**Dyn**) [6], will be used to verify that our **Causal Reduce** as preprocessing improves the performance of the heuristic algorithm. We will use **Causal Re + Fast** and **Causal Re + Dyn** to denote applying our **Causal Reduce** as preprocessing before executing FastWVC and DynWVC2. In addition, to further verify the superiority of our **Causal Reduce** as preprocessing for improving the performance of the heuristic algorithm, we also conduct comparative experiments using **Reduce** as a preprocessing of the heuristic algorithm. Likewise, we use **Re + Fast** and **Re + Dyn** to indicate the application of the previous reduction algorithm **Reduce** before FastWVC and DynWVC2 are executed. Moreover, our **Causal Search** is obtained by integrating CITs into the local search process of DynWVC2. Therefore, we can verify the effect of this operation by comparing DynWVC2 with **Causal Search**. **Instances.** We evaluate all algorithms on six real graphs which are most representative and most difficult graphs from different domains. These graphs are downloaded from Network Data Repository [21]. All of them have 100 thousands to millions of vertices, and dozens of millions of edges. These instances become popular in recent works for the MWIS problem. Statistics of these graphs are shown in Table 2. In our experiment, the weight of each vertex in the graph will have two random allocation mechanisms *, which are commonly used in previous work [15, 27, 6, 7]. The first allocation mechanism is that the weight of each vertex in the graph is obtained from \([1,200]\) uniformly at random, we will number the six datasets with \(1-6\). The second allocation mechanism is that the weight of each vertex in the graph follows a random uniform distribution of \([20,100]\), and \(7-12\) will be used to number the six datasets. Footnote *: All datasets obtained through these two random assignment mechanisms can be found at [http://lcs.ios.ac.cn/~caisw/graphs.html](http://lcs.ios.ac.cn/~caisw/graphs.html). ### Impact of CITs on Reduction Algorithm We first analyze the impact of our CITs for the reduction algorithm and evaluate the performance of all reduction algorithms by measuring the running time, the size of the remaining graphs (kernel size), and the ratio of the kernel size to the number of vertices in the original graph (We simply refer to it here as the ratio for convenience.). The experimental results of all algorithms are output in Table 3. 
We can know that all reduction algorithms can significantly simplify the graph, and even reduce the graph to less than \(0.1\%\) of the original size. Besides, we can see that our **Causal Reduce** achieves best reduction effect in all datasets, that is, our **Causal Reduce** results in a much smaller kernel size than other algorithms. Moreover, compared with **Reduce**, **Re-Confin** can achieve better reduction effect in all datasets, while **Re-Cover** has basically no performance improvement. This shows that replacing the step 5 of **Reduce** with **Remove Unconfined & Contract Confining** plays a key role in improving the performance of the reduction algorithm, and combined with **Remove Uncovered & Contract Covering**, the performance of the reduction \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & **inf-road-usa** & **soc-livejournal** & **sc-ldoor** & **tech-as-skitter** & **sc-msdoor** & **inf-roadNet-CA** \\ \hline **Vertices** & 23947347 & 4033137 & 952203 & 1694616 & 415863 & 1957027 \\ \hline **Edges** & 28854312 & 27933062 & 20770807 & 11094209 & 9378650 & 2760388 \\ \hline **NO.** & 1, 7 & 2, 8 & 3, 9 & 4, 10 & 5, 11 & 6, 12 \\ \hline \end{tabular} \end{table} Table 2: All graphs are sorted in descending order regarding the number of edges. In the row headed by “NO.”, each number is used to represent the corresponding number of the dataset generated by the graph according to the corresponding vertex weight allocation mechanism. algorithm will be greatly improved, but only adding **Remove Uncovered & Contract Covering** can hardly improve the performance of the reduction algorithm. More notably, our **Causal Reduce** take less time than other algorithms on half of the datasets. On the rest of the datasets, our **Causal Reduce** only takes a few seconds longer than other algorithms. These phenomena show that integrating our CITs into the reduction algorithm can significantly improve the performance of the algorithm, but the increase in time cost is very small, and they can even reduce the time cost. ### Performance Gain of CITs on the B&R Algorithm We will examine the performance gain of our CITs for B&R algorithm. The running time bound is set as \(1,000\) seconds for all algorithms, and if the algorithm cannot find the optimal solution within the time bound, the best solution found in all search branches is output. We output the numerical results and running times of all algorithms in Table 4. It can be seen from Table 4 that **Solve-CR** and **Solve-CR-IC**, like our **Causal B&R Solver**, can obtain the optimal solution in five data sets, while **Solve-Packing**, like **Solve**, can only obtain the optimal solution in one data set. In addition, on those datasets where the optimal solution cannot be solved within \(1000\) seconds, our **Causal B&R Solver** can basically obtain better numerical solutions than **Solve-CR-IC**, and **Solve-CR-IC** can obtain numerical results that are slightly better than **Solve-CR**, while **Solve-Packing** can generally get better numerical solutions than **Solve**. These results demonstrate that our reduction algorithm, **Causal Reduce**, is critical for the B&R algorithm to obtain optimal solutions on more datasets. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline & \multicolumn{3}{c|}{Reduce} & \multicolumn{3}{c|}{Re-Confin} & \multicolumn{3}{c|}{Re-Cover} & \multicolumn{3}{c|}{Causal Reduce} \\ \hline NO. 
& \(|V|\) & Time(S) & Kernel Size & Ratio(\(\%\)) & Time(S) & Kernel Size & Ratio(\(\%\)) & Time(S) & Kernel Size & Ratio(\(\%\)) & Time(S) & Kernel Size & Ratio(\(\%\)) \\ \hline 1 & 23947347 & \(72.0\) & 431891 & 1.80 & \(71.66\) & 42837 & \(1.7\) & 74.03 & 431888 & \(1.80\) & \(55.88\) & \(275082\) & \(1.15\) \\ \hline 2 & 433317 & \(12.9\) & \(723\) & \(0.18\) & \(16.57\) & 560 & \(0.15\) & 16.20 & 7261 & \(0.18\) & \(17.92\) & 3281 & \(0.08\) \\ \hline 3 & 952203 & \(3.08\) & \(6492\) & \(0.68\) & \(3.89\) & 280 & \(0.29\) & 3.89 & 6447 & \(0.68\) & 3.84 & 1682 & \(0.18\) \\ \hline 4 & 109616 & \(1.78\) & 5904 & \(0.35\) & \(2.32\) & 5613 & \(0.33\) & 2.26 & 5909 & \(0.35\) & 5.03 & 3974 & \(0.23\) \\ \hline 5 & 413863 & \(1.66\) & 6570 & \(1.58\) & 1.84 & 3166 & \(0.76\) & 204 & 6570 & \(1.58\) & \(1.86\) & 2162 & \(0.52\) \\ \hline 6 & 195702 & \(76.14\) & 30540 & \(15.61\) & 6365 & 300135 & \(15.34\) & 76.57 & 385470 & \(15.61\) & 41.12 & 202885 & \(10.37\) \\ \hline 7 & 23947347 & \(11.54\) & 47939 & \(1.83\) & 94.52 & 43066 & \(1.81\) & 113.07 & 48004 & \(1.83\) & 58.02 & 235243 & \(0.98\) \\ \hline 8 & 4031317 & \(11.68\) & 702 & \(0.19\) & 15.90 & 671 & \(0.16\) & 15.55 & 7367 & \(0.19\) & 21.38 & 3806 & \(0.09\) \\ \hline 9 & 952203 & \(8.06\) & 29116 & \(3.66\) & 5.08 & 7966 & \(0.83\) & 8.88 & 29116 & \(3.06\) & 5.06 & 4628 & \(0.49\) \\ \hline 10 & 1049616 & \(1.94\) & 6959 & \(0.41\) & 2.55 & 6203 & \(0.38\) & 253 & 6991 & \(0.41\) & 5.19 & 4429 & \(0.26\) \\ \hline 11 & 415863 & \(4.09\) & 24736 & \(5.95\) & 2.75 & 10088 & \(2.43\) & 4.24 & 24736 & \(5.95\) & 2.66 & 7081 & \(1.70\) \\ \hline 12 & 1957027 & \(19.01\) & 131408 & \(6.72\) & 17.00 & 128637 & \(6.57\) & 19.54 & 131498 & \(6.72\) & 8.64 & 65124 & \(3.33\) \\ \hline \end{tabular} \end{table} Table 3: Impact of CITs for the the reduction algorithm. The **bold** and underlined numbers are the minimum kernel size and shortest running time, respectively. Moreover, both the inferred covering set of the branching vertex and the weighted packing constraints can help B&R algorithm find more promising branches and find better solutions. ### Causal Reduce's Improvement on Heuristic Algorithm Next, we will verify the superiority of our **Causal Reduce** as a preprocessing for improving the heuristic algorithm. Table 5 presents the running time (including preprocessing time) and numerical results. We find that the preprocessed heuristic algorithm with **Causal Reduce** usually stop execution after running for a short time, while the rest of the heuristic algorithms are allowed to run for 1000 seconds. Meanwhile, it can be observed from Table 5 that adding the reduction algorithm as preprocessing is obvious for improving the performance of the heuristic algorithm, and our **Causal Reduce** helps heuristics find better solutions on all instances in less time (essentially within 100 seconds) than **Reduce**. Thus, although our **Causal Reduce** takes no more than 12 seconds longer than **Reduce** on half of the datasets (as can be known from the numerical results in Section 5.1), it can further reduce the size of remaining graph by more than 32.6%, which is critical for subsequent processing of the problem (also be mentioned in Section 5.2), so such processing time cost is worth it! 
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline & \multicolumn{2}{|c|}{**Solve**} & \multicolumn{2}{|c|}{**Solve-CR**} & \multicolumn{2}{|c|}{**Solve-Packing**} & \multicolumn{2}{|c|}{**Solve-CR-IC**} & \multicolumn{2}{|c|}{**Causal B\& Solver**} \\ \hline **NO.** & **Time(S)** & **Result** & **Time(S)** & **Result** & **Time(S)** & **Result** & **Time(S)** & **Result** & **Time(S)** & **Result** \\ \hline **1** & 1000 & 1380579565 & 1000 & 1380810330 & 1000 & 1380579506 & 1000 & 1380980673 & 1000 & 1380980749 \\ \hline **2** & 1000 & 232813323 & 28.5058 & **232828253** & 1000 & 232814520 & 28.8413 & **232828253** & 25.4889 & 232828253 \\ \hline **3** & 3.8108 & **10303506** & 5.3508 & **10303506** & 4.7170 & **10303506** & 4.1507 & **10303506** & 3.9148 & 10303506 \\ \hline **4** & 1000 & 124020452 & 1000 & 124020466 & 1000 & 124020706 & 1000 & 124021474 & 1000 & 124022398 \\ \hline **5** & 1000 & 3904552 & 4.2820 & **3916599** & 1000 & 3904544 & 4.3919 & **3916599** & 4.33729 & 3916599 \\ \hline **6** & 1000 & 100956490 & 1000 & 101259145 & 1000 & 100957090 & 1000 & 101259161 & 1000 & 101288073 \\ \hline **7** & 1000 & 798872105 & 1000 & 799021102 & 1000 & 798911163 & 1000 & **799021209** & 1000 & 799021209 \\ \hline **8** & 1000 & 134613130 & 34.0139 & **134621271** & 1000 & 134613130 & 34.4151 & **134621271** & 32.1868 & 134621271 \\ \hline **9** & 1000 & 7237240 & 6.8162 & **7273973** & 1000 & 7237411 & 6.8372 & **7273973** & 6.9285 & **7273973** \\ \hline **10** & 1000 & 71945454 & 1000 & 71944343 & 1000 & **71946049** & 1000 & 71944343 & 1000 & 71945241 \\ \hline **11** & 1000 & 2707746 & 1000 & **2743962** & 1000 & 2707846 & 1000 & **2743962** & 1000 & **2743962** \\ \hline **12** & 1000 & 61702804 & 1000 & 61818326 & 1000 & 61702794 & 1000 & **61819495** & 1000 & 61818234 \\ \hline \end{tabular} \end{table} Table 4: Performance gain of CITs on B&R algorithm. The **bold** and underlined numbers are the best numerical results of all algorithms and the shortest running time of all algorithms to find the optimal solution, respectively. ### Comparative Experiment on Causal Search On the basis of preprocessing the input graph with **Causal Reduce**, we will compare our **Causal Search** with DynWVC2 to verify the effect of adding CITs to the local search process of DynWVC2 algorithm. The running time for both algorithms (including pre-processing time) is set to 1000 seconds. To avoid randomness, we run each instance 5 times and record the mean and maximum values. Furthermore, in order to estimate the gap between the results obtained by these two algorithms and the MWIS, we need to calculate the upper bound of each instance. The upper bound for the 2nd, 3rd, 5th, 8th, 9th instance is nothing but the weight of the optimal solution obtained by **Causal B&R Solver**, and for the rest of the instances, it is obtained by applying the weighted clique cover method mentioned in Section 4.2 to the remaining graph obtained by **Causal Reduce**. Table 6 outputs the numerical results and the estimated gap. The small gaps there demonstrate that after preprocessing with our **Causal Reduce**, both algorithms can obtain numerical results very close to the optimal solution. In particular, for those instances where the optimal solution is obtained, their gap can basically reach \(10^{-6}\sim 10^{-7}\), and in the remaining instances, the estimated gap can basically reach \(10^{-4}\sim 10^{-2}\). 
Besides, from the mean and maximum values, our **Causal Search** can basically achieve better performance than DynWVC2, thereby implying that our CITs can help local search find better local optima. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{Fast} & \multicolumn{2}{c|}{Re + Fast} & \multicolumn{2}{c|}{Causal Re + Fast} & \multicolumn{2}{c|}{Dyn} & \multicolumn{2}{c|}{Re + Dyn} & \multicolumn{2}{c|}{Causal Re + Dyn} \\ \hline NO. & Time(S) & Result & Time(S) & Result & Time(S) & Result & Time(S) & Result & Time(S) & Result & Time(S) & Result \\ \hline 1 & \(1000\) & \(138864893\) & \(1000\) & \(1381212294\) & \(\underline{250}\) & \(1381215439\) & \(1000\) & \(1310465732\) & \(1000\) & \(1381174854\) & \(100\) & \(1381178355\) \\ \hline 2 & \(1000\) & \(227121250\) & \(1000\) & \(232826881\) & \(\underline{20}\) & \(232827816\) & \(1000\) & \(229760205\) & \(1000\) & \(232827891\) & \(\underline{20}\) & \(232828157\) \\ \hline 3 & \(1000\) & \(1004429\) & \(1000\) & \(10302725\) & \(\underline{5}\) & \(10303465\) & \(1000\) & \(10224463\) & \(1000\) & \(10303168\) & \(\underline{5}\) & \(10303476\) \\ \hline 4 & \(1000\) & \(122468973\) & \(1000\) & \(124025600\) & \(\underline{6}\) & \(124026219\) & \(1000\) & \(12317849\) & \(1000\) & \(124026286\) & \(\underline{6}\) & \(124026433\) \\ \hline 5 & \(1000\) & \(382308\) & \(1000\) & \(3916222\) & \(\underline{3}\) & \(3916534\) & \(1000\) & \(389401\) & \(1000\) & \(3916381\) & \(\underline{3}\) & \(3916568\) \\ \hline 6 & \(1000\) & \(97153884\) & \(1000\) & \(101739247\) & \(\underline{275}\) & \(101745579\) & \(1000\) & \(99122831\) & \(100\) & \(101740261\) & \(\underline{250}\) & \(10174463\) \\ \hline 7 & \(1000\) & \(765381612\) & \(1000\) & \(799847671\) & \(\underline{150}\) & \(799111311\) & \(100\) & \(757212666\) & \(1000\) & \(79058579\) & \(\underline{70}\) & \(799083159\) \\ \hline 8 & \(1000\) & \(131848587\) & \(1000\) & \(134620719\) & \(\underline{25}\) & \(134621064\) & \(1000\) & \(132970128\) & \(100\) & \(134621142\) & \(\underline{25}\) & \(134621249\) \\ \hline 9 & \(1000\) & \(7131770\) & \(1000\) & \(7273626\) & \(\underline{6}\) & \(7273655\) & \(1000\) & \(7252153\) & \(1000\) & \(7273750\) & \(\underline{6}\) & \(7273931\) \\ \hline 10 & \(1000\) & \(70966490\) & \(1000\) & \(71946839\) & \(\underline{6}\) & \(71947488\) & \(1000\) & \(71459210\) & \(1000\) & \(71947516\) & \(\underline{6}\) & \(71947590\) \\ \hline 11 & \(1000\) & \(2680282\) & \(1000\) & \(278925\) & \(\underline{25}\) & \(2748945\) & \(1000\) & \(274368\) & \(1000\) & \(2748982\) & \(\underline{10}\) & \(2749005\) \\ \hline 12 & \(1000\) & \(59619787\) & \(1000\) & \(61850413\) & \(\underline{50}\) & \(61852628\) & \(1000\) & \(60802433\) & \(1000\) & \(61855209\) & \(\underline{75}\) & \(61857313\) \\ \hline \end{tabular} \end{table} Table 5: A comparative experiment of the effect of **Causal Reduce** on improving the heuristic algorithm. The numbers in **bold** and underlined are the corresponding experimental results and running time (including preprocessing time) of each heuristic algorithm using our **Causal Reduce** as preprocessing, respectively. ## 6 Conclusion and Outlook In this paper, we propose a series of causal inference techniques (CITs) for the maximum weight independent set (MWIS) problem by fully exploiting the upper bound property of MWIS. 
After integrating our CITs, the performance of various existing algorithms, including the Branch-and-Reduce (B&R) algorithm and some heuristic algorithms, is significantly improved. We are now conducting theoretical analysis to find some guarantees on solution quality, developing strategies to help the B&R algorithm analyze the causes of conflicts and perform more efficient backtracking searches, and generalizing the proposed CITs to other combinatorial optimization problems. ## Acknowledgements This research was supported by the National Key R&D Program of China (Nos. 2020AAA0105200, 2022YFA1005102) and the National Natural Science Foundation of China (Nos. 12288101, 11822102). SS is partially supported by Beijing Academy of Artificial Intelligence (BAAI). The authors would like to thank Professor Hao Wu for his useful discussions and valuable suggestions. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline & \multicolumn{4}{|c|}{**Dyn**} & \multicolumn{4}{|c|}{**Causal Search**} \\ \hline **NO.** & **Upper Bound** & **Mean** & **Gap** & **Max** & **Gap** & **Mean** & **Gap** & **Max** & **Gap** \\ \hline **1** & 1384376268 & 138146498.6 & \(2.103\times 10^{-3}\) & 1381467306 & \(2.101\times 10^{-3}\) & 1381466(33.6 & \(2.102\times 10^{-3}\) & **1381470750** & \(2.09\times 10^{-3}\) \\ \hline **2** & 232828253 & 23282153.4 & \(4.278\times 10^{-7}\) & 232828171 & \(3.522\times 10^{-7}\) & 232828159.2 & \(4.029\times 10^{-7}\) & **232828188** & \(2.792\times 10^{-7}\) \\ \hline **3** & 10303506 & 10303485.2 & \(2.019\times 10^{-6}\) & 10303491 & \(1.456\times 10^{-6}\) & 10303488.6 & \(1.689\times 10^{-6}\) & **10303494** & \(1.165\times 10^{-6}\) \\ \hline **4** & 124076790 & 124026438.6 & \(4.058\times 10^{-4}\) & **124026451** & \(4.057\times 10^{-4}\) & 124026448.4 & \(4.058\times 10^{-4}\) & 124026449 & \(4.057\times 10^{-4}\) \\ \hline **5** & 3916599 & 3916582.2 & \(4.289\times 10^{-6}\) & 3916583 & \(4.085\times 10^{-6}\) & 3916582.6 & \(4.187\times 10^{-6}\) & **3916584** & \(3.830\times 10^{-6}\) \\ \hline **6** & 103562461 & 101846521.0 & \(1.657\times 10^{-2}\) & 101848242 & \(1.655\times 10^{-2}\) & 1018475243.8 & \(1.656\times 10^{-2}\) & **101849650** & \(1.654\times 10^{-2}\) \\ \hline **7** & 800748442 & 799264479.2 & \(1.853\times 10^{-7}\) & 799265827 & \(1.852\times 10^{-7}\) & 79926678.8 & \(1.851\times 10^{-3}\) & **799267573** & \(1.849\times 10^{-3}\) \\ \hline **8** & 134621271 & 13612255.0 & \(1.189\times 10^{-7}\) & 134621257 & \(1.040\times 10^{-7}\) & 134621256.8 & \(1.055\times 10^{-7}\) & **134621265** & \(4.457\times 10^{-6}\) \\ \hline **9** & 7273973 & 2723939.8 & \(4.564\times 10^{-6}\) & 7273945 & \(3.849\times 10^{-6}\) & 7273936.8 & \(4.977\times 10^{-6}\) & **7273947** & \(3.574\times 10^{-6}\) \\ \hline **10** & 71978922 & 71947636.0 & \(4.347\times 10^{-4}\) & 71947639 & \(4.346\times 10^{-4}\) & 71947636.8 & \(4.346\times 10^{-4}\) & **71947642** & \(4.346\times 10^{-4}\) \\ \hline **11** & 2819343 & 2749008.4 & \(2.495\times 10^{-2}\) & 2749009 & \(2.495\times 10^{-2}\) & 2749010.8 & \(2.495\times 10^{-2}\) & **2749019** & \(2.494\times 10^{-2}\) \\ \hline **12** & 62276875 & 61860510.4 & \(6.686\times 10^{-3}\) & 61860616 & \(6.684\times 10^{-3}\) & 61880549.0 & \(6.685\times 10^{-3}\) & **61860717** & \(6.682\times 10^{-3}\) \\ \hline \end{tabular} \end{table} Table 6: Compare our **Causal Search** with the DynWVC2 algorithm. The **bold** and underlined numbers are better maximum and average values, respectively. 
In the column headed by “**Upper Bound**”, each number is the upper bound of the MWIS of the corresponding instance.
2308.09621
Canonicity and Computability in Homotopy Type Theory
This dissertation gives an overview of Martin-Löf's dependent type theory, focusing on its computational content and addressing the question of whether a fully canonical and computable semantic presentation is possible.
Dmitry Filippov
2023-08-18T15:23:33Z
http://arxiv.org/abs/2308.09621v1
# Canonicity and Computability in Homotopy Type Theory

###### Abstract

This dissertation gives an overview of Martin-Löf's dependent type theory, focusing on its computational content.

## 1 Introduction

Type theories are a powerful mathematical tool that has long been used in Computer Science to mandate the safe application of functions by limiting their scope to only those objects for which their use was intended. In this sense, their use in programming languages is similar to their original purpose of limiting the scope of mathematical constructions in order to prevent a series of paradoxes stemming from self-reference. Recently, interest in mathematical applications of type theory has grown thanks to type theory's intimate connection to constructive mathematics. It has been utilised in proof assistant software and formal proof verification, and serves as a foundation for constructive mathematics in the Univalent Foundations project.

This dissertation is mostly based on the HoTT book [22] and attempts to discuss the computational content of the Univalence axiom and its potential judgemental handling, in particular presenting the result from [14].

## 2 Constructive Mathematics

Constructive mathematics is a different philosophical approach to the subject, which postulates that mathematical objects exist only insofar as we are able to construct and use them, in contrast to classical mathematics, which presupposes an existing mathematical reality that a mathematician merely discovers.

These differences manifest in a different handling of truth. Classical mathematics sees truth as an inherent property of mathematical reality. As such, every proposition is assumed to be either true or false (the Law of Excluded Middle), and proofs by contradiction are allowed. In constructive mathematics, on the other hand, LEM is rejected; it is thought that a proof by contradiction, while showing that a proposition cannot be disproved, does not actually provide evidence for said proposition. This is most apparent in the case of existential statements: a proof of such a statement by contradiction does not give a way to construct a witness for it, and since constructivism equates the existence of a mathematical structure with the possibility of its construction, such an argument does not prove existence. 
### Constructive Logic

Constructive logic can be presented as a Hilbert-style calculus, similar to classical logic, but with LEM removed:

**Definition 1** (Constructive first order logic).: Axioms:

* \(\to 1\): \(\phi\rightarrow\chi\rightarrow\phi\)
* \(\to 2\): \((\phi\rightarrow\chi\rightarrow\psi)\rightarrow((\phi\rightarrow\chi)\rightarrow\phi\rightarrow\psi)\)
* \(\wedge 1\): \((\phi\wedge\chi)\rightarrow\phi\)
* \(\wedge 2\): \((\phi\wedge\chi)\rightarrow\chi\)
* \(\wedge 3\): \(\phi\rightarrow\chi\rightarrow(\phi\wedge\chi)\)
* \(\vee 1\): \(\phi\rightarrow(\phi\vee\chi)\)
* \(\vee 2\): \(\chi\rightarrow(\phi\vee\chi)\)
* \(\vee 3\): \((\phi\rightarrow\psi)\rightarrow(\chi\rightarrow\psi)\rightarrow(\phi\vee\chi)\rightarrow\psi\)
* \(\bot 1\): \(\bot\rightarrow\phi\)

The single rule of inference is

* Modus Ponens: from \(\phi\) and \(\phi\rightarrow\psi\) deduce \(\psi\)

This can be extended to a predicate logic with the following axioms:

* \(\forall 1\): \((\forall x\phi(x))\rightarrow\phi(t)\)
* \(\exists 1\): \(\phi(t)\rightarrow\exists x\phi(x)\)

and rules of inference:

* from \(\psi\rightarrow\phi\) deduce \(\psi\rightarrow\forall x\phi\) if \(x\notin FV(\psi)\)
* from \(\psi\rightarrow\phi\) deduce \((\exists x\psi)\rightarrow\phi\) if \(x\notin FV(\phi)\)

A more natural presentation of constructive logic, however, is given in the style of natural deduction [24], with each propositional connective given introduction and elimination rules. The reason such a presentation is more natural is the connection between constructive logic and Type Theory, which we explore below. The Hilbert-style presentation of constructive logic can also be seen as having introduction (\(\lor 1\), \(\lor 2\)) and elimination (\(\lor 3\)) rules. From this perspective, implication is being used as a function symbol, "eliminating" the inputs to "introduce" the outputs. This is exactly why the axioms for implication (\(\to 1\), \(\to 2\)) have the same shape as combinators in \(\lambda\)-calculus.

### Type Theory

Type Theory is a framework for studying mathematics in which every object is given a type that governs which properties we expect the object to have and which operations we expect to be able to apply to it, similar to a type in a programming language. There are two flavours of Type Theories - extrinsic and intrinsic. In an extrinsic type theory, one sees typing as merely an assignment of an object to a type; such a theory sees the identity function \(\lambda x.x\) as the same function whatever type it is applied at, be it an integer or a boolean, with a type attributed to it only at the moment of application, as part of a type derivation of an expression. In an intrinsic Type Theory, \(\lambda x.x:Bool\to Bool\) and \(\lambda x.x:Int\to Int\) are two different functions, assigned their types on construction. In this dissertation we are concerned with intrinsic Type Theories. In intrinsic type theory, every object is constructed as an element of a particular type using the **constructors** of that type, and the ways it can be used are given by the **eliminators** of that type. Thanks to this explicitly computational behaviour, intrinsic type theories are easier to implement on a computer: instead of functions being defined as infinite relations, as they are in Set Theory, functions are defined as procedures using the eliminators of a type, more akin to how one would define a program. 
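As an illustration (using Lean purely as an example of an intrinsically typed language; the dissertation itself does not commit to any particular proof assistant), the two identity functions below are distinct terms, and the combinator-shaped term is precisely a proof of the axiom \(\to 1\):

```lean
-- In an intrinsic type theory these are two different terms, even though
-- they "compute" the same way: each carries its type from construction.
def idBool : Bool → Bool := fun x => x
def idNat  : Nat → Nat   := fun x => x

-- Under propositions-as-types, a proof of the axiom  φ → χ → φ  is just a
-- function that returns its first argument and ignores the second.
def axiomArrow1 {φ χ : Prop} : φ → χ → φ := fun a _ => a
```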
\(\lambda\)-calculus is the prototypical example of a type theory, encoding general computable functions as a substitution calculus with function abstraction and application as the basic constructor and eliminator.

### Curry-Howard Correspondence

Since constructive mathematics treats truth differently, there are different semantics for what it means for a proposition to be true; the truth of a proposition is equated with its proof, and a proof of a proposition is defined by the Brouwer-Heyting-Kolmogorov (BHK) interpretation:

* A proof of \(P\wedge Q\) is an ordered pair \(\langle a,b\rangle\) consisting of a proof a of P and a proof b of Q
* A proof of \(P\lor Q\) is either \(\langle 0,a\rangle\) where a is a proof of P or \(\langle 1,b\rangle\) where b is a proof of Q
* A proof of \(P\to Q\) is a function or a procedure f that converts a proof of P into a proof of Q
* A proof of \((\exists x\in S)P\) is a pair \(\langle x,a\rangle\) where x is an element of S and a is a proof of P using x
* A proof of \((\forall x\in S)P\) is a function F that converts an element x of S into a proof of P
* \(\bot\) is the absurdity, the proposition with no proofs
* The formula \(\neg P\) is defined as \(P\rightarrow\bot\)

With this, it is more apparent how constructive logic is related to Type Theory - we can treat each proposition as the type of its proofs, where ways of constructing a proof of a proposition given proofs of other propositions are seen as type constructors. The truth of a proposition then corresponds to the inhabitability of the corresponding type. This correspondence was first noted by Curry and Howard and is hence called the Curry-Howard correspondence [25]. It shows that the rules for construction and elimination of terms in Type Theories correspond exactly to a natural deduction presentation of constructive logics, where the BHK interpretation of proofs of propositions is expressed as membership of the particular type corresponding to a proposition.

Examples of correspondences between type theories and constructive logics:

* the implicative part of propositional logic is represented by \(\lambda\)-calculus
* proving proposition \(\alpha\) from assumptions \(\Gamma\) (\(\Gamma\vdash\alpha\)) is the same as writing a program that, given objects of types \(\Gamma\), constructs an object of type \(\alpha\)
* logical axioms correspond to rules for introducing a new variable with an unconstrained type
* the \(\rightarrow\)-introduction rule corresponds to \(\lambda\)-abstraction
* the \(\rightarrow\)-elimination rule (Modus Ponens) corresponds to function application
* provability corresponds to type inhabitability
* a constructive tautology corresponds to an inhabited type

Moreover, Type Theory can be used as a semantic domain for constructive mathematics; the way Type Theories are defined as a prototypical programming language lends itself nicely to this purpose. As such, Type Theory can serve as both the deductive system and the semantic model for constructive mathematics, as opposed to classical mathematics, which needs separate logical and semantic models (first order logic and set theory respectively).

### Dependent Types and Predicates

Regular \(\lambda\)-calculus corresponds to the implicative part of constructive logic, defining the behaviour of implication as a function. Other propositional connectives can be defined as simple types, largely imitating the introduction and elimination rules from Natural Deduction. 
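For illustration, here is a small Lean sketch of the propositional connectives read as types; Lean's built-in And and Or are used here only as stand-ins for the product and coproduct types discussed later:

```lean
-- Conjunction as pairing and disjunction as a tagged union, following the
-- BHK reading of proofs.
example {P Q : Prop} (p : P) (q : Q) : P ∧ Q := ⟨p, q⟩        -- ∧-introduction
example {P Q : Prop} (h : P ∧ Q) : P := h.left                 -- ∧-elimination
example {P Q : Prop} (p : P) : P ∨ Q := Or.inl p               -- ∨-introduction
example {P Q R : Prop} (f : P → R) (g : Q → R) (h : P ∨ Q) : R :=
  Or.elim h f g                                                -- ∨-elimination
```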
In order to talk about predicates, however, we need a radically different sort of construction. Predicates in logic can be considered as a map from a collection of objects to a collection of propositions. In constructive logic, a predicate maps a type (seen as a collection of objects) to a collection of types (seen as propositions). This corresponds to the idea of dependent types - a type family indexed by the elements of some other type. For example, a family of types corresponding to cyclic groups of a given order:

\[n:\mathbb{N}\vdash Cyclic(n)\ \mathrm{Type}\]

Usually, such type families are made by a type construction with a free variable of the index type, just like predicates in logic are often formulae with a free variable. Logical quantifiers in predicate logic bind these free variables by "quantifying" over them. In Type Theory, these quantifiers correspond to dependent types which take a type family indexed over a type and return a type dependent on one less variable. Under the BHK interpretation, existential quantification is done by the dependent pair and universal quantification is done by the dependent function.

### Equality

Since we interpret every proposition as a type, propositional equality also needs to be a type. This is a distinctive feature of Type Theory that we will talk more about in the next section.

## 3 Martin-Löf Type Theory

Martin-Löf's theory of types is a more complex type theory than \(\lambda\)-calculus. It incorporates both dependent types and identity types, thus completely expressing first order constructive logic.

### Intensionality

One big deviation of Type Theory from Set Theory is intensionality. Set Theory is extensional in the sense that if two objects (sets) contain the same elements, then they are identical. This principle does not hold in Type Theory; in Type Theory every type and every element of a type has an intent - a specific token it is associated with by construction. One way of illustrating this is to note that the collection of even divisors of 4 and the collection of even numbers smaller than 5 are the same set in Set Theory, but are different types in Type Theory, since they are constructed differently, with a different intent, and possess different computational content. This separation is particularly useful for polymorphic constructions - a function from a type of even divisors of a number can use the computational content present in elements of that type which would be absent from the second type. This is reminiscent of the way datatypes are treated in programming. Two datatypes that carry the same information might nevertheless be different and might have different overhead structure implemented on them; as such, a function that accepts one of these datatypes as an input might nevertheless reject the other.

The intensionality of Type Theory extends to the elements as well. Martin-Löf Type Theory is intrinsic; that is, a type is assigned to a token on creation and any single token can only belong to a single type. This is akin to strongly typed languages, where a given variable can only have one type, which cannot change. This, again, ensures that constructions such as functions are only applied to the tokens they are designed for. To conclude, the way Set Theory is designed is more akin to early assembly languages, where all data is treated as a sequence of 0s and 1s and can be interpreted as an encoding of any datatype post-factum. 
Intensional Type Theory, on the other hand, is more akin to strongly-typed languages such as Java, where any variable is cast as a given type and its typing cannot change.

### Judgements

All constructions in type theory are executed as judgements - rules that, given that their preconditions are met, allow us to say something about the theory. The judgements are considered to be "external" statements in a meta-theory, and as such they cannot interact with the theory itself. One way to think about this is to think of the theory as a game and the judgements as valid rules; you can get to many constructions starting from nothing, but are only ever allowed to make legal moves.

There are four types of basic judgements in MLTT:

* Type judgements, asserting that A is a type: \(\vdash A\;Type\)
* (Definitional) equality of types, asserting that A and B are equal types: \(\vdash A\equiv B\)
* Typing judgements, asserting that a is a term of type A: \(\vdash a:A\)
* (Definitional) equality of terms, asserting that a and b are equal terms of type A: \(\vdash(a\equiv b):A\)

Sometimes two further judgements are added:

* Context judgements, asserting that \(\Gamma\) is a context: \(\vdash\Gamma\;ctx\)
* Substitution judgements, asserting that \(\theta\) is a substitution from \(\Gamma\) to \(\Delta\): \(\Gamma\vdash\theta:\Delta\)

Since judgements are handled as simple statements with preconditions and deterministic effects, judgemental derivations are decidable. In particular, judgemental equality and type derivations are decidable.

All judgements in type theory are made in a context, usually denoted by capital Greek letters \((\Gamma,\Delta)\). A context is a collection of previously made valid judgements and represents the preconditions of a given judgement. We write \(\Gamma\vdash J\) for a judgement J in a context \(\Gamma\).

### Types

To define a type one needs to give four kinds of judgements for that type:

* a way to construct the type
* a way to construct canonical elements of the type
* a way to "use up" the canonical elements of the type
* a way in which the prior two kinds of judgements interact

Eliminators and computation rules are often bundled together into **recursors** (often called non-dependent eliminators) and **inductors** (often called dependent eliminators), which are procedures (functions) that give a generic way of constructing any function out of the type.

#### 3.3.1 Basic Types

Falsity, or bottom \(\bot\), is a type with no introduction rules, which by extension has no canonical elements. The recursion principle for this type is that a function with bottom as the domain can be defined into any type whatsoever: there are 0 cases to consider, as there are no canonical elements of it. Falsity corresponds to the prototypical false proposition, an uninhabited type.

The unit type \(\mathbb{1}\) is a type with precisely one introduction rule - \(*\) is an element of \(\mathbb{1}\). Consequently, the recursion principle for \(\mathbb{1}\) is that to make a function with \(\mathbb{1}\) as a domain you only need to provide an element to which \(*\) is mapped. The unit type corresponds to the prototypical true proposition, with the only canonical proof of it being \(*\).

Universes are "large" types of "small" types, which are an analogue of Grothendieck universes for type theory. One usually assumes an infinite hierarchy of universes \(U_{0}:U_{1},U_{1}:U_{2},U_{2}:U_{3},\ldots\) with every higher universe containing all the types in the previous universes. 
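A small Lean sketch of these basic types (again, Lean is only an illustration of the ideas, not the system studied here): functions out of the empty and unit types follow the recursion principles just described, and each universe is an element of the next one.

```lean
-- A function out of Empty requires zero cases; a function out of Unit
-- requires one chosen value.
def fromEmpty (A : Type) : Empty → A := fun e => nomatch e
def fromUnit  {A : Type} (a : A) : Unit → A := fun _ => a

-- Each universe is itself an element of the next universe.
#check (Bool : Type)      -- Bool : Type
#check (Type : Type 1)    -- Type : Type 1
#check (Type 1 : Type 2)  -- Type 1 : Type 2

-- A type family, i.e. a function from an index type into a universe:
-- Fin n is the type with exactly n canonical elements.
def family : Nat → Type := fun n => Fin n
```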
With universe types, type families can be thought of as functions \(A:X\to\mathcal{U}\), and universes make some mathematical constructions, such as a type of groups, possible.

#### 3.3.2 Basic Type Constructors

The type constructors of type theory are ways of constructing new types from old ones. Under the lens of propositions-as-types, these correspond to the binary logical connectives.

**Product Type** Conjunction is represented as the Product Type (\(A\times B\)). A product type consists of ordered pairs of elements, one from A and another from B. As such, the single type constructor for \(A\times B\) is the pair \(a:A,b:B\vdash(a,b):A\times B\).

An elimination rule for the product type is that given a canonical element \((a,b)\) of \(A\times B\), we can either return a or return b; these correspond to the two projection functions: \(p_{1}((a,b))=a\) and \(p_{2}((a,b))=b\).

The recursion principle for product types states that a generic function from \(A\times B\) may use both arguments, and so the recursor transforms a function of type \(A\to B\to C\) into a function of type \((A\times B)\to C\). This is the un-currying operator.

**Coproduct Type** Disjunction is represented as the Coproduct Type \((A+B)\), otherwise known as the disjoint union. The coproduct type consists of all the elements of A and B together with an indicator of which disjunct each element comes from. It is usually represented as ordered pairs, with the first element either 0 or 1 depending on the type of the second element. There are two type constructors for coproducts:

* inl takes an element a of A and makes it into an element (0, a) of \((A+B)\)
* inr takes an element b of B and makes it into an element (1, b) of \((A+B)\)

The recursor for the Coproduct Type states that to construct a function from \((A+B)\) into C, one needs to know what to do with elements of A and what to do with elements of B. As such, the recursor takes in two functions of types \(A\to C\) and \(B\to C\) and returns a function of type \((A+B)\to C\).

**Function Type** Function types in MLTT are similar to the ones in \(\lambda\)-calculus. There is one way to define a canonical element of a function type - \(\lambda\)-abstraction, which, given an expression \(\sigma\) of type A with a free variable x of type B (which could also be thought of as a family of terms of A indexed over elements of B), gives a function \(\lambda x.\sigma\). The application of the \(\lambda\)-abstraction type constructor binds the free variable from the expression it is applied to, eliminating it from the context. The elimination rule for function types is function application - given a function \(\phi:A\to B\) and a term \(a:A\), we can apply \(\phi\) to get \(\phi(a):B\). The computation rule says that \((\lambda x.\sigma)(a)\equiv\sigma[a/x]\).

The function type has a uniqueness principle - \(\eta\)-equality \(f\equiv\lambda x.f(x)\). This judgemental equality cannot be derived from the rest of type theory, just like in \(\lambda\)-calculus, and so is an optional addition. It does, however, make life a lot easier and so we adopt it in this paper. Note that \(\eta\)-equality is a judgemental principle and should not be confused with function extensionality, which concerns propositional equality of functions and which becomes provable in the presence of the univalence axiom.

Note: Negation in constructive systems is represented as the proposition implying absurdity, as opposed to having its own unary logical connective. Hence, the type of "not A" is represented as \(A\rightarrow\bot\). A short sketch of these recursors in a proof assistant is given below.
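As a quick illustration, the product and coproduct recursors described above can be written in Lean 4 as follows. This is only a sketch: `prodRec`, `sumRec`, and `Neg` are ad hoc names, and Lean's own `Prod.rec` and `Sum.rec` eliminators play the same role.

```lean
-- Product recursor: un-currying a two-argument function.
def prodRec {A B C : Type} (g : A → B → C) : A × B → C
  | (a, b) => g a b

-- Coproduct recursor: case analysis on the two injections inl and inr.
def sumRec {A B C : Type} (f : A → C) (g : B → C) : Sum A B → C
  | Sum.inl a => f a
  | Sum.inr b => g b

-- Negation as "implies bottom": the type of "not A".
def Neg (A : Type) : Type := A → Empty

#eval prodRec (fun (a b : Nat) => a + b) (2, 3)                   -- 5
#eval sumRec (fun (n : Nat) => n) (fun (_ : Unit) => 0) (.inl 7)  -- 7
```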
#### 3.3.3 Dependent Types

As mentioned at the end of Section 2, in order for our Type Theory to be able to handle predicates, we need to include dependent types. In MLTT those are dependent products and dependent functions, corresponding to the quantifiers of logic.

**Dependent Function Type** Dependent functions are a more general kind of function where the type of the codomain varies with the element of the domain. Given a type A and a family of types \(B:A\rightarrow\mathcal{U}\), we can construct the type of dependent functions \(\Pi_{x:A}B(x)\). As per the BHK interpretation, such dependent functions correspond to universal quantification.

The constructor for this type is the same as in the non-dependent function case, except that the expression to which \(\lambda\)-abstraction is applied need not have the same type for every value of the free variable. The same goes for the eliminator, which is function application, and for the computation rule.

One important feature of dependent functions is that they allow us to define polymorphic functions. To do so, the dependent function takes a type (as an element of the universe) as its first input and produces a function with that type as its domain. An example of such a polymorphic function is the generic identity: \(\lambda(A:\mathcal{U}).\lambda(x:A).x\), which takes a type as an argument and returns the identity function for that type.

**Dependent Product Type** As per the BHK interpretation, existential quantifiers are represented by the Dependent Products \(\Sigma_{x:A}B(x)\): the types of pairs where the second element of a pair can have a varying type depending on the first element. Similar to the regular products, dependent products have a single element constructor:

\[a:A,b:B(a)\vdash(a,b):\Sigma_{x:A}B(x)\]

The recursor for the dependent product is the un-currying operator for dependent functions. Formally, the type of the recursor is

\[rec_{\Sigma_{x:A}B(x)}:\Pi_{C:\mathcal{U}}\big{(}\Pi_{x:A}(B(x)\to C)\big{)}\to\big{(}\Sigma_{x:A}B(x)\big{)}\to C\]

where C is the target type, g is a curried function, and (a, b) is the element to which the resulting function is applied. The defining equation is \(rec_{\Sigma_{x:A}B(x)}(C,g,(a,b))\coloneqq g(a)(b)\).

### Identity type

The identity type is the defining feature of intuitionistic type theory; as such, it warrants its own section. It may seem that the constructions above comprehensively cover all the logical connectives of first-order logic, and as such are sufficient for representing predicate logic in its entirety. What this assessment misses, however, is that first-order logic has another, often overlooked, primitive symbol - propositional equality. None of the constructions above are able to handle general propositional equality. Judgemental equality, which equates terms with the same canonical representation, is too weak a notion, at least because it must by definition be decidable. It is also only applicable to closed terms in an empty context - there is no way, for example, to show that n + 2 judgementally equals 2 + n for a variable n.

To handle those more complex equalities between terms we must introduce a new predicate - an identity type. We want a type family that, given any two elements a and b of type A, represents the type of proofs that a = b. We call this type Eq(A, a, b) or alternatively \(a=_{A}b\) or \(Id_{A}(a,b)\). It makes sense to define equality to be the smallest reflexive relation on a type. As such, the only type constructor for the identity type family is reflexivity:

\[\text{refl(a) : }a=_{A}a\]

Symmetry and transitivity of equality can then be proven as theorems - we can construct functions of the following types, as we will demonstrate later:
\[sym:(x=_{A}y)\rightarrow(y=_{A}x)\]
\[tran:(x=_{A}y)\rightarrow(y=_{A}z)\rightarrow(x=_{A}z)\]

#### 3.4.1 Homotopy Interpretation

One distinctive feature of MLTT is that, unlike in classical mathematical theories, propositional equalities in MLTT possess computational content; they are not merely an equivalence relation on terms, but a type whose terms possess computational meaning defined in the type's recursor. This means that in general the type of equalities has many inhabitants, a notion wholly dissimilar to the classical conception of equality. To give a semantic explanation of this, one needs to think of elements of an identity type as ways of identifying the two elements rather than mere propositions of their equality.

To demonstrate this, let us consider the type \(a+(b+c)=_{\mathbb{N}}(a+b)+c\), representing the associativity of addition on the natural numbers. One can obtain an inhabitant of such a type by unwrapping the inductive definition of addition and doing an induction on c. Alternatively, one can use commutativity to demonstrate that a+(b+c) = (c+b)+a and (a+b)+c = c+(b+a) and do induction on a instead. These two proofs cannot be judgementally equated and as such they are different elements of the type, with different computational content. These two proofs, however, are in essence "the same proof", except that one goes about it in a more roundabout way. This notion can be formalised as an equality type between the two proofs. In fact, in the general case we may have an infinite hierarchy of equality types, each layer proving the equality of terms in the previous layer. Moreover, since propositional equality is a type, it makes sense to define identity types on identity types, creating an infinite hierarchy of identifications of different levels.

This hierarchy of equalities is usually interpreted as a homotopy type (alternatively, an \(\infty\)-groupoid), with types corresponding to homotopy spaces, elements of a type to points in spaces, propositional equalities to paths between points, and higher-order equalities to homotopies. This interpretation is especially fitting since, as we will see in the next subsection, one cannot in general propositionally distinguish between equivalent elements, and hence in particular cannot distinguish between two types with the same homotopy structure. This interpretation gives rise to an extensive cross-pollination between the subjects of type theory and homotopy theory, with type theory being a more natural language to express many homotopical theorems than set theory is, and with homotopy theory suggesting many extensions that can be applied to type theory, such as the univalence axiom.

#### 3.4.2 Path Induction

On one hand it makes sense that we should only equate terms which are the same, but on the other hand it is unclear how this is not a trivial relation and how it is different from judgemental equality. What must be noted, however, is that we do not define these propositional equalities in isolation; we define the entire family of identity types, dependent on elements of a given type, at the same time. This allows the identity type to pick up the structure in the underlying type and lift it to the level of equality. The recursion principle for the identity type is called the J-rule or Path Induction, and it states that in order to prove something for all elements of \(Id_{A}(a,b)\), where a and b may vary, it is enough to prove it for \(refl_{a}:Id_{A}(a,a)\) for all a. A small sketch of the identity type, and of proofs constructed in this style in a proof assistant, is given below.
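The sketch below renders the identity type in Lean 4 as an inductive family whose only constructor is refl; the derived symmetry and transitivity functions show the pattern-matching style of proof that path induction justifies. The names `MyId`, `mySymm`, and `myTrans` are illustrative only - Lean's built-in `Eq` is the same construction.

```lean
-- The identity type as an inductive family: refl is its only constructor.
inductive MyId {A : Type} : A → A → Type where
  | refl (a : A) : MyId a a

-- Symmetry, obtained by matching on refl
-- (the J-rule / path induction in pattern-matching clothing).
def mySymm {A : Type} {x y : A} : MyId x y → MyId y x
  | MyId.refl a => MyId.refl a

-- Transitivity, by matching on the first proof.
def myTrans {A : Type} {x y z : A} : MyId x y → MyId y z → MyId x z
  | MyId.refl _, q => q
```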
#### 3.4.3 Based Path Induction

An alternative and equivalent form of the identity induction principle is Based Path Induction, which is the same as regular Path Induction except that one end of the path is fixed. We will primarily be using this form of path induction, as it requires recursion through one fewer variable. The proof of equivalence of the two can be found in the HoTT book [22].

Formally, given a family of based paths

\[E_{a}=\Sigma_{x:A}(a=_{A}x)\]

in order to define a function on this type family one need only define the function on \((a,refl_{a})\). So, given an element

\[c:C(a,refl_{a})\]

we can obtain a function

\[f:\Pi_{(x:A)}\Pi_{(p:a={}_{A}x)}C(x,p)\]

s.t. it produces the desired result on the base case:

\[f(a,refl_{a})\coloneqq c\]

This induction principle, when packaged as an inductor function, looks like so:

\[ind_{=_{A}}:\Pi_{(a:A)}\Pi_{C:\Pi_{(x:A)}(a={}_{A}x)\rightarrow\mathcal{U}}C(a,refl_{a})\rightarrow\Pi_{(x:A)}\Pi_{(p:a={}_{A}x)}C(x,p)\]

with defining equality

\[ind_{=_{A}}(a,C,c,a,refl_{a})=c\]

#### 3.4.4 Properties of Identity

Equipped with path induction, we are now prepared to prove the fundamental properties of the Identity type.

**Lemma 2**.: _Propositional equality is an equivalence relation._

Proof.: Reflexivity is immediate by the constructor of the identity type. For symmetry, we need to construct a function of the type \(sym:(x=y)\rightarrow(y=x)\) for every type A and all tokens x and y. We do this by pattern matching - to construct such a function, by path induction, we only need to consider the special case where y is x and the proof is \(refl_{x}\), which we send to \(refl_{x}:(x=x)\), producing a function of the needed type. Transitivity is done in the same way: to construct a function \(tran:(x=y)\rightarrow(y=z)\rightarrow(x=z)\), we apply path induction twice. We need to construct such a function for all \(x,y,z:A\) and all \(p:x=y\) and \(q:y=z\). By path induction on p we may assume y is x and p is \(refl_{x}\); then, by induction on q, we may assume z is x and q is \(refl_{x}\). Then we may define the function by \(tran(refl_{x},refl_{x})=refl_{x}\), finishing the proof.

The key to understanding the Identity type and the role it plays in HoTT is to see how it interacts with other type-theoretic constructions. Informally, Identity types represent the indistinguishables of Type Theory; all constructs possible in the language of type theory respect the indistinguishability of elements for which the identity type is inhabited, in the sense that it is impossible to prove a proposition involving one of the identical elements but not the other. We will later make this notion precise using transports and the Univalence axiom.

**Uniqueness principle** Every type in MLTT has a uniqueness principle, stating that every element of that type is equivalent to some canonical element of that type.
An example of a uniqueness principle, for the coproduct type, is:

**Lemma 3**.: _For every element \(c:A+B\), there exists either an element of type \(c=_{A+B}inl(a)\) for some \(a:A\) or an element of type \(c=_{A+B}inr(b)\) for some \(b:B\):_

\[\Pi_{c:A+B}((\Sigma_{a:A}c=_{A+B}inl(a))+(\Sigma_{b:B}c=_{A+B}inr(b)))\]

Proof.: We use the induction (dependent elimination) principle for coproduct types, defining

\[uniq_{A+B}(inl(a))=refl_{inl(a)}\]

and

\[uniq_{A+B}(inr(b))=refl_{inr(b)}\]

Then, for every element c of \(A+B\) we have

\[uniq_{A+B}(c):(\Sigma_{a:A}c=_{A+B}inl(a))+(\Sigma_{b:B}c=_{A+B}inr(b))\]

This principle, when viewed through the lens of the homotopy interpretation, means that every type has a "skeleton" generated by its constructors, and the entire type is homotopically equivalent to this skeleton; furthermore, it states that it in general makes sense to work "up to homotopy".

Similarly, we have a uniqueness principle for identities, which is a little tricky to state. In order to state the uniqueness principle for identities we want to show that every element of \(a=_{A}b\) is of the form \(refl_{a}:a=_{A}a\), but the two above types are different. To be able to state the uniqueness principle we need to be working in the larger type \(E_{a}\coloneqq\Sigma_{x:A}(a=_{A}x)\) of equalities with a. Then we have:

**Lemma 4**.: _Let \(a:A\) and \(E_{a}\coloneqq\Sigma_{x:A}(a=_{A}x)\). Then there is an element_

\[uniq:\Pi_{(x,p):E_{a}}(x,p)=_{E_{a}}(a,refl_{a})\]

Proof.: A proof of this lemma is outside the scope of this dissertation as it requires a bit of Category Theory. The proof can be found as Lemma 2.3.2 in [22].

**Functoriality and substitution of equals for equals** Here we make the notion of "indistinguishability of identicals", or the "Leibniz principle", more precise; in particular, we will show that functions respect propositional equality and that predicates cannot differentiate between propositionally equal elements.

**Lemma 5** (Functoriality of identity [22]).: _Let \(f:A\to B\) be a function. Then there is a family of functions_

\[ap_{f}:\Pi_{x,y:A}(x=_{A}y)\rightarrow(f(x)=_{B}f(y))\]

_s.t. \(ap_{f}(refl_{x})=refl_{f(x)}\). That is, for all identical elements x, y their images under f are also identical._

Proof.: Let \(P:\Pi_{x,y:A}(x=_{A}y)\to U\) be given by \(P(x,y,p)\coloneqq(f(x)=_{B}f(y))\). We can construct an element of \(\Pi_{x:A}P(x,x,refl_{x})\) as \(\lambda x.refl_{f(x)}\). Then by path induction there exists

\[ap_{f}:\Pi_{x,y:A}(x=_{A}y)\rightarrow(f(x)=_{B}f(y))\]

as required, and \(ap_{f}(refl_{x})=refl_{f(x)}\) by the computation rule.

Viewed through the lens of the homotopy interpretation, we can read this result as saying that all definable functions are continuous.

**Lemma 6** (Transports [22]).: _Given a type family \(P:A\to U\) (which can be viewed as a predicate on A), whenever \(x,y:A\) are identical, that is there is an element \(p:x=_{A}y\), there exists a function_

\[p_{*}:P(x)\to P(y)\]

We can read this lemma as "whenever two elements are related by an identity type, they are propositionally indistinguishable, in the sense that if a predicate is true for one of them, it is also true for the other".

Proof.: By path induction it suffices to show the result for \(y\coloneqq x\) and \(p\coloneqq refl_{x}\). Then we can take

\[p_{*}=\lambda(a:P(x)).a:P(x)\to P(x)\]

The above results show that the Identity Type as defined by us captures indiscernibility, in the sense that the language of MLTT treats elements related by the identity type as indiscernible. A compact rendering of \(ap_{f}\) and of transport in a proof assistant is sketched below.
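The two lemmas above have direct analogues in Lean 4. The sketch below is purely illustrative and uses Lean's built-in equality and `rfl`; `myAp` and `myTransport` are ad hoc names (Lean's `congrArg` and `▸` provide the same operations), and each is defined by matching on `rfl`, which is exactly an appeal to path induction.

```lean
-- Functoriality of identity (Lemma 5): functions send identifications to identifications.
def myAp {A B : Type} (f : A → B) {x y : A} : x = y → f x = f y
  | rfl => rfl

-- Transport (Lemma 6): a predicate true of x is true of anything identified with x.
def myTransport {A : Type} (P : A → Type) {x y : A} : x = y → P x → P y
  | rfl, px => px
```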
In fact, the definition of Path Induction is designed precisely to capture this indiscernibility:

**Theorem 7**.: _In the language of MLTT without path induction, path induction is equivalent to the combination of indiscernibility of identicals (existence of transports) and the uniqueness principle for identity types._

Proof.: We showed that path induction implies the other two in the lemmas above. Now, let us assume the existence of transports and the uniqueness principle for identity types. As before, let \(E_{a}=\Sigma_{x:A}(a=_{A}x)\). Let P be any predicate on \(E_{a}\) and \((b,p)\) be any element of \(E_{a}\). By the existence of transports, there is a map

\[f:((b,p)=_{E_{a}}(a,refl_{a}))\to P((a,refl_{a}))\to P((b,p))\]

By the uniqueness principle we get that \((a,refl_{a})=_{E_{a}}(b,p)\) is inhabited. By symmetry of identity, so is \((b,p)=_{E_{a}}(a,refl_{a})\). Let the inhabitant of that type be q. Then

\[f(q):P((a,refl_{a}))\to P((b,p))\]

as desired. Thus, if we want to show that a predicate P holds for every \((b,p):E_{a}\), we only need to show that P holds for \((a,refl_{a})\), giving us the statement of based path induction.

### Inductive types

In order to define more complicated types, such as the Natural Numbers, one can employ inductive definitions. A generic inductive type is "freely generated" by its generators, some of which can take elements of that same type as inputs. A generic element of such a type, then, is a tree-like structure of generators, with leaves representing the constructors that do not take elements of the type itself as inputs and all the other nodes representing ones that do.

The easiest example of such an inductive type is the Natural Numbers (\(\mathbb{N}\)), generated by 0 (a constant constructor) and succ (a function from \(\mathbb{N}\) to \(\mathbb{N}\)). A generic member of \(\mathbb{N}\) is then a tree consisting of any number of succ and 0 nodes; in this particular case, since the arity of succ is 1, all such trees are non-branching.

The recursion principle for a generic inductive type defines a function on that type by specifying what the function must do on the generators; this suffices since the generators of an inductive type act as a "spanning set", akin to a basis in a vector space, in that every (canonical) element of the type can be written as a successive application of generators. Such recursors act similarly to fold functions from functional programming, taking in a tree-like element of an inductive type and replacing every generator in its construction by a function of the required arity. The recursor for lists, for example, in order to generate a function \(List[A]\to B\), replaces every cons generator with a function \(g:B\to A\to B\) and the empty-list constructor with an element b of B. For instance, the list \([1,2,3,4]\), i.e. the tree \(cons(1,cons(2,cons(3,cons(4,nil))))\), is sent to \(g(g(g(g(b,4),3),2),1)\).

All the type constructors mentioned above (except function types and identity types) are specific examples of general inductive types in which no constructor takes elements of the type itself as an input. A Lean rendering of the natural-number and list recursors is sketched below.
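A Lean 4 sketch of these ideas follows; `MyNat`, `myNatRec`, and `listFold` are illustrative names (Lean generates the corresponding `rec` eliminators automatically for every inductive declaration).

```lean
-- Natural numbers, freely generated by zero and succ.
inductive MyNat : Type where
  | zero : MyNat
  | succ : MyNat → MyNat

-- The recursor: a map out of MyNat is determined by its value on the two generators.
def myNatRec {C : Type} (z : C) (s : C → C) : MyNat → C
  | MyNat.zero   => z
  | MyNat.succ n => s (myNatRec z s n)

-- The list recursor as a fold: replace every cons node by g and nil by b.
def listFold {A B : Type} (b : B) (g : B → A → B) : List A → B
  | []      => b
  | a :: as => g (listFold b g as) a

#eval listFold 0 (fun acc a => acc + a) [1, 2, 3, 4]   -- 10
```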
#### 3.5.1 Uniqueness of inductive types

Above we mentioned that the natural numbers \(\mathbb{N}\) can be defined as an inductive type freely generated by \(0:\mathbb{N}\) and \(succ:\mathbb{N}\rightarrow\mathbb{N}\). However, there is nothing preventing us from defining a different datatype \(\mathbb{N}^{\prime}\) freely generated by \(0^{\prime}:\mathbb{N}^{\prime}\) and \(succ^{\prime}:\mathbb{N}^{\prime}\rightarrow\mathbb{N}^{\prime}\). Such a datatype will have an identical-looking recursion principle and will satisfy all the properties \(\mathbb{N}\) does, but it is syntactically distinct.

This highlights the fact that the identity type as defined by Martin-Löf is too restrictive. It identifies functions based on their definition, while mathematicians usually identify functions extensionally; it also identifies types based on their presentation, while mathematicians usually identify sets up to isomorphism. Such isomorphic constructions arise in mathematics all the time, be it natural numbers and non-negative integers, lists and vectors of all sizes, etc. We, as mathematicians, know that these are really "the same" and would use the two interchangeably. This is broadly unproblematic, but it does sweep a lot of latent computational content under the rug.

When two types are isomorphic in the way shown above, we can identify them by defining two functions

* \(f\coloneqq rec_{\mathbb{N}}(\mathbb{N}^{\prime},0^{\prime},succ^{\prime})\)
* \(g\coloneqq rec_{\mathbb{N}^{\prime}}(\mathbb{N},0,succ)\)

that coerce between the two types by replacing the constructors of one with the constructors of the other. With that, we can freely substitute one type for the other by transferring every function defined on one type to the other: coercing, applying the function, and then bringing the result back:

\[double^{\prime}\coloneqq\lambda n.f(double(g(n)))\]

Explicit handling of such isomorphisms is quite cumbersome, as one always needs to coerce between the two isomorphic types explicitly. This motivates many extensions of type theory with coarser notions of equality, such as [2] and [9]. The most widespread extension of this kind was borrowed from Homotopy Theory by Voevodsky and is the univalence axiom presented below.

### Univalence Axiom [7]

In order to formally state this axiom, we first need to talk about equivalences.

**Definition 8** (Homotopy [22]).: Let \(f,g:\Pi_{x:A}P(x)\) be two (dependent) functions. A homotopy from f to g is a dependent function of type

\[(f\sim g)\coloneqq\Pi_{x:A}(f(x)=g(x))\]

A homotopy captures the idea of two functions being pointwise (extensionally) equal; under the homotopy interpretation it coincides with the two functions being homotopic in the topological sense.

**Definition 9** (Equivalence [22]).: Let f be a function \(f:A\to B\). f is an equivalence if the following type is inhabited:

\[isequiv(f)\coloneqq(\Sigma_{g:B\to A}(f\circ g\sim id_{B}))\times(\Sigma_{h:B\to A}(h\circ f\sim id_{A}))\]

Equivalence, in its essence, captures the idea of an isomorphism between two types, but "up to homotopy"; it ensures the existence of inverses up to homotopy. In fact, left inverses will always be right inverses and vice versa; the two are not required to be the same in the above definition due to problems with higher coherence in proofs involving equivalences. A sketch of these two definitions in a proof assistant is given below.
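Definitions 8 and 9 can be transcribed almost directly. The Lean 4 sketch below is illustrative only: because Lean's built-in equality lives in `Prop`, the sketch uses `Prop`-valued existentials and conjunction where the text's \(\Sigma\)-types and products would appear, and the names `Homotopy`, `IsEquiv'`, and `MyEquiv` are ad hoc.

```lean
-- A homotopy between two (dependent) functions: a pointwise family of identifications.
def Homotopy {A : Type} {P : A → Type} (f g : (x : A) → P x) : Prop :=
  ∀ x : A, f x = g x

-- f is an equivalence if it has a section and a retraction (inverses up to homotopy).
def IsEquiv' {A B : Type} (f : A → B) : Prop :=
  (∃ g : B → A, Homotopy (fun b => f (g b)) (fun b => b)) ∧
  (∃ h : B → A, Homotopy (fun a => h (f a)) (fun a => a))

-- Two types are equivalent if some map between them is an equivalence.
def MyEquiv (A B : Type) : Prop := ∃ f : A → B, IsEquiv' f

-- The identity map is an equivalence, witnessed by itself on both sides.
theorem idIsEquiv (A : Type) : IsEquiv' (fun a : A => a) :=
  ⟨⟨fun a => a, fun _ => rfl⟩, ⟨fun a => a, fun _ => rfl⟩⟩
```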
**Definition 10**.: Two types A and B are said to be equivalent if there is an equivalence between them; that is, if the type below is inhabited:

\[A\cong B\coloneqq\Sigma_{f:A\to B}isequiv(f)\]

**Lemma 11**.: _The following type is inhabited:_

\[IdToEq:\Pi_{X,Y:U}(X=_{U}Y)\rightarrow(X\cong Y)\]

Proof.: Consider an inhabitant p of \(X=_{U}Y\). Let \(p_{*}\) be the associated transport function. Then \(p_{*}\) is an equivalence. To prove this, we only need to consider the special case where \(p=refl_{X}\), in which case \(p_{*}=id_{X}\), which is an equivalence with itself as its own left and right inverse.

**Definition 12** (Univalence).: Univalence is a property of the universe type. It is represented by the type

\[isUnivalent(U)\coloneqq\Pi_{X,Y:U}isEquiv(IdToEq(X,Y))\]

In general, a universe may or may not be univalent. Univalence of universes, however, is consistent with MLTT and so it is impossible to disprove. As such, the coercion between equivalent types captured by univalence can always be consistently added to the theory; the axiom makes it directly accessible. The univalence axiom asserts that the universe is univalent.

**Definition 13** (Univalence Axiom).: There is an element

\[univ:isUnivalent(U)\]

### Sets in Type Theory

Since in general type theory has an infinite-dimensional path structure of identity types, it is often easier to work with truncations of it.

**Definition 14** (Mere Proposition).: In type theory we call a type a **mere proposition** if it has at most one element up to propositional equality. Such types satisfy the predicate

\[isProp(P)\coloneqq\Pi_{x,y:P}x=_{P}y\]

Classically, a mere proposition is equivalent either to the unit type (truth) \(\mathbb{1}\) or to the bottom type (falsehood) \(\bot\). Any type can be reduced to a mere proposition by propositional truncation:

**Definition 15** (Propositional Truncation).: The propositional truncation of a type A is a type \(\|A\|\) such that for every \(a:A\) there is \(|a|:\|A\|\), and for every \(x,y:\|A\|\) the type \(x=_{\|A\|}y\) is inhabited.

Propositional truncation makes every type behave like a proposition in classical logic. In particular, classical principles such as LEM can be consistently assumed for mere propositions (assuming them for arbitrary types would contradict univalence). Such a transformation, however, erases the computational content of the proposition and as such breaks the constructivity of terms that use such a type.

**Definition 16** (Set).: A set is a type for which every identity type is a mere proposition:

\[isSet(A)\coloneqq\Pi_{x,y:A}isProp(x=_{A}y)\]

Sets in type theory behave similarly to ordinary sets, with no computational content contained in equality. In general, instead of cutting off identities at the first level, one may truncate at the n'th level, obtaining an n-truncated type.

**Definition 17** (n-truncated type).: A type is 1-truncated if it is a set. A type is (n+1)-truncated if for all elements of that type the identity type is n-truncated.

## 4 Canonicity and Computability of Type Theory

In this section we discuss the univalence axiom, what it means for HoTT, and its inherent problems.

### Canonicity and Computability

Dependent Type Theory in its most basic form enjoys some great computational properties. It has decidable judgemental equality and type checking (which is required by Martin-Löf's meaning explanation), and it furthermore has the property of canonicity.

**Definition 18** (Canonical Term).: Canonical terms of a type are exactly those introduced by a constructor. There are two ways of defining canonical terms:

* Canonical terms are closed terms produced entirely by constructors (such as \(S(S(0))\)).
  This definition corresponds to the concept of eager evaluation - for a term to be considered canonical it needs to be "fully evaluated".

* Canonical terms are closed terms introduced by a constructor, regardless of whether the inputs of the constructor are fully evaluated. For example, \(\lambda x.x+s(0)\) is considered a canonical term of \(\mathbb{N}\rightarrow\mathbb{N}\). This definition corresponds to the concept of lazy evaluation, since the inputs of a constructor need not always be fully evaluated.

These two ways of defining canonical terms are equivalent in base MLTT and only correspond to different evaluation strategies. As such, lazy evaluation is usually adopted as the default. They are, however, different in HoTT, since the theory loses canonicity.

**Definition 19** (Canonicity).: A Type Theory is said to enjoy canonicity if every closed term of a given type reduces to a canonical term.

Canonicity corresponds to the idea that every computation in type theory terminates; every program has a canonical term to which it reduces, which indicates the result of the computation. For example, \(2+2\) reduces to \(4\). In this context, judgemental equality can be seen as precisely equating all the terms with the same canonical form (as one would expect). Canonicity of MLTT is a product of all constructions in it being defined in the style of natural deduction - the only way of introducing new terms is via the constructors of a type, which always come with corresponding eliminators and computation rules. This is precisely the reason why one in general avoids adding axioms to a type theory.

### Decidability

As mentioned above, judgemental equality and type checking in MLTT are decidable. This is, similarly to canonicity, thanks to the strict way in which the theory is defined as a natural-deduction-style system. Every term, by the very nature of its construction, possesses a type, since the only way of introducing new terms is through strictly typed constructors. Decidability of judgemental equality follows from canonicity, but it can be guaranteed in different ways, similarly to type checking.

### Problem with Univalence

As discussed above, univalence makes the life of mathematicians a lot easier, allowing them to convert between equivalent structures freely, while leaving all the actual computational coercion implicit and letting the language figure it out. When considering type theory as a programming language, univalence is also incredibly useful, as it allows the developer to reuse any code written for one type (class) for every isomorphic type. Normally, one would be required to explicitly coerce between the two types and prove, for each construction, that it respects this coercion.

A problem with the axiomatic handling of univalence, however, is that it presents a term of the univalence type which claims to do the computational lifting described above but in fact contains no computational content; as such, every program that uses univalence to construct a proof of identity of two isomorphic structures will inevitably get stuck trying to substitute one structure for the other, as the eliminator of the identity type has no computation rule on the proof produced by univalence. Univalence is a well-typed but computationally stuck term, unable to reduce to a canonical form, undermining the canonicity of the theory. A judgemental handling of univalence is currently one of the biggest research areas in HoTT, as it is the biggest cornerstone towards the full power of HoTT being utilised in a computational context, such as proof assistants. The sketch below illustrates how a postulated axiom blocks reduction to a canonical form.
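The computational problem can be seen directly in a proof assistant. The Lean 4 sketch below is purely illustrative: it postulates an identification as an axiom and then transports along it; the resulting term is well-typed but never reduces to the canonical `true` or `false`, because an axiom - exactly like univalence when added axiomatically - comes with no computation rule.

```lean
-- A postulated identification with no computational content.
-- (The statement itself is trivially provable by rfl; making it an axiom is
--  the point: like univalence, the axiom carries no computation rule.)
axiom boolEq : Bool = Bool

-- Well-typed, but stuck: `cast` must eliminate the identity proof `boolEq`,
-- and that proof never reduces to `rfl`, so the whole term never reaches
-- a canonical form. This is the failure of canonicity described above.
def stuck : Bool := cast boolEq true
```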
There are a few modifications of HoTT that obtain a judgemental notion of univalence, such as cubical type theory, but no judgemental presentation compatible with Martin-Löf's type theory has been found yet.

### Two-Dimensional Type Theory

Two-Dimensional (groupoid-level) type theory (2DTT) [14] aims to give a judgemental presentation of equivalence for a 2-dimensional type theory, making univalence a provable result rather than an axiom. The key innovation of 2DTT is a new judgement of the form \(\Gamma\vdash\alpha:M\simeq_{A}N\), which states that \(\alpha\) is evidence that M and N are equivalent as objects of A. The language is then extended with judgements ensuring that type and term families respect equivalences, and the identity type is interpreted as a hom-type of equivalences. The computational content of such an equivalence is made explicit by judgemental operations ensuring that every type and term family is functorial in its indices. Univalence and function extensionality then follow as rules of equivalence, and their actions account for the computational content of equivalence proofs.

There are three ways of introducing equivalences, corresponding to the rules of an equivalence relation:

* \(\vdash refl_{\theta}^{\Delta}:\theta\simeq_{\Delta}\theta\)
* \(\delta:\theta_{1}\simeq_{\Delta}\theta_{2}\vdash\delta^{-1}:\theta_{2}\simeq_{\Delta}\theta_{1}\)
* \(\delta_{1}:\theta_{1}\simeq_{\Delta}\theta_{2},\delta_{2}:\theta_{2}\simeq_{\Delta}\theta_{3}\vdash\delta_{2}\circ\delta_{1}:\theta_{1}\simeq_{\Delta}\theta_{3}\)

Furthermore, these constructors respect the associativity and inverse laws, making the equivalence structure behave like a groupoid. Judgemental preservation of equivalence requires that type and term families are functorial in their indices. This is guaranteed by the equivalence eliminators, defined as the operation map for type families and resp for term families:

\[\frac{\Gamma\vdash B(x):A\to U\qquad\Gamma\vdash\alpha:M_{1}\simeq_{A}M_{2}\qquad\Gamma\vdash M:B[M_{1}/x]}{\Gamma\vdash map_{x:A.B}\alpha M:B[M_{2}/x]}\]

\[\frac{\Gamma,x:A\vdash F:B\qquad\Gamma\vdash\alpha:M_{1}\simeq_{A}M_{2}}{\Gamma\vdash resp(x.F)\alpha:F[M_{1}/x]\simeq_{B}F[M_{2}/x]}\]

_Remark_. In order to define the above maps more easily, 2DTT is presented as a substitution calculus, with a judgement for context substitution. Each type constructor is then equipped with judgements ensuring that it behaves properly with respect to equivalences and the above maps. One can find the entire formulation of this language in the original paper [14, (Pages 3-5, 11)], together with an explanation of all the constructions.

The judgements of 2DTT end up having a simple categorical interpretation in the category of groupoids [12]:

* Context judgement \([\![\Gamma]\!]\) is a category.
* Substitution judgement \([\![\Gamma\vdash\theta:\Delta]\!]\) is a functor \([\![\theta]\!]:[\![\Gamma]\!]\rightarrow[\![\Delta]\!]\)
* Equivalence judgement \([\![\Gamma\vdash\delta:\theta_{1}\simeq_{\Delta}\theta_{2}]\!]\) is a natural transformation \([\![\delta]\!]:[\![\theta_{1}]\!]\simeq[\![\theta_{2}]\!]\)
* Type judgement \([\![A]\!]\) is a functor \([\![A]\!]:[\![\Gamma]\!]\to GPD\), where GPD is a large category of groupoids and functors

We omit the full details of the proof, but it can be shown that the Boolean type gets interpreted as a discrete category with two objects, thus proving consistency (the statement "true = false" is refutable in one semantic interpretation, and as such not provable).

#### 4.4.1 Canonicity

In order to prove canonicity, one defines a semantic interpretation of 2DTT into syntactically presented groupoids and functors, saying that a logical expression is reducible if it is a member of these semantic domains; open terms are reducible if they take reducible terms to reducible terms.

**Syntactically Presented Groupoids and Functors**

**Definition 20**.: A groupoid \(\langle\Gamma\rangle\) is presented by a context \(\Gamma\) iff:

* \(Ob(G)\) is a subset of the equivalence classes of substitutions \(\bullet\vdash\theta:\Gamma\) modulo definitional equality (such an equivalence class is denoted \(\langle\!\langle\theta\rangle\!\rangle\) from now on).
* The set of morphisms \(\langle\!\langle\theta_{1}\rangle\!\rangle\to_{G}\langle\!\langle\theta_{2}\rangle\!\rangle\) is a subset of the equivalence classes of equivalences \(\bullet\vdash\theta_{1}\simeq_{\Gamma}\theta_{2}\) modulo definitional equality.
* Identity at \(\langle\!\langle\theta\rangle\!\rangle\), composition and inverses are given by the equivalence type constructors.

**Definition 21**.: A groupoid \(\langle A\rangle\) is presented by a type A iff:

* \(Ob(G)\) is a subset of the equivalence classes of terms of A modulo definitional equality.
* The set of morphisms \(\langle\!\langle M_{1}\rangle\!\rangle\to_{G}\langle\!\langle M_{2}\rangle\!\rangle\) is a subset of the equivalence classes of equivalences \(\bullet\vdash M_{1}\simeq_{A}M_{2}\) modulo definitional equality.
* Identity at M, composition and inverses are given by the equivalence type constructors.

**Definition 22**.: A functor \(\langle M\rangle:\langle A\rangle\to\langle B\rangle\) is presented by \(x:A\vdash M:B\) iff

* For all \(\langle\!\langle N\rangle\!\rangle\in Ob(\langle A\rangle)\), \(\langle M\rangle(\langle\!\langle N\rangle\!\rangle)=\langle\!\langle M[N/x]\rangle\!\rangle\)
* For all \(\langle\!\langle\alpha\rangle\!\rangle:\langle\!\langle N_{1}\rangle\!\rangle\to_{\langle A\rangle}\langle\!\langle N_{2}\rangle\!\rangle\), \(\langle M\rangle(\langle\!\langle\alpha\rangle\!\rangle)=\langle\!\langle M[\alpha/x]\rangle\!\rangle\)

**Definition 23**.: A functor \(\langle A\rangle:\langle\Gamma\rangle\to GPD\) is presented by a type A (where GPD stands for the category of groupoids and functors) iff

* For all \(\langle\!\langle\theta_{1}\rangle\!\rangle\in Ob(\langle\Gamma\rangle)\), \(\langle A\rangle(\langle\!\langle\theta_{1}\rangle\!\rangle)\) is presented by \(A[\theta_{1}]\)
* For all \(\langle\!\langle\delta\rangle\!\rangle:\langle\!\langle\theta_{1}\rangle\!
\rangle\rightarrow_{\langle\Gamma\rangle}\langle\!\langle\theta_{2}\rangle\!\rangle\), \(\langle A\rangle(\langle\!\langle\delta\rangle\!\rangle)\) is presented by \(x:A[\theta_{1}].map_{A}\delta_{x}\) **Definition 24**.: Given groupoids \(\langle\Gamma\rangle\) and \(\langle\Delta\rangle\), the set of reducible substitutions \(RedSub(\langle\Gamma\rangle,\langle\Delta\rangle)\) is those substitutions \(\Gamma\vdash\theta:\Delta\) for which * For all \(\langle\!\langle\theta_{1}\rangle\!\rangle\in Ob(\langle\Gamma\rangle)\), * For all \(\langle\!\langle\delta\rangle\!\rangle:\langle\!\langle\theta_{1}\rangle\! \rangle\rightarrow_{\langle\Gamma\rangle}\langle\!\langle\theta_{2}\rangle\!\rangle\), \(\langle\!\langle\theta[\delta]\rangle\!\rangle:\langle\!\langle\theta[\theta_{1 }]\rangle\!\rangle\rightarrow_{\langle\Delta\rangle}\langle\!\langle\theta[ \theta_{2}]\rangle\!\rangle\) **Definition 25**.: Given groupoids \(\langle\Gamma\rangle\) and \(\langle\Delta\rangle\) and \(\theta_{1},\theta_{2}\in RedSubst\langle\Gamma\rangle,\langle\Delta\rangle)\) define the set of reducible equivalences \(RedEquiv^{\langle\Delta\rangle}_{\langle\Gamma\rangle}(\theta_{1},\theta_{2})\) to contain those equivalences \(\Gamma\vdash\delta:\theta_{1}\simeq_{\Delta}\theta_{2}\)s.t. for all \(\langle\!\langle\delta^{\prime}\rangle\!\rangle:\langle\!\langle\theta^{\prime }_{1}\rangle\!\rangle\rightarrow_{\langle\Gamma\rangle}\langle\!\langle\theta^ {\prime}_{2}\rangle\!\rangle\), \(\langle\!\langle\delta[\delta^{\prime}]\!\rangle:\langle\!\langle\theta_{1}[ \theta^{\prime}_{1}]\rangle\!\rangle\rightarrow_{\langle\Delta\rangle} \langle\!\langle\theta_{2}[\theta^{\prime}_{2}]\rangle\!\rangle\) **Definition 26**.: Given a groupoid \(\langle\Gamma\rangle\) and a functor \(\langle A\rangle:\langle\Gamma\rangle\to GPD\), define a set of reducible terms \(RedTm^{\langle\Gamma\rangle}\) to be the terms \(\Gamma\vdash M:A\) s.t. * For all \(\langle\!\langle\theta_{1}\rangle\!\rangle\in Ob(\langle\Gamma\rangle), \langle\!\langle M[\theta_{1}]\rangle\!\rangle\in Ob(\langle A\rangle( \langle\!\langle\theta_{1}\rangle\!\rangle))\) * For all \(\langle\!\langle\delta\rangle\!\rangle:\langle\!\langle\theta_{1}\rangle\! 
\rangle\rightarrow_{\langle\Gamma\rangle}\langle\!\langle\theta_{2}\rangle\!\rangle\), \(\langle\!\langle M[\delta]\rangle\!\rangle:\langle\!\langle map_{A}\,\delta\,M[\theta_{1}]\rangle\!\rangle\rightarrow_{\langle A\rangle(\langle\!\langle\theta_{2}\rangle\!\rangle)}\langle\!\langle M[\theta_{2}]\rangle\!\rangle\)

**Theorem 27** (Fundamental Theorem).: _There exist partial functions \([\![\Gamma]\!]\) and \([\![A]\!]\) which send contexts and types respectively to their presentations while respecting equivalences and making all well-typed expressions reducible:_

* _If_ \(\Gamma\) _is a context, then_ \([\![\Gamma]\!]\) _is a groupoid presented by_ \(\Gamma\)
* _If_ \(\Gamma\equiv\Gamma^{\prime}\)_, then_ \([\![\Gamma]\!]=[\![\Gamma^{\prime}]\!]\)
* _If_ \(\Gamma\vdash\theta:\Delta\)_, then_ \(\theta\in RedSub([\![\Gamma]\!],[\![\Delta]\!])\)
* _If_ \(\Gamma\vdash\delta:\theta_{1}\simeq_{\Delta}\theta_{2}\)_, then_ \(\delta\in RedEquiv^{[\![\Gamma]\!]}_{[\![\Delta]\!]}(\theta_{1},\theta_{2})\)
* _If_ \(\Gamma\vdash A\,Type\)_, then_ \([\![A]\!]\) _is a functor_ \([\![\Gamma]\!]\to GPD\) _presented by A_
* _If_ \(\Gamma\vdash A\equiv A^{\prime}\,Type\)_, then_ \([\![A]\!]=[\![A^{\prime}]\!]\)
* _If_ \(\Gamma\vdash M:A\)_, then_ \(M\in RedTm^{[\![\Gamma]\!]}([\![A]\!])\)
* _If_ \(\Gamma\vdash\alpha:M\simeq_{A}N\)_, then_ \(\alpha\in RedEquiv_{[\![A]\!]}^{[\![\Gamma]\!]}(M,N)\)

Proof.: The entire proof can be found in [14, (Pages 8-10)]. It proves the result by induction on type derivations, defining the interpretation compositionally for each type constructor.

Now we can prove the canonicity of 2DTT as a corollary:

Proof.: Given the theorem above, assume \(\bullet\vdash M:Boolean\). Then \(M\in RedTm^{[\![\bullet]\!]}([\![Boolean]\!])\), so \(M\in RedTm^{[\![\bullet]\!]}(const(\mathbf{2}))\), where \(\mathbf{2}\) is the discrete groupoid with two objects. By definition \(id\in Ob([\![\bullet]\!])\), so \(\langle\!\langle M[id]\rangle\!\rangle\in Ob(const(\mathbf{2})(\langle\!\langle id\rangle\!\rangle))\), and \(const(\mathbf{2})(\langle\!\langle id\rangle\!\rangle)=\mathbf{2}\). Since there are only two objects of \(\mathbf{2}\), \(M\equiv true\) or \(M\equiv false\).

### Limitations

It is impossible to extend the methodology of 2DTT to further dimensions due to problems with coherence at identity types beyond the first level; a different judgemental framework would need to be utilised. As such, the main limitation of 2DTT is its two-dimensionality. As we discussed above, propositional truncations of type theory do not have decidable type checking and judgemental equality, as a result of "cutting off" the computational content of higher equalities. The theory does, however, retain decidability of judgemental equality and type checking for types that do not include identity types in their constructions, which is still a lot of useful types. In fact, if it were possible to extend the judgemental presentation to further dimensions, more types (ones using the first/second/n'th level of identity types) would become decidable, achieving full decidability in the limit.

Furthermore, canonicity in 2DTT is achieved via equivalence relations, rather than one-directional computation rules such as function application. As such, although the theory enjoys canonicity, there is no effective algorithm to compute canonical forms. Further work needs to be done to explore the possibility of extending 2DTT with a one-directional operational evaluation strategy that would ensure that all terms compute to a canonical form by a terminating algorithm.
## 5 Conclusion

Dependent type theory is an important recent development for both mathematics and computer programming. On the mathematical side, HoTT is a promising candidate for a new foundational language, able to formalise constructive mathematics in a verifiable way. On the computer science side, dependent types represent a powerful tool for formally verifiable systems and proof assistants. The univalence axiom is a foundational idea for both fields, being invaluable for representing mathematics in a more natural way and for studying homotopy theory inside the new foundations. For programming, the axiom is important as a generic program able to lift equivalences across type families. The axiom, however, as of yet lacks computational content in the most general presentation of HoTT. Although some attempts have been made to create simpler theories that attain a judgemental presentation of univalence, more work is needed to achieve a computationally relevant version of univalence in a general setting.
2303.05369
Data-dependent Generalization Bounds via Variable-Size Compressibility
In this paper, we establish novel data-dependent upper bounds on the generalization error through the lens of a "variable-size compressibility" framework that we introduce newly here. In this framework, the generalization error of an algorithm is linked to a variable-size 'compression rate' of its input data. This is shown to yield bounds that depend on the empirical measure of the given input data at hand, rather than its unknown distribution. Our new generalization bounds that we establish are tail bounds, tail bounds on the expectation, and in-expectations bounds. Moreover, it is shown that our framework also allows to derive general bounds on any function of the input data and output hypothesis random variables. In particular, these general bounds are shown to subsume and possibly improve over several existing PAC-Bayes and data-dependent intrinsic dimension-based bounds that are recovered as special cases, thus unveiling a unifying character of our approach. For instance, a new data-dependent intrinsic dimension-based bound is established, which connects the generalization error to the optimization trajectories and reveals various interesting connections with the rate-distortion dimension of a process, the R\'enyi information dimension of a process, and the metric mean dimension.
Milad Sefidgaran, Abdellatif Zaidi
2023-03-09T16:17:45Z
http://arxiv.org/abs/2303.05369v3
# Data-dependent Generalization Bounds ###### Abstract In this paper, we establish novel _data-dependent_ upper bounds on the generalization error through the lens of a "variable-size compressibility" framework that we introduce newly here. In this framework, the generalization error of an algorithm is linked to a variable-size 'compression rate' of its input data. This is shown to yield bounds that depend on the empirical measure of the given input data at hand, rather than its unknown distribution. Our new generalization bounds that we establish are tail bounds, tail bounds on the expectation, and in-expectations bounds. Moreover, it is shown that our framework also allows to derive general bounds on _any_ function of the input data and output hypothesis random variables. In particular, these general bounds are shown to subsume and possibly improve over several existing PAC-Bayes and data-dependent intrinsic dimension-based bounds that are recovered as special cases, thus unveiling a unifying character of our approach. For instance, a new data-dependent intrinsic dimension based bounds is established, which connects the generalization error to the optimization trajectories and reveals various interesting connections with rate-distortion dimension of process, Renyi information dimension of process, and metric mean dimension. Stochastic learning algorithm, generalization error, rate-distortion of process, Renyi information dimension of process, metric mean dimension ## 1 Introduction and problem setup Let \(Z\in\mathcal{Z}\) be some _input data_ distributed according to an unknown distribution \(\mu\), where \(\mathcal{Z}\) is the _data space_. A major problem in statistical learning is to find a _hypothesis_ (model) \(w\) in the _hypothesis space_\(\mathcal{W}\) that minimizes the _population risk_ defined as (Shalev-Shwartz and Ben-David, 2014) \[\mathcal{L}(w)\coloneqq\mathbb{E}_{Z\sim\mu}[\ell(Z,w)],\quad w\in\mathcal{W}, \tag{1}\] where \(\ell:\mathcal{Z}\times\mathcal{W}\rightarrow\mathbb{R}^{+}\) is a loss function that measures the quality of the prediction of the hypothesis \(w\in\mathcal{W}\). The distribution \(\mu\) is assumed to be unknown, however; and one has only access to \(n\) (training) samples \(S=\{Z_{1},\ldots,Z_{n}\}\sim P_{S}=\mu^{\otimes n}\) of the input data. Let \(\mathcal{A}\colon\mathcal{S}\rightarrow\mathcal{W}\), \(\mathcal{S}=\mathcal{Z}^{n}\), be a possibly stochastic algorithm which, for a given input data \(s=\{z_{1},\ldots,z_{n}\}\in\mathcal{Z}^{n}\), picks the hypothesis \(\mathcal{A}(s)=W\in\mathcal{W}\). This induces a conditional distribution \(P_{W|\mathcal{S}}\) on the hypothesis space \(\mathcal{W}\). Instead of the population risk minimization problem (1) one can consider minimizing the _empirical risk_, given by \[\hat{\mathcal{L}}(s,w)\coloneqq\frac{1}{n}\sum_{i=1}^{m}\ell(z_{i},w). \tag{2}\] Nonetheless, the minimization of the empirical risk (or a regularized version of it) is meaningful only if the difference between the population and empirical risks is small enough. This difference is known as the _generalization error_ of the learning algorithm and is given by \[\mathrm{gen}(s,\mathcal{A}(s))\coloneqq\mathcal{L}(\mathcal{A}(s))-\hat{ \mathcal{L}}(s,\mathcal{A}(s)). \tag{3}\] An exact analysis of the statistical properties of the generalization error (3) is out-of-reach, however, except in very few special cases; and, often, one resort to bounding the generalization error from the above, instead. 
The last two decades have witnessed the development of various such upper bounds, from different perspectives and by undertaking approaches that often appear unrelated. Common approaches include information-theoretic, compression based, fractal-based, or intrinsic-dimension based, and PAC-Bayes ones. Initiated by Russo and Zou (2016) and Xu and Raginsky (2017), the information-theoretic approach measures the complexity of the hypothesis space by the Shannon mutual information between the input data and the algorithm output. See also the follow up works (Harutyunyan et al., 2021; Haghifam et al., 2021) and (Steinke and Zakynthinou, 2020). The roots of compression-based approaches perhaps date back to Littelstone and Warmuth (1986) who studied the predictability of the training data labels using only part of the dataset; and this has then been extended and used in various ways, see, e.g., (Arora et al., 2018; Suzuki et al., 2020; Hsu et al., 2021; Barsbey et al., 2021) and the recent (Sefidgaran et al., 2022). The fractal-based approach is a recently initiated line of work that hinges on that when the algorithm has a recursive nature, e.g., it involves an iterative optimization procedure, it might generate a fractal structure either in the model trajectories (Simsekli et al., 2020; Birdal et al., 2021; Hodgkinson et al., 2022; Lim et al., 2022) or in its distribution (Camuto et al., 2021). These works show that, in that case, the generalization error is controlled by the intrinsic dimension of the generated fractal structure. The original PAC-Bayes bounds were stated for classification (McAllester, 1998, 1999); and, it has then become clear that the results could be extended to any bounded loss, resulting in many variants and extensions of them (Seeger, 2002; Langford and Caruana, 2001; Catoni, 2003; Maurer, 2004; Germain et al., 2009; Tolstikhin and Seldin, 2013; Begin et al., 2016; Thiemann et al., 2017; Dziugaite and Roy, 2017; Neyshabur et al., 2018; Rivasplata et al., 2020; Negrea et al., 2020, 2020; Viallard et al., 2021). For more details on PAC-Bayes bounds, we refer the reader to Cantoni's book (Catoni, 2007) or the tutorial paper of (Alquier, 2021). The aforementioned approaches have evolved independently of each other; and the bounds obtained with them differ in many ways that it is generally difficult to compare them. Arguably, however, most useful bounds must be _computable_. This means that the bound should depend on the particular sample of the input data at hand, rather than just on the distribution of the data which is unknown. Such bounds are called _data-dependent_; they are preferred and are generally of bigger utility in practice. In this sense, most existing information-theoretic and compression-based bounds on the generalization error are data-independent. This includes the mutual information bounds of Russo and Zou (2016) and Xu and Raginsky (2017) whose computation requires knowledge of the joint distribution of the input data and output hypothesis; and, as such, is not computable with just one sample training dataset at hand. In fact, most prominent _data-dependent_ bounds are those obtained with the PAC-Bayes approach. **Contributions.** In this paper, we establish novel _data-dependent_ generalization bounds through the lens of a "variable-size compressibility" framework that we introduce here. In this framework, the generalization error of an algorithm is linked to a variable-size 'compression rate' of the input data. 
This allows us to derive bounds that depend on the particular empirical measure of the in put data, rather than its unknown distribution. The novel generalization bounds that we establish are tail bounds, tail bounds on the expectation, and in-expectations bounds. Moreover, we show that our variable-size compressibility approach is somewhat generic and it can be used to derive general bounds on _any_ function of the input data and output hypothesis random variables - In particular, see our general tail bound of Theorem 5 in Section 3.1 as well as other bounds in that section. In fact, as we show the framework can be accommodated easily to encompass various forms of tail bounds, tail bounds on the expectation, and in-expectation bounds through judicious choices of the distortion measure. In particular, in Section 4, by specializing them we show that our general variable-size compressibility bounds subsume various existing data-dependent PAC-Bayes and intrinsic-dimension based bounds and recover them as special cases. Hence, another advantage of our approach is that it builds a unifying framework that allows formal connections with the aforementioned, seemingly unrelated, PAC-Bayes and dimension based approaches. In some cases, our bounds are shown novel and possibly tighter than the existing ones. For example, see Proposition 7 for an example new data-dependent PAC-Bayes type bound that we obtain readily by application of our general bounds and some judicious choices of the involved variables. Also, our Theorem 9 provides a new dimension-based type bound, which states that the average generalization error over the trajectories of a given algorithm can be upper bounded in terms of the amount of compressibility of such optimization trajectories, measured in terms of a suitable rate-distortion function of such trajectories. This unveils novel connections between the generalization error and newly considered data-dependent intrinsic dimensions, including the rate-distortion dimension of a process, metric mean dimension, and Renyi information dimension of a process. The latter is sometimes used in the related literature to characterize the fundamental limits of compressed sensing algorithms. We emphasize that key ingredient to our approach is the new "variable-size compressibility" framework that we introduce here. For instance, the framework of (Sefidgaran et al., 2022), which comparatively can be thought of as being of "fixed-size" compressibility type, only allows to derive data-independent bounds; and, so, it falls shorts to establishing any meaningful connections with PAC-Bayes and data-dependent intrinsic dimension-based bounds. This is also reflected in the proof techniques in our case which are different from those for the fixed-size compressibility framework. The rest of this paper is organized as follows. In Section 2, we introduce our variable-size compressibility framework, and provide a data-dependent tail bound on the generalization error. Section 3 contains our general bounds. Section 4 provides various applications of our main results, in particular, to establish PAC-Bayes and dimension-based bounds. The proofs, as well as other related results, are deferred to the appendices. Notations.We denote random variables, their realizations, and their alphabets by upper-case letters, lower-case letters, and calligraphy fonts; _e.g.,_\(X\), \(x\), and \(\mathcal{X}\). 
The distribution, the expected value, and the support set of a random variable \(X\) are denoted as \(P_{X}\), \(\mathbb{E}[X]\), and \(\operatorname{supp}(P_{X})\). A random variable \(X\) is called \(\sigma\)-subgaussian, if \(\log\mathbb{E}[\exp(\lambda(X-\mathbb{E}[X]))]\leqslant\lambda^{2}\sigma^{2}/2\), \(\forall\lambda\in\mathbb{R}\).1 A collection of \(m\in\mathbb{N}\) random variables \((X_{1},\ldots,X_{m})\) is denoted as \(X^{m}\) or \(\mathbf{X}\), when \(m\) is known by the context. The notation \(\{x_{i}\}_{i=1}^{m}\) is used to represent \(m\) real numbers; used also similarly for sets or functions. We use the shorthand notation \([m]\) to denote integer ranges from \(1\) to \(m\in\mathbb{N}\). Finally, the non-negative real numbers are denoted by \(\mathbb{R}^{+}\). Footnote 1: All \(\log\) are considered with base \(e\) in this paper. Most of our results are expressed in terms of information-theoretic functions.2 For two distributions \(P\) and \(Q\) defined on the same measurable space, define the _Renyi divergence_ of order \(\alpha\in\mathbb{R}\), when \(\alpha\neq 1\), as \(D_{\alpha}(Q\|P)\coloneqq\frac{1}{\alpha-1}\log\mathbb{E}_{Q}\!\left[\left( \frac{\mathrm{d}Q}{\mathrm{d}P}\right)^{\alpha-1}\right]\), if \(Q\ll P\), and \(\infty\) otherwise. Here, \(\frac{\mathrm{d}Q}{\mathrm{d}P}\) is the Radon-Nikodym derivative of \(Q\) with respect to \(P\). The _Kullback-Leibler_ (KL) divergence is defined as \(D_{KL}(Q\|P)\coloneqq\mathbb{E}_{Q}\!\left[\log\frac{\mathrm{d}Q}{\mathrm{d}P}\right]\), if \(Q\ll P\), and \(\infty\) otherwise. Note that \(\lim_{\alpha\to 1}D_{\alpha}(Q\|P)=D_{KL}(Q\|P)\). The _mutual information_ between two random variables \(X\) and \(Y\), distributed jointly according to \(P_{X,Y}\) with marginal \(P_{X}\) and \(P_{Y}\), is defined as \(I(X;Y)\coloneqq D_{KL}(P_{X,Y}\|P_{X}P_{Y})\). This quantity somehow measures the _amount_ of information \(X\) has about \(Y\), and vice versa. Finally, throughout we will make extensive usage of the following sets, defined for a random variable \(X\in\mathcal{X}\) with distribution \(P_{X}\) and a real-valued function \(g(X)\colon\mathcal{X}\to\mathbb{R}\) as Footnote 2: The reader is referred to (Cover and Thomas, 2006; El Gamal and Kim, 2011; Csiszár and Korner, 2011; Polyanskiy and Wu, 2014) for further information. \[\mathcal{G}_{X}^{\delta}\coloneqq \{\nu_{X}\in\mathcal{P}_{\mathcal{X}}\colon D_{KL}(\nu_{X}\|P_{X} )\leq\log(1/\delta)\}, \tag{4}\] \[\mathcal{S}_{X}(g(X))\coloneqq \{\nu_{X}\in\mathcal{P}_{\mathcal{X}}\,\big{|}\,\forall x\in \mathrm{supp}(\nu_{X})\colon g(x)>0\}, \tag{5}\] where \(\mathcal{P}_{\mathcal{X}}\) is the set of all distributions defined over \(\mathcal{X}\). ## 2 Variable-size compressibility As we already mentioned, the approach of Sefidgaran et al. (2022) is based on a fixed-size compressibility framework; and, for this reason, it only accommodates bounds on the generalization error that are independent of the data. In this work, we develop a "variable-size" compressibility framework, which is more general and allows us to establish new data-dependent bounds on the generalization error. As it will become clearer throughout, in particular, this allows us to build formal connections with seemingly-unrelated approaches such as PAC-Bayes and data-dependent intrinsic dimension bounds. We start by recalling the aforementioned fixed-size compressibility framework, which itself can be seen as an extension of the classic compressibility framework found in source coding literature. 
We refer the reader to (Cover and Thomas, 2006) for an accessible introduction to classic rate-distortion theory. Consider a learning algorithm \(\mathcal{A}(S)\colon\mathcal{S}\to\mathcal{W}\). The goal of the compression for the generalization error problem is to find a suitable _compressed_ learning algorithm \(\hat{\mathcal{A}}(S,W)=\hat{W}\in\hat{\mathcal{W}}\subseteq\mathcal{W}\) which has a smaller _complexity_ than that of the original algorithm \(\mathcal{A}(S)\) and whose generalization error is close enough to that of \(\mathcal{A}(S)\). Define the distortion function \(\tilde{d}\colon\mathcal{S}\times\mathcal{W}\times\hat{\mathcal{W}}\to\mathbb{R}\) as \(\tilde{d}(w,\hat{w};s)\coloneqq\mathrm{gen}(s,w)-\mathrm{gen}(s,\hat{w})\). In order to guarantee that \(\tilde{d}(\mathcal{A}(S),\hat{\mathcal{A}}(S,\mathcal{A}(S));S)\) does not exceed some desired threshold, one needs to consider the _worst-case_ scenario; and, in general, this results in looser bounds. Instead, they considered an adaptation of the _block-coding_ technique, previously introduced in the source coding literature, for learning algorithms. Consider a block of \(m\in\mathbb{N}\) datasets \(s^{m}=(s_{1},\ldots,s_{m})\) and one realization of the associated hypotheses \(w^{m}=(w_{1},\ldots,w_{m})\), with \(w_{i}=\mathcal{A}(s_{i})\) for \(i\in[m]\), which we denote in the rest of this paper with a slight abuse of notation as \(\mathcal{A}(s^{m})=w^{m}\). In this technique, the compressed learning algorithm \(\hat{\mathcal{A}}(s^{m},w^{m})\colon\mathcal{S}^{m}\times\mathcal{W}^{m}\to\hat{\mathcal{W}}^{m}\) is allowed to _jointly_ compress these \(m\) instances, to produce \(\hat{w}^{m}\in\hat{\mathcal{W}}^{m}\). For a given \(s^{m}\), let the distortion between the output hypothesis of the algorithm \(\mathcal{A}(\cdot)\) applied to the vector \(s^{m}\), i.e., \(w^{m}=\mathcal{A}(s^{m})\), and its compressed version \(\hat{\mathcal{A}}(\cdot,\cdot)\) applied to the vector \((s^{m},w^{m})\), i.e., \(\hat{w}^{m}=\hat{\mathcal{A}}(s^{m},w^{m})\), be the average of the element-wise distortions \(\tilde{d}(\cdot,\cdot;\cdot)\) between their components, \(\tilde{d}_{m}(w^{m},\hat{w}^{m};s^{m})\coloneqq\frac{1}{m}\sum_{i\in[m]}\tilde{d}(w_{i},\hat{w}_{i};s_{i})\). As is easily seen, this block-coding approach with average distortion enables possibly smaller distortion levels, in comparison with those allowed by worst-case distortion over the components. Sefidgaran et al. introduced the following definition of (exponential) compressibility, which they then used to establish _data-independent_ tail and in-expectation bounds on the generalization error. Denote by \(\mathbb{P}_{(S,W)^{\otimes m}}\) the probability with respect to the \(m\)-times product measure of the joint distribution of \(S\) and \(W\). **Definition 1** ((Sefidgaran et al., 2022, Definition 8)): _The learning algorithm \(\mathcal{A}\) is called \((R,\epsilon,\delta;\tilde{d}_{m})\)-compressible3 for some \(R,\delta\in\mathbb{R}^{+}\) and \(\epsilon\in\mathbb{R}\), if there exists a sequence of hypothesis books \(\{\mathcal{H}_{m}\}_{m\in\mathbb{N}}\), \(\mathcal{H}_{m}=\{\hat{\mathbf{w}}[j],j\in[l_{m}]\}\subseteq\hat{\mathcal{W}}^{m}\) such that \(l_{m}\leqslant e^{mR}\) and_ Footnote 3: Similar to the previous work, we drop the dependence of the definition on \(\mu\), \(n\), and \(P_{W|S}\). \[\lim_{m\to\infty}\bigg{[}-\frac{1}{m}\log\mathbb{P}_{(S,W)^{\otimes m}}\Big{(}\min_{j\in[l_{m}]}\tilde{d}_{m}(W^{m},\hat{\mathbf{w}}[j];S^{m})>\epsilon\Big{)}\bigg{]}\geqslant\log(1/\delta). \tag{6}\] The inequality (6) expresses the condition that, for large \(m\), the probability (over \((S^{m},W^{m})\)) of finding _no_ \(\hat{\mathbf{w}}[j]\) that is within a distance less than \(\epsilon\) from \(W^{m}\) vanishes at least as fast as \(\delta^{m}\). Equivalently, the probability that the distance from \(W^{m}\) of any element \(\hat{\mathbf{w}}[j]\) of the book exceeds \(\epsilon\) (sometimes called probability of "excess distortion" or "covering failure") is smaller than \(\delta^{m}\) for large \(m\). A result of Sefidgaran et al. (2022, Theorem 9) states that if \(\mathcal{A}\) is \((R,\epsilon,\delta;\tilde{d}_{m})\)-compressible in the sense of Definition 1 and the loss \(\ell(Z,w)\) is \(\sigma\)-subgaussian for every \(w\in\mathcal{W}\), then with probability \((1-\delta)\) it holds that \[\mathrm{gen}(S,W)\leqslant\sqrt{2\sigma^{2}(R+\log(1/\delta))/n}+\epsilon. \tag{7}\] Also, let \(R(\delta,\epsilon)\coloneqq\sup_{Q}\mathfrak{R}\mathfrak{D}(\epsilon;Q)\), where \[\mathfrak{R}\mathfrak{D}(\epsilon;Q)\coloneqq\inf_{P_{\hat{W}|S}}I(S;\hat{W}),\quad\text{s.t.}\quad\mathbb{E}[\mathrm{gen}(S,W)-\mathrm{gen}(S,\hat{W})]\leqslant\epsilon, \tag{8}\] and where the supremum is over all distributions \(Q\) over \(\mathcal{S}\times\mathcal{W}\) that are in the \(\delta\)-vicinity of the joint \(P_{S,W}\) in the sense of (4), i.e., \(Q\in\mathcal{G}_{S,W}^{\delta}\); in (8), the Shannon mutual information and the expectation are computed with respect to \(QP_{\hat{W}|S}\). In the case in which \(\mathcal{S}\times\mathcal{W}\) is discrete, a result of Sefidgaran et al. (2022, Theorem 10) states that every algorithm \(\mathcal{A}\) that induces \(P_{S,W}\) is \((R(\delta,\epsilon)+\nu_{1},\epsilon+\nu_{2},\delta;\tilde{d}_{m})\)-compressible, for every \(\nu_{1},\nu_{2}>0\). Combined, these two results yield the following tail bound on the generalization error for the case of discrete \(\mathcal{S}\times\mathcal{W}\), \[\mathrm{gen}(S,W)\leqslant\sqrt{2\sigma^{2}(R(\delta,\epsilon)+\log(1/\delta))/n}+\epsilon. \tag{9}\] It is important to note that the dependence of the tail bound (9) on the input data \(S\) is only through the joint distribution \(P_{S,W}\), not the particular realization at hand. Because of this, the approach of (Sefidgaran et al., 2022) falls short of accommodating any meaningful connection between their framework and ones that achieve data-dependent bounds such as PAC-Bayes bounds and data-dependent intrinsic dimension based bounds. In fact, in the terminology of information-theoretic rate-distortion, the described framework can be thought of as being one for fixed-size compressibility, whereas one would here need a framework that allows _variable-size_ compressibility. It is precisely such a framework that we develop in this paper. For ease of exposition, we first illustrate our approach and its utility in a simple case. More general results enabled by our approach will be given in the next section. To this end, define \[d(w,\hat{w};s)\coloneqq\operatorname{gen}(s,w)^{2}-\operatorname{gen}(s,\hat{w})^{2},\quad d_{m}(w^{m},\hat{w}^{m};s^{m})\coloneqq\frac{1}{m}\sum_{i\in[m]}d(w_{i},\hat{w}_{i};s_{i}).
\tag{10}\] **Definition 2** (Variable-size compressibility): _The learning algorithm \(\mathcal{A}\) is called \((R_{S,W},\epsilon,\delta;d_{m})\)-compressible for some \(\{R_{s,w}\}_{(s,w)\in\mathcal{S}\times\mathcal{W}}\), where \(R_{s,w}\in\mathbb{R}^{+}\) and \(R_{\max}\coloneqq\sup_{s,w}R_{s,w}<\infty\), \(\epsilon\in\mathbb{R}\), and \(\delta\in\mathbb{R}^{+}\), if there exists a sequence of hypothesis books \(\{\mathcal{H}_{m}\}_{m\in\mathbb{N}}\), \(\mathcal{H}_{m}\coloneqq\{\hat{\mathbf{w}}[j],j\in[[e^{mR_{\max}}]]\}\), such that_ \[\lim_{m\to\infty}\left[-\frac{1}{m}\log\mathbb{P}_{(S,W)^{\otimes m}}\left( \min_{j\in e^{\sum_{i\in[m]}R_{S_{i},W_{i}}}}d_{m}(W^{m},\hat{\mathbf{w}}[j];S ^{m})>\epsilon\right)\right]\geq\log(1/\delta). \tag{11}\] The former compressibility definition (Definition 1) corresponds to \(R_{s,w}\coloneqq R\) for all \((s,w)\). Comparatively, our Definition 2 here accommodates _variable-size_ hypothesis books. That is, the number of hypothesis outputs of \(\mathcal{H}_{m}\) among which one searches for a suitable _covering_ of \((s^{m},w^{m})\) depends on \((s^{m},w^{m})\). The dependency is not only through \(P_{S,W}\) but, more importantly, the quantity \(\sum_{i\in[m]}R_{S_{i},W_{i}}\). The theorem that follows, proved in Appendix D.1, shows how this framework can be used to obtain a data-dependent tail bound on the generalization error. **Theorem 3**: _If the algorithm \(\mathcal{A}\) is \((R_{S,W},\epsilon,\delta;d_{m})\)-compressible and \(\forall w\in\mathcal{W}\), \(\ell(Z,w)\) is \(\sigma\)-subgaussian, then with probability at least \((1-\delta)\), \(\operatorname{gen}(S,W)\leq\sqrt{4\sigma^{2}(R_{S,W}+\log(2n/\delta))/(2n-1)+\epsilon}\)._ Note that the seemingly benign generalization to variable-size compressibility has far-reaching consequences for the tail bound itself as well as its proof. For example, notice the difference with the associated bound (7) allowed by fixed-size compressibility, especially in terms of the evolution of the bound with the size \(n\) of the training dataset. Also, investigating the proof and contrasting it with that of (7) for the fixed-size compressibility setting, it is easily seen that while for the latter it is sufficient to consider the union bound over all hypothesis vectors in \(\mathcal{H}_{m}\), among which there exists a suitable _covering_ of \((S^{m},W^{m})\) with probability at least \((1-\delta)\), in our variable-size compressibility case this proof technique does not apply and falls short of producing the desired bound as the _effective_ size of the hypothesis book depends on each \((S^{m},W^{m})\). Next, we establish a bound on the degree of compressibility of each learning algorithm, proved in Appendix D.2. **Theorem 4**: _Suppose that the algorithm \(\mathcal{A}(S)=W\) induces \(P_{S,W}\) and \(\mathcal{S}\times\mathcal{W}\) is a finite set. 
Then, for any arbitrary \(\nu_{1},\nu_{2}>0\), \(\mathcal{A}\) is \((R_{S,W}+\nu_{1},\epsilon+\nu_{2},\delta;d_{m})\)-compressible if the following sufficient condition holds: for any \(\nu_{S,W}\in\mathcal{G}_{S,W}^{\delta}\),_ \[\inf_{p_{\hat{W}|S}\in\mathcal{Q}(\nu_{S,W})}\inf_{q_{\hat{W}}}\left\{D_{KL}\Big{(}p_{\hat{W}|S}\nu_{S}\|q_{\hat{W}}\nu_{S}\Big{)}-D_{KL}(\nu_{S,W}\|P_{W|S}\nu_{S})\right\}\leq\mathbb{E}_{\nu_{S,W}}[R_{S,W}], \tag{12}\] _where \(\mathcal{Q}(\nu_{S,W})\) stands for the set of distributions \(p_{\hat{W}|S}\) that satisfy_ \[\mathbb{E}_{\nu_{S,W}p_{\hat{W}|S}}\Big{[}\operatorname{gen}(S,W)^{2}-\operatorname{gen}(S,\hat{W})^{2}\Big{]}\leq\epsilon. \tag{13}\] Combining Theorems 3 and 4, one readily gets a data-dependent tail bound on the generalization error. Because the result is in fact a special case of the more general Theorem 5 that will follow in the next section, we do not state it here. Instead, we elaborate on a useful connection with the PAC-Bayes bound of (McAllester, 1998, 1999). For instance, let \(P\) be a fixed prior on \(\mathcal{W}\). It is not difficult to see that the choice \(R_{S,W}\coloneqq\log\frac{\mathrm{d}P_{W|S}}{\mathrm{d}P}(W)\) satisfies the condition (12) for \(\epsilon=0\). The resulting tail bound recovers the PAC-Bayes bound of (Blanchard and Fleuret, 2007; Catoni, 2007), which is a _disintegrated_ version of that of (McAllester, 1998, 1999). We hasten to mention that an appreciable feature of our approach here, which is rate-distortion theoretic in nature, is its _flexibility_: it can easily be adapted to encompass various forms of tail bounds by replacing (10) with a suitable choice of the distortion measure. For example, if instead of a tail bound on the generalization error itself one seeks a tail bound on the expected generalization error relative to \(W\sim\pi\), it suffices to consider \((R_{S,\pi},\epsilon,\delta;d_{m})\)-compressibility, for some \(R_{S,\pi}\in\mathbb{R}^{+}\), to hold when in the inequality (11) the left hand side (LHS) is substituted with \[\lim_{m\to\infty}\bigg{[}-\frac{1}{m}\log\mathbb{P}_{S^{\otimes m}}\bigg{(}\min_{j\in[e^{\sum_{i}R_{S_{i},\pi_{S_{i}}}}]}\frac{1}{m}\sum_{i\in[m]}\Big{(}\mathbb{E}_{W_{i}\sim\pi_{S_{i}}}[\mathrm{gen}(S_{i},W_{i})^{2}]-\mathrm{gen}(S_{i},\hat{w}_{i}[j])^{2}\Big{)}>\epsilon\bigg{)}\bigg{]};\] and the inequality should hold for any choice of distributions \(\pi_{S}\) (indexed by \(S\)) over \(\mathcal{W}\) and any distribution \(\nu_{S}\in\mathcal{G}_{S}^{\delta}\); note the change of the distortion measure (10), which now involves an expectation w.r.t. \(W\sim\pi_{S}\). Using this, we obtain that with probability at least \((1-\delta)\), the following holds: \[\forall\pi:\mathbb{E}_{W\sim\pi}[\mathrm{gen}(S,W)]\leqslant\sqrt{4\sigma^{2}(R_{S,\pi}+\log(2n/\delta))/(2n-1)}. \tag{14}\] A general form of this tail bound which, in particular, recovers as a special case the PAC-Bayes bound of (McAllester, 1998, 1999) is stated in Theorem 6 and proved in Section 4.2. ## 3 General data-dependent bounds on the generalization error In this section, we take a bigger view. We provide generic bounds, as well as proof techniques for establishing them, that are general enough to apply not only to the generalization error, but also to any arbitrary function of the pair \((S,W)\). Specifically, let \(f\colon\mathcal{S}\times\mathcal{W}\to\mathbb{R}\) be a given function.
We establish data-dependent tail bounds, data-dependent tail bounds on the expectation, and in-expectation bounds (in Appendix C.1) on the random variable \(f(S,W)\). The results, which recover and extend those of the previous section to more general settings, are interesting in their own right. For example, as it will be shown in Section 4, many existing data-dependent PAC-Bayes and intrinsic dimension-based bounds can be recovered as special cases, through various choices of \(f(S,W)\), e.g., the squared generalization error \(f(S,W)=\mathrm{gen}(S,W)^{2}\). Also, as illustrated therein, the framework can be used to derive useful novel bounds. For ease of exposition, our results are stated in terms of a function, denoted as \(\mathfrak{T}\), defined as follows. Let \(\alpha\geqslant 1\), \(P_{S}\) and \(\nu_{S}\) be two distributions defined over \(\mathcal{S}\), \(p_{\hat{W}|S}\) and \(q_{\hat{W}|S}\) be two conditional distributions defined over a fixed set \(\hat{\mathcal{W}}\) given \(S\), and let \(g(S,\hat{W})\colon\mathcal{S}\times\hat{\mathcal{W}}\to\mathbb{R}\) be a given function. Then, denote \[\mathfrak{T}_{\alpha,P_{S}}(\nu_{S},p_{\hat{W}|S},q_{\hat{W}|S},g)\coloneqq\mathbb{E}_{\nu_{S}}\Big{[}D_{\alpha}\big{(}p_{\hat{W}|S}\|q_{\hat{W}|S}\big{)}\Big{]}+\log\mathbb{E}_{P_{S}q_{\hat{W}|S}}\Big{[}e^{g(S,\hat{W})}\Big{]}. \tag{15}\] For \(\alpha=1\), as already mentioned, the Renyi divergence coincides with the KL-divergence; and, so, the first term in that case is \(D_{KL}\big{(}p_{\hat{W}|S}\nu_{S}\|q_{\hat{W}|S}\nu_{S}\big{)}\). ### Tail Bound Recall the definitions (4), (5), and (15). The following theorem states the main _data-dependent_ tail bound of this paper. The bound is general and will be specialized to specific settings in the next section. We remind the reader that the term _data-dependent_ is used here to refer to bounds whose value may change depending on the values of \((S,W)\). This does not include bounds that depend on \((S,W)\) only through their distribution \(P_{S,W}\), such as those of (Xu and Raginsky, 2017), which are then, in this sense, _data-independent_. The proof of the theorem is given in Appendix D.3. **Theorem 5**: _Let \(f(S,W)\colon\mathcal{S}\times\mathcal{W}\to\mathbb{R}\) and \(\Delta(S,W)\colon\mathcal{S}\times\mathcal{W}\to\mathbb{R}^{+}\). Fix arbitrarily the set \(\hat{\mathcal{W}}\) and define arbitrarily \(g(S,\hat{W})\colon\mathcal{S}\times\hat{\mathcal{W}}\to\mathbb{R}\). Then, for any \(\delta\in\mathbb{R}^{+}\), with probability at least \(1-\delta\),_ \[f(S,W)\leq\Delta(S,W), \tag{16}\] _if either of the following two conditions holds:_ 1. _For some_ \(\epsilon\in\mathbb{R}\) _and any_ \(\nu_{S,W}\in\mathcal{F}^{\delta}_{S,W}\coloneqq\mathcal{G}^{\delta}_{S,W}\bigcap\mathcal{S}_{S,W}(f(s,w)-\Delta(s,w))\)_, it holds that_ \[\inf_{p_{\hat{W}|S}\in\mathcal{Q}(\nu_{S,W})}\inf_{\lambda>0,q_{\hat{W}|S}}\Big{\{}\mathfrak{T}_{1,P_{S}}(\nu_{S},p_{\hat{W}|S},q_{\hat{W}|S},\lambda g)-D_{KL}(\nu_{S,W}\|P_{W|S}\nu_{S})\] \[-\lambda\big{(}\mathbb{E}_{\nu_{S,W}}\big{[}\Delta(S,W)\big{]}-\epsilon\big{)}\Big{\}}\leq\log(\delta),\] (17) _where_ \(\nu_{S}\) _is the marginal distribution of_ \(S\) _under_ \(\nu_{S,W}\) _and_ \(\mathcal{Q}(\nu_{S,W})\) _is the set of conditionals_ \(p_{\hat{W}|S}\) _that satisfy_ \[\mathbb{E}_{\nu_{S,W}p_{\hat{W}|S}}\big{[}\Delta(S,W)-g(S,\hat{W})\big{]}\leq\epsilon.\] (18) 2.
_For some_ \(\alpha>1\) _and any_ \(\nu_{S,W}\in\mathcal{F}^{\delta}_{S,W}\)_, it holds that_ \[\inf_{q_{W|S},\lambda\geq\alpha/(\alpha-1)}\Big{\{}\mathfrak{T}_ {\alpha,P_{S}}(\nu_{S},\nu_{W|S},q_{W|S}, \lambda\log(f))-D_{KL}(\nu_{S,W}\|P_{W|S}\nu_{S})\] \[-\lambda\Big{(}\mathbb{E}_{\nu_{S}}\log\mathbb{E}_{\nu_{W|S}} \big{[}\Delta(S,W)\big{]}\Big{)}\Big{\}}\leq\log(\delta).\] (19) It is easily seen that the result of Theorem 5 subsumes that of Theorem 4, which can then be seen as a specific case. The compressibility approach that we undertook for the proof of Theorem 4 can still be used here, however, with suitable amendments. In particular, the bound of Theorem 5 requires a condition to hold for every \(\nu_{s,w}\in\mathcal{F}^{\delta}_{S,W}\subseteq\mathcal{G}^{\delta}_{S,W}\). This is equivalent to _covering_ all sequences \((S^{m},W^{m})\) whose empirical distributions \(Q\) are sufficiently close to \(P_{S,W}\) in the sense of (4), using the \(\hat{W}\) defined by \(p_{\hat{W}|S}\). Furthermore, the distribution \(q_{\hat{W}|S}\) is the one used to build the (part of the) hypothesis book \(\mathcal{H}_{m,Q}\) (see the proof of Theorem 4 for a definition of \(\mathcal{H}_{m,Q}\)). The bound (16) holds with probability \((1-\delta)\) with respect to the joint distribution \(P_{S,W}\). It should be noted that if a learning algorithm \(\mathcal{A}(S)\) induces \(P_{S,W}\), one can consider the bound (16) with respect to any alternate distribution \(\tilde{P}_{S,W}=P_{S}\tilde{P}_{W|S}\) and substitute accordingly in the conditions of the theorem, as is common in PAC-Bayes literature, see, e.g., (Dziugaite and Roy, 2017; Neyshabur et al., 2018). Furthermore, in our framework of compressibility, \(\epsilon\) stands for the allowed level of average distortion in (18). The specific case \(\epsilon=0\) corresponds to lossless compression; and, it is clear that allowing a non-zero average distortion level, i.e., \(\epsilon\neq 0\), can yield a strictly tighter bound (16). In fact, as will be shown in the subsequent sections, many known data-dependent PAC-Bayes and intrinsic dimension-based bounds can be recovered from the "lossless compression" case. Also, for \(\epsilon=0\) the condition (17) of part (i.) of the theorem with the choices \(\hat{\mathcal{W}}\coloneqq\mathcal{W}\), \(g(s,\hat{w})\coloneqq f(s,\hat{w})\), and \(p_{\hat{W}|S}\coloneqq\nu_{W|S}\) reduces to \[\inf_{q_{W|S},\lambda>0}\bigg{\{}\mathbb{E}_{\nu_{S,W}}\bigg{[} \log\bigg{(}\frac{\mathrm{d}P_{W|S}}{\mathrm{d}q_{W|S}}\bigg{)}+\log\mathbb{E} _{P_{S}q_{W|S}}\Big{[}e^{\lambda f(S,W)}\Big{]}-\lambda\Delta(S,W)\bigg{]} \bigg{\}}\leqslant\log(\delta). \tag{20}\] Besides, since \(\mathcal{F}_{S,W}^{\delta}\subseteq\mathcal{G}_{S,W}^{\delta}\), the theorem holds if we consider all \(\nu_{S,W}\in\mathcal{G}_{S,W}^{\delta}\). The latter set, although possibly larger, seems to be more suitable for analytical investigations. Furthermore, for any \(\nu_{S,W}\in\mathcal{F}_{S,W}^{\delta}\), the distortion criterion (18) is satisfied whenever \[\mathbb{E}_{\nu_{S,W}p_{\hat{W}|S}}\big{[}f(S,W)-g(S,\hat{W})\big{]}\leqslant\epsilon. \tag{21}\] This condition, as we will see in the following sections, is often easier to consider. 
In particular, (21) can be further simplified under the Lipschitz assumption, _i.e.,_ when \(\forall w,\hat{w},s\colon|f(s,w)-g(s,\hat{w})|\leqslant\mathfrak{L}\rho(w,\hat{w})\), where \(\rho\colon\mathcal{W}\times\hat{\mathcal{W}}\to\mathbb{R}^{+}\) is a distortion measure over \(\mathcal{W}\times\hat{\mathcal{W}}\). In this case, a sufficient condition to meet the distortion criterion (18) is \[\mathbb{E}_{\nu_{S,W}p_{\hat{W}|S}}\big{[}\rho(W,\hat{W})\big{]}\leqslant\epsilon/(2\mathfrak{L}). \tag{22}\] We close this section by mentioning that the result of Theorem 5 can be extended easily to accommodate any additional possible stochasticity of the algorithm, e.g., as in (Harutyunyan et al., 2021). The result is stated in Corollary 14 in the appendices. ### Tail bound on the expectation In this section, we establish a data-dependent tail bound on the expectation of \(f(S,W)\). That is, the bound is on \(\mathbb{E}_{W\sim\pi}[f(S,W)]\) for every distribution \(\pi\) over \(\mathcal{W}\).6 Footnote 6: The result trivially holds if the choices of \(\pi\) are restricted to a given subset of distributions, as used in Theorem 9. **Theorem 6**: _Let \(f(S,W)\colon\mathcal{S}\times\mathcal{W}\to\mathbb{R}\) and \(\Delta(S,\pi)\colon\mathcal{S}\times\mathcal{P}_{\mathcal{W}}\to\mathbb{R}^{+}\). Fix any set \(\hat{\mathcal{W}}\) and define arbitrarily \(g(S,\hat{W})\colon\mathcal{S}\times\hat{\mathcal{W}}\to\mathbb{R}\). Then, for any \(\delta\in\mathbb{R}^{+}\), with probability at least \(1-\delta\),_ \[\forall\pi:\mathbb{E}_{W\sim\pi}[f(S,W)]\leqslant\Delta(S,\pi), \tag{23}\] _if either of the following two conditions holds:_ 1. _For_ \(\epsilon\in\mathbb{R}\) _and for any choice of distributions_ \(\pi_{S}\) _(indexed by_ \(S\)_) over_ \(\mathcal{W}\) _and any distribution_ \[\nu_{S}\in\mathcal{F}_{S,\pi_{S}}^{\delta}\coloneqq\mathcal{G}_{S}^{\delta}\bigcap\mathcal{S}_{S}(\mathbb{E}_{W\sim\pi_{s}}[f(s,W)]-\Delta(s,\pi_{s})),\] \[\inf_{p_{\hat{W}|S}\in\mathcal{Q}(\nu_{S}\pi_{S})}\inf_{q_{\hat{W}|S},\lambda>0}\Big{\{}\mathfrak{T}_{1,P_{S}}(\nu_{S},p_{\hat{W}|S},q_{\hat{W}|S},\lambda g)-\lambda(\mathbb{E}_{\nu_{S}}[\Delta(S,\pi_{S})]-\epsilon)\Big{\}}\leqslant\log(\delta),\] (24) _where_ \(\mathfrak{T}_{\alpha,P_{S}}\) _is defined in (15) and_ \(\mathcal{Q}(\nu_{S}\pi_{S})\) _contains all the distributions_ \(p_{\hat{W}|S}\) _such that_ \[\mathbb{E}_{\nu_{S}p_{\hat{W}|S}}\big{[}\Delta(S,\pi_{S})-g(S,\hat{W})\big{]}\leqslant\epsilon.\] (25) 2. _For some_ \(\alpha>1\) _and any choice of distributions_ \(\pi_{S}\) _over_ \(\mathcal{W}\) _and any distribution_ \(\nu_{S}\in\mathcal{F}_{S,\pi_{S}}^{\delta}\)_,_ \[\inf_{q_{W|S},\lambda\geqslant\alpha/(\alpha-1)}\Big{\{}\mathfrak{T}_{\alpha,P_{S}}(\nu_{S},\pi_{S},q_{W|S},\lambda\log(f))-\lambda\mathbb{E}_{\nu_{S}}[\log\Delta(S,\pi_{S})]\Big{\}}\leqslant\log(\delta).\] (26) The theorem is proved in Appendix D.4. The result, which is interesting in its own right, in particular allows us to establish a formal connection between the variable-size compressibility approach that we develop in this paper and the seemingly unrelated PAC-Bayes approaches. In fact, as we will see in Section 4.2, several known PAC-Bayes bounds can be recovered as special cases of our Theorem 6.
It should be noted that prior to this work such connections were established only for a few limited settings, such as the compressibility framework of (Littlestone and Warmuth, 1986), which differs from ours, and for which connections with PAC-Bayes approaches were established by Blum and Langford (2003). Furthermore, observe that for \(\epsilon=0\), the condition (24) with the choices \(\hat{\mathcal{W}}\coloneqq\mathcal{W}\), \(g(s,\hat{w})\coloneqq f(s,w)\), and \(p_{\hat{W}|S}\coloneqq\nu_{W|S}\) yields \[\inf_{q_{W|S},\lambda>0}\bigg{\{}D_{KL}\big{(}\pi_{S}\nu_{S}\|q_{W|S}\nu_{S}\big{)}+\log\mathbb{E}_{P_{S}q_{W|S}}\Big{[}e^{\lambda f(S,W)}\Big{]}-\lambda\mathbb{E}_{\nu_{S}}[\Delta(S,\pi_{S})]\bigg{\}}\leqslant\log(\delta). \tag{27}\] ## 4 Applications In this section, we apply our general bounds of Section 3 to various settings; and we show how one can recover, and sometimes obtain _new and possibly tighter_, _data-dependent_ bounds. The settings that we consider include rate-distortion theoretic, PAC-Bayes, and dimension-based approaches. As these have so far been thought of, and developed, largely independently of each other in the related literature, this in particular unveils the strength and unifying character of our variable-size compressibility framework. We remind the reader that the fixed-size compressibility approach of (Sefidgaran et al., 2022), which only allows obtaining _data-independent_ bounds, is not applicable here. ### Rate-distortion theoretic bounds In Corollary 10, in the Appendix, we show how the tail bound of (Sefidgaran et al., 2022, Theorem 10) can be recovered using our Theorem 5. The latter allows us to establish similar bounds for any \(\alpha\geqslant 1\). Furthermore, the bound allows us to extend this corollary. As an example, consider \(\epsilon=0\) and \(f(s,w)\coloneqq nD_{KL}(\hat{\mathcal{L}}(s,w)\|\mathcal{L}(w))\), where \(\hat{\mathcal{L}}(s,w)\) denotes the empirical risk, \(\mathcal{L}(w)\) the population risk, and, for \(a,b\in(0,1)\), \(D_{KL}(a\|b)\coloneqq a\log(a/b)+(1-a)\log((1-a)/(1-b))\) is the binary KL divergence. This choice was used previously by (Seeger, 2002) to derive a PAC-Bayes bound. Now, using the following inequality of (Tolstikhin and Seldin, 2013), \(D_{KL}^{-1}(a\|b)\leq a+\sqrt{2ab}+2b\), where \(D_{KL}^{-1}(a\|b)\coloneqq\sup\{p\in[0,1]\colon D_{KL}(p\|a)\leq b\}\), we establish that, with probability \(1-\delta\), \(\operatorname{gen}(S,W)\leq\sqrt{\hat{\mathcal{L}}(S,W)C/n}+C/n\), where \(C\coloneqq 4\sigma^{2}\big{(}\sup_{\nu_{S,W}\in\mathcal{G}_{S,W}^{\delta}}I(S;W)+\log(2\sqrt{n}/\delta)\big{)}\). This yields a generalization bound of order \(\mathcal{O}(1/n)\) whenever \(\hat{\mathcal{L}}(S,W)=0\). A similar approach can be taken for the lossy case, as used in the PAC-Bayes approach by Biggs and Guedj (2022). ### PAC-Bayes bounds PAC-Bayes bounds were introduced initially by McAllester (1998, 1999); and, since then, developed further in many other works. The reader is referred to Guedj (2019); Alquier (2021) for a summary of recent developments. In this section, we show that our general framework not only recovers several existing PAC-Bayes bounds (in doing so, we focus on the most general ones) but also allows us to derive novel ones. Consider part i. of Theorem 6 and the condition (27). Let \(\lambda\coloneqq 1\) and set \(q_{W|S}\) to be a fixed, possibly data-dependent, distribution common to all \(\nu_{S,W}\).
This yields that with probability at least \((1-\delta)\) we have \[\forall\pi\colon\mathbb{E}_{W\sim\pi}[f(S,W)]\leq D_{KL}\big{(}\pi\|q_{W|S} \big{)}+\log\mathbb{E}_{P_{S}q_{W|S}}\Big{[}e^{f(S,W)}\Big{]}+\log\left(1/ \delta\right). \tag{28}\] The obtained bound (28) equals that of (Rivasplata et al., 2020, Theorem 1.ii). Similarly, derivations using our Theorem 5 and the condition (20) allow to recover the result of (Rivasplata et al., 2020, Theorem 1.i). As observed by Clerico et al. (2022), these recovered bounds are themselves general enough to subsume most of other existing PAC-Bayes bounds including those of (McAllester, 1998, 1999; Seeger, 2002; Catoni, 2003; Maurer, 2004; Catoni, 2007; Germain et al., 2009; Tolstikhin and Seldin, 2013; Thiemann et al., 2017). Similarly, part ii. of Theorem 6 with the choice \(\lambda=\alpha/(\alpha-1)\) allows to recover the result of (Begin et al., 2016) (see also (Viallard et al., 2021, Theorem 1)). Our variable-size compressibility framework and our general bounds of Section (3) also allow to establish _novel_ PAC-Bayes type bounds. The following proposition, proved in Appendix D.5, provides an example of such bounds. **Proposition 7**: _Let the set \(\hat{\mathcal{W}}\) and function \(g(S,\hat{W})\) be arbitrary. Let \(q_{\hat{W}|S}\) be a possibly data-dependent prior over \(\hat{\mathcal{W}}\)._ 1. _With probability at least_ \((1-\delta)\)_, we have_ \[\forall\pi\colon\mathbb{E}_{\pi}[f(S,W)]\leq D_{KL}\Big{(}p_{\hat{W}|S,\pi}\|q _{\hat{W}|S}\Big{)}+\log\mathbb{E}_{P_{S}q_{\hat{W}|S}}\Big{[}e^{g(S,\hat{W}) }\Big{]}+\log\left(1/\delta\right)+\epsilon,\] (29) _where for any_ \(\pi\)_,_ \(p_{\hat{W}|S,\pi}\) _is such that_ \(\mathbb{E}_{\pi P_{\hat{W}|S,\pi}}\big{[}f(S,W)-g(S,\hat{W})\big{]}\leq\epsilon\)_._ 2. _Let_ \(p_{\hat{W}|S,W}=p_{\hat{W}|W}\) _be any distribution such that for any_ \(\nu_{s,w}\in\mathcal{F}_{S,W}^{\delta}\)_,_ \(\mathbb{E}_{\nu_{S,W}p_{\hat{W}|W}}\big{[}\Delta(S,W)-g(S,\hat{W})\big{]}\leq\epsilon\)_. Denote the induced conditional distribution of_ \(\hat{W}\) _given_ \(S\)_, under_ \(P_{W|S}p_{\hat{W}|W,S}\) _where_ \(p_{\hat{W}|W,S}=p_{\hat{W}|W}\)_, as_ \(p_{\hat{W}|S}^{*}\)_. Then, with probability at least_ \(1-\delta\)_,_ \[f(S,W)\leq\mathbb{E}_{p_{\hat{W}|W}}\bigg{[}\log\left(\frac{\mathrm{d}p_{W|S}^ {*}}{\mathrm{d}q_{\hat{W}|S}}\right)\bigg{]}+\log\mathbb{E}_{P_{S}q_{\hat{W}|S} }\Big{[}e^{g(S,\hat{W})}\Big{]}+\log\left(1/\delta\right)+\epsilon.\] (30) The bound (29) is closely related to results obtained in (Langford and Seeger, 2001; Langford and Shawe-Taylor, 2002; Biggs and Guedj, 2022). In particular, considering \(f(S,W)\) as a function of the \(\gamma\)-margin generalization error and \(g(S,\hat{W})\) as a function of the generalization error of the 0-1 loss function with threshold \(\gamma/2\), sufficient conditions to meet the distortion criterion are provided in terms of the margin in (Biggs and Guedj, 2022). The results are then applied to various setups ranging from SVM to ReLU neural networks. It is noteworthy that, under the Lipschitz assumption for \(f(S,W)\), an alternate sufficient condition can be derived. For example, suppose that \(\hat{\mathcal{W}}\subseteq\mathcal{W}\) and \(f(s,w)=g(s,w)\coloneqq\lambda\operatorname{gen}(s,w)^{2}\) for some \(\lambda>0\). 
Then, for a \(B\)-bounded and \(\mathfrak{L}\)-Lipschitz loss function, _i.e.,_ if \(\forall z\in\mathcal{Z},w,w^{\prime}\in\mathcal{W}\colon|\ell(z,w)-\ell(z,w^{\prime})|\leq\mathfrak{L}\rho(w,w^{\prime})\), where \(\rho\colon\mathcal{W}\times\mathcal{W}\to\mathbb{R}^{+}\) is a distortion function on \(\mathcal{W}\), a sufficient condition for satisfying the distortion criterion is that \(\mathbb{E}_{\nu_{S,W}p_{\hat{W}|S}}[\rho(W,\hat{W})]\leq\epsilon/(4B\mathfrak{L}\lambda)\) holds true for every \(\nu_{s,w}\in\mathcal{G}_{S,W}^{\delta}\). If \(\rho(w,\hat{w})\coloneqq\|w-\hat{w}\|\), an example is \(\hat{W}\coloneqq W+Z\), where \(Z\) is an independent noise such that \(\mathbb{E}[\|Z\|]\leq\epsilon/(4B\mathfrak{L}\lambda)\). In particular, this choice prevents the upper bound from taking very large (infinite) values. To the best of our knowledge, the bound (30) has _not_ been reported in the PAC-Bayes literature; it appears to be a _novel_ one. This bound subsumes that of (Rivasplata et al., 2020, Theorem 1.i), which it recovers with the specific choices \(\hat{\mathcal{W}}\coloneqq\mathcal{W}\), \(g(s,\hat{w})\coloneqq f(s,\hat{w})\), and \(p_{\hat{W}|W}\coloneqq\delta_{W}\) (a Dirac delta function). ### Dimension-based bounds Prior to this work, the connection between compressibility and intrinsic dimension-based approaches was established in (Sefidgaran et al., 2022). However, as the framework introduced therein is of a "fixed-size" compressibility type and only allows establishing data-independent bounds, the connection was made only to the intrinsic dimensions of the _marginal_ distributions introduced by the algorithm. This departs from most of the proposed dimension-based bounds in the related literature, which are data-dependent, i.e., they depend on a particular dimension arising for a given \(S=s\). See, e.g., (Simsekli et al., 2020; Camuto et al., 2021; Hodgkinson et al., 2022). In Appendix B, we show how, using our variable-size compressibility, one can recover the main generalization error bound of (Simsekli et al., 2020). The approach can be extended similarly to derive (Camuto et al., 2021, Theorem 1). In what follows, we make a new connection between the generalization error and the rate-distortion of a process arising from the optimization trajectories. Denote the hypothesis along the trajectory at iteration or time \(t\) by \(W_{S}^{t}\coloneqq\mathcal{A}(S,t)\), abbreviated \(W^{t}\). The time \(t\) can be discrete or continuous. For continuous-time trajectories, we will use \(\mathsf{T}\coloneqq[0,1]\subseteq\mathbb{R}\); and, for discrete time, we use \(\mathsf{T}\coloneqq[t_{1}:t_{2}]\subseteq\mathbb{N}\). We denote the set of hypotheses along the trajectory by \(W^{\mathsf{T}}\). Simsekli et al. (2020) made a connection between the Hausdorff dimension of the _continuous-time_ SGD and \(\sup_{t\in[0,1]}\operatorname{gen}(S,W_{S}^{t})\). This finding has also been verified numerically (Simsekli et al., 2020). However, it is not entirely clear, at least to us, (i) how the supremum over time of the generalization error of the continuous-time model relates to the maximum of the generalization error of the discrete-time model (for which the Hausdorff dimension equals zero), and (ii) how the maximum of the generalization error over the optimization trajectory relates, order-wise, to the generalization error of the final model.
For this reason, we instead consider hereafter a discrete-time model for the optimization trajectory (compatible with common optimization algorithms such as SGD), together with the average of the generalization error along the trajectory. Specifically, let \[\operatorname{gen}(S,W_{S}^{\mathsf{T}})\coloneqq\frac{1}{\Delta t}\sum_{t\in\mathsf{T}}\operatorname{gen}(S,W_{S}^{t}), \tag{31}\] where \(\Delta t\coloneqq t_{2}-t_{1}\), \(t_{1},t_{2}\in\mathbb{N}\). In particular, the quantity (31) hinges on the premise that there is information residing in the optimization trajectories, an aspect that was already observed numerically in (Jastrzebskii et al., 2017; Jastrzebskii et al., 2020, 2021; Martin and Mahoney, 2021; Xing et al., 2018; Hodgkinson et al., 2022). For a given dataset, if \(t_{1}\) and \(\Delta t\) are large, intuition suggests that (31) is close to the "average" generalization error of the algorithm. In particular, this holds if we assume an ergodic invariant measure for the distribution of \(W^{\infty}\), as in (Camuto et al., 2021). The results of this section are expressed in terms of the rate-distortion functions of the process. Assume \(\hat{\mathcal{W}}\subseteq\mathcal{W}\). For any distribution \(\nu_{S}\) defined over \(\mathcal{S}\) and any conditional distribution \(Q_{W^{\mathsf{T}}|S}\) of \(W^{\mathsf{T}}\) given \(S\), define the rate-distortion of the optimization trajectory \(\mathsf{T}\coloneqq[t_{1}:t_{2}]\) as \[\mathfrak{R}\mathfrak{D}(\nu_{S},\epsilon;Q_{W^{\mathsf{T}}|S})\coloneqq\inf_{P_{\hat{W}^{\mathsf{T}}|S}}I(S;\hat{W}^{\mathsf{T}}),\quad\text{s.t.}\quad\mathbb{E}\big{[}\operatorname{gen}(S,W^{\mathsf{T}})-\operatorname{gen}(S,\hat{W}^{\mathsf{T}})\big{]}\leq\epsilon, \tag{32}\] where the mutual information and expectation are with respect to \(\nu_{S}Q_{W^{\mathsf{T}}|S}P_{\hat{W}^{\mathsf{T}}|S}\). Now, we establish a bound on the conditional expectation of \(\operatorname{gen}(S,W^{\mathsf{T}})\), which is stated for simplicity for the case \(\ell(z,w)\in[0,1]\). The theorem is proved in Appendix D.6. **Theorem 8**: _Suppose that \(\ell\colon\mathcal{Z}\times\mathcal{W}\to[0,1]\). Then, for any \(\epsilon\in\mathbb{R}\), with probability at least \(1-\delta\),_ \[\forall\pi\colon\mathbb{E}_{W^{\mathsf{T}}\sim\pi}\Big{[}\operatorname{gen}(S,W^{\mathsf{T}})\Big{]}\leq\sqrt{\Big{(}\sup_{\nu_{S}\in\mathcal{G}_{S}^{\delta}}\mathfrak{R}\mathfrak{D}(\nu_{S},\epsilon;\pi)+\log(1/\delta)\Big{)}/(2n)}+\epsilon.\] This result suggests that in order to have a small average generalization error, the trajectory should be 'compressible' for all empirical distributions that are within the \(\delta\)-vicinity of the true unknown distribution \(\mu^{\otimes n}\). This bound is not data-dependent, however. Theorem 9, which follows, provides one that is _data-dependent_; it relates the average generalization error to the rate-distortion of the optimization trajectory, induced for a given \(s\). Assume that the loss is \(\mathfrak{L}\)-Lipschitz, _i.e._, \(\forall z\in\mathcal{Z},w,\hat{w}\in\mathcal{W}\colon|\ell(z,w)-\ell(z,\hat{w})|\leq\mathfrak{L}\rho(w,\hat{w})\), where \(\rho\colon\mathcal{W}\times\hat{\mathcal{W}}\to\mathbb{R}^{+}\) is a distortion function on \(\mathcal{W}\).
Define the rate-distortion of the optimization trajectory \(\mathsf{T}\coloneqq[t_{1}:t_{2}]\) for any \(S=s\) and \(P_{W^{\mathsf{T}}|s}\) as \[\mathfrak{R}\mathfrak{D}(s,\epsilon;P_{W^{\mathsf{T}}|s})\coloneqq\inf_{P_{\hat{W}^{\mathsf{T}}|W^{\mathsf{T}},s}}I(W^{\mathsf{T}};\hat{W}^{\mathsf{T}}),\quad\text{s.t.}\quad\mathbb{E}\big{[}\rho(W^{\mathsf{T}},\hat{W}^{\mathsf{T}})\big{]}\leq\epsilon, \tag{33}\] where the mutual information and expectation are with respect to \(P_{W^{\mathsf{T}}|s}P_{\hat{W}^{\mathsf{T}}|W^{\mathsf{T}},s}\), and where, with a slight abuse of notation, \(\rho(W^{\mathsf{T}},\hat{W}^{\mathsf{T}})\coloneqq\frac{1}{\Delta t}\sum_{t\in\mathsf{T}}\rho(W^{t},\hat{W}^{t})\) is the average distortion along the iterations. Note that the defined rate-distortion function depends on the distortion function \(\rho\), which is assumed to be fixed. Conceptually, there is a difference between the two rate-distortion functions defined above. Namely, while (32) considers the joint _compression_ of all trajectories \(W^{\mathsf{T}}\sim Q_{W^{\mathsf{T}}|S}\) when \(S\sim\nu_{S}\), (33) considers the joint _compression_ of all iterations \(W^{\mathsf{T}}\sim Q_{W^{\mathsf{T}}|s}\) for a particular dataset \(s\). Now, we are ready to state the main result of this section. The proof is given in Appendix D.7. **Theorem 9**: _Suppose that the loss \(\ell\colon\mathcal{Z}\times\mathcal{W}\to[0,1]\) is \(\mathfrak{L}\)-Lipschitz with respect to \(\rho\colon\mathcal{W}\times\hat{\mathcal{W}}\to\mathbb{R}^{+}\). Suppose that the optimization algorithm induces \(P_{W^{\mathsf{T}}|S}\), denoted as \(\pi_{S}\) for brevity. Then, for any \(\epsilon\in\mathbb{R}\), with probability at least \(1-\delta\),_ \[\mathbb{E}_{W^{\mathsf{T}}\sim\pi_{S}}\Big{[}\mathrm{gen}(S,W^{\mathsf{T}})\Big{]}\leqslant\sqrt{\big{(}\mathfrak{R}\mathfrak{D}(S,\epsilon;\pi_{S})+\log(2nM/\delta)\big{)}/(2n-1)+4\mathfrak{L}\epsilon},\] _where \(M\coloneqq\exp\Bigl{(}\sup_{\nu_{S}\in\mathcal{G}_{S}^{\delta}}\mathbb{E}_{\pi_{S}}\log\frac{\mathrm{d}\pi_{S}}{\mathrm{d}[\pi_{S}\nu_{S}]_{W^{\mathsf{T}}}}\Bigr{)}\) and \([\pi_{S}\nu_{S}]_{W^{\mathsf{T}}}\) denotes the marginal distribution of \(W^{\mathsf{T}}\) under \(\pi_{S}\nu_{S}\)._ The _coupling_ coefficient \(M\) is similar to one defined in Simsekli et al. (2020, Definition 5) (see Assumption 11 in the Appendix); intuitively, it measures the dependence of \(W^{\mathsf{T}}\) on \(S\). This term can be bounded by \(I_{\infty}(S;W^{\mathsf{T}})\), a quantity that is often considered in the related literature (Hodgkinson et al., 2022; Lim et al., 2022). The result of Theorem 9 establishes a connection between the generalization error and new data-dependent intrinsic dimensions. To see this, let \(R_{\Delta t}(s,\epsilon;P_{W^{\mathsf{T}}|S})\coloneqq\mathfrak{R}\mathfrak{D}(s,\epsilon;P_{W^{\mathsf{T}}|S})/\Delta t\). Assume that the conditional \(P_{W^{\mathsf{T}}|S}\) is stationary and ergodic, and consider the limit \(R_{\infty}(s,\epsilon)\coloneqq\lim_{\Delta t\to\infty}R_{\Delta t}(s,\epsilon;P_{W^{\mathsf{T}}|S})\). It has been shown that for i.i.d. processes and also for the Gauss-Markov process, the convergence to this limit occurs at rate \(1/\sqrt{\Delta t}\) (Kostina and Verdu, 2012; Tian and Kostina, 2019). Hence, this limit gives a reasonable estimate of \(R_{\Delta t}(s,\epsilon;P_{W^{\mathsf{T}}|S})\), even for modest \(\Delta t\). Rezagah et al.
(2016) and Geiger and Koch (2019) studied the ratio \(\lim_{\epsilon\to 0}\frac{R_{\infty}(s,\epsilon)}{\log(1/\epsilon)}\), known as _rate-distortion dimension of the process_. Using a similar approach as in (Sefidgaran et al., 2022, Proof of Corollary 7), the generalization bound of Theorem 8 can be expressed in terms of the rate-distortion dimension of the optimization trajectories. Rezagah et al. (2016) and Geiger and Koch (2019) have shown that for a large family of distortions \(\rho(w,\hat{w})\), the rate-distortion dimension of the process coincides with the Renyi information dimension of the process, and determines the fundamental limits of compressed sensing settings (Renyi, 1959; Jalali and Poor, 2014). The mentioned literature also provides practical methods that allow measuring efficiently the compressibility of the process. Finally, in Lindenstrauss and Tsukamoto (2018), it has been shown how this is related to the metric mean dimension, a notion that emerged in the literature of dynamical systems. ## Acknowledgments The authors would like to thank Umut Simsekli and Yijun Wan for some helpful discussions on some aspects of this work.
2305.01281
Addressing Parameter Choice Issues in Unsupervised Domain Adaptation by Aggregation
We study the problem of choosing algorithm hyper-parameters in unsupervised domain adaptation, i.e., with labeled data in a source domain and unlabeled data in a target domain, drawn from a different input distribution. We follow the strategy to compute several models using different hyper-parameters, and, to subsequently compute a linear aggregation of the models. While several heuristics exist that follow this strategy, methods are still missing that rely on thorough theories for bounding the target error. In this turn, we propose a method that extends weighted least squares to vector-valued functions, e.g., deep neural networks. We show that the target error of the proposed algorithm is asymptotically not worse than twice the error of the unknown optimal aggregation. We also perform a large scale empirical comparative study on several datasets, including text, images, electroencephalogram, body sensor signals and signals from mobile phones. Our method outperforms deep embedded validation (DEV) and importance weighted validation (IWV) on all datasets, setting a new state-of-the-art performance for solving parameter choice issues in unsupervised domain adaptation with theoretical error guarantees. We further study several competitive heuristics, all outperforming IWV and DEV on at least five datasets. However, our method outperforms each heuristic on at least five of seven datasets.
Marius-Constantin Dinu, Markus Holzleitner, Maximilian Beck, Hoan Duc Nguyen, Andrea Huber, Hamid Eghbal-zadeh, Bernhard A. Moser, Sergei Pereverzyev, Sepp Hochreiter, Werner Zellinger
2023-05-02T09:34:03Z
http://arxiv.org/abs/2305.01281v1
# Addressing Parameter Choice Issues in Unsupervised Domain Adaptation by Aggregation ###### Abstract We study the problem of choosing algorithm hyper-parameters in unsupervised domain adaptation, i.e., with labeled data in a source domain and unlabeled data in a target domain, drawn from a different input distribution. We follow the strategy to compute several models using different hyper-parameters, and, to subsequently compute a linear aggregation of the models. While several heuristics exist that follow this strategy, methods are still missing that rely on thorough theories for bounding the target error. In this turn, we propose a method that extends weighted least squares to vector-valued functions, e.g., deep neural networks. We show that the target error of the proposed algorithm is asymptotically not worse than twice the error of the unknown optimal aggregation. We also perform a large scale empirical comparative study on several datasets, including text, images, electroencephalogram, body sensor signals and signals from mobile phones. Our method1 outperforms deep embedded validation (DEV) and importance weighted validation (IWV) on all datasets, setting a new state-of-the-art performance for solving parameter choice issues in unsupervised domain adaptation with theoretical error guarantees. We further study several competitive heuristics, all outperforming IWV and DEV on at least five datasets. However, our method outperforms each heuristic on at least five of seven datasets. Footnote 1: Large scale benchmark experiments are available at [https://github.com/Xpitfire/iwa](https://github.com/Xpitfire/iwa); [email protected], [email protected] ## 1 Introduction The goal of _unsupervised domain adaptation_ is to learn a model on unlabeled data from a _target_ input distribution using labeled data from a different _source_ distribution (Pan & Yang, 2010; Ben-David et al., 2010). If this goal is achieved, medical diagnostic systems can successfully be trained on unlabeled images using labeled images with a different modality (Varsavsky et al., 2020; Zou et al., 2020); segmentation models for natural images can be learned using only labeled data from computer simulations Peng et al. (2018); natural language models can be learned from unlabeled biomedical abstracts by means of labeled data from financial journals (Blitzer et al., 2006); industrial quality inspection systems can be learned on unlabeled data from new products using data from related products (Jiao et al., 2019; Zellinger et al., 2020). However, missing target labels combined with distribution shift makes parameter choice a hard problem (Sugiyama et al., 2007; You et al., 2019; Saito et al., 2021; Zellinger et al., 2021; Musgrave et al., 2021). Often, one ends up with a sequence of models, e.g., originating from different hyper-parameter configurations (Ben-David et al., 2007; Saenko et al., 2010; Ganin et al., 2016; Long et al.). Although methods with mathematical error guarantees have been proposed to select the best model in the sequence (Sugiyama et al., 2007; Kouw et al., 2019; You et al., 2019; Zellinger et al., 2021), methods for learning aggregations of the models are either heuristics or their theoretical guarantees are limited by severe assumptions (cf. Wilson & Cook (2020)).
Typical aggregation approaches are (a) to learn an aggregation on source data only (Nozza et al., 2016), (b) to learn an aggregation on a set of (unknown) labeled target examples (Xia et al., 2013; Dai et al., 2007; III & Marcu, 2006; Duan et al., 2012), (c) to learn an aggregation on target examples (pseudo-)labeled based on confidence measures of the given models (Zhou et al., 2021; Ahmed et al., 2022; Sun, 2012; Zou et al., 2018; Saito et al., 2017), (d) to aggregate the models based on data-structure specific transformations (Yang et al., 2012; Ha & Youn, 2021), and, (e) to use specific (possibly not available) knowledge about the given models, such as information obtained at different time-steps of its gradient-based optimization process (French et al., 2018; Laine & Aila, 2017; Tarvainen & Valpola, 2017; Athiwaratkun et al., 2019; Al-Stouhi & Reddy, 2011) or the information that the given models are trained on different (source) distributions (Hoffman et al., 2018; Rakshit et al., 2019; Xu et al., 2018; Kang et al., 2020; Zhang et al., 2015). One problem shared among all methods mentioned above is that they cannot guarantee a small error, even if the sample size grows to infinity. See Figure 1 for a simple illustrative example. In this work, we propose (to the best of our knowledge) the first algorithm for computing aggregations of vector-valued models for unsupervised domain adaptation with target error guarantees. We extend the _importance weighted least squares algorithm_(Shimodaira, 2000) and corresponding recently proposed error bounds (Gizewski et al., 2022) to linear aggregations of vector-valued models. The importance weights are the values of an estimated ratio between target and source density evaluated at the examples. Every method for density-ratio estimation can be used as a basis for our approach, e.g. Sugiyama et al. (2012); Kanamori et al. (2012) and references therein. Our error bound proves that the target error of the computed aggregation is asymptotically at most twice the target error of the optimal aggregation. In addition, we perform extensive empirical evaluations on several datasets with academic data (Transformed Moons), text data (Amazon Reviews (Blitzer et al., 2006)), images (MiniDomainNet (Peng et al., 2019; Zellinger et al., 2021)), electroencephalography signals (Sleep-EDF (Eldele et al., 2021; Goldberger et al., 2000)), body sensor signals (UCI-HAR (Anguita et al., 2013), WISDM (Kwapisz et al., 2011)), and, sensor signals from mobile phones and smart watches (HHAR (Stisen et al., 2015)). Figure 1: Unsupervised domain adaptation problem (Shimodaira, 2000; Sugiyama et al., 2007; You et al., 2019). Left: Source distribution (solid) and target distribution (dashed). Right: A sequence of different linear models (dashed) is used to find the optimal linear aggregation of the models (solid). Model selection methods (Sugiyama et al., 2007; Kouw et al., 2019; You et al., 2019; Zellinger et al., 2021) cannot outperform the best single model in the sequence, confidence values as used in Zou et al. (2018) are not available, and, approaches based on averages or tendencies of majorities of models (Saito et al., 2017) suffer from a high fraction of large-error-models in the sequence. In contrast, our approach (dotted-dashed) is nearly optimal. In addition, the model computed by our method provably approaches the optimal linear aggregation for increasing sample size. For further details on this example we refer to Section C in the Supplementary Material. 
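To complement Figure 1, the following self-contained Python sketch builds a toy one-dimensional problem in the same spirit: a non-linear labeling function under covariate shift, a handful of fixed candidate models, and a comparison of the best single model against a linear aggregation fitted by importance weighted least squares on source data (the aggregation rule of Algorithm 1 in Section 3; a general implementation sketch is given there). All numerical choices below (distributions, labeling function, candidate models) are illustrative and are not the exact example of Section C in the Supplementary Material; the true density ratio is used here, whereas in practice it must be estimated.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Covariate shift: same labeling function, different input distributions.
p_x, q_x = norm(0.0, 1.0), norm(1.5, 0.75)            # source / target inputs
label = lambda x: x ** 2 + 0.1 * rng.standard_normal(x.shape)

n, m = 20_000, 20_000
x_src = p_x.rvs(n, random_state=1); y_src = label(x_src)
x_tgt = q_x.rvs(m, random_state=2); y_tgt = label(x_tgt)   # oracle labels, evaluation only

# A small sequence of fixed candidate "models" (stand-ins for hyper-parameter choices).
models = [lambda x: x, lambda x: np.ones_like(x), np.tanh]

F_src = np.stack([f(x_src) for f in models])           # shape (l, n)
F_tgt = np.stack([f(x_tgt) for f in models])           # shape (l, m)

def target_mse(f):
    return np.mean((f(x_tgt) - y_tgt) ** 2)

def aggregate(weights_on_src):
    G = F_tgt @ F_tgt.T / m                             # Gram matrix on unlabeled target inputs
    g = F_src @ (weights_on_src * y_src) / n            # (importance weighted) source term
    c = np.linalg.solve(G, g)
    return lambda x: sum(ci * f(x) for ci, f in zip(c, models))

beta = q_x.pdf(x_src) / p_x.pdf(x_src)                  # true density ratio (toy setting only)

print("best single model target MSE     :", min(target_mse(f) for f in models))
print("aggregation with beta = 1 (no IW):", target_mse(aggregate(np.ones(n))))
print("importance weighted aggregation  :", target_mse(aggregate(beta)))
```

On this toy problem the importance weighted aggregation comes close to the target-optimal combination of the candidate models, while the best single model and the unweighted variant remain noticeably worse, mirroring the qualitative picture of Figure 1.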
We compute aggregations of models obtained from different hyper-parameter settings of 11 domain adaptation methods (e.g., DANN (Ganin et al., 2016) and Deep-Coral Sun & Saenko (2016)). Our method sets a new state of the art for methods with theoretical error guarantees, namely importance weighted validation (IWV) (Sugiyama et al., 2007) and deep embedded validation (DEV) (Kouw et al., 2019), on all datasets. We also study (1) classical least squares aggregation on source data only, (2) majority voting on target predictions, (3) averaging over model confidences, and (4) learning based on pseudo-labels. All of these heuristics outperform IWV and DEV on at least five of seven datasets, which is a result of independent interest. In contrast, our method outperforms each heuristic on at least five of seven datasets. Our main contributions are summarized as follows: * We propose the (to the best of our knowledge) first algorithm for ensemble learning of vector-valued models in (single-source) unsupervised domain adaptation that satisfies a non-trivial target error bound. * We prove that the target error of our algorithm is asymptotically (for increasing sample sizes) at most twice the target error of the unknown optimal aggregation. * We outperform IWV and DEV, and therefore set a new state-of-the-art performance for re-solving parameter choice issues under theoretical target error guarantees. * We describe four heuristic baselines which all outperform IWV and DEV on at least five of seven datasets. Our method outperforms each heuristic on at least five of seven datasets. * Our method tends to be more stable than others w.r.t. adding inaccurate models to the given sequence of models. ## 2 Related Work It is well known that aggregations of models in an ensemble often outperform individual models (Dong et al., 2020; Goodfellow et al., 2016). Traditional ensemble methods that have shown the advantage of aggregation are Boosting (Schapire, 1990; Breiman, 1998), Bootstrap Aggregating (bagging) (Breiman, 1994; 1996a) and Stacking (Wolpert, 1992; Breiman, 1996b). For example, averages of multiple models pre-trained on data from a distribution different from the target one have recently been shown to achieve state-of-the-art performance on ImageNet (Wortsman et al., 2022) and their good generalization properties can be related to flat minima (Hochreiter & Schmidhuber, 1994; 1997). However, most such methods don't take into account a present distribution shift. Although some ensemble learning methods exist, which take into account a present distribution shift, in contrast to our work, they are either relying on labeled target data (Nozza et al., 2016; Xia et al., 2013; III & Marcu, 2006; Dai et al., 2007; Mayr et al., 2016), are restricted by fixing the aggregation weights to be the same (Razar & Samothrakis, 2019), make assumptions on the models in the sequence or the corresponding process for learning the models (Yang et al., 2012; Ha & Youn, 2021; French et al., 2018; Laine & Aila, 2017; Tarvainen & Valpola, 2017; Athiwaratkun et al., 2019; Al-Stouhi & Reddy, 2011; Hoffman et al., 2018; Rakshit et al., 2019; Xu et al., 2018; Kang et al., 2020; Zhang et al., 2015), or, learn an aggregation based on the heuristic approach of (pseudo-)labeling some target data based on confidence measures of models in the sequence (Zhou et al., 2021; Ahmed et al., 2022; Sun, 2012; Zou et al., 2018; Saito et al., 2017). 
Another crucial difference of all methods above is that none of these methods can guarantee a small target error in the general setting (distribution shift, vector valued models, different classes, single source domain) described above, even if the sample size grows to infinity. Another branch of research are methods which aim at selecting the best model in the sequence. Although, such methods with error bounds have been proposed for the general setting above (Sugiyama et al., 2007; You et al., 2019; Zellinger et al., 2021), they cannot overcome a limited performance of the best model in the given sequence (cf. Figure 1 and Section 6 in the Supplementary Material of Zellinger et al. (2021)). In contrast, our method can outperform the best model in the sequence, and our empirical evaluations show that this is indeed the case in practical examples. A recent kernel-based algorithm for univariate regression, that is similar to ours, can be found in Gizewski et al. (2022). However, in contrast to Gizewski et al. (2022), our method allows a much more general form of vector-valued models which are not necessarily obtained from regularized kernel least squares, and, can therefore be applied to practical deep learning tasks. Our work employs technical tools developed in Caponnetto & De Vito (2007; 2005). In fact, we extend Caponnetto & De Vito (2007; 2005) to deal with importance weighted least squares. Finally, it is important to note Huang et al. (2006), where a core Lemma of our proofs is proposed. ## 3 Aggregation by Importance Weighted Least Squares This section gives a summary of the main problem of this paper and our approach. For detailed assumptions and proofs, we refer to Section A of the Supplementary Material. Notation and SetupLet \(\mathcal{X}\subset\mathbb{R}^{d_{1}}\) be a compact _input space_ and \(\mathcal{Y}\subset\mathbb{R}^{d_{2}}\) be a compact _label space_ with inner product \(\langle.,.\rangle_{\mathcal{Y}}\) such that for the associated norm \(\left\|y\right\|_{\mathcal{Y}}\leq y_{0}\) holds for all \(y\in\mathcal{Y}\) and some \(y_{0}>0\). Following Ben-David et al. (2010), we consider two datasets: A _source dataset_\((\mathbf{x},\mathbf{y})=\left((x_{1},y_{1}),\ldots,(x_{n},y_{n})\right)\in \left(\mathcal{X}\times\mathcal{Y}\right)^{n}\) independently drawn according to some source distribution (probability measure) \(p\) on \(\mathcal{X}\times\mathcal{Y}\) and an unlabeled _target_ dataset \(\mathbf{x}^{\prime}=(x_{1}^{\prime},\ldots,x_{m}^{\prime})\in\mathcal{X}^{m}\) with elements independently drawn according to the marginal distribution2\(q_{\mathcal{X}}\) of some target distribution \(q\) on \(\mathcal{X}\times\mathcal{Y}\). The marginal distribution of \(p\) on \(\mathcal{X}\) is analogously denoted as \(p_{\mathcal{X}}\). We further denote by \(\mathcal{R}_{q}(f)=\int_{\mathcal{X}\times\mathcal{Y}}\left\|f(x)-y\right\|_{ \mathcal{Y}}^{2}\mathrm{d}q(x,y)\) the _expected target risk_ of a vector valued function \(f:\mathcal{X}\rightarrow\mathcal{Y}\) w.r.t. the least squares loss. Footnote 2: The existence of the conditional probability density \(q(y|x)\) with \(q(x,y)=q(y|x)q_{\mathcal{X}}(x)\) is guaranteed by the fact that \(\mathcal{X}\times\mathcal{Y}\) is Polish, i.e., a separable and complete metric space, c.f. Dudley (2002, Theorem 10.2.2.). 
ProblemGiven a set \(f_{1},\ldots,f_{l}:\mathcal{X}\rightarrow\mathcal{Y}\) of models, the labeled source sample \((\mathbf{x},\mathbf{y})\) and the unlabeled target sample \(\mathbf{x}^{\prime}\), the problem considered in this work is to find a model \(f:\mathcal{X}\rightarrow\mathcal{Y}\) with a minimal target error \(\mathcal{R}_{q}(f)\). Main AssumptionsWe rely (a) on the _covariate shift_ assumption that the source conditional distribution \(p(y|x)\) equals the target conditional distribution \(q(y|x)\), and, (b) on the _bounded density ratio_ assumption that there is a function \(\beta:\mathcal{X}\rightarrow[0,B]\) with \(B>0\) such that \(\mathrm{d}q_{\mathcal{X}}(x)=\beta(x)\,\mathrm{d}p_{\mathcal{X}}(x)\). ApproachOur goal is to compute the linear aggregation \(f=\sum_{i=1}^{l}c_{i}f_{i}\) for \(c_{1},\ldots,c_{l}\in\mathbb{R}\) with minimal squared target risk \(\mathcal{R}_{q}\left(\sum_{i=1}^{l}c_{i}f_{i}\right)\). Our approach relies on the fact that \[\operatorname*{arg\,min}_{c_{1},\ldots,c_{l}\in\mathbb{R}}\mathcal{R}_{q} \left(\sum_{i=1}^{l}c_{i}f_{i}\right)=\operatorname*{arg\,min}_{c_{1},\ldots,c _{l}\in\mathbb{R}}\int_{\mathcal{X}}\left\|\sum_{i=1}^{l}c_{i}f_{i}(x)-f_{q}( x)\right\|_{\mathcal{Y}}^{2}\mathrm{d}q_{\mathcal{X}}(x) \tag{1}\] for the _regression function_\(f_{q}(x)=\int_{\mathcal{Y}}y\,\mathrm{d}q(y|x)\)3, see e.g. Cucker & Smale (2002, Proposition 1). Unfortunately, the right hand side of Eq. (1) contains information about labels \(f_{q}(x)\) which are not given in our setting of unsupervised domain adaptation. However, borrowing an idea from _importance sampling_, it is possible to estimate Eq. (1). More precisely, from the covariate shift assumption we get \(f_{p}(x)=\int_{\mathcal{Y}}y\,\mathrm{d}p(y|x)=f_{q}(x)\) and we can use the bounded density ratio \(\beta\) to obtain Footnote 3: \(\mathcal{Y}\)-valued integrals are defined in the sense of Lebesgue-Bochner. \[\operatorname*{arg\,min}_{c_{1},\ldots,c_{l}\in\mathbb{R}}\mathcal{R}_{q} \left(\sum_{i=1}^{l}c_{i}f_{i}\right)=\operatorname*{arg\,min}_{c_{1},\ldots,c _{l}\in\mathbb{R}}\int_{\mathcal{X}}\beta(x)\left\|\sum_{i=1}^{l}c_{i}f_{i}(x) -f_{p}(x)\right\|_{\mathcal{Y}}^{2}\mathrm{d}p_{\mathcal{X}}(x) \tag{2}\] which extends importance weighted least squares (Shimodaira, 2000; Kanamori et al., 2009) to linear aggregations \(\sum_{i=1}^{l}c_{i}f_{i}\) of vector-valued functions \(f_{1},\ldots,f_{l}\). The unique minimizer of Eq. (2) can be approximated based on available data analogously to classical least squares estimation as detailed in Algorithm 1. In the following, we call Algorithm 1 Importance Weighted Least Squares Linear Aggregation (IWA). Relation to Model SelectionThe optimal aggregation \(f^{*}:=\arg\min_{c_{1},\ldots,c_{l}\in\mathbb{R}}\mathcal{R}_{q}\left(\sum_{i=1}^{l }c_{i}f_{i}\right)\) defined in Eq. (2) is clearly better than any single model selection since \[\mathcal{R}_{q}(f^{*})=\min_{c_{1},\ldots,c_{l}\in\mathbb{R}}\mathcal{R}_{q} \left(\sum_{i=1}^{l}c_{i}f_{i}\right)\leq\min_{c_{1},\ldots,c_{l}\in\{0,1\}} \mathcal{R}_{q}\left(\sum_{i=1}^{l}c_{i}f_{i}\right)\leq\min_{f_{1},\ldots,f_{ l}}\mathcal{R}_{q}(f_{i}). \tag{3}\] However, the optimal aggregation \(f^{*}\) cannot be computed based on finite datasets and the next logical questions are about the accuracy of the approximation \(\widetilde{f}\) in Algorithm 1. 
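The procedure is stated formally in Algorithm 1 below. As a practical companion, the following sketch implements both steps with NumPy and scikit-learn. The density-ratio estimator in Step 1 is one standard choice (a probabilistic domain classifier, in the spirit of the estimators surveyed by Sugiyama et al. (2012)) and is an assumption of this sketch rather than a prescription of the paper; all function and variable names below are ours.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_density_ratio(x_src, x_tgt, cap=20.0):
    """Step 1 (one possible estimator): beta(x) ~ dq_X/dp_X via a domain classifier.

    x_src, x_tgt are arrays of shape (n, d1) and (m, d1). A classifier separating
    target (label 1) from source (label 0) gives beta(x) = P(tgt|x)/P(src|x) * n/m;
    values are capped at `cap` to respect the bounded-ratio assumption beta in [0, B].
    """
    X = np.vstack([x_src, x_tgt])
    y = np.r_[np.zeros(len(x_src)), np.ones(len(x_tgt))]
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    p_tgt = clf.predict_proba(x_src)[:, 1]
    ratio = p_tgt / np.clip(1.0 - p_tgt, 1e-12, None) * (len(x_src) / len(x_tgt))
    return np.clip(ratio, 0.0, cap)

def iwa_weights(models, x_src, y_src, x_tgt):
    """Step 2: aggregation weights c~ = G~^{-1} g~ as defined in Algorithm 1.

    `models` is a list of callables f_i mapping a batch of inputs to outputs in Y
    (e.g. class-probability vectors of shape (batch, d2)); y_src holds source labels in Y.
    """
    beta = estimate_density_ratio(x_src, x_tgt)
    F_tgt = np.stack([f(x_tgt) for f in models])        # shape (l, m, d2)
    F_src = np.stack([f(x_src) for f in models])        # shape (l, n, d2)
    G = np.einsum('ikd,jkd->ij', F_tgt, F_tgt) / len(x_tgt)
    g = np.einsum('k,kd,ikd->i', beta, y_src, F_src) / len(x_src)
    return np.linalg.solve(G, g)

# Usage: with c = iwa_weights(models, x_src, y_src, x_tgt), the aggregation predicts
#   y_pred = sum(c_i * f_i(x_new) for c_i, f_i in zip(c, models))   on new target inputs.
```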
```
Input: Set \(f_{1},\ldots,f_{l}:\mathcal{X}\rightarrow\mathcal{Y}\) of models, labeled source sample \((\mathbf{x},\mathbf{y})\) and unlabeled target sample \(\mathbf{x}^{\prime}\).
Output: Linear aggregation \(\widetilde{f}=\sum_{i=1}^{l}\widetilde{c}_{i}f_{i}\) with weights \(\widetilde{c}=(\widetilde{c}_{1},\ldots,\widetilde{c}_{l})\in\mathbb{R}^{l}\).
Step 1: Use the unlabeled samples \(\mathbf{x}\) and \(\mathbf{x}^{\prime}\) to approximate the density ratio \(\frac{\mathrm{d}q_{\mathcal{X}}}{\mathrm{d}p_{\mathcal{X}}}\) by some function \(\beta:\mathcal{X}\rightarrow[0,B]\) using a classical algorithm, e.g. Sugiyama et al. (2012).
Step 2: Compute the weight vector \(\widetilde{c}=\widetilde{G}^{-1}\widetilde{g}\) with empirical Gram matrix \(\widetilde{G}\) and vector \(\widetilde{g}\) defined by
\[\widetilde{G}=\left(\frac{1}{m}\sum_{k=1}^{m}\left\langle f_{i}(x^{\prime}_{k}),f_{j}(x^{\prime}_{k})\right\rangle_{\mathcal{Y}}\right)_{i,j=1}^{l}\qquad\widetilde{g}=\left(\frac{1}{n}\sum_{k=1}^{n}\beta(x_{k})\left\langle y_{k},f_{i}(x_{k})\right\rangle_{\mathcal{Y}}\right)_{i=1}^{l}.\]
Return: Linear aggregation \(\widetilde{f}=\sum_{i=1}^{l}\widetilde{c}_{i}f_{i}\).
```
**Algorithm 1** Importance Weighted Least Squares Linear Aggregation (IWA).

## 4 Target Error Bound for Algorithm 1

Let us start by introducing some further notation: \(L^{2}(p)\) refers to the Lebesgue-Bochner space of functions from \(\mathcal{X}\) to \(\mathcal{Y}\), associated to a measure \(p\) on \(\mathcal{X}\) with corresponding inner product \(\left\langle.,.\right\rangle_{L^{2}(p)}\) (this space basically consists of all \(\mathcal{Y}\)-valued functions whose \(\mathcal{Y}\)-norms are square integrable with respect to the given measure \(p\)). Moreover, let us introduce the (positive semi-definite) Gram matrix \(G=\left(\left\langle f_{i},f_{j}\right\rangle_{L^{2}(q_{\mathcal{X}})}\right)_{i,j=1}^{l}\) and the vector \(\overline{g}=\left(\left\langle\beta f_{p},f_{i}\right\rangle_{L^{2}(p_{\mathcal{X}})}\right)_{i=1}^{l}\). We can assume that \(G\) is invertible (and thus positive definite), since otherwise some models are too similar to others and can be withdrawn from consideration (see Section D). Next, we recall that the minimizer of Eq. (2) is \(c^{*}=(c_{1}^{*},\ldots,c_{l}^{*})=G^{-1}\overline{g}\), see Lemma 4. However, neither \(G\) nor the vector \(\overline{g}\) is accessible in practice, because there is no access to the target measure \(q_{\mathcal{X}}\). Driven by the law of large numbers, we approximate them by averages over our given data and therefore arrive at the formulas for \(\widetilde{G}\) and \(\widetilde{g}\) given in Algorithm 1. This leads to the approximation \(\widetilde{f}\). Up to this point we have only considered an intuitive perspective on the problem setting. We now formally discuss statements on the distance between the model \(\widetilde{f}\) and the optimal linear model \(f^{*}=\sum_{i=1}^{l}c_{i}^{*}f_{i}\), measured in terms of target risks, and how this distance behaves with increasing sample sizes.
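Before turning to the formal analysis, the following minimal NumPy sketch illustrates Step 2 of Algorithm 1. It is our own illustration, not part of the original algorithm statement: it assumes that the model outputs \(f_{i}(x)\) have been precomputed as arrays and that the density ratio \(\beta\) from Step 1 is already available.

```python
import numpy as np

def iwa_weights(preds_target, preds_source, y_source, beta_source):
    """Step 2 of Algorithm 1 (sketch).

    preds_target : array of shape (l, m, d2), model outputs f_i(x'_k) on target inputs
    preds_source : array of shape (l, n, d2), model outputs f_i(x_k) on source inputs
    y_source     : array of shape (n, d2), source labels y_k
    beta_source  : array of shape (n,), estimated density ratio beta(x_k)
    """
    l, m, _ = preds_target.shape
    n = y_source.shape[0]
    # Empirical Gram matrix: G~[i, j] = (1/m) * sum_k <f_i(x'_k), f_j(x'_k)>_Y
    G = np.einsum('ikd,jkd->ij', preds_target, preds_target) / m
    # Empirical vector: g~[i] = (1/n) * sum_k beta(x_k) * <y_k, f_i(x_k)>_Y
    g = np.einsum('k,kd,ikd->i', beta_source, y_source, preds_source) / n
    # Weight vector c~ = G~^{-1} g~ (a linear solve is preferable to an explicit inverse)
    return np.linalg.solve(G, g)

def iwa_predict(preds_new, c):
    """Aggregated prediction f~(x) = sum_i c_i f_i(x) for stacked outputs of shape (l, N, d2)."""
    return np.tensordot(c, preds_new, axes=1)
```

If some models are nearly collinear, \(\widetilde{G}\) can be close to singular; in that case one would either remove redundant models (as suggested for \(G\) below) or add a small ridge term before solving. With this computational picture in mind, we return to the formal analysis.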
This is what we attempt with our main result: **Theorem 1**.: _With probability \(1-\delta\) it holds that_ \[\mathcal{R}_{q}(\widetilde{f})-\mathcal{R}_{q}(f_{q})\leq 2\left(\mathcal{R}_{q} \left(f^{*}\right)-\mathcal{R}_{q}(f_{q})\right)+C\left(\log\frac{1}{\delta} \right)(n^{-1}+m^{-1}) \tag{4}\] _for some coefficient \(C>0\) not depending on \(m,n\) and \(\delta\), and sufficiently large \(m\) and \(n\)._ Before we give an outline of the proof (see Section A), let us briefly comment on the main message of Algorithm 1. Observe, that (Cucker and Smale, 2002, Proposition 1) \(\mathcal{R}_{q}(f)-\mathcal{R}_{q}(f_{q})=\left\|f-f_{q}\right\|_{L^{2}(q_{ \mathcal{X}})}^{2}\) can be interpreted as the total target error made by Algorithm 1, sometimes called _excess risk_. Indeed, in the deterministic setting of labeling functions, \(f_{q}\) equals the target labeling function and the excess risk equals the target error of Ben-David et al. (2010). Eq. (4) compares this error for the aggregation \(\widetilde{f}\), computed by Algorithm 1, to the error for the optimal aggregation \(f^{*}\). Note that the error of the optimal aggregation \(f^{*}\) is unavoidable in the sense that it is determined by the decision of searching for linear aggregations of \(f_{1},\ldots,f_{l}\) only. However, if the models \(f_{1},\ldots,f_{l}\) are sufficiently different, then this error can be expected to be small. Theorem 1 tells us that the error of \(\widetilde{f}\) approaches the one of \(f^{*}\) with increasing target and source sample size. The rate of convergence is at least linear. Finally, we emphasize that Theorem 1 does not take into account the error of the density-ratio estimation. We refer to the recent work Gizewski et al. (2022), who, for the first time, included such error in the analysis of importance weighted least squares. Let us now give a brief outline for the proof of Theorem 1. One key part concerns the existence of a Hilbert space \(\mathcal{H}\) with associated inner product \(\left\langle.,.\right\rangle_{\mathcal{H}}\) (a reproducing kernel space of functions from \(\mathcal{X}\rightarrow\mathcal{Y}\)) which contains all given models \(f_{1},\ldots,f_{l}\) and the regression function \(f_{q}=f_{p}\). The space \(\mathcal{H}\) can be constructed from any given models that are bounded and continuous functions. Furthermore, Algorithm 1 does not need any knowledge of \(\mathcal{H}\), which is a modeling assumption only needed for the proofs, so that we can apply many arguments developed in Caponnetto & De Vito (2007; 2005). \(\mathcal{H}\) is also not necessarily generated by a prescribed kernel such as Gaussian or linear kernel, and, no further smoothness assumption is required, see Sections A and B in the Supplementary Material. Moreover, in this setting one can express the excess risk as follows: \(\mathcal{R}_{q}\left(f\right)-\mathcal{R}_{q}(f_{q})=\left\|A(f-f_{q})\right\| _{\mathcal{H}}^{2}\) for some bounded linear operator \(A:\mathcal{H}\rightarrow\mathcal{H}\). This also allows us to formulate the entries of \(G\) and \(\bar{g}\) in terms of the inner product \(\left\langle.,.\right\rangle_{\mathcal{H}}\) instead. Using properties related to the operators that appear in the construction of \(\mathcal{H}\), in combination with Hoeffding-like concentration bounds in Hilbert spaces and bounds that measure, e.g., the deviation between empirical averages in source and target domain (as done in Gretton et al. 
(2006, Lemma 4)), we can quantify differences between the entries of \(G\) and \(\widetilde{G}\) (and \(\bar{g}\) and \(\widetilde{g}\), respectively) in terms of \(n\), \(m\) and \(\delta\). This leads to Eq. (4).

## 5 Empirical Evaluations

We now empirically evaluate the performance of our approach compared to classical ensemble learning baselines and state-of-the-art model selection methods. We structure our empirical evaluation as follows. First, we outline our experimental setup for unsupervised domain adaptation and introduce all domain adaptation methods used in our analysis. Second, we describe the ensemble learning and model selection baselines, and third, we present the datasets used for our experiments. We then conclude with our results and a detailed discussion thereof.

### Experimental Setup

To assess the performance of our ensemble learning Algorithm 1 (IWA), we perform numerous experiments with different domain adaptation algorithms on different datasets. By changing the hyper-parameters of each algorithm, we obtain, as results of applying these algorithms, sequences of models. The goal of our method is to find optimal models based on combinations of candidates from each sequence. As domain adaptation algorithms, we consider the AdaTime benchmark suite and run our experiments on language, image, text and time-series data. This suite comprises a collection of \(11\) domain adaptation algorithms. We follow their evaluation setup and apply the following algorithms: Adversarial Spectral Kernel Matching (AdvSKM) (Liu and Xue, 2021), Deep Domain Confusion (DDC) (Tzeng et al., 2014), Correlation Alignment via Deep Neural Networks (Deep-Coral) (Sun et al., 2017), Central Moment Discrepancy (CMD) (Zellinger et al., 2017), Higher-order Moment Matching (HoMM) (Chen et al., 2020), Minimum Discrepancy Estimation for Deep Domain Adaptation (MMDA) (Rahman et al., 2020), Deep Subdomain Adaptation (DSAN) (Zhu et al., 2021), Domain-Adversarial Neural Networks (DANN) (Ganin et al., 2016), Conditional Adversarial Domain Adaptation (CDAN) (Long et al., 2018), A DIRT-T Approach to Unsupervised Domain Adaptation (DIRT) (Shu et al., 2018) and Convolutional deep Domain Adaptation model for Time-Series data (CoDATS) (Wilson et al., 2020).

Figure 2: Top: Mean classification accuracy (y-axis) of our method (IWA), source-only regression (SOR), deep embedded validation (DEV) and individual models (green: source accuracy, orange: target accuracy) used in the aggregation for the HHAR dataset (Stisen et al., 2015) over 3 seeds. The individual models (x-axis) are trained with DIRT (Shu et al., 2018) for different hyper-parameter choices. Bottom: Scaled aggregation weights (y-axis) for individual models (x-axis) computed by IWA, SOR and DEV (average over 3 seeds). Instead of searching for the best model in the sequence, IWA effectively uses all models in the sequence and obtains a performance not reachable by any procedure selecting only one model.

In addition to the sequence of models, IWA requires an estimate of the density ratio between source and target domain. To compute this quantity we follow Bickel et al. (2007) and You et al. (2019, Section 4.3) and train a classifier discriminating between source and target data. The output of this classifier is then used to approximate the density ratio denoted as \(\beta\) in Algorithm 1.
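A minimal sketch of this classifier-based estimate is given below (our own illustration; the experiments use the variant described in Section D of the Supplementary Material). A logistic-regression discriminator trained to separate source from target inputs yields \(\beta(x)\approx\frac{n}{m}\cdot\frac{P(\text{target}\mid x)}{P(\text{source}\mid x)}\), which we clip to \([0,B]\) to respect the bounded density ratio assumption; the clipping bound and the choice of logistic regression are assumptions on our part.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_density_ratio(X_source, X_target, B=50.0):
    """Estimate beta(x) = dq_X/dp_X on the source points via a domain classifier (sketch)."""
    n, m = len(X_source), len(X_target)
    X = np.vstack([X_source, X_target])
    domain = np.concatenate([np.zeros(n), np.ones(m)])   # 0 = source, 1 = target
    clf = LogisticRegression(max_iter=1000).fit(X, domain)
    p_target = clf.predict_proba(X_source)[:, 1]
    # q_X(x) / p_X(x) = (n / m) * P(target | x) / P(source | x)
    beta = (n / m) * p_target / np.clip(1.0 - p_target, 1e-12, None)
    return np.clip(beta, 0.0, B)                         # enforce beta(x) in [0, B]
```

Any probabilistic classifier or a dedicated density ratio estimator (Sugiyama et al., 2012) can be substituted for the logistic regression used here.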
Overall, to compute the results in our tables we trained \(16680\) models over approximately \(1500\) GPU-hours using NVIDIA P100 \(16\)GB GPUs.

For example, consider the top plot of Figure 2, where we compare the performance of Algorithm 1 to deep embedded validation (DEV) (You et al., 2019), to a heuristic baseline, source-only regression (SOR, see Section 5.2), and to each individual model in the sequence. The bottom plot shows the scaled aggregation weights, i.e. how much each individual model contributes to the aggregated prediction of IWA, DEV, and SOR. In this example, the given sequence of models is obtained from applying the algorithm proposed in Shu et al. (2018) with different hyper-parameter choices to the Heterogeneity Human Activity Recognition dataset (Stisen et al., 2015). See Section D.3 in the Supplementary Material for the exact hyper-parameter values.

### Baselines

As representatives of the most prominent methods discussed in Section 1, we compare our method, IWA, to ensemble learning methods that use linear regression and majority voting as _heuristics_ for model aggregation, and to model selection methods with _theoretical error guarantees_.

**Heuristic Baselines** The first baseline is majority voting on target data (TMV). It aggregates the predictions of all models by counting the overall class predictions and selects the class with the maximum prediction count as ensemble output. In addition, we implement three heuristic baselines which aggregate the vector-valued output, i.e. probabilities, of all classifiers using weights learned via linear regression. The final ensemble prediction is then made by selecting the class with the highest probability. The three heuristic regression baselines differ in the input used for the regression. Source-only regression (SOR) trains a regression model on classifier predictions (of the given models) and labels from the source domain only. Target majority voting regression (TMR) uses the same voting procedure as explained above to generate pseudo-labels on the target domain, which are then used to train a linear regression model. In contrast, target confidence average regression (TCR) selects the highest average class probability over all classifiers to pseudo-label the target samples, which are then used for training the linear regression model.

**Baselines with Theoretical Error Guarantees** We compare IWA to the model selection methods importance weighted validation (IWV) (Sugiyama et al., 2007) and deep embedded validation (DEV) (You et al., 2019), which select models according to their (importance weighted) target risk. Both methods assume knowledge of an estimated density ratio between target and source domains. In our experiments we follow Bickel et al. (2007); You et al. (2019) and estimate this ratio by using a classifier that discriminates between source and target domain (see Supplementary Material Section D for more details).

### Datasets

We evaluate the previously mentioned methods on a diverse set of datasets, including language, image and time-series data. All datasets have a train, evaluation and test split, with results presented only on the held-out test sets. For additional details we refer to Appendices C and D.

**TransformedMoons** This specific form of the twinning moons problem is based on Zellinger et al. (2021). The source domain consists of two-dimensional input points arranged in two opposing moon-shaped forms, and the target domain is obtained by a transformation of these points.
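For intuition, a toy source/target pair in the spirit of this dataset can be generated as follows. This is our own construction: the rotation used here as the domain transformation is an assumption, not necessarily the transformation used by Zellinger et al. (2021).

```python
import numpy as np
from sklearn.datasets import make_moons

def transformed_moons(n=400, angle_deg=35.0, noise=0.05, seed=0):
    """Toy source/target two-moons pair; the target is a rotated copy (illustrative only)."""
    Xs, ys = make_moons(n_samples=n, noise=noise, random_state=seed)
    theta = np.deg2rad(angle_deg)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    Xt, yt = make_moons(n_samples=n, noise=noise, random_state=seed + 1)
    Xt = Xt @ R.T   # target inputs; the labels yt are held out and used only for evaluation
    return (Xs, ys), (Xt, yt)
```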
**MiniDomainNet** is a reduced version of DomainNet-2019 (Peng et al., 2019) consisting of six different image domains (Quickdraw, Real, Clipart, Sketch, Infograph, and Painting). In particular, MiniDomainNet (Zellinger et al., 2021) reduces the number of classes of DomainNet-2019 to the top-five largest representatives in the training set of each class across all six domains.

**AmazonReviews** is based on Blitzer et al. (2006) and consists of text reviews from four domains: books, DVDs, electronics, and kitchen appliances. Reviews are encoded as feature vectors of bag-of-words unigrams and bigrams with binary labels indicating the ratings. From the four categories we obtain twelve domain adaptation tasks, one for each ordered pair of distinct categories serving as source and target domain.

**UCI-HAR** The _Human Activity Recognition_ (Anguita et al., 2013) dataset from the UC Irvine Repository contains data from three motion sensors (accelerometer, gyroscope and body-worn sensors) gathered using smartphones from \(30\) different subjects. It classifies their activities into several categories, namely walking, walking upstairs, walking downstairs, standing, sitting, and lying down.

**WISDM** (Kwapisz et al., 2011) is a class-imbalanced dataset collected from accelerometer sensors, including GPS data, from \(29\) different subjects who perform similar activities as in the UCI-HAR dataset.

**HHAR** The _Heterogeneity Human Activity Recognition_ (Stisen et al., 2015) dataset investigates sensor-, device- and workload-specific heterogeneities using \(36\) smartphones and smartwatches, consisting of \(13\) different device models from four manufacturers.

**Sleep-EDF** The _Sleep Stage Classification_ time-series setting aims to classify electroencephalography (EEG) signals into five stages, i.e., Wake (W), Non-Rapid Eye Movement stages (N1, N2, N3), and Rapid Eye Movement (REM). Analogous to Ragab et al. (2022); Eldele et al. (2021), we adopt the Sleep-EDF-20 dataset obtained from PhysioBank (Goldberger et al., 2000), which contains EEG readings from \(20\) healthy subjects.

We rely on the AdaTime benchmark suite (Ragab et al., 2022) in most evaluations. The four time-series datasets above are originally included there. We extend AdaTime to support the other discussed datasets as well, and extend its domain adaptation methods.

### Results

We separate the applied methods into two groups, namely _heuristic_ methods and methods with _theoretical error guarantees_. All tables show accuracies of source-only (SO) and target-best (TB) models, where source-only denotes training without domain adaptation and target-best the best performing model obtained among all parameter settings. We highlight in bold the performance of the best performing method with theoretical error guarantees, and in italics the best performing heuristic. See Table 1 for results. The full tables can be found in the Supplementary Material Section D.

**Outperformance of theoretically justified methods:** On all datasets, our method outperforms IWV and DEV, setting a new state of the art for solving parameter choice issues under theoretical guarantees.

**Outperformance of heuristics:** It is interesting to note that each heuristic outperforms IWV and DEV on at least five of seven datasets. Moreover, every heuristic outperforms the (average) target best model (TB) in at least two cases, making it impossible for _any_ model selection method to win in these cases. These facts highlight the quality of the predictions of our chosen heuristics.
However, each heuristic is outperformed by our method on at least five of seven datasets.

**Information in aggregation weights and robustness w.r.t. inaccurate models:** It is interesting to observe that, in contrast to the other heuristic aggregation baselines, the aggregation weights \(c_{1},\dots,c_{l}\) of our method tend to be larger for accurate models, see Section D.5. Another result is that our method tends to be less sensitive to a high number of inaccurate models than the baselines, see Section D.6. This serves as another reason for its high empirical performance.

## 6 Conclusion and Future Work

We present a constructive theory-based method for approaching parameter choice issues in the setting of unsupervised domain adaptation. Its theoretical approach relies on the extension of weighted least squares to vector-valued functions. The resulting aggregation method distinguishes itself by a wide scope of admissible model classes without strong assumptions, e.g. support vector machines, decision trees and neural networks. A broad empirical comparative study on benchmark datasets for language, images, body sensor signals and smartphone signals underpins the theory-based optimality claim. It is left for future research to further refine the theory and its estimates, e.g., by exploiting concentration bounds from Gretton et al. (2006) or advanced density ratio estimators from Sugiyama et al. (2012).

Table 1: Accuracy results of the compared heuristic baselines and methods with theoretical error guarantees on the Amazon Reviews tasks; the full tables are given in Section D of the Supplementary Material.
#### Acknowledgments

The ELLIS Unit Linz, the LIT AI Lab, and the Institute for Machine Learning are supported by the Federal State Upper Austria. IARAI is supported by Here Technologies. We thank the projects AI-MOTION (LIT-2018-6-YOU-212), AI-SNN (LIT-2018-6-YOU-214), DeepFlood (LIT-2019-8-YOU-213), Medical Cognitive Computing Center (MC3), INCONTROL-RL (FFG-881064), PRIMAL (FFG-873979), S3AI (FFG-872172), DL for GranularFlow (FFG-871302), AIRI FG 9-N (FWF-36284, FWF-36235), and ELISE (H2020-ICT-2019-3 ID: 951847). We further thank Audi.JKU Deep Learning Center, TGW LOGISTICS GROUP GMBH, Silicon Austria Labs (SAL), FILL GmbH, Anyline GmbH, Google, ZF Friedrichshafer AG, Robert Bosch GmbH, UCB Biopharma SRL, Merck Healthcare KGaa, Verbund AG, TUV Austria, Frauscher Sensonic, and the NVIDIA Corporation. The research reported in this paper has been funded by the Federal Ministry for Climate Action, Environment, Energy, Mobility, Innovation and Technology (BMK), the Federal Ministry for Digital and Economic Affairs (BMDW), and the Province of Upper Austria in the frame of the COMET-Competence Centers for Excellent Technologies Programme and the COMET Module S3AI managed by the Austrian Research Promotion Agency FFG.
2310.05251
The graphs of pyramids are determined by their spectrum
For natural numbers $k<n$ we study the graphs $T_{n,k}:=K_{k}\lor\overline{K_{n-k}}$. For $k=1$, $T_{n,1}$ is the star $S_{n-1}$. For $k>1$ we refer to $T_{n,k}$ as a \emph{graph of pyramids}. We prove that the graphs of pyramids are determined by their spectrum, and that a star $S_{n}$ is determined by its spectrum iff $n$ is prime. We also show that the graphs $T_{n,k}$ are completely positive iff $k\le2$.
Noam Krupnik, Abraham Berman
2023-10-08T18:01:00Z
http://arxiv.org/abs/2310.05251v1
# The graphs of pyramids are determined by their spectrum ###### Abstract For natural numbers \(k<n\) we study the graphs \(T_{n,k}:=K_{k}\vee\overline{K_{n-k}}\). For \(k=1\), \(T_{n,1}\) is the star \(S_{n-1}\). For \(k>1\) we refer to \(T_{n,k}\) as a _graph of pyramids_. We prove that the graphs of pyramids are determined by their spectrum, and that a star \(S_{n}\) is determined by its spectrum iff \(n\) is prime. We also show that the graphs \(T_{n,k}\) are completely positive iff \(k\leq 2\). Department of Mathematics, Technion - Israel Institute of Technology, Haifa 3200003, Israel _Keywords :_ cospectral graphs, graphs that are determined by their spectrum, graphs of pyramids, star graphs, completely positive graphs, Schur complement. _Mathetical classification numbers :_ 05C50, 15B48 ## 1 Introduction The first author participated in a course in matrix theory given by the second author. This paper is one of the outcomes of this course. Two of the topics studied in the course were completely positive (CP) matrices and graphs and cospectrality of graphs. In one of the problems in the course we used the Schur complement to show that the graph \(T_{n}\), the graph of \(n-2\) triangles with a common base, is CP. In this paper, we generalize \(T_{n}\) to \(T_{n,k}\) - graphs that consist of \(n-k\) pyramids \(K_{k+1}\) with \(K_{k}\) as a common base and use the Schur complement to compute their spectrum. For \(k>1\) we refer to \(T_{n,k}\) as graphs of pyramids and show that they are determined by their spectrum (DS). For \(k=1\), \(T_{n,1}\) is a star \(S_{n-1}\), and it is DS if and only if \(n-1\) is prime. Since the motivation for this article was the proof that \(T_{n}\) is completely positive, we start the paper with a short survey of complete positivity. We summarize the paper with a table that classifies the graphs \(T_{n,k}\) according to their being DS/CP. ## 2 Background ### Matrix theory preliminaries We denote by \(\mathbb{R}^{m\times n}\) the vector space of \(m\times n\) real matrices, and by \(S^{n}\) the subspace of symmetric matrices in \(\mathbb{R}^{n\times n}\). Let \(\lambda_{k}\leq...\leq\lambda_{2}\leq\lambda_{1}\) be the distinct eigenvalues of \(A\in S^{n}\) and let \(\alpha_{i}\) be the multiplicity of \(\lambda_{i}\) ; \(i=1,2,...,k\), \(\sum_{i=1}^{k}\alpha_{i}=n\). The _spectrum_ of \(A\), \(\sigma\left(A\right)\), is the multiset of its eigenvalues, and is denoted by \(\sigma\left(A\right)=\left\{\left(\lambda_{k}\right)^{\alpha_{k}},\ldots, \left(\lambda_{2}\right)^{\alpha_{2}},\left(\lambda_{1}\right)^{\alpha_{1}}\right\}\). For background on matrix theory we refer the readers to [10]. Two theorems that are frequently used in the paper are Cauchy's interlace theorem (Theorem 2.1) and Schur's theorem (Theorem 2.3). _Notation_.: \(\left\langle n\right\rangle=\left\{1,2,...,n\right\}\) **Theorem 2.1**.: _(Cauchy's Interlace Theorem [13]). For natural numbers \(n>m\), let \(B\in S^{m}\) be a principal submatrix of \(A\in S^{n}\). Denote the eigenvalues of \(A\) by \(\lambda_{n}\leq...\leq\lambda_{2}\leq\lambda_{1}\) and the eigenvalues of \(B\) by \(\mu_{m}\leq...\leq\mu_{2}\leq\mu_{1}\). Then_ \[\forall k\in<m>\text{, }\lambda_{n+k-m}\leq\mu_{k}\leq\lambda_{k}.\] **Definition 2.2**.: Let \(M=\begin{pmatrix}A&B\\ C&D\end{pmatrix}\) be a square block matrix, where \(D\) is nonsingular. The _Schur Complement_, \({}^{M}\!/_{D}\), of \(D\)_in_\(M\) is \[{}^{M}\!/_{D}=A-BD^{-1}C.\] **Theorem 2.3**.: _[_14_]_ _In the notation of Definition 2.2_ 1. 
\(det\left(M\right)=det\left(D\right)det\left({}^{M}\!/_{D}\right)\)_._ 2. \(rank\left(M\right)=rank\left(D\right)+rank\left({}^{M}\!/_{D}\right)\)_._ ### Graph theory preliminaries A _graph_\(G\) is a pair \(G=\left(V,E\right)\) where \(V\) is the set of _vertices_ and \(E\subset V\times V\) is the set of _edges_. The _order_ of a graph \(G=\left(V,E\right)\) is \(\left|V\right|\). A graph \(G\) is _simple_ if it has no loops, i.e. for all \(v\in V\), \(\left(v,v\right)\notin E\). \(G\) is an _undirected_ graph if for every \(v,u\in V\), \(\left(v,u\right)\in E\ \Leftrightarrow\left(u,v\right)\in E\). In this paper all of the graphs are simple and undirected. A graph is _connected_ if there is a path between any two of its vertices. A _tree_ is a connected graph without cycles. **Theorem 2.4**.: _The following properties of a graph \(G=\left(V,E\right)\) of order \(n\) are equivalent_ 1. \(G\) _is a tree._ 2. \(G\) _is connected and_ \(\left|E\right|=n-1\)_._ 3. \(G\) _has no cycles and_ \(\left|E\right|=n-1\)_._ **Definition 2.5**.: The _complement_, \(\overline{G}\), of a graph \(G=\left(V,E\right)\), is: \[\overline{G}=\left(V,\left\{\left(v,u\right)\ \mid\ v\neq u,\ \left(v,u\right) \notin E\right\}\right).\] **Definition 2.6**.: Let \(G_{1}=\left(V_{1},E_{1}\right),G_{2}=\left(V_{2},E_{2}\right)\) be two graphs. 1. \(G_{1}\) and \(G_{2}\) are _disjoint_ if \(V_{1}\cap V_{2}=\emptyset\). 2. If \(G_{1},G_{2}\) are disjoint, their _disjoint union_ is \[G_{1}\uplus G_{2}=\left(V_{1}\uplus V_{2},E_{1}\uplus E_{2}\right).\] **Definition 2.7**.: The _join_, of two graphs \(G_{1}=\left(V_{1},E_{1}\right),G_{2}=\left(V_{2},E_{2}\right)\) is obtained from their disjoint union by connecting all the vertices in \(G_{1}\) to all the vertices in \(G_{2}\). In the paper we consider the following graphs of order \(n\) : * The complete graph. * The empty graph. * The star graph. * The cycle. * The path. We also consider the complete bipartite graph \(K_{n,m}=\overline{K_{n}}\vee\overline{K_{m}}\). Observe that \(S_{n}=K_{1,n}\). **Definition 2.8**.: Let \(G=\left(V,E\right)\). The _line graph_ of \(G\) is \(L\left(G\right)=\left(V^{\prime},E^{\prime}\right)\) where \[V^{\prime}=E\,\ E^{\prime}=\left\{\left(\left(v_{1},v_{2}\right),\left(v_{2},v _{3}\right)\right)\ \mid\ v_{1},v_{2},v_{3}\in V\right\}.\] **Definition 2.9**.: Let \(G=\left(V\left(G\right),E\left(G\right)\right)\) and \(H=\left(V\left(H\right),E\left(H\right)\right)\). 1. \(H\) is a _subgraph_ of \(G\) if \(V\left(H\right)\subseteq V\left(G\right)\) and \(E\left(H\right)\subseteq E\left(G\right)\). 2. \(H\) is an _induced subgraph_ of \(G\) if \(H\) is a subgraph of \(G\) and \(E\left(H\right)\) contains every edge in \(G\) that has both ends in \(V\left(H\right)\). **Definition 2.10**.: Let \(G=\left(V,E\right)\) be a graph and let \(U\subseteq V\) be a set of vertices. 1. \(U\) is a _clique_ if the subgraph induced by \(U\) is a complete graph. 2. \(U\) is a _maximal clique_ if it is a clique, and \(G\) does not have a clique of greater order. ### Spectral graph theory preliminaries There are several books on spectral graph theory, for example [5], [7], [8]. **Definition 2.11**.: The adjacency matrix of a graph \(G=\left(V,E\right)\) with \(n\) vertices, is a \(\left(0,1\right)\) matrix \(A\left(G\right)=\left(a_{i,j}\right)\in S^{n}\), where \(a_{i,j}=1\) if and only if \(\left(i,j\right)\in E\). _Remark 2.12_.: There are other matrices associated with a graph \(G\). 
For example the Laplacian, the signless Laplacian, the normalized Laplacian and the distance matrix. We do not consider them in this paper. **Definition 2.13**.: The _spectrum of a graph_\(G\), \(\sigma\left(G\right)\), is the spectrum of \(A\left(G\right)\). The _charateristic matrix (polynomial) of_\(G\) is the characteristic matrix (polynomial) of \(A\left(G\right)\). **Theorem 2.14**.: _Let \(G=\left(V,E\right)\) be a graph with eigenvalues \(\lambda_{n}\leq...\leq\lambda_{2}\leq\lambda_{1}\). The number of close walks of length \(k\) in \(G\) is \(trace(A^{k})=\sum_{i=1}^{n}\lambda_{i}^{k}\)._ **Corollary 2.15**.: 1. \(\sum_{i=1}^{n}\lambda_{i}=0\)_._ 2. _The number of edges in_ \(G\) _is_ \(\frac{1}{2}\sum_{i=1}^{n}\lambda_{i}^{2}\)_._ 3. _The number of triangles in_ \(G\) _is_ \(\frac{1}{6}\sum_{i=1}^{n}\lambda_{i}^{3}\)_._ **Theorem 2.16**.: _The spectrum of disjoint union of two graphs \(H_{1},H_{2}\) is \(\sigma_{H_{1}\uplus H_{2}}=\sigma_{H_{1}}+\sigma_{H_{2}}\), where the \(+\) sign stands for multisets sum._ _Notation_.: For a real number \(a\) and a graph \(G\), let \(\nu\left(G,a\right)\) (\(\mu\left(G,a\right)\)) denote number of eigenvalues of \(G\) that are greater (smaller) than or equal to \(a\). The following important theorem is a corollary of Theorem 2.1: **Theorem 2.17**.: _Let be \(H\) is an induced subgraph of \(G\). Let \(\lambda_{n}\leq...\leq\lambda_{2}\leq\lambda_{1}\) be the eigenvalues of \(G\) and \(\mu_{m}\leq...\leq\mu_{2}\leq\mu_{1}\) the eigenvalues of \(H\). Then_ 1. \(\mu_{1}\leq\lambda_{1}\) _and_ \(\lambda_{n}\leq\mu_{m}\) 2. _For every real number_ \(a\)_,_ \(\nu\left(G,a\right)\geq\nu\left(H,a\right)\) _and_ \(\mu\left(G,a\right)\geq\mu\left(H,a\right)\)_._ 3. \(rank\left(A\left(H\right)\right)\leq rank\left(A\left(G\right)\right)\)_._ **Theorem 2.18**.: _The following table lists the spectrum of some special graphs._ \begin{tabular}{c c c} \hline \hline **Name** & **Symbol** & **Spectrum** \\ \hline Complete graph & \(K_{n}\) & \(\sigma\left(K_{n}\right)=\left\{\left(-1\right)^{n-1},\left(n-1\right)\right\}\) \\ Complete bipartite graph & \(K_{n,m}\) & \(\sigma\left(K_{m}\right)=\left\{\left(-\sqrt{n\cdot m}\right),\left(0\right)^{ n+m-2},\left(\sqrt{n\cdot m}\right)\right\}\) \\ Star & \(S_{n}\) & \(\sigma\left(S_{n}\right)=\left\{\left(-\sqrt{n}\right),\left(0\right)^{n-1}, \left(\sqrt{n}\right)\right\}\) \\ Path & \(P_{n}\) & \(\sigma\left(P_{n}\right)=\left\{2\cdot cos\left(\frac{\pi\cdot k}{n+1}\right) \ \mid\ k\in\left\langle n\right\rangle\right\}\) \\ Cycle & \(C_{n}\) & \(\sigma\left(C_{n}\right)=\left\{2\cdot cos\left(\frac{2\cdot\pi\cdot k}{n} \right)\ \mid\ k\in\left\langle n\right\rangle\right\}\) \\ \hline \hline \end{tabular} **Definition 2.19**.: 1. Two graphs \(G_{1}\) and \(G_{2}\) are _cospectral_ if they have the same spectrum. 2. A graph \(G\) is _determined by its spectrum_ (DS) if every graph that is cospectral with \(G\) is isomorphic to \(G\). There are many papers that give examples of DS graphs and of pairs of non-isomorphic cospectral graphs, for example [17], [18], [9]. **Theorem 2.20**.: _The graphs \(K_{n},C_{n},P_{n},\overline{K_{n}}\), and all the graphs with less than \(5\) vertices are DS [17]._ The following two examples are of pairs of non-isomorphic cospectral graphs. **Example 2.21**.: [8] The graphs \(S_{4}\) and \(C_{4}\uplus K_{1}\) Figure 2 are cospectral but **not** DS. The graphs are not isomorphic since one is connected and one is not. In the following example both graphs are connected. 
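(As a quick numerical aside before that example, the cospectral pair of Example 2.21 can be checked directly with a few lines of NumPy; the adjacency encodings below are our own.)

```python
import numpy as np

# Star S_4: center vertex 0 joined to the four leaves 1..4.
S4 = np.zeros((5, 5), dtype=int)
S4[0, 1:] = S4[1:, 0] = 1

# Disjoint union of C_4 and K_1: a 4-cycle on vertices 0..3 plus the isolated vertex 4.
C4_K1 = np.zeros((5, 5), dtype=int)
for i in range(4):
    j = (i + 1) % 4
    C4_K1[i, j] = C4_K1[j, i] = 1

# Both spectra are {-2, 0, 0, 0, 2}, so the graphs are cospectral but not isomorphic.
print(np.round(np.linalg.eigvalsh(S4), 8))
print(np.round(np.linalg.eigvalsh(C4_K1), 8))
```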
**Example 2.22**.: [19] The two graphs \(\Gamma_{1},\Gamma_{2}\) in Figure 3 are cospectral and non-isomorphic.

Figure 2: Non-isomorphic cospectral graphs with \(5\) vertices

Figure 3: Non-isomorphic cospectral graphs with 7 vertices

The following important result follows from Theorem 2.14 and Corollary 2.15.

**Theorem 2.23**.: _[_17_]_ _Let \(G\) and \(H\) be two cospectral graphs. Then_ 1. \(G\) _and_ \(H\) _have the same number of edges._ 2. \(G\) _and_ \(H\) _have the same number of triangles._ 3. _If_ \(G\) _is bipartite then_ \(H\) _is also bipartite._

## 3 Completely positive matrices and graphs

**Definition 3.1**.: A matrix \(A\) is _completely positive_ if there exists a nonnegative matrix \(B\) s.t. \(A=BB^{T}\).

The readers are referred to [3] and [15] for the properties and the many applications of CP matrices. A necessary condition for a matrix to be completely positive is that it is _doubly nonnegative_, i.e. nonnegative and positive semidefinite. This condition is not sufficient.

**Example 3.2**.: The matrix \[\begin{pmatrix}1&1&0&0&1\\ 1&2&1&0&0\\ 0&1&2&1&0\\ 0&0&1&2&1\\ 1&0&0&1&3\end{pmatrix}\] is doubly nonnegative but not CP.

When is the necessary condition sufficient?

**Definition 3.3**.: 1. The graph of a matrix \(A\in S^{n}\), denoted \(G\left(A\right)\), is a graph \(G=\left(V,E\right)\) where \[V=\left\langle n\right\rangle\text{, }\text{ }E=\left\{\left(i,j\right)\text{ }\mid\text{ }i\neq j\text{, }a_{i,j}\neq 0\right\}.\] 2. If \(G\left(A\right)=G\), we say that \(A\) is a _matrix realization_ of \(G\).

**Example 3.4**.: The graph of the matrix in Example 3.2 is a pentagon.

**Definition 3.5**.: A graph \(G\) is _completely positive_ (_CP graph_) if every doubly nonnegative matrix realization of \(G\) is completely positive.

Examples of CP graphs are: * Graphs with less than 5 vertices [12]. * Bipartite graphs [1]. * \(T_{n}\) [11].

**Definition 3.6**.: A _long odd cycle_ is a cycle of odd length greater than 3.

Graphs that contain a long odd cycle are not completely positive [2]. Examples of graphs that contain a long odd cycle are \(B_{n}\) [4], a cycle \(C_{n}\) with a chord between vertices 1 and 3 (Figure 4), and the graphs \(\Gamma_{1},\Gamma_{2}\) in Figure 3. A characterization of completely positive graphs is:

**Theorem 3.7**.: _[_11_]_ _The following properties of a graph \(G\) are equivalent :_ 1. \(G\) is CP. 2. \(G\) does not contain a long odd cycle. 3. The line graph of \(G\) is perfect.

The equivalence between 2 and 3 was proved in [16].

Figure 4: The graphs \(B_{5}\) and \(B_{6}\).

## 4 The graphs \(T_{n}\)

Recall that the graph \(T_{n}\) consists of \(n-2\) triangles with a common base.

**Theorem 4.1**.: _The spectrum of \(T_{n}\) is_ \[\sigma\left(T_{n}\right)=\left\{\left(\frac{1-\sqrt{8n-15}}{2}\right),\left(-1\right),\left(0\right)^{n-3},\left(\frac{1+\sqrt{8n-15}}{2}\right)\right\}.\]

Proof.: The adjacency matrix of \(T_{n}\) is \[A\left(T_{n}\right)=\left(\begin{array}{cccccc}0&1&1&1&\cdots&1\\ 1&0&1&1&\cdots&1\\ 1&1&0&0&\cdots&0\\ \vdots&\vdots&0&0&\cdots&\vdots\\ 1&1&0&0&\cdots&0\\ 1&1&0&0&\cdots&0\\ \end{array}\right).\] Let \(M=\lambda I-A\left(T_{n}\right)\) be the characteristic matrix of \(T_{n}\).
The characteristic polynomial of \(T_{n}\) is \[det\left(M\right)=det\left(\begin{array}{cccccc}\lambda&-1&-1&-1&\cdots&-1\\ -1&\lambda&-1&-1&\cdots&-1\\ -1&-1&\lambda&0&\cdots&0\\ \vdots&\vdots&0&\lambda&0&0\\ -1&-1&0&0&\ddots&0\\ -1&-1&0&0&\cdots&\lambda\\ \end{array}\right).\] Denote \(A=\begin{pmatrix}\lambda&-1\\ -1&\lambda\end{pmatrix}\), \(B=-J_{2,n-2}\), \(D=\lambda\cdot I_{n-2}\). Then \[M=\begin{pmatrix}A&B\\ B^{T}&D\end{pmatrix}.\] Using Theorem 2.3, we get \(det\left(M\right)=det\left(D\right)\cdot det\left({}^{M}\!/_{D}\right)=\lambda^{n-2}\cdot det\left({}^{M}\!/_{D}\right)\), where \[{}^{M}\!/_{D}=A-BD^{-1}B^{T}=A-\frac{1}{\lambda}J_{2,n-2}\cdot J_{n-2,2}=A-\frac{n-2}{\lambda}J_{2}=\begin{pmatrix}\lambda-\frac{n-2}{\lambda}&-1-\frac{n-2}{\lambda}\\ -1-\frac{n-2}{\lambda}&\lambda-\frac{n-2}{\lambda}\end{pmatrix}.\] Hence \[det\left({}^{M}\!/_{D}\right)=\left(\lambda-\frac{n-2}{\lambda}\right)^{2}-\left(1+\frac{n-2}{\lambda}\right)^{2}=\left(\lambda+1\right)\cdot\frac{\lambda^{2}-\lambda-2\left(n-2\right)}{\lambda},\] and therefore \[det\left(M\right)=\lambda^{n-2}\cdot\left(\lambda+1\right)\cdot\frac{\lambda^{2}-\lambda-2\left(n-2\right)}{\lambda}=\lambda^{n-3}\left(\lambda+1\right)\left(\lambda^{2}-\lambda-2\left(n-2\right)\right).\] The roots of \(\lambda^{2}-\lambda-2\left(n-2\right)\) are \(\frac{1\pm\sqrt{8n-15}}{2}\).
Hence, the spectrum is: \[\sigma\left(T_{n}\right)=\left\{\left(\frac{1-\sqrt{8n-15}}{2}\right),\left(-1\right),\left(0\right)^{n-3},\left(\frac{1+\sqrt{8n-15}}{2}\right)\right\}.\]

_Remark 4.2_.: Observe that for \(n=3\) we get the spectrum of a triangle \(\sigma\left(T_{3}\right)=\sigma\left(K_{3}\right)=\sigma\left(C_{3}\right)=\left\{\left(-1\right)^{2},\left(2\right)\right\}\).

In the next section we study the graphs \(T_{n,k}\), and show that for \(k>1\) they are DS (Theorem 5.4). Since \(T_{n}=T_{n,2}\), we get the following theorem as a special case of Theorem 5.4.

**Theorem 4.3**.: _The graph \(T_{n}\) is determined by its spectrum._

In the previous section we showed that \(T_{n}\) is CP. For \(k>2\), the graphs \(T_{n,k}\) contain a long odd cycle, so they are not CP. Hence the graphs \(T_{n}\) are the only graphs of pyramids that are both DS and CP.

## 5 The graphs \(T_{n,k}\)

Let \(k<n\) be natural numbers. Define \(T_{n,k}=K_{k}\vee\overline{K_{n-k}}\) (not to be confused with the Turan graph \(T\left(n,k\right)\)). For \(k=1\), \(T_{n,1}=S_{n-1}\), and for \(k=2\), \(T_{n,2}=T_{n}\). Since the graph \(T_{n,k}\), \(k>1\), consists of \(n-k\) pyramids (\(K_{k+1}\)) that intersect in \(K_{k}\), we refer to it as a _graph of pyramids_.

**Theorem 5.1**.: _The spectrum of \(T_{n,k}\) is_ \[\sigma\left(T_{n,k}\right)=\left\{\left(\lambda_{2}\right),\left(-1\right)^{k-1},\left(0\right)^{n-k-1},\left(\lambda_{1}\right)\right\}\] _where \(\lambda_{1},\lambda_{2}=\frac{\left(k-1\right)\pm\sqrt{\left(k-1\right)^{2}+4k\left(n-k\right)}}{2}\)._

_Remark 5.2_.: We precede the proof with a sanity check. For \(k=1\), we get the spectrum of \(S_{n-1}\) (Theorem 2.18). For \(k=2\), we get the spectrum of \(T_{n}\) (Theorem 4.1).

Proof.: (of Theorem 5.1) Let \(k<n\) be natural numbers and denote \(r:=n-k\). The adjacency matrix of \(T_{n,k}\) is \[A\left(T_{n,k}\right)=\begin{pmatrix}J_{k}-I_{k}&J_{k,r}\\ J_{r,k}&0_{r}\end{pmatrix}.\] Let \(M=\lambda I_{n}-A\left(T_{n,k}\right)\) be the characteristic matrix of \(T_{n,k}\). The characteristic polynomial of \(T_{n,k}\) is \[det\left(M\right)=det\begin{pmatrix}\left(\lambda+1\right)I_{k}-J_{k}&-J_{k,r}\\ -J_{r,k}&\lambda I_{r}\end{pmatrix}.\] Denote \(B=\left(\lambda+1\right)I_{k}-J_{k}\).
Then \[M=\begin{pmatrix}B&-J_{k,r}\\ -J_{r,k}&\lambda I_{r}\end{pmatrix}.\] The Schur complement \({}^{M}\!/\lambda I_{r}\) is \[{}^{M}\!/\lambda I_{r}=B-J_{k,r}\cdot\left(\lambda I\right)^{-1}\cdot J_{r,k}= B-\frac{1}{\lambda}J_{k,r}\cdot J_{r,k}=B-\frac{r}{\lambda}J_{k}=\] \[=\left(\lambda+1\right)I_{k}-J_{k}-\frac{r}{\lambda}J_{k}=\left(\lambda+1 \right)I_{k}-\frac{r+\lambda}{\lambda}J_{k}.\] By the theorem of Schur (Theorem 2.3 ) \[det\left(M\right)=det\left(\lambda I_{r}\right)\cdot det\left({}^{M}\!/ \lambda I_{r}\right)=\lambda^{r}\cdot det\left({}^{M}\!/\lambda I_{r}\right).\] The determinant of the Schur complement is : \[det\left({}^{M}\!/\lambda I_{r}\right)=det\left(\left(\lambda+1\right)I_{k}- \frac{r+\lambda}{\lambda}J_{k}\right)=det\left(\frac{r+\lambda}{\lambda}\cdot \left(\frac{\lambda\cdot\left(\lambda+1\right)}{\lambda+r}I_{k}-J_{k}\right)\right)\] \[=\left(\frac{r+\lambda}{\lambda}\right)^{k}\cdot det\left(\frac{\lambda\cdot \left(\lambda+1\right)}{\lambda+r}I_{k}-J_{k}\right).\] Since \(\sigma\left(J_{k}\right)=\left\{\left(0\right)^{k-1},\left(k\right)\right\}\), the spectrum of \(\frac{\lambda\cdot\left(\lambda+1\right)}{\lambda+r}I_{k}-J_{k}\) is \[\sigma\left(\frac{\lambda\cdot\left(\lambda+1\right)}{\lambda+r}I_{k}-J_{k} \right)=\left\{\left(\frac{\lambda\cdot\left(\lambda+1\right)}{\lambda+r}-k \right),\left(\frac{\lambda\cdot\left(\lambda+1\right)}{\lambda+r}\right)^{k-1 }\right\},\] so \[det\left(\frac{\lambda\cdot\left(\lambda+1\right)}{\lambda+r}I_{k}-J_{k} \right)=\left(\frac{\lambda\cdot\left(\lambda+1\right)}{\lambda+r}-k\right) \cdot\left(\frac{\lambda\cdot\left(\lambda+1\right)}{\lambda+r}\right)^{k-1}\] \[det\left({}^{M}\!/\lambda I_{r}\right)=\left(\frac{r+\lambda}{\lambda}\right) ^{k}\cdot\left(\frac{\lambda\cdot\left(\lambda+1\right)}{\lambda+r}-k\right) \cdot\left(\frac{\lambda\cdot\left(\lambda+1\right)}{\lambda+r}\right)^{k-1}=\] \[=\frac{\left(\lambda+1\right)^{k-1}\cdot\left(\lambda\cdot\left(\lambda+1 \right)-k\left(\lambda+r\right)\right)}{\lambda}.\] Hence, \[det\left(M\right)=\lambda^{r}\cdot det\left({}^{M}\!/\lambda I_{r}\right)= \lambda^{r}\cdot\left(\frac{\left(\lambda+1\right)^{k-1}\cdot\left(\lambda \cdot\left(\lambda+1\right)-k\left(\lambda+r\right)\right)}{\lambda}\right)\] \[=\lambda^{r-1}\cdot\left(\lambda+1\right)^{k-1}\cdot\left(\lambda\left(\lambda +1\right)-k\left(\lambda+r\right)\right)\] \[=\lambda^{r-1}\cdot\left(\lambda+1\right)^{k-1}\cdot\left(\lambda^{2}+\left(1 -k\right)\lambda-rk\right).\] Thus, the eigenvalues of \(T_{n,k}\) are \(0\) with multiplicity \(n-k-1\), \(-1\) with multiplicity \(k-1\) and the two roots of \(\lambda^{2}+\left(1-k\right)\lambda-\left(n-k\right)k\). Is \(T_{n,k}\) determined by its spectrum? We consider two cases - the star graphs and the graphs of pyramids. **Theorem 5.3**.: _The star graph \(S_{n}=T_{n+1,1}\) is DS iff \(n\) is prime._ Proof.: The spectrum of \(S_{n}\) is \[\sigma\left(S_{n}\right)=\left\{\left(-\sqrt{n}\right),\left(0\right)^{n-1}, \left(\sqrt{n}\right)\right\}\] * If \(n\) is composite, there exist natural numbers \(2\leq p\leq q\) s.t. \(n=p\cdot q\). Let \(l=n+1-p-q\). Observe that \(1\leq l\) since \[n=p\cdot q=\left(q-1\right)p+p\geq\left(2-1\right)p+p=p+p\geq q+p.\] Let \(H=K_{p,q}\uplus\overline{K_{l}}\). Since \(\sigma\left(K_{p,q}\right)=\left\{\left(-\sqrt{pq}\right),\left(0\right)^{p+ q-2},\left(\sqrt{pq}\right)\right\}\) and \(\sigma\left(\overline{K_{l}}\right)=\left\{\left(0\right)^{l}\right\}\), \(H\) and \(S_{n}\) are cospectral. 
They are not isomorphic since \(S_{n}\) is connected and \(H\) is not connected. * If \(n\) is prime, we have to show that \(S_{n}\) is DS. Let \(G=\left(V,E\right)\) be cospectral with \(S_{n}\). Since all the graphs of less than 5 vertices are DS, we can assume that \(n\geq 5\). Since \(S_{n}\) is bipartite, \(G\) is bipartite as well, so the subgraph induced by a maximal clique in \(G\) is \(K_{2}\). \(G\) is not a disjoint union of non-empty graphs, since it has only 1 positive eigenvalue. By Theorem 2.17, \(P_{4}\) is not an induced subgraph of \(G\), since \(rank\left(A\left(P_{4}\right)\right)>2\). Hence, the maximal distance between two vertices in the same connected component of \(G\) is 2. First we show that \(G\) is a tree by showing it has no cycles, and then that the trees \(G\) and \(S_{n}\) are isomorphic. Suppose to the contrary that there exists a cycle in \(G\). Let \(H\) be the only non-empty connected component of \(G\). Since \(G\) is bipartite, \(H\) is bipartite as well. Let \(L\) and \(R\) be the two parts of \(H\), \(H=\left(L\uplus R,E\right)\). Since \(H\) has a cycle \(\left|L\right|>1\), \(\left|R\right|>1\). * If \(H\) is a complete bipartite graph then \[\sigma\left(H\right)=\left\{\left(-\sqrt{\left|L\right|\cdot\left|R\right|} \right),\left(0\right)^{\left|L\right|+\left|R\right|-2},\left(\sqrt{\left|L \right|\cdot\left|R\right|}\right)\right\}.\] Thus \[\sigma\left(G\right)=\left\{\left(-\sqrt{\left|L\right|\cdot\left|R\right|} \right),\left(0\right)^{\left|V\right|-2},\left(\sqrt{\left|L\right|\cdot \left|R\right|}\right)\right\}=\sigma\left(S_{n}\right)\] \[\Rightarrow\left|L\right|\cdot\left|R\right|=n,\] contradicting the primality of \(n\). * If \(H\) is not a complete bipartite graph, then there exist \(v\in L,u\in R\) s.t. \(\left(v,u\right)\notin E\). Hence, \(d\left(u,v\right)\neq 1\). Since \(G\) is bipartite \(d\left(u,v\right)\neq 2\). Since \(u,v\) are in the same connected component, \(d\left(v,u\right)>2\). a contradiction. We showed that \(G\) has no cycle, so since \(G\) has \(n\) edges it is a tree by Theorem 2.4. Let \(v_{1},v_{2},v_{3}\) be vertices of \(G\) where \(v_{1},v_{2}\) are neighbors. Since \(G\) is triangle-free we can assume without loss of generality that \((v_{2},v_{3})\notin E\). Assume to the contrary that there exists a vertex \(u\) s.t. \((u,v_{1})\notin E\). Since \(G\) is a tree there is only one path from \(v_{1}\) to \(u\). Let \(w\in V\) be a vertex s.t. \(\left(w,u\right),\left(v_{1},u\right)\in E\). Assume without loss of generality that \(w\neq v_{2}\). The distance between \(w\) and \(v_{2}\) is strictly greater than 2 - a contradiction since \(P_{4}\) is not an induced subgraph of \(G\). Hence, \(v_{1}\) is adjacent to all the other vertices, so \(G\) is isomorphic to \(S_{n}\). **Theorem 5.4**.: _The graphs of pyramids are DS._ Proof.: Let \(1<k\leq n\) and let \(G=(V,E)\) be a graph that is cospectral with \(T_{n,k}\). We have to show that \(G\) and \(T_{n,k}\) are isomorphic. We start by showing that * \(K_{k+2}\) and \(P_{4}\) are not induced subgraphs of \(G\) (Lemma 5.6). * Every two non-empty induced subgraphs of \(G\) are connected by an edge (Lemma 5.7). * If \(U\) is a maximal clique in \(G\), then every edge in \(G\) has an end in \(U\) (Lemma 5.8). **Lemma 5.5**.: 1. \(G\) _has one positive eigenvalue._ 2. \(G\) _has_ \(k\) _negative eigenvalues._ 3. 
\(G\) _has one eigenvalue strictly smaller than_ \(-1\)_._ Proof.: Since \(G\) is cospectral with \(T_{n,k}\), its spectrum is \[\sigma\left(T_{n,k}\right)=\left\{\left(\lambda_{2}\right),\left(-1\right)^{k -1},\left(0\right)^{n-k-1},\left(\lambda_{1}\right)\right\}\] where \(\lambda_{1},\lambda_{2}=\frac{\left(k-1\right)\pm\sqrt{\left(k-1\right)^{2}+4 k\left(n-k\right)}}{2}\). **Lemma 5.6**.: 1. \(G\) _does not contain a clique of size_ \(k+2\)_._ 2. \(G\) _does not have_ \(P_{4}\) _as an induced subgraph._ Proof.: By Theorem 2.18, the spectrum of \(K_{k+2}\) is \(\sigma\left(K_{k+2}\right)=\left\{\left(-1\right)^{k+1},\left(k+1\right)\right\}\), and the spectrum of \(P_{4}\) is \(\sigma\left(P_{4}\right)=\left\{2\cdot cos\left(\frac{\pi\cdot k}{5}\right)\ \mid\ k\in\left\langle 4 \right\rangle\right\}\). \(K_{k+2}\) has \(k+1\) negative eigenvalues and \(P_{4}\) has 2 positive eigenvalues. By Lemma 5.5, \(G\) has \(k\) negative eigenvalues and 1 positive eigenvalue. Thus, by Theorem 2.17, \(P_{4}\) and \(K_{k+2}\) are not induced subgraphs of \(G\) **Lemma 5.7**.: _Every two disjoint non-empty induced subgraphs in \(G\) are connected by an edge._ Proof.: Assume there exist two disjoint non-empty induced subgraphs \(H_{1}=\left(V_{1},E_{1}\right),H_{2}=\left(V_{2},E_{2}\right)\). Let \(H=H_{1}\uplus H_{2}\) be the subgraph induced by \(V_{1}\cup V_{2}\). \[A\left(H\right)=\begin{pmatrix}A\left(H_{1}\right)&0\\ 0&A\left(H_{2}\right)\end{pmatrix}.\] By Theorem 2.16, \(\sigma\left(H\right)=\sigma\left(H_{1}\right)+\sigma\left(H_{2}\right)\). Both \(H_{1}\) and \(H_{2}\) have a positive eigenvalue, since they are non-empty. Thus, \(H\) has at least two positive eigenvalues, contradicting Theorem 2.17. **Lemma 5.8**.: _Let \(H_{1}=\left(V_{1},E_{1}\right)\) be a maximal clique in \(G\), and let \(H_{2}=\left(V_{2},E_{2}\right)\) be the subgraph induced by the vertices \(V_{2}:=V\setminus V_{1}\). Then \(H_{2}\) is an empty graph._ Proof.: Without loss of generality we assume that \(V_{1}=\left\langle m\right\rangle\). By Lemma 5.6\(m\leq k+1\). We have to show that \(H_{2}\) is an empty graph. Assume to the contrary that there is an edge \(e=\left(v_{1},v_{2}\right)\in E_{2}\), for some \(v_{1},v_{2}\in V_{2}\). Since \(H_{1}\) is induced by a maximal clique, there exists a vertex \(u_{1}\in V_{1}\) s.t. \(\left(v_{1},u_{1}\right)\notin E\) (otherwise, the subgraph obtained by \(V\uplus\left\{v_{1}\right\}\) is a complete graph of order greater than \(m\), contradicting the maximality of \(H_{1}\)). Similarly, there exists a vertex \(u_{2}\in V_{1}\) s.t. \(\left(v_{2},u_{2}\right)\notin E\). We can also assume that \(u_{1}\neq u_{2}\), since if both \(v_{1},v_{2}\) are connected to all the vertices except \(u_{1}\), then \(\left(V_{1}\setminus\left\{u\right\}\right)\uplus\left\{v_{1},v_{2}\right\}\) is a clique, contradicting the maximality of \(V_{1}\). We now show that the following cases are impossible : 1. \(\left(v_{1},u_{2}\right)\notin E\) and \(\left(v_{2},u_{1}\right)\notin E\) 2. \(\left(v_{1},u_{2}\right)\in E\) but \(\left(v_{2},u_{1}\right)\notin E\) 3. \(\left(v_{1},u_{2}\right)\notin E\) but \(\left(v_{2},u_{1}\right)\in E\) 4. \(\left(v_{1},u_{2}\right)\in E\) and \(\left(v_{2},u_{1}\right)\in E\) Case 1 is impossible since in this case the subgraph obtained by the vertices \(\left\{v_{1},v_{2},u_{1},u_{2}\right\}\) is \(K_{2}\uplus K_{2}\). Thus, \(G\) has induced cliques that are not connected by an edge, contradicting Lemma 5.7. 
Case 2 is impossible since the subgraph induced by the vertices \(\left\{v_{1},v_{2},u_{1},u_{2}\right\}\) is \(P_{4}\), contradicting Lemma 5.6. Similarly, Case 3 is impossible. The proof that Case 4 is impossible needs more work: Since \(G\) is cospectral with \(T_{n,k}\), they have the same number of edges. Thus, there exists another edge in \(E_{2}\). By Lemma 5.7, \(G\) does not have disjoint cliques. Thus, there is an edge \(\left(v_{1},v_{3}\right)\) for some \(v_{3}\in V_{2}\). By Lemma 5.6 there exists \(u_{3}\in V_{1}\) s.t. \(\left(u_{3},v_{3}\right)\notin E\). Up to isomorphism, there are two possibilities for the subgraph induced by the vertices \(\left\{v_{1},v_{2},v_{3},u_{1},u_{2},u_{3}\right\}\): either \(\left(v_{2},v_{3}\right)\in E_{2}\) or \(\left(v_{2},v_{3}\right)\notin E_{2}\).

If \(\left(v_{2},v_{3}\right)\in E_{2}\), then \(\left\{v_{1},v_{2},v_{3}\right\}\) is a triangle in \(H_{2}\). Since \(H_{1}\) is a maximal clique, there exists a vertex \(u_{3}\in V_{1}\) s.t. \(\left(v_{3},u_{3}\right)\notin E\) (otherwise \(\left\{v_{1},v_{3}\right\}\) or \(\left\{v_{2},v_{3}\right\}\) satisfies one of the previous conditions). In this case, the subgraph induced by the vertices \(\{v_{1},v_{2},v_{3},u_{1},u_{2},u_{3}\}\) is \(H_{3}\) in Figure 5. Its adjacency matrix is \[A\left(H_{3}\right)=\begin{pmatrix}0&1&1&0&1&1\\ 1&0&1&1&0&1\\ 1&1&0&1&1&0\\ 0&1&1&0&1&1\\ 1&0&1&1&0&1\\ 1&1&0&1&1&0\end{pmatrix}\] and its spectrum is \[\sigma\left(H_{3}\right)=\left\{\left(0\right)^{3},\left(-2\right)^{2},\left(4\right)\right\}.\] Hence, \(H_{3}\) has \(2\) eigenvalues strictly smaller than \(-1\), contradicting Theorem 2.17.

Figure 5: \(H_{3}\)

If \(\left(v_{2},v_{3}\right)\notin E_{2}\), there exists a vertex \(u_{3}\in V_{1}\) s.t. \(\left(v_{3},u_{3}\right)\notin E\). Assume that there exists \(u_{3}\notin\{u_{1},u_{2}\}\) such that \(\left(v_{1},u_{3}\right)\in E\) and \(\left(v_{2},u_{3}\right)\in E\) (otherwise one of the previous conditions occurs, with respect to \(v_{3}\)). In this case, the subgraph induced by the vertices \(\{v_{1},v_{2},v_{3},u_{1},u_{2},u_{3}\}\) is \(H_{4}\) in Figure 6. Its adjacency matrix is \[A\left(H_{4}\right)=\begin{pmatrix}0&1&1&0&1&1\\ 1&0&0&1&0&1\\ 1&0&0&1&1&0\\ 0&1&1&0&1&1\\ 1&0&1&1&0&1\\ 1&1&0&1&1&0\end{pmatrix}\] and its spectrum is \[\sigma\left(H_{4}\right)=\left\{\left(\frac{-\sqrt{5}-1}{2}\right),\left(-2.231\right),\left(-0.483\right),\left(0\right),\left(\frac{\sqrt{5}-1}{2}\right),\left(3.714\right)\right\}.\] Hence, \(H_{4}\) also has \(2\) eigenvalues strictly smaller than \(-1\), contradicting Theorem 2.17.

Now we can complete the proof of the theorem. The adjacency matrix of \(G\) is \[A\left(G\right)=\begin{pmatrix}A\left(H_{1}\right)&B\\ B^{T}&0\end{pmatrix}=\begin{pmatrix}J_{m}-I_{m}&B\\ B^{T}&0_{n-m}\end{pmatrix}\] for some block matrix \(B\). Recall that \[A\left(T_{n,k}\right)=\begin{pmatrix}J_{k+1}-I_{k+1}&J_{k+1,n-k-1}\\ J_{n-k-1,k+1}&0_{n-k-1}\end{pmatrix}.\] We want to show that \(m=k+1\) and \(B=J_{m\times\left(n-m\right)}\). Assume to the contrary that \(m\neq k+1\). By Lemma 5.6, \(m<k+1\). Then clearly the number of edges of \(G\) is smaller than the number of edges of \(T_{n,k}\), contradicting Theorem 2.23. Hence, \(m=k+1\) and for some \(B\in\mathbb{R}^{\left(k+1\right)\times\left(n-k-1\right)}\) \[A\left(G\right)=\begin{pmatrix}J_{k+1}-I_{k+1}&B\\ B^{T}&0_{n-k-1}\end{pmatrix}.\] Again by considering the number of edges, all the entries of \(B\) are equal to \(1\).
Thus \[A\left(G\right)=\begin{pmatrix}J_{k+1}-I_{k+1}&J_{k+1,n-k-1}\\ J_{n-k-1,k+1}&0_{n-k-1}\end{pmatrix}=A\left(T_{n,k}\right).\]

## 6 Summary

We showed that the graphs of pyramids \(T_{n,k}\) (\(2\leq k<n\)) are DS (Theorem 5.4), and that for \(k=1\), the graph \(T_{n,1}=S_{n-1}\) is DS if and only if \(n-1\) is prime. In the next table we list the graphs \(T_{n,k}\) according to their being DS/CP:

\begin{table} \begin{tabular}{c c c} \hline \hline & CP & not CP \\ \hline DS & \(k=2\) or \(k=1,\ n\) is prime & \(3\leq k\) \\ \hline not DS & \(k=1,\ n\) is composite & \\ \hline \hline \end{tabular} \end{table} Table 1: Classification of the graphs \(T_{n,k}\) according to their being DS/CP.

We have an empty field in the table, since the only graphs \(T_{n,k}\) that are not DS are stars, and stars are CP since they are bipartite. This suggests the following questions on general graphs (not only \(T_{n,k}\)).

**Question 6.1**.: _What is the smallest order \(\nu\) of a graph that is neither CP nor DS?_

Solution. The graphs \(\Gamma_{1},\Gamma_{2}\) in Figure 3 are not DS, and by Theorem 3.7 are not CP, so \(\nu\leq 7\). On the other hand, \(\nu\geq 6\) since all the graphs with 4 vertices are CP, and there is only one pair of non-isomorphic cospectral graphs with 5 vertices (Figure 2), and both are CP. Computer search shows that \(\nu>6\) [6], so \(\nu=7\).
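The spectral facts used above, as well as the 5-vertex claim in the solution, are easy to confirm numerically. The following check is not part of the original argument; it is a sketch that assumes numpy and networkx (with its graph atlas helper) are available, and the expected outputs are noted in comments.

```python
import numpy as np
import networkx as nx

# Adjacency matrices of H_3 and H_4 from the proof of Theorem 5.4.
H3 = np.array([[0, 1, 1, 0, 1, 1],
               [1, 0, 1, 1, 0, 1],
               [1, 1, 0, 1, 1, 0],
               [0, 1, 1, 0, 1, 1],
               [1, 0, 1, 1, 0, 1],
               [1, 1, 0, 1, 1, 0]])
H4 = np.array([[0, 1, 1, 0, 1, 1],
               [1, 0, 0, 1, 0, 1],
               [1, 0, 0, 1, 1, 0],
               [0, 1, 1, 0, 1, 1],
               [1, 0, 1, 1, 0, 1],
               [1, 1, 0, 1, 1, 0]])

for name, A in (("H3", H3), ("H4", H4)):
    spec = np.sort(np.linalg.eigvalsh(A))
    below = int(np.sum(spec < -1 - 1e-9))
    # Both graphs should report 2 eigenvalues strictly smaller than -1.
    print(name, np.round(spec, 3), "eigenvalues < -1:", below)

# Claim used for nu >= 6: exactly one non-isomorphic cospectral pair on 5 vertices.
five = [g for g in nx.graph_atlas_g() if g.number_of_nodes() == 5]
spectra = [tuple(np.round(np.sort(np.linalg.eigvalsh(nx.to_numpy_array(g))), 6)) for g in five]
pairs = [(i, j) for i in range(len(five)) for j in range(i + 1, len(five))
         if spectra[i] == spectra[j] and not nx.is_isomorphic(five[i], five[j])]
print("cospectral non-isomorphic pairs on 5 vertices:", len(pairs))  # expected: 1
```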
2308.10370
cantnlp@LT-EDI-2023: Homophobia/Transphobia Detection in Social Media Comments using Spatio-Temporally Retrained Language Models
This paper describes our multiclass classification system developed as part of the LT-EDI@RANLP-2023 shared task. We used a BERT-based language model to detect homophobic and transphobic content in social media comments across five language conditions: English, Spanish, Hindi, Malayalam, and Tamil. We retrained a transformer-based cross-language pretrained language model, XLM-RoBERTa, with spatially and temporally relevant social media language data. We also retrained a subset of models with simulated script-mixed social media language data with varied performance. We developed the best performing seven-label classification system for Malayalam based on weighted macro averaged F1 score (ranked first out of six) with variable performance for other language and class-label conditions. We found the inclusion of this spatio-temporal data improved the classification performance for all language and task conditions when compared with the baseline. The results suggest that transformer-based language classification systems are sensitive to register-specific and language-specific retraining.
Sidney G. -J. Wong, Matthew Durward, Benjamin Adams, Jonathan Dunn
2023-08-20T21:30:34Z
http://arxiv.org/abs/2308.10370v2
Cantnlp@LT-EDI-2023: Homophobia/Transphobia Detection in Social Media Comments using Spatio-Temporally Retrained Language Models ###### Abstract This paper describes our multiclass classification system developed as part of the LT-EDI@RANLP-2023 shared task. We used a BERT-based language model to detect homophobic and transphobic content in social media comments across five language conditions: English, Spanish, Hindi, Malayalam, and Tamil. We retrained a transformer-based cross-language pretrained language model, XLM-RoBERTa, with spatially and temporally relevant social media language data. We also retrained a subset of models with simulated script-mixed social media language data with varied performance. We developed the best performing seven-label classification system for Malayalam based on weighted macro averaged F1 score (ranked first out of six) with variable performance for other language and class-label conditions. We found the inclusion of this spatio-temporal data improved the classification performance for all language and task conditions when compared with the baseline. The results suggests that transformer-based language classification systems are sensitive to register-specific and language-specific retraining. ## 1 Introduction The purpose of this shared task was to develop a classification system to predict whether samples of social media comments contained forms of homophobia or transphobia across different language conditions. There were no restrictions on language models or data pre-processing methods. The five language conditions: English, Spanish, Hindi, Malayalam, and Tamil. In addition to the language conditions, participants were tasked with developing a system for a three-class and seven-class classification system defining different forms of homophobic and transphobic hate speech Chakravarthi et al. (2021). The main contribution of our proposed system outlined in this paper included spatio-temporal relevant social media language data to retrain a transformer-based language model to increase the sensitivity of the pretrained language model (PLM). We have also created simulated samples of script-mixed social media language data which was used as part of the retraining process. ### Problem Description The organisers of this shared task provided.csv files containing labelled data of pre-processed comments of users reacting to LGBT+ videos on YouTube. This was an expanded data set of the Homophobia/Transphobia Detection data set Chakravarthi et al. (2021) with the inclusion of Hindi, Malayalam, and Spanish in addition to the pre-existing English and Tamil data. The comments were manually annotated based on a three-class and a seven-class classification system. The participants of the shared task were not provided any further information on the annotation process or measures of inter-annotator agreement. The shared task was broken down into the following tasks: * Task A involves developing a classification model for three classes across all five language conditions as shown in Table 1. * Task B involves developing a classification model for seven classes across three language conditions as shown in Table 2. The organisers of this shared task provided training and validation data to develop the system. The test data was provided once the results of the shared task were announced. The organisers evaluated the performance of each homophobia/transphobia detection system with weighted macro averaged F1 score. 
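For reference, the weighted macro averaged F1 score used for ranking can be computed with scikit-learn; the snippet below is only an illustration with invented predictions for the Task A label set, not the organisers' evaluation script.

```python
from sklearn.metrics import f1_score

# Toy example with the Task A label set (predictions are made up for illustration).
y_true = ["Non-anti-LGBT+ content", "Homophobia", "Transphobia", "Non-anti-LGBT+ content"]
y_pred = ["Non-anti-LGBT+ content", "Homophobia", "Non-anti-LGBT+ content", "Non-anti-LGBT+ content"]

# Weighted macro averaged F1: per-class F1 scores averaged with class-support weights.
print(f1_score(y_true, y_pred, average="weighted"))
```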
The performance for each language and class-label condition were ranked based on this score. ### Related Work Previous approaches in detecting homophobia and transphobia on social media comments has shown varying levels of success Chakravarthi et al. (2022). In this shared task, participants were asked to detect homophobia and transphobia across three language conditions: English, Tamil, and an additional English-Tamil script-mixed data set. Participants of the shared task combined various natural language processing methods such as statistical language models and machine learning to complete the task. However, the performance of transformer-based language models remained consistently high across all three language conditions. More specifically BERT-based models with minimal fine-tuning outperformed statistical language models using TF-IDF for feature extraction. BERT, or Bidirectional Encoder Representations from Transformers, structures the complex relationship between words in a language through embeddings Devlin et al. (2019). The best performing BERT-based system for English yielded an average weighted macro F1 score of 0.92 compared with non-transformer-based language models Maimaitituoheti et al. (2022). Conversely, the same BERT-based models struggled to outperform machine learning and deep learning systems approaches in Tamil and in the English-Tamil condition. This suggests further work is needed to refine BERT-based to improve its performance outside an English-context. Based on the promising results of BERT-based language models in Chakravarthi et al. (2022), the current study extend on this transformer-based approach to develop and refine a homophobia and transphobia detection system across language conditions. ## 2 Methodology In this section, we provide a system overview of our transformer-based language model. We also provide details on our retraining and fine-tuning procedures. ### System Overview Due to the number of language conditions for the current shared task, it was unfeasible to use language-specific BERT-based models. One risk for using independently developed language-specific BERT-based models was that there was no control on the source data used to train the representations. For this reason, we used a cross-lingual transformer-based language model as our baseline language model. XLM-RoBERTa was trained on two terabytes of CommonCrawl for 100 languages Conneau et al. (2020). Some of these languages include English, Hindi, Malayalam, Spanish, and Tamil. Furthermore, Romanised Hindi and Tamil have also been included in the pretraining of this cross-lingual transformer-based language model. Despite these benefits, we were aware of the risk in overgeneralising the register of language of \begin{table} \begin{tabular}{l c c c c} \hline \hline **Language Condition** & **H** & **N** & **T** & **Total** \\ \hline English & 179 & 2978 & 7 & 3164 \\ Hindi & 45 & 2423 & 92 & 2560 \\ Malayalam & 476 & 2468 & 170 & 3114 \\ Spanish & 200 & 450 & 200 & 850 \\ Tamil & 453 & 2064 & 145 & 2662 \\ \hline \hline \end{tabular} \end{table} Table 1: The labelled training data broken down by language condition and class label for Task A. The class labels for Task A were homophobia (H), non-anti-LGBT+ content (N), and transphobia (T). 
\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline **Language Condition** & **CS** & **HT** & **HD** & **HS** & **NO** & **TT** & **TD** & **Total** \\ \hline English & 302 & 12 & 167 & 436 & 2240 & 1 & 6 & 3164 \\ Malayalam & 152 & 57 & 419 & 69 & 2247 & 7 & 163 & 3114 \\ Tamil & 212 & 37 & 416 & 218 & 1634 & 34 & 111 & 2662 \\ \hline \hline \end{tabular} \end{table} Table 2: The labelled training data broken down by language condition and class label for Task B. The class labels for Task B were counter-speech (CS), homophobic-threatening (HT), homophobic-derogation (HD), hope-speech (HS), none-of-the-above (NO), transphobic-threatening (TT), and transphobic-derogation (TD). CommonCrawl as the language used on this platform is not reflective of the language used on social media. We could retrain PLMs for a specific task to mitigate this issue without the need to train a PLM from scratch. This retraining method has shown to improve the performance of PLMs in downstream tasks (such as label classification) for under-represented and under-resourced languages by pretraining with additional register-specific language data Liu et al. (2019). Therefore, we have retrained the baseline XLM-RoBERTa PLM prior to fine-tuning the baseline XLM-RoBERTa PLM. ### Retraining We used social media language data from the Corpus of Global Language Use (CGLU) for retraining Dunn (2020). The CGLU is a very large digital corpora which contains over 20 billion words associated with 10,000 point locations across the globe. Although the source of the CGLU social media language data comes from Twitter, a microblogging platform, and the training data comes from YouTube, a video sharing platform, our focus is on the written language components, and we assume some close domain alignment. We removed hashtags and hyperlinks to ensure the retraining data has a similar form to the training data. We also removed multiple punctuation and blank space characters. Short tweets with fewer than 50 characters were also systematically removed. We controlled the spatial and temporal window of the sampled tweets by restricting the sample of tweets to those originating in India produced between 1 January 2019 and 31 December 2019. Once again, we wanted to closely match retraining data with the time and geographic source of the labelled training data Chakravarthi et al. (2021). We used the langdetect1 library to detect the language condition for each tweet. For each of the five different language conditions, we extracted a random sample of 50,000 tweets for training. We then use the LanguageModelingModel class from the simpletransformers library to retrain XLM-RoBERTa on an unlabelled corpus of social media language data. Footnote 1: [https://pypi.org/project/langdetect/](https://pypi.org/project/langdetect/) In addition to creating corpus training data for the five different language conditions, we created additional corpus training data with simulated script-mixing. A major motivation to retrain the model with the simulated script-mixed retraining data is the lack of Romanised Malayalam in XLM-RoBERTa. We used the transliteration.XlitEngine class from the ai4bharat2 library to transliterate one-fifth of the sample tweets from Indic to Latin script. Footnote 2: [https://pypi.org/project/ai4bharat-transliteration/](https://pypi.org/project/ai4bharat-transliteration/) The size of our retraining corpora for each language condition is shown in Table 3. 
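For concreteness, the filtering and retraining procedure just described can be sketched as follows. This is an illustrative outline only: the file paths, sample sizes, and simpletransformers argument values are placeholders and assumptions rather than our exact configuration.

```python
import random
from langdetect import detect
from simpletransformers.language_modeling import LanguageModelingModel

# Keep pre-processed tweets of at least 50 characters detected as Tamil
# ("india_2019_tweets.txt" is a placeholder path with one tweet per line).
with open("india_2019_tweets.txt", encoding="utf-8") as handle:
    tweets = [line.strip() for line in handle if len(line.strip()) >= 50]

tamil = []
for tweet in tweets:
    try:
        if detect(tweet) == "ta":
            tamil.append(tweet)
    except Exception:
        continue  # langdetect raises on ambiguous or empty input

sample = random.sample(tamil, k=min(50_000, len(tamil)))
with open("retrain_ta.txt", "w", encoding="utf-8") as handle:
    handle.write("\n".join(sample))

# Retrain XLM-RoBERTa on the sampled corpus with a masked language modelling objective
# (the "xlmroberta" model type string and the args shown are assumptions).
model = LanguageModelingModel(
    "xlmroberta", "xlm-roberta-base",
    args={"num_train_epochs": 4, "evaluate_during_training": True, "output_dir": "retrained_ta"},
)
model.train_model("retrain_ta.txt", eval_file="retrain_ta_eval.txt")
```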
We retrained the language model for 4 iterations and we evaluated the training for every 500 steps. We saved the model with the best performance determined by the loss function in our output directory.

### Fine-tuning

Once we retrained XLM-RoBERTa with the social media language data from the CGLU, we fine-tuned the baseline and the retrained language models with the labelled training data. As shown in Table 1 and Table 2, the class labels for both Task A and Task B are highly unbalanced. We used the RandomOverSampler class from the imblearn library to oversample the minority classes. In most cases, these minority classes related to homophobia and transphobia. We used the ClassificationModel class from the simpletransformers library to fine-tune the retrained PLMs with the labelled training data. We trained the classification model for 8 iterations and we evaluated the training for every 500 steps. We also used AdamW optimization Loshchilov and Hutter (2019). We applied the same fine-tuning strategy to Task A and Task B to maintain consistency across the shared task. We saved the model with the best performance determined by the loss function in our output directory.

\begin{table} \begin{tabular}{l c c c} \hline \hline **Language** & **Indic** & **Latin** & **Total** \\ \hline English & - & 50K & 50K \\ Spanish & - & 50K & 50K \\ Hindi & 50K & - & 50K \\ Malayalam & 50K & - & 50K \\ Tamil & 50K & - & 50K \\ SM Hindi & 37.5K & 12.5K & 50K \\ SM Malayalam & 37.5K & 12.5K & 50K \\ SM Tamil & 37.5K & 12.5K & 50K \\ \hline \hline \end{tabular} \end{table} Table 3: Corpus size of language samples for retraining with simulated script-mixing (SM).

### Other Settings

We completed the retraining and fine-tuning in Python3 on Google Colaboratory. We used GPU as our hardware accelerator using an NVIDIA A100 Tensor Core graphics card.

## 3 Results

The results of Task A are shown in Table 4 and the results of Task B are shown in Table 5. Both tables compare the weighted macro averaged F1 metrics for the classification models derived from the baseline XLM-RoBERTa and the modified XLM-RoBERTa models produced specifically for this task. The ranking of our models is also presented in the final column of the tables.

In Task A, English, Hindi, and Malayalam performed the best of the baseline classification models with a macro averaged F1 score of 0.93. Tamil performed the worst of the baseline classification models. The performance of the retrained classification models was consistently better than the baseline classification models. Malayalam performed the best with a macro averaged F1 score of 0.95 while Spanish performed the worst with a macro averaged F1 score of 0.86. The classification models fine-tuned on simulated script-mixed training data did not improve the classification performance for Tamil. Conversely, we saw a decrease in performance for Malayalam. There was a large improvement in classification performance for Hindi. We have highlighted the performance metric in bold in terms of the optimal classification models submitted to the organisers for evaluation in Table 4. Hindi and Tamil ranked third out of seven, Malayalam ranked fourth out of seven, and English ranked seventh out of eleven. Due to issues with the labels, the submission for the Spanish condition in Task A was invalid. However, the macro averaged F1 metric is provided in brackets for reference.
\begin{table} \begin{tabular}{l c c c c} \hline \hline **Language Condition** & **Baseline** & **Retrained** & **Script-Mixed** & **Rank** \\ \hline English & 0.93 & **0.94** & - & 7 \\ Hindi & 0.93 & 0.92 & **0.97** & 3 \\ Malayalam & 0.93 & 0.95 & **0.94** & 4 \\ Spanish & 0.83 & (0.86) & - & - \\ Tamil & 0.70 & 0.93 & **0.93** & 3 \\ \hline \hline \end{tabular} \end{table} Table 4: Macro averaged F1 for each language condition for Task A and overall rank for the shared task. The submitted result is in **bold**. Note that the result for Spanish was invalid.

\begin{table} \begin{tabular}{l c c c c} \hline \hline **Language Condition** & **Baseline** & **Retrained** & **Script-Mixed** & **Rank** \\ \hline English & 0.15 & **0.54** & - & 6 \\ Malayalam & 0.86 & 0.86 & **0.88** & 1 \\ Tamil & 0.77 & 0.90 & **0.80** & 4 \\ \hline \hline \end{tabular} \end{table} Table 5: Macro averaged F1 for each language condition for Task B and overall rank for the shared task. The submitted result is in **bold**.

In Task B, Malayalam performed the best of the baseline classification models with a macro averaged F1 score of 0.86 while English performed the worst with a macro averaged F1 score of 0.15. The performance increased once we fine-tuned the classification model with the retrained XLM-RoBERTa, with the macro averaged F1 score for English improving from 0.15 to 0.54 and for Tamil improving from 0.77 to 0.90. The performance remained stable between the baseline and retrained models for Malayalam. When we introduced the script-mixed models for Malayalam and Tamil, we saw varying levels of performance. The macro averaged F1 score for Malayalam increased from 0.86 to 0.88. This suggests an increase in performance accuracy. Counterintuitively, the macro averaged F1 score for Tamil decreased from 0.90 to 0.80, which was on par with the baseline model. This suggests a decrease in performance accuracy. Our Malayalam classification system fine-tuned on the script-mixed social media language data ranked first out of six, while the Tamil classification system fine-tuned on the script-mixed social media language data ranked fourth out of seven despite the decrease in performance from the re
The imbalance observed between language conditions as exhibited in the discrepancies in their respective training, validation, and test data sets poses an additional obstacle when it comes to drawing comparative inferences. However, there are opportunities that could help balance the data and potentially improve performance. To alleviate this issue, the utilisation of synthetic data through data augmentation techniques could prove to be a promising approach. Data augmentation, as a broad concept, involves expanding the existing data sets to enhance their diversity, and therefore, the generalisability of the models trained on them (Hoffmann et al., 2022). The generation of synthetic data has been demonstrated to be an effective mechanism in addressing biased data sets, but it also presents a desirable practice particularly suited for hate speech detection given the prevailing concerns over text obfuscation of such instances (Aggarwal and Zesch, 2022). To help facilitate a system that can account for these nuances, data noise injection via character, word, or even phrasal additions could be advantageous. In this sense, the application of synthetic data coupled with noise injection can help address class imbalance, while also training more robust classifiers that are less reliant on explicit instances of derogatory terms, but are more adept at discerning underlying contextual uses of hate speech. There are real-world applications to our homophobia/transphobia detection system as we can refine our model with language-specific and region-specific information to monitor hate speech on social media directed at LGBTQ+ communities. This is particularly useful for languages that are not otherwise as well represented in large language models such as Malayalam which saw great improvement in performance with the addition of script-mixed retraining data. ## 5 Conclusion We saw an improvement in performance in our retrained homophobia/transphobia classification model when compared with our baseline model. Our unique approach to this shared task has shown potential for retraining pretrained language models with spatio-temporal relevant language data to improve the performance of our homophobia/transphobia detection system. Counterintuitively, the inclusion of script-mixed language data gave us variable results. We will aim to refine our classification system with other attested methods such as noise injection in order to improve the performance of our system.
2307.11833
PINNsFormer: A Transformer-Based Framework For Physics-Informed Neural Networks
Physics-Informed Neural Networks (PINNs) have emerged as a promising deep learning framework for approximating numerical solutions to partial differential equations (PDEs). However, conventional PINNs, relying on multilayer perceptrons (MLP), neglect the crucial temporal dependencies inherent in practical physics systems and thus fail to propagate the initial condition constraints globally and accurately capture the true solutions under various scenarios. In this paper, we introduce a novel Transformer-based framework, termed PINNsFormer, designed to address this limitation. PINNsFormer can accurately approximate PDE solutions by utilizing multi-head attention mechanisms to capture temporal dependencies. PINNsFormer transforms point-wise inputs into pseudo sequences and replaces point-wise PINNs loss with a sequential loss. Additionally, it incorporates a novel activation function, Wavelet, which anticipates Fourier decomposition through deep neural networks. Empirical results demonstrate that PINNsFormer achieves superior generalization ability and accuracy across various scenarios, including PINNs failure modes and high-dimensional PDEs. Moreover, PINNsFormer offers flexibility in integrating existing learning schemes for PINNs, further enhancing its performance.
Zhiyuan Zhao, Xueying Ding, B. Aditya Prakash
2023-07-21T18:06:27Z
http://arxiv.org/abs/2307.11833v3
# PINNsFormer: A Transformer-Based Framework For Physics-Informed Neural Networks ###### Abstract Physics-Informed Neural Networks (PINNs) have emerged as a promising deep learning framework for approximating numerical solutions to partial differential equations (PDEs). However, conventional PINNs, relying on multilayer perceptrons (MLP), neglect the crucial temporal dependencies inherent in practical physics systems and thus fail to propagate the initial condition constraints globally and accurately capture the true solutions under various scenarios. In this paper, we introduce a novel Transformer-based framework, termed PINNsFormer, designed to address this limitation. PINNsFormer can accurately approximate PDE solutions by utilizing multi-head attention mechanisms to capture temporal dependencies. PINNsFormer transforms point-wise inputs into pseudo sequences and replaces point-wise PINNs loss with a sequential loss. Additionally, it incorporates a novel activation function, Wavelet, which anticipates Fourier decomposition through deep neural networks. Empirical results demonstrate that PINNsFormer achieves superior generalization ability and accuracy across various scenarios, including PINNs failure modes and high-dimensional PDEs. Moreover, PINNsFormer offers flexibility in integrating existing learning schemes for PINNs, further enhancing its performance. ## 1 Introduction Numerically solving partial differential equations (PDEs) has been widely studied in science and engineering. The conventional approaches, such as finite element method (Bathe, 2007) or pseudo-spectral method (Fornberg, 1998), suffer from high computational costs in constructing meshes for high-dimensional PDEs. With the development of scientific machine learning, Physics-informed neural networks (PINNs) (Lagaris et al., 1998; Raissi et al., 2019) have emerged as a promising novel approach. Conventional PINNs and most variants employ multilayer perceptrons (MLP) as end-to-end frameworks for point-wise predictions, achieving remarkable success in various scenarios. Nevertheless, recent works have shown that PINNs fail in scenarios when solutions exhibit high-frequency or multiscale features (Raissi, 2018; Fuks & Tchelepi, 2020; Krishnapriyan et al., 2021; Wang et al., 2022), though the corresponding analytical solutions are simple. In such cases, PINNs tend to provide overly smooth or naive approximations, deviating from the true solution. Existing approaches to mitigate these failures typically involve two general strategies. The first strategy, known as data interpolation (Raissi et al., 2017; Zhu et al., 2019; Chen et al., 2021), employs data regularization observed from simulations, or real-world scenarios. These approaches face challenges in acquiring ground truth data. The second strategy employs different training schemes (Mao et al., 2020; Krishnapriyan et al., 2021; Wang et al., 2021; 2022), which potentially impose a high computational cost in practice. For instance, _Seq2Seq_ by Krishnapriyan et al. (2021) requires training multiple neural networks sequentially, while other networks suffer from convergence issues due to error accumulation. Another method, _Neural Tangent Kernel (NTK)_(Wang et al., 2022a), involves constructing kernels \(K\in\mathbb{R}^{D\times P}\), where \(D\) is the sample size and \(P\) is the model parameter, which suffers from scalability issues as the sample size or model parameter increases. 
While most efforts to improve the generalization ability and address failure modes in PINNs have focused on the aforementioned aspects, the crucial temporal dependencies in real-world physical systems have been largely neglected. Finite Element Methods, for instance, implicitly incorporate temporal dependencies by sequentially propagating the global solution. This propagation relies on the principle that the state at time \(t+\Delta t\) depends on the state at time \(t\). In contrast, PINNs, being a point-to-point framework, do not account for temporal dependencies within PDEs. Neglecting temporal dependencies poses challenges in globally propagating initial condition constraints in PINNs. Consequently, PINNs often exhibit failure modes where the approximations remain accurate near the initial condition but subsequently fail into overly smooth or naive approximations. To address this issue of neglecting temporal dependencies in PINNs, a natural idea is employing Transformer-based models (Vaswani et al., 2017), which are known for capturing long-term dependencies in sequential data through multi-head self-attentions and encoder-decoder attentions. Variants of transformer-based models have shown substantial success across various domains. However, adapting the Transformer, which is inherently designed for sequential data, to the point-to-point framework of PINNs presents non-trivial challenges. These challenges span both the data representation and the regularization loss within the framework. **Main Contributions.** In this work, we introduce PINNsFormer, a novel sequence-to-sequence PDE solver built on the Transformer architecture. To the best of our knowledge, PINNsFormer is the first framework in the realm of PINNs that explicitly focuses on and learns temporal dependencies within PDEs. Our key contributions can be summarized as follows: * **New Framework:** We propose a novel yet straightforward Transformer-based framework named PINNsFormer. This framework equips PINNs with the capability to capture temporal dependencies through the generated pseudo sequences, thereby enhancing the generalization ability and approximation accuracy in effectively solving PDEs. * **Novel Activation:** We introduce a novel non-linear activation function Wavelet. Wavelet is designed to anticipate the Fourier Transform for arbitrary target signals, making it a universal approximator for infinite-width neural networks. Wavelet can also be potentially beneficial to various deep learning tasks across different model architectures. * **Extensive Experiments:** We conduct comprehensive evaluations of PINNsFormer for various scenarios. We demonstrate its advantages in optimization and approximation accuracy when addressing the failure modes of PINNs or solving high-dimensional PDEs. Additionally, we showcase the flexibility of PINNsFormer in incorporating variant learning schemes of PINNs. We show the outperformance of PINNsFormer than PINNs with the schemes. ## 2 Related Work **Physics-Informed Neural Networks (PINNs).** Physics-Informed Neural Networks (PINNs) have emerged as a promising approach to tackling scientific and engineering problems. Raissi et al. (2019) introduced the novel framework that incorporates physical laws into the neural network training to solve PDEs. This pioneering work has inspired subsequent investigations, leading to applications across diverse domains, including fluid dynamics, solid mechanics, and quantum mechanics (Ling et al., 2016; Carleo et al., 2019; Yang et al., 2020). 
Researchers also investigate different learning schemes of PINNs (Mao et al., 2020; Wang et al., 2021, 2022a). These strategies have yielded substantial improvements in convergence, generalization, and interpretability than PINNs. **Failure Modes of PINNs.** Despite the promise exhibited by PINNs, recent works have indicated certain failure modes inherent to PINNs, particularly when confronted with PDEs featuring high-frequency or multiscale features (Fuks & Tchelepi, 2020; Raissi, 2018; McClenny & Braga-Neto, 2020; Krishnapriyan et al., 2021; Zhao et al., 2022; Wang et al., 2022a). This challenge has prompted investigations from various perspectives, including designing variant model architectures, learning schemes, or using data interpolations (Han et al., 2018; Lou et al., 2021; Wang et al., 2021, 2022a;b). A comprehensive understanding of PINNs' limitations and the underlying failure modes is fundamental for applications in addressing complicated physical problems. **Transformer-Based Models.** The Transformer model (Vaswani et al., 2017) has achieved significant attention due to its ability to capture long-term dependencies, leading to major achievements in natural language processing tasks (Devlin et al., 2018; Radford et al., 2018). Transformers have also been extended to other domains, including computer vision, speech recognition, and time-series analysis (Liu et al., 2021; Dosovitskiy et al., 2020; Gulati et al., 2020; Zhou et al., 2021). Researchers have also developed techniques aimed at enhancing the efficiency of Transformers, such as sparse attention and model compression (Child et al., 2019; Sanh et al., 2019). ## 3 Methodology **Preliminaries:** Let \(\Omega\) be an open set in \(\mathbb{R}^{d}\), bounded by \(\partial\Omega\in\mathbb{R}^{d-1}\). The PDEs with spatial input \(\mathbf{x}\) and temporal input \(t\) generally fit the following abstraction: \[\begin{split}\mathcal{D}[u(\mathbf{x},t)]=f(\mathbf{x},t),\;\;\forall \mathbf{x},t\in\Omega\\ \mathcal{B}[u(\mathbf{x},t)]=g(\mathbf{x},t),\;\;\forall\mathbf{x},t\in \partial\Omega\end{split} \tag{1}\] where \(u\) is the PDE's solution, \(\mathcal{D}\) is the differential operator that regularizes the behavior of the system, and \(\mathcal{B}\) describes the boundary or initial conditions in general. Specifically, \(\{\mathbf{x},t\}\in\Omega\) are residual points, and \(\{\mathbf{x},t\}\in\partial\Omega\) are boundary/initial points. Let \(\hat{u}\) be neural network approximations, PINNs describe the framework where \(\hat{u}\) is empirically regularized by the following constraints: \[\mathcal{L}_{\texttt{PINNs}}=\lambda_{r}\sum_{i=1}^{N}\|\mathcal{D}[\hat{u}( \mathbf{x},t)]-f(\mathbf{x},t)\|^{2}+\lambda_{b}\sum_{i=1}^{N_{b}}\|\mathcal{B}[\hat{u }(\mathbf{x},t)]-g(\mathbf{x},t)\|^{2} \tag{2}\] where \(N_{b},N_{r}\) refer to the residual and boundary/initial points separately, \(\lambda_{r},\lambda_{b}\) are the regularization parameters that balance the emphasis of the loss terms. The neural network \(\hat{u}\) takes vectorized \(\{\mathbf{x},t\}\) as input and outputs the approximated solution. The goal is then to use machine learning methodologies to train the neural network \(\hat{u}\) that minimizes the loss in Equation 2. **Methodology Overview:** While PINNs focus on point-to-point predictions, the exploration of temporal dependencies in real-world physics systems has been merely neglected. 
Conventional PINNs methods employ a single pair of spatial information \(\mathbf{x}\) and temporal information \(t\) to approximate the numerical solution \(u(\mathbf{x},t)\), without accounting for temporal dependencies across previous or subsequent time steps. However, this simplification is only applicable to elliptic PDEs, where the relationships between unknown functions and their derivatives do not explicitly involve time. In contrast, hyperbolic and parabolic PDEs incorporate time derivatives, implying that the state at one time step can influence states at preceding or subsequent time steps. Consequently, considering temporal dependencies is crucial to effectively address these PDEs using PINNs. In this section, we introduce a novel framework featuring a Transformer-based model of PINNs, namely PINNsFormer. Unlike point-to-point predictions, PINNsFormer extends PINNs' capabilities to sequential predictions. PINNsFormer allows accurately approximating solutions at specific time steps while also learning and regularizing temporal dependencies among incoming states. The framework consists of four components: Pseudo Sequence Generator, Spatio-Temporal Mixer, Encoder-Decoder with multi-head attention, and an Output Layer. Additionally, we introduce a novel activation function, named Wavelet, which employs Real Fourier Transform techniques to anticipate solutions to PDEs. The framework diagram is exhibited in Figure 1. We provide detailed explanations of each framework component and learning schemes in the following subsections. ### Pseudo Sequence Generator While Transformers and Transformer-based models are designed to capture long-term dependencies in sequential data, conventional PINNs utilize non-sequential data as inputs for neural networks. Consequently, to incorporate PINNs with Transformer-based models, it is essential to transform the pointwise spatiotemporal inputs into temporal sequences. Thus, for a given spatial input \(\mathbf{x}\in\mathbb{R}^{d-1}\) and temporal input \(t\in\mathbb{R}\), the Pseudo Sequence Generator performs the following operations: \[[\mathbf{x},t]\xlRightarrow{\texttt{Generator}}\{[\mathbf{x},t],[\mathbf{x},t+\Delta t], \ldots,[\mathbf{x},t+(k-1)\Delta t]\} \tag{3}\] where \([\cdot]\) is the concatenation operation, such that \([\mathbf{x},t]\in\mathbb{R}^{d}\) is vectorized, and the generator outputs the pseudo sequence in the shape of \(\mathbb{R}^{k\times d}\). In short, the Pseudo Sequence Generator extrapolates sequential time series by extending a single spatiotemporal input to multiple isometric discrete time steps. \(k\) and \(\Delta t\) are hyperparameters, which intuitively determine how many steps the pseudo sequence needs to 'look ahead' and how 'far' each step should be. In practice, both \(k\) and \(\Delta t\) should not be set to very large scales, as larger \(k\) can cause heavy computational and memory overheads, while larger \(\Delta t\) may undermine the time dependency relationships of neighboring discrete time steps. ### Model Architecture In addition to the Pseudo Sequence Generator, PINNsFormer consists of three components of its architecture: Sptio-Temporal Mixer, Encoder-Decoder with multi-head attentions, and Output Layer. The Output Layer is straightforward to interpret as a fully-connected MLP appended to the end. We provide detailed insights into the first two components below. 
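Before turning to those components, the Pseudo Sequence Generator of Equation 3 is simple enough to state in code. The sketch below is our own PyTorch reading of the operation, with illustrative values of \(k\) and \(\Delta t\); it is not the authors' released implementation.

```python
import torch

def make_pseudo_sequence(x: torch.Tensor, t: torch.Tensor, k: int = 5, dt: float = 1e-3) -> torch.Tensor:
    """Extend pointwise inputs [x, t] into k isometric pseudo time steps.

    x: (N, d-1) spatial coordinates, t: (N, 1) times.
    Returns a (N, k, d) tensor [[x, t], [x, t + dt], ..., [x, t + (k-1) dt]].
    """
    offsets = dt * torch.arange(k, dtype=t.dtype, device=t.device)  # (k,)
    t_seq = t.unsqueeze(1) + offsets.view(1, k, 1)                  # (N, k, 1)
    x_seq = x.unsqueeze(1).expand(-1, k, -1)                        # (N, k, d-1)
    return torch.cat([x_seq, t_seq], dim=-1)

# Example: 1D space (d = 2); every collocation point becomes a length-k sequence.
x, t = torch.rand(4, 1), torch.rand(4, 1)
print(make_pseudo_sequence(x, t).shape)  # torch.Size([4, 5, 2])
```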
Notably, PINNsFormer relies only on linear layers and non-linear activations, avoiding complex operations such as convolutional or recurrent layers. This design preserves PINNsFormer's computational efficiency in practice.

Figure 1: Architecture of proposed PINNsFormer. PINNsFormer generates a pseudo sequence based on pointwise input features. It outputs the corresponding sequential approximated solution. The first approximation of the sequence is the desired solution \(\hat{u}(\mathbf{x},t)\).

**Spatio-Temporal Mixer.** Most PDEs contain low-dimensional spatial or temporal information. Directly feeding low-dimensional data to encoders may fail to capture the complex relationships between each feature dimension. Hence, it is necessary to embed original sequential data in higher-dimensional spaces such that more information is encoded into each vector. Instead of embedding raw data in a high-dimensional space where the distance between vectors reflects the semantic similarity (Vaswani et al., 2017; Devlin et al., 2018), PINNsFormer constructs a linear projection that maps spatiotemporal inputs onto a higher-dimensional space using a fully-connected MLP. The embedded data enriches the information carried by each vector by mixing all raw spatiotemporal features together, hence the name linear projection Spatio-Temporal Mixer.

**Encoder-Decoder Architecture.** PINNsFormer employs an encoder-decoder architecture similar to the Transformer. The encoder consists of a stack of identical layers, each of which contains an encoder self-attention layer and a feedforward layer. The decoder is slightly different from the vanilla Transformer, where each of the identical layers contains only an encoder-decoder self-attention layer and a feedforward layer. At the decoder level, PINNsFormer uses the same spatiotemporal embeddings as the encoder. Therefore, the decoder does not need to relearn dependencies for the same input embeddings. The diagram for the encoder-decoder architecture is shown in Figure 2. Intuitively, the encoder self-attentions allow learning the dependency relationships of all spatiotemporal information. The decoder encoder-decoder attentions allow selectively focusing on specific dependencies within the input sequence during the decoding process, enabling it to capture more information than conventional PINNs.

Figure 2: The architecture of PINNsFormer's Encoder-Decoder Layers. The decoder is not equipped with self-attentions.

We use the same embeddings for the encoder and decoder since
Tackling this issue, we proposed a novel and simple activation function, namely Wavelet, defined as follows: \[\texttt{Wavelet}(\mathbf{x})=\omega_{1}\sin(\mathbf{x})+\omega_{2}\cos(\mathbf{x}) \tag{4}\] Where \(\omega_{1}\) and \(\omega_{2}\) are registered learnable parameters. The intuition behind Wavelet activation simply follows Real Fourier Transform: While periodic signals can be decomposed into an integral of sines of multiple frequencies, all signals, whether periodic or aperiodic, can be decomposed into an integral of sines and cosines of varying frequencies. It is evident that Wavelet can approximate arbitrary functions giving sufficient approximation power, which leads to the following proposition: **Proposition 1**: _Let \(\mathcal{N}\) be a two-hidden-layer neural network with infinite width, equipped with Wavelet activation function, then \(\mathcal{N}\) is a universal approximator for any real-valued target._ _Proof sketch:_ The proof follows the Real Fourier Transform (Fourier Integral Transform). For any given input \(x\) and its corresponding real-valued target \(f(x)\), it has the Fourier Integral: \[f(x)=\int_{-\infty}^{\infty}F_{c}(\omega)\cos(\omega x)\,d\omega+\int_{-\infty }^{\infty}F_{s}(\omega)\sin(\omega x)\,d\omega\] where \(F_{c}\) and \(F_{s}\) are the coefficients of Sines and Cosines respectively. Second, by Riemann sum approximation, the integral can be approximated by the infinite sum such that: \[f(x)\approx\sum_{n=1}^{N}\left[F_{c}(\omega_{n})\cos(\omega_{n}x)+F_{s}( \omega_{n})\sin(\omega_{n}x)\right]\equiv W_{2}(\texttt{Wavelet}(W_{1}x))\] where \(W_{1}\) and \(W_{2}\) are the weights of \(\mathcal{N}\)'s first and second hidden layer. As \(W_{1}\) and \(W_{2}\) are infinite-width, we can divide the piecewise summation into infinitely small intervals, making the approximation arbitrarily close to the true integral. Hence, \(\mathcal{N}\) is a universal approximator for any given \(f\). In practice, most PDE solutions contain only a finite number of major frequencies. Using a neural network with finite parameters would also lead to proper approximations of the true solutions. Although Wavelet activation function is primarily employed by PINNsFormer to improve PINNs in our work, it may have potential applications in other deep-learning tasks. Similar to ReLU, \(\sigma(\cdot)\), and Tanh activations, which all turn infinite-width two-hidden-layer neural networks into universal approximators (Cybenko, 1989; Hornik, 1991; Glorot et al., 2011), we anticipate that Wavelet can demonstrate its effectiveness in other applications beyond the scope of this work. ### Learning Scheme While conventional PINNs focus on point-to-point predictions, there is an unexplored realm in adapting PINNs to handle pseudo-sequential inputs. In PINNsFormer, each generated point in the sequence, i.e., \([\mathbf{x}_{i},t_{i}+j\Delta t]\), is forwarded to the corresponding approximation, i.e., \(\hat{u}(\mathbf{x}_{i},t_{i}+j\Delta t)\) for any \(j\in\mathbb{N},J<k\). This approach allows us to compute the nth-order gradients with respect to \(\mathbf{x}\) or to independently for any valid \(n\). For instance, for any given input pseudo sequence \(\{[\mathbf{x}_{i},t_{i}],[\mathbf{x}_{i},t_{i}+\Delta t],\ldots,[\mathbf{x}_{i},t_{i}+(k-1 )\Delta t]\}\), and the corresponding approximations \(\big{\{}\hat{u}(\mathbf{x}_{i},t_{i}),\hat{u}(\mathbf{x}_{i},t_{i}+\Delta t),\ldots, \hat{u}(\mathbf{x}_{i},t_{i}+(k-1)\Delta t)\}\), we can compute the first-order derivatives w.r.t. 
\(\mathbf{x}\) and \(t\) separately as follows: \[\frac{\partial\{\hat{u}(\mathbf{x}_{i},t_{i}+j\Delta t)\}_{j=0}^{k-1}}{\partial\{t_{i}+j\Delta t\}_{j=0}^{k-1}}=\left\{\frac{\partial\hat{u}(\mathbf{x}_{i},t_{i})}{\partial t_{i}},\frac{\partial\hat{u}(\mathbf{x}_{i},t_{i}+\Delta t)}{\partial(t_{i}+\Delta t)},\dots,\frac{\partial\hat{u}(\mathbf{x}_{i},t_{i}+(k-1)\Delta t)}{\partial(t_{i}+(k-1)\Delta t)}\right\} \tag{5}\] \[\frac{\partial\{\hat{u}(\mathbf{x}_{i},t_{i}+j\Delta t)\}_{j=0}^{k-1}}{\partial\mathbf{x}_{i}}=\left\{\frac{\partial\hat{u}(\mathbf{x}_{i},t_{i})}{\partial\mathbf{x}_{i}},\frac{\partial\hat{u}(\mathbf{x}_{i},t_{i}+\Delta t)}{\partial\mathbf{x}_{i}},\dots,\frac{\partial\hat{u}(\mathbf{x}_{i},t_{i}+(k-1)\Delta t)}{\partial\mathbf{x}_{i}}\right\}\]

This scheme for calculating the gradients of sequential approximations with respect to sequential inputs can be easily extended to higher-order derivatives and is applicable to residual, boundary, and initial points. However, unlike the general PINNs optimization objective in Equation 2, which combines initial and boundary condition objectives, PINNsFormer distinguishes between the two and applies different regularization schemes to initial and boundary conditions through its learning scheme. For residual and boundary points, all sequential outputs can be regularized using the PINNs loss. This is because all generated pseudo-timesteps are within the same domain as their original inputs. For example, if \([\mathbf{x}_{i},t_{i}]\) is sampled from the boundary, then \([\mathbf{x}_{i},t_{i}+j\Delta t]\) also lies on the boundary for any \(j\in\mathbb{N}^{+}\). In contrast, for initial points, only the \(t=0\) condition is regularized, corresponding to the first element of the sequential outputs. This is because only the first element of the pseudo-sequence exactly matches the initial condition at \(t=0\). All other generated time steps have \(t=j\Delta t\) for any \(j\in\mathbb{N}^{+}\), which fall outside the initial conditions. By these considerations, we adapt the PINNs loss to the sequential version, as described below: \[\mathcal{L}_{res}=\frac{1}{kN_{res}}\sum_{i=1}^{N_{res}}\sum_{j=0}^{k-1}\|\mathcal{D}[\hat{u}(\mathbf{x}_{i},t_{i}+j\Delta t)]-f(\mathbf{x}_{i},t_{i}+j\Delta t)\|^{2} \tag{6}\] \[\mathcal{L}_{bc}=\frac{1}{kN_{bc}}\sum_{i=1}^{N_{bc}}\sum_{j=0}^{k-1}\|\mathcal{B}[\hat{u}(\mathbf{x}_{i},t_{i}+j\Delta t)]-g(\mathbf{x}_{i},t_{i}+j\Delta t)\|^{2}\] \[\mathcal{L}_{ic}=\frac{1}{N_{ic}}\sum_{i=1}^{N_{ic}}\|\mathcal{I}[\hat{u}(\mathbf{x}_{i},0)]-h(\mathbf{x}_{i},0)\|^{2}\] \[\mathcal{L}_{\texttt{PINNsFormer}}=\lambda_{res}\mathcal{L}_{res}+\lambda_{ic}\mathcal{L}_{ic}+\lambda_{bc}\mathcal{L}_{bc}\] where \(N_{res}=N_{r}\) refers to the residual points as in Equation 2, and \(N_{bc},N_{ic}\) represent the number of boundary and initial points, respectively, with \(N_{bc}+N_{ic}=N_{b}\). \(\lambda_{res}\), \(\lambda_{bc}\), and \(\lambda_{ic}\) are regularization weights that balance the importance of the loss terms in PINNsFormer, similar to the PINNs loss. During training, PINNsFormer forwards all residual, boundary, and initial points to obtain their corresponding sequential approximations. It then optimizes the modified PINNs loss \(\mathcal{L}_{\texttt{PINNsFormer}}\) in Equation 6 using gradient-based optimization algorithms such as L-BFGS or Adam, updating the model parameters until convergence.
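To make the scheme concrete, the sketch below shows how the Wavelet activation of Equation 4 and the sequential residual term of Equation 6 could be assembled in PyTorch for the convection equation \(u_{t}+\beta u_{x}=0\). It is an illustrative reconstruction under our own simplifications (a small MLP stands in for the full encoder-decoder, the Wavelet initialisation and the value of \(\beta\) are placeholders), not the released PINNsFormer code.

```python
import torch
import torch.nn as nn

class Wavelet(nn.Module):
    """Wavelet activation of Eq. 4: w1 * sin(x) + w2 * cos(x) with learnable w1, w2 (init assumed)."""
    def __init__(self):
        super().__init__()
        self.w1 = nn.Parameter(torch.ones(1))
        self.w2 = nn.Parameter(torch.ones(1))

    def forward(self, x):
        return self.w1 * torch.sin(x) + self.w2 * torch.cos(x)

# Small stand-in for the full model: maps every [x, t] of a pseudo sequence to u_hat.
model = nn.Sequential(nn.Linear(2, 64), Wavelet(), nn.Linear(64, 64), Wavelet(), nn.Linear(64, 1))

def residual_loss(model, seq, beta=30.0):
    """Sequential residual term of Eq. 6 for u_t + beta * u_x = 0, averaged over points i and steps j."""
    x = seq[..., 0:1].detach().clone().requires_grad_(True)  # (N, k, 1)
    t = seq[..., 1:2].detach().clone().requires_grad_(True)  # (N, k, 1)
    u = model(torch.cat([x, t], dim=-1))                     # (N, k, 1)
    ones = torch.ones_like(u)
    u_x, u_t = torch.autograd.grad(u, (x, t), grad_outputs=ones, create_graph=True)
    return ((u_t + beta * u_x) ** 2).mean()

# In the full pipeline these come from the Pseudo Sequence Generator of Eq. 3; random stand-ins here.
seq = torch.rand(8, 5, 2)
loss = residual_loss(model, seq)
loss.backward()  # gradients flow back to the network weights and the Wavelet parameters
print(float(loss))
```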
In the testing phase, PINNsFormer forwards any arbitrary pair \([\mathbf{x},t]\) to observe the sequential approximations, where the first element of the sequential approximation corresponds exactly to the desired value of \(\hat{u}(\mathbf{x},t)\).

### Loss Landscape Analysis

While achieving theoretical convergence or establishing generalization bounds for Transformer-based models can be challenging, an alternative approach to assessing the optimization trajectory is through visualization of the loss landscape. This approach has been employed in the analysis of both Transformers and PINNs (Krishnapriyan et al., 2021; Yao et al., 2020; Park and Kim, 2022). The loss landscape is constructed by perturbing the trained model along the directions of the first two dominant Hessian eigenvectors. This technique is often more informative than random parameter perturbations. In general, a smoother loss landscape with fewer local minima indicates an easier case for the model to converge to the global minimum.

We visualize the loss landscape for both conventional PINNs and PINNsFormer. The visualizations are presented in Figure 3. The visualizations clearly reveal that PINNs exhibit a more complicated loss landscape than PINNsFormer. To be specific, we estimate the Lipschitz constant for both loss landscapes. We find that \(L_{\texttt{PINNs}}=776.16\), which is significantly larger than \(L_{\texttt{PINNsFormer}}=32.79\). Furthermore, the loss landscape of PINNs exhibits several sharp cones near its optimal point, indicating the presence of multiple local minima in close proximity to the convergence point (zero perturbation). The rugged loss landscape and multiple local minima of conventional PINNs suggest that optimizing the objective described in Equation 6 for PINNsFormer offers an easier path to reach the global minimum. This implies that PINNsFormer has advantages in avoiding the failure modes associated with PINNs. The analysis is further validated by empirical experiments, as shown in the following section.

Figure 3: Visualization of the loss landscape for PINNs (left) and PINNsFormer (right) on a logarithmic scale. The loss landscape of PINNsFormer is significantly smoother than conventional PINNs.

## 4 Experiments

### Setup

**Goal.** Our empirical evaluations aim to demonstrate three key advantages of PINNsFormer. Firstly, we show that PINNsFormer improves generalization abilities and mitigates failure modes compared to PINNs and variant architectures. Secondly, we illustrate the flexibility of PINNsFormer in incorporating various learning schemes, resulting in superior performance. Thirdly, we provide evidence of PINNsFormer's faster convergence and improved generalization capabilities in solving high-dimensional PDEs, which can be challenging for PINNs and their variants.

**PDE Setup.** We use a selection of PDEs, including convection, 1D-reaction, 1D-wave, and Navier-Stokes PDEs. The setups for the convection, 1D-reaction, and 1D-wave equations follow past work (Chen et al., 2018). For baseline model training, including PINNs, QRes (Bu & Karpatne, 2021), and First-Layer Sine (FLS) (Wong et al., 2022), we uniformly sampled \(N_{ic}=N_{bc}=101\) initial and boundary points, as well as a uniform grid of \(101\times 101\) mesh points for the residual domain, resulting in total \(N_{\textit{res}}=10201\) points. In the case of training PINNsFormer, we reduce the collocation points, with \(N_{ic}=N_{bc}=51\) initial and boundary points and a \(51\times 51\) mesh for residual points.
The reduction in fewer training samples serves two purposes: it enhances training efficiency and allows us to demonstrate the generalization capabilities of PINNsFormer with limited training data. For testing, we employed a \(101\times 101\) mesh within the residual domain. For the Navier-Stokes PDE, our experiment follows the established setup in Raissi et al. (2017). We sampled 2500 points from the 3D mesh within the residual domain for training purposes. The evaluation was performed by testing the predicted pressure at the final time step \(t=20.0\). **Training and Testing.** We build PINNs, QRes, and FLS as the baselines, along with the proposed PINNsFormer. We maintain approximately close numbers of parameters across all models to highlight the advantages of PINNsFormer from its ability to capture temporal dependencies rather than relying solely on model overparameterization. We train all models using the L-BFGS optimizer with Strong Wolfe linear search for 1000 iterations. For simplicity, we set \(\lambda_{\textit{res}}=\lambda_{\textit{ic}}=\lambda_{\textit{bc}}=1\) for the optimization objective in Equation 6. Detailed hyperparameters are provided in Appendix A. In terms of evaluation, we adopted commonly used metrics in related works (Krishnapriyan et al., 2021; Raissi et al., 2019; McClenny & Braga-Neto, 2020), including the relative Mean Absolute Error (rMAE or relative \(\ell_{1}\) error) and the relative Root Mean Square Error (rRMSE or relative \(\ell_{2}\) error). The detailed formulations of the metrics are provided in Appendix A. **Reproducibility.** All models are implemented in PyTorch (Paszke et al., 2019), and are trained separately on a single NVIDIA Tesla V100 GPU. All code and demos are included and reproducible at [https://github.com/AdityaLab/pinnsformer](https://github.com/AdityaLab/pinnsformer). ### Mitigating Failure Modes of PINNs Our primary evaluation focuses on demonstrating the superior generalization ability of PINNsFormer in comparison to PINNs, particularly on PDEs that are known to challenge PINNs' gen eralization capabilities. We focus on solving two distinct types of PDEs: the convection equation and the 1D-reaction equation. These equations pose significant challenges for conventional MLP-based PINNs, often resulting in what is referred to as "PINNs failure modes" (Mojgani et al., 2022; Daw et al., 2022; Krishnapriyan et al., 2021). In these failure modes, optimization gets stuck in local minima, leading to overly smooth approximations that deviate from the true solutions. The objective of our evaluation is to showcase the enhanced generalization capabilities of PINNs-Former when compared to standard PINNs and their variations, specifically in addressing PINNs' failure modes. The evaluation results are summarized in Table 1, with detailed PDE formulations provided in Appendix B. We showcase the prediction and absolute error plots of PINNs-Primary on convection equation in Figure 4, all prediction plots available in Appendix C. The evaluation results demonstrate significant outperformance of PINNsFormer over all baselines for both scenarios. PINNsFormer achieves the lowest training loss and test errors, distinguishing PINNsFormer as the only approach capable of mitigating the failure modes. In contrast, all other baseline methods remain stuck at global minima and fail to optimize the objective loss effectively. 
These results show the clear advantages of PINNsFormer in terms of generalization ability and approximation accuracy when compared to conventional PINNs and existing variants. An additional consideration when assessing PINNsFormer is its computational and memory overheads relative to PINNs. While MLP-based PINNs are known for their efficiency, PINNsFormer, with Transformer-based architecture in handling sequential data, naturally incurs higher computational and memory costs. Nonetheless, our empirical evaluation indicates that the overhead is tolerable, benefiting from the reliance on only linear hidden layers with non-linear activation functions, avoiding complicated operators such as convolutional or recurrent layers. For instance, when setting the pseudo-sequence length \(k=5\), we observe an approximate 2.92x rise in computational cost and a 2.15x rise in memory usage (as detailed in Appendix A). These overheads are deemed acceptable in exchange for the substantial performance improvements achieved by PINNsFormer. While PINNs and their various architectural adaptations may encounter challenges for certain scenarios, prior research has explored sophisticated optimization schemes to mitigate these issues, including learning rate annealing (Wang et al., 2021), augmented Lagrangian methods (Lu et al., 2021), and neural tangent kernel approaches (Wang et al., 2022). These modified PINNs have shown significant improvement of PINNs under certain scenarios. Notably, when these optimization strategies are applied to PINNsFormer, they can be easily incorporated to achieve further performance improvements. For instance, the _Neural Tangent Kernel (NTK)_ method to PINNs has shown success in solving the 1D-wave equation. In such context, we demonstrate that when combining _NTK_ with PINNsFormer, we can achieve further outperformance in approximation accuracy. Detailed results are presented in Table 2, and comprehensive PDE formulations are available in Appendix B, with prediction plots in Appendix C. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Model & \multicolumn{3}{c}{Convection} & \multicolumn{3}{c}{1D-Reaction} \\ & Loss & rMAE & rRMSE & Loss & rMAE & rRMSE \\ \hline PINNs & 0.016 & 0.778 & 0.840 & 0.199 & 0.982 & 0.981 \\ QRes & 0.015 & 0.746 & 0.816 & 0.199 & 0.979 & 0.977 \\ FLS & 0.012 & 0.674 & 0.771 & 0.199 & 0.984 & 0.985 \\ PINNsFormer & **3.7e-5** & **0.023** & **0.027** & **3.0e-6** & **0.015** & **0.030** \\ \hline \hline \end{tabular} \end{table} Table 1: Results for solving connection and 1D-reaction equations. PINNsFormer consistently outperforms all baseline methods in terms of training loss, rMAE, and rRMSE. Figure 4: Prediction (left) and absolute error (right) of PINNs (up) and PINNsFormer (bottom) on convection equation. PINNsFormer shows success in mitigating the failure mode than PINNs. ### Flexibility in Incorporating Variant Learning Schemes Our evaluation results show both the flexibility and effectiveness of incorporating PINNsFormer with the _NTK_ method. In particular, we observe a sequence of performance improvements, from standard PINNs to PINNsFormer and from PINNs+_NTK_ to PINNsFormer+_NTK_. Essentially, PINNsFormer explores a variant architecture of PINNs, while many learning schemes are designed from an optimization perspective and are agnostic to neural network architectures. 
This inherent flexibility allows for versatile combinations of PINNsFormer with various learning schemes, offering practical and customizable options for obtaining accurate solutions in real-world applications.

### Generalization on High-Dimensional PDEs

In the previous sections, we demonstrated the clear benefits of PINNsFormer in generalizing the solutions for PINNs failure modes. However, those PDEs often have simple analytical solutions. In practical physics systems, higher-dimensional and more complex PDEs need to be solved. Therefore, it is important to evaluate the generalization ability of PINNsFormer on such high-dimensional PDEs, especially when PINNsFormer is equipped with advanced mechanisms like self-attention. We evaluate the performance of PINNsFormer compared to PINNs on the 2D Navier-Stokes PDE, a problem previously investigated by Raissi et al. (2019). The training loss is shown in Figure 5, and the evaluation results are shown in Table 3. The detailed formulations of the 2D Navier-Stokes equation can be found in Appendix B, and the prediction plots are provided in Appendix C. The evaluation results demonstrate clear advantages of PINNsFormer over PINNs on high-dimensional PDEs: PINNsFormer outperforms PINNs and their MLP variants in terms of both training loss and validation errors. First, PINNsFormer exhibits significantly faster convergence during training, which compensates for the higher computational cost per iteration. Second, while PINNs and their MLP variants predict the pressure with roughly correct shapes, they exhibit increasing magnitude discrepancies as time increases. In contrast, PINNsFormer consistently aligns both the shape and magnitude of predicted pressures across various time intervals. This consistency is attributed to PINNsFormer's ability to learn temporal dependencies through its Transformer-based architecture and self-attention mechanism.

\begin{table} \begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{3}{c}{1D-Wave} \\ & Loss & rMAE & rRMSE \\ \hline PINNs & 1.93e-2 & 0.326 & 0.335 \\ PINNsFormer & 1.38e-2 & 0.270 & 0.283 \\ PINNs + _NTK_ & 6.34e-3 & 0.140 & 0.149 \\ PINNsFormer + _NTK_ & **4.21e-3** & **0.054** & **0.058** \\ \hline \hline \end{tabular} \end{table} Table 2: Results for solving the 1D-wave equation, incorporating the NTK method. PINNsFormer combined with NTK outperforms all other methods on all metrics.

Figure 5: Training loss vs. iterations of PINNs and PINNsFormer on the Navier-Stokes equation.

\begin{table} \begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{3}{c}{Navier-Stokes} \\ & Loss & rMAE & rRMSE \\ \hline PINNs & 6.72e-5 & 13.08 & 9.08 \\ QRes & 2.24e-4 & 6.41 & 4.45 \\ FLS & 9.54e-6 & 3.98 & 2.77 \\ PINNsFormer & **6.66e-6** & **0.384** & **0.280** \\ \hline \hline \end{tabular} \end{table} Table 3: Results for solving the 2D Navier-Stokes equation. PINNsFormer outperforms all baselines on all metrics.

## 5 Conclusion

In this paper, we introduced PINNsFormer, a novel Transformer-based framework, designed as an extension of PINNs, aimed at capturing temporal dependencies when approximating solutions to PDEs. To adapt conventional PINNs to Transformer-based models, we introduced the Pseudo Sequence Generator, a mechanism that translates vectorized inputs into pseudo time sequences, and incorporated a modified Encoder-Decoder layer along with a novel Wavelet activation function. Our empirical evaluations demonstrate that PINNsFormer consistently outperforms conventional PINNs
across various scenarios, including handling PINNs' failure modes and addressing high-dimensional PDEs. Furthermore, PINNsFormer retains computational simplicity by using only linear layers with non-linear activation functions, making it a practical choice for real-world applications. It can flexibly integrate with existing learning schemes for PINNs, leading to superior performance. Beyond its application in PINNsFormer, the novel Wavelet activation function holds promise for the broader machine learning community. We provided a proof sketch demonstrating Wavelet's ability to approximate arbitrary target solutions using a two-hidden-layer infinite-width neural network, leveraging the Fourier decomposition of these solutions. We encourage further exploration, both theoretically and empirically, of the Wavelet activation function's potential. Its applicability extends beyond PINNs and can be leveraged in various architectures and applications.
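As a companion to the two components named above, the sketch below illustrates one possible realization of the Pseudo Sequence Generator and the Wavelet activation. The pseudo-sequence step size `dt`, the sequence length `k`, and the specific form \(\omega_{1}\sin(x)+\omega_{2}\cos(x)\) are assumptions consistent with the description and the Fourier-decomposition argument, not necessarily the exact choices in the released code.

```python
import torch
import torch.nn as nn

def pseudo_sequence(xt: torch.Tensor, k: int = 5, dt: float = 1e-3) -> torch.Tensor:
    """Expand (N, 2) points (x, t) into (N, k, 2) pseudo time sequences.

    Each point is repeated k times with the time coordinate shifted by multiples
    of dt, giving a Transformer a short temporal context around each input.
    """
    seq = xt.unsqueeze(1).repeat(1, k, 1)            # (N, k, 2)
    shifts = dt * torch.arange(k, dtype=xt.dtype)    # 0, dt, ..., (k-1)*dt
    seq[:, :, 1] = seq[:, :, 1] + shifts             # shift only the t coordinate
    return seq

class Wavelet(nn.Module):
    """Activation of the form w1*sin(x) + w2*cos(x) with learnable weights."""
    def __init__(self):
        super().__init__()
        self.w1 = nn.Parameter(torch.ones(1))
        self.w2 = nn.Parameter(torch.ones(1))

    def forward(self, x):
        return self.w1 * torch.sin(x) + self.w2 * torch.cos(x)

seq = pseudo_sequence(torch.rand(8, 2))    # (8, 5, 2) pseudo sequence
act = Wavelet()(torch.randn(8, 16))        # element-wise wavelet activation
```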
2304.08832
Making Thermal Imaging More Equitable and Accurate: Resolving Solar Loading Biases
Thermal cameras and thermal point detectors are used to measure the temperature of human skin. These are important devices that are used everyday in clinical and mass screening settings, particularly in an epidemic. Unfortunately, despite the wide use of thermal sensors, the temperature estimates from thermal sensors do not work well in uncontrolled scene conditions. Previous work has studied the effect of wind and other environment factors on skin temperature, but has not considered the heating effect from sunlight, which is termed solar loading. Existing device manufacturers recommend that a subject who has been outdoors in sun re-acclimate to an indoor environment after a waiting period. The waiting period, up to 30 minutes, is insufficient for a rapid screening tool. Moreover, the error bias from solar loading is greater for darker skin tones since melanin absorbs solar radiation. This paper explores two approaches to address this problem. The first approach uses transient behavior of cooling to more quickly extrapolate the steady state temperature. A second approach explores the spatial modulation of solar loading, to propose single-shot correction with a wide-field thermal camera. A real world dataset comprising of thermal point, thermal image, subjective, and objective measurements of melanin is collected with statistical significance for the effect size observed. The single-shot correction scheme is shown to eliminate solar loading bias in the time of a typical frame exposure (33ms).
Ellin Q. Zhao, Alexander Vilesov, Shreeram Athreya, Pradyumna Chari, Jeanette Merlos, Kendall Millett, Nia St. Cyr, Laleh Jalilian, Achuta Kadambi
2023-04-18T08:56:04Z
http://arxiv.org/abs/2304.08832v1
# Making Thermal Imaging More Equitable and Accurate: Resolving Solar Loading Biases ###### Abstract Thermal cameras and thermal point detectors are used to measure the temperature of human skin. These are important devices that are used everyday in clinical and mass screening settings, particularly in an epidemic. Unfortunately, despite the wide use of thermal sensors, the temperature estimates from thermal sensors do not work well in uncontrolled scene conditions. Previous work has studied the effect of wind and other environment factors on skin temperature, but has not considered the heating effect from sunlight, which is termed _solar loading_. Existing device manufacturers recommend that a subject who has been outdoors in sun re-accimate to an indoor environment after a waiting period. The waiting period, up to 30 minutes, is insufficient for a rapid screening tool. Moreover, the error bias from solar loading is greater for darker skin tones since melanin absorbs solar radiation. This paper explores two approaches to address this problem. The first approach uses transient behavior of cooling to more quickly extrapolate the steady state temperature. A second approach explores the spatial modulation of solar loading, to propose single-shot correction with a wide-field thermal camera. A real-world dataset comprising of thermal point, thermal image, subjective, and objective measurements of melanin is collected with statistical significance for the effect size observed. The single-shot correction scheme is shown to eliminate solar loading bias in the time of a typical frame exposure (33ms). ## 1 Introduction Infrared thermometers (IRTs) offer the potential to measure human body temperature in a fast, non-invasive manner. These devices can be based on single point (hereafter, non-contact infrared thermometer (NCIT)) or image based measurements (hereafter, thermographic imaging). IRTs have been used for rapid screening of temperature to help maintain public health during the SARS, H1N1, Ebola and now, COVID-19 outbreaks. A common use case is shown in Fig. 1, where a woman enters an office building on her walk to work. The lobby kiosk uses a thermal camera to screen the woman for elevated temperature, to assess if she is safe to proceed a crowded office. If she has elevated temperature, she is advised to stay home. Such screening schemes are placed at airports, hotels, and stadiums and have proven useful for minimizing spread of infectious disease. Given the importance of measuring temperature it is critical that thermal cameras measure temperature accurately and equitably. Recent findings about thermal camera bias have not shown a clear link between skin tone and thermal bias. However, this is only true in controlled indoor scenarios. When a subject is exposed to the sun, their skin absorbs solar radiation and heats up--we term this phenomenon "solar loading". While the thermal sensor correctly measures increased skin temperature, the thermometer does not correct for the sun-induced temperature elevation. Addressing solar loading is an important problem because the effect is significant. In fact, the solar loading bias is often more than the threshold for fever. Observe Figure 1: **Solar loading describes a form of measurement bias in thermal cameras, where outdoor sunlight heats the skin, leading to overestimates of temperature.** This is an important problem because thermal cameras are placed at entrances of schools, workplaces, and airports to screen people for a fever. 
Solar loading bias affects everyone, but disproportionately affects darker skinned humans. Solar loading is particularly problematic because it can be confounded with a false positive for fever, as the fever threshold is only 2\({}^{\circ}\)C. Previous works address the solar loading problem by recommending anywhere a 10 to 30 minute waiting time [1], enabling the skin to cool down. Having subjects wait this long is impractical for a screening tool placed at building entrances, and also disadvantages people with darker skin for whom this effect is stronger. It is also unclear how the author choose the waiting times. This paper uses computational light transport to propose a single-shot correction that minimizes solar loading bias, and reduces the waiting time to a single image frame (33ms). the human subjects data shown in Fig. 2(b). The number line at bottom plots the deviation in temperature per subject with respect to a ground truth device based on contact thermography. All subjects were healthy, but most of them are falsely identified as having a fever. Moreover, there is also an issue with equity. Absorption of solar radiation in skin depends on the constituent chromophores, such as melanin. This suggests that the magnitude of solar loading scales with skin pigmentation, resulting in poorer IRT performance for select demographics. Thus, in our work we aim to not only correct solar loading, but to also study and remedy skin tone bias due to the solar loading effect. While our focus is on solar loading, our broader vision aligns with a surge of recent scholarship in generalizing IRT operation to outdoor and less controlled environments. Ogawa et al. [2], Dzien et al. [3], Spindel et al. [4] evaluated IRIs in cold weather and found that IRT measurements were correlated with the cold. Tay et al. [5], Ravi et al. [6] tested IRTs in hot weather and found a similar correlation. It is generally accepted in the scientific community that IRTs are accurate in limited settings: a patient must be acclimated to an indoor measurement location, held at room temperature (\(20-22^{\circ}C\)) for 10-30 minutes [1; 7] and this is typically the recommendation of NCIT manufacturers as can be seen in Table 1. In contrast to previous work, we focus our attention on the less characterized, but no less important, issue of solar loading biases. Our contributions follow. Contributions:In this paper, we offer two solutions to solar loading. Solution I exploits temporal modulation of solar loading to moderately reduce the waiting period. This is useful for NCITs. However, if spatial information is available with a camera, then we propose Solution II which offers single shot correction. Our approach is in the family of _computational light transport_ techniques that use optical and computational principles to reveal and interpret the flow of light in our everyday world [9]. The insights Figure 2: **Experimental data showing the solar loading effect.****(a)** The experiment setup. **(b)** Solar loading results in elevated skin temperature measurements that are biased against dark skin toned subjects compared to light skin toned subjects. Commonly accepted fever threshold (\(>37.9^{\circ}C\)) [8] would misclassify solar loaded, healthy, dark skin toned subjects. The mean biased measurement in light skin toned subjects is \(38.72^{\circ}C\) while the value for dark skin toned subjects is \(39.44^{\circ}C\). from light transport are practical: they inform learning-based inductive biases to correct for solar loading. 
A real dataset of subjects is collected and ground-truthed with contact-based thermometers, to be released upon acceptance. Metadata is stored with objective measures of melanin concentration, enabling an analysis of equity. The end result from this study is that solar loading is corrected. Since solar loading affects both light skinned and dark skinned people, solving for solar loading is a win-win where equity is improved as well as accuracy for everyone. ## 2 Related Work Broadly, there are two types of infrared thermometers: those that use point measurements and those that capture spatial temperature fields. The former, which this paper refers to as non-contact infrared thermometers (NCITs), measure the forehead or temple to estimate core temperature. The latter captures larger field-of-views (FoVs) using thermal cameras and will be referred to as infrared thermography. A temporal artery thermometer (TAT) falls between these two groups: the device is manually scanned across the face to obtain multiple temperature measurements. While a TAT is highly accurate, we do not consider it in this work because (1) proper operation of the device requires training and (2) they are _contact-based_ infrared thermometers. In this section, we discuss the accuracy and bias in NCITs and thermographic methods. Non-contact infrared thermometers (NCITs): While NCITs are accurate in standard cases, in general, their performance depends on climate, measurement location and metabolic activity. Erenberk et al. [13] monitored forehead NCIT measurements in children exposed to cold weather (\(0-4^{\circ}C\)). NCIT measurements underestimate core temperature, but increase and stabilize after 10 minutes indoors. Spindel et al. [4] compared NCIT performance in different measurement locations based on proximity to outdoors conditions. They found that accuracy improved as measurement locations moved further indoors. The influence of sunlight is considered but not explored. Kistemaker et al. [14] assessed the reliability of NCITs after exercise, which is expected to raise the metabolic rate. After 15 minutes of exercise, NCITs overestimated by \(1.2^{\circ}C\). Shajkofci [15] corrects environmental perturbations on NCIT measurements by regressing a linear relationship between skin, \begin{table} \begin{tabular}{l l l l} \hline \hline **Manufacturer** & **Model** & **IR Sensor** & **Waiting Period** \\ \hline Welch Allyn & 105801 & NCIT & 30 min [10] \\ ADC & Adtemp 429 & NCIT & 30 min [11] \\ Joytech Sejoy & DET-306 & NCIT & 10 min [12] \\ \hline **Solution I (Ours)** & Generic & NCIT & **3-5 min** \\ **Solution II (Ours)** & Generic & Thermal Camera & **0 min** \\ \hline \hline \end{tabular} \end{table} Table 1: Today, commercial thermal IR sensors require a waiting period for solar loading to cool down. Solution I from this paper uses transient statistics to roughly halve the waiting period as compared to listed devices, while Solution II exploits spatial correlation to eliminate a waiting period. outdoors and ambient temperature. Additionally, they correct for diurnal variations by recording the time of day. This work, as noted by the authors, is limited in the ethnic distribution of the dataset. _Infrared Thermography:_ In some ways, infrared thermography is preferred over NCITs. NCITs require the device to be held \(1-2\)cm from the skin, while thermographic methods do not suffer from this limitation due to the larger FoV. Infrared cameras are however more expensive than NCITs as the hardware is more complex. 
In infrared thermography, an image of the subject is taken, usually of the face or neck. A region-of-interest (ROI) is selected and the aggregate skin temperature is mapped to core temperature. The inner canthus and forehead are suggested ROIs due to high perfusion from the carotid and orbital arteries respectively [7]. It is worth noting that the ROIs comprise barely 1% of the entire thermal image, meaning 99% of captured data is discarded. IR thermography suffers from the same limitations as NCITs. Svantner et al. [16] found that measurements were influenced by ambient conditions. Wang et al. [17] use regression on multiple ROIs to achieve better accuracy then NCITs, but impact of ambient conditions are not studied. _Machine Learning for Infrared Thermometers:_ Machine learning has often been used with infrared thermometers in two ways: to improve the accuracy of infrared thermography, or to predict infections and illnesses using NCITs and other physiological measurements. Dagdanpurev et al. [18] used a combination of facial infrared thermography images, axillary temperature measurements and ambient temperature values to obtain optimum temperature estimates using linear regression analysis. This optimized temperature was further used for infection screening using the kNN algorithm. A skin heat transfer model for converting facial infrared thermography images into blood-perfusion maps was proposed by Wu et al. [19] to improve facial recognition model performance. The approach helped alleviate the effect of environmental conditions on infrared images. NCITs and thermographic cameras were used to detect temperatures which helped classify COVID-19 cases using recurrent neural networks (RNN) with long short term memory (LSTM) model [20]. Random forests were utilized in Li et al. [21] for the prediction of thermal comfort of healthy subjects in indoor HVAC settings using facial infrared thermography. _Bias in Infrared Thermometers:_ Recent works have evaluated bias in medical devices and proposed general solutions to eliminating bias, such as collection of diverse datasets [22, 23]. Other works, specifically in heart rate estimation, have demonstrated strides towards equitable technology both through hardware and software [24, 25, 26]. Recently, understanding the limitations of infrared thermometers has become important, but studies on potential biases reveal conflicting findings. Adams et al. [27] found that age and gender impacted IR thermography measurements. Regarding skin tone bias, Khan et al. [28] evaluated NCITs in subjects grouped by researcher labelled skin tone: "light" and "medium and dark". No skin tone bias was found, although the dataset was heavily skewed towards lighter skin. Strasse et al. [29] compared the performance of NCITs across body locations as well as ethnic groups (Black, White and mixed race). Again, no bias was found. Most recently, Bhavani et al. [30] compared TATs against oral thermometers. From self-reported race, performance on Black and White patients were compared, revealing that TAT measurements underestimated temperature in Black patients (\(-0.07^{\circ}C\)). Charlton et al. [31] found that emissivity of skin does not depend on skin pigmentation. These conflicting studies show that race, ethnicity and skin tone groupings preclude an understanding of bias in IRTs. The mechanism behind recorded biases is unclear. 
## 3 Thermal Light Transport Preliminaries We overview the image signal processor (ISP) chain that thermal imagers use to estimate temperature of the human body depicted in Fig. 3. Let us begin by linking an object's temperature to its emission of light. In particular, every object hotter than absolute zero (\(-273.15^{\circ}C\)) emits radiation proportional to its temperature--this radiation is called _thermal radiation_. The relationship between thermal radiation and temperature is wavelength dependent. We observe this in everyday life: an iron bar heated to a very high temperature will initially glow red and then as its temperature rises it will glow orange, then yellow, and so on, until it is perceived as glowing violet. This relationship is described more precisely by Planck's law. For a given wavelength \(\lambda\), an object with emissivity \(\varepsilon\) and temperature \(T\) has spectral radiant exitance \(I(\lambda,T)\): \[I(\lambda,T)=\frac{2\pi\varepsilon hc^{2}}{\lambda^{5}}\frac{1}{e^{hc/( \lambda kT)}-1}\ \ \ Wm^{-2}, \tag{1}\] where \(h=6.63\cdot 10^{-34}J\,s\) (Planck's constant), \(k=1.38\cdot 10^{-23}J\,K^{-1}\) (Boltzmann constant) and \(c=3\cdot 10^{8}m\,s^{-1}\). Planck's law in Eq. (1) can be simplified. Assume that we have a sensor that can capture readings over the support of wavelengths considered. Figure 3: **Thermal image signal processing (ISP).****(a)** Raw data from a thermal sensor is converted to temperature through a series of processing blocks. For thermal cameras, additional processing for temporal drift and gain control (shaded) are added. **(b)** Infrared thermometers convert sensor temperature to core temperature using a device-specific calibration. We propose an additional solar loading correction (shaded). Then we can integrate Eq. (1) to obtain the total radiant flux emitted by a surface as: \[I(T) =\int_{\lambda}M(\lambda,T)\,d\lambda, \tag{2}\] \[=\varepsilon\sigma T^{4}. \tag{3}\] This equation is known as the _Stefan-Boltzmann law_ which is convenient as it relates thermal radiation to only three terms. The thermal radiation received is proportional to the product of the fourth power of temperature multiplied by emissivity and a fixed constant, \(\sigma=5.67\cdot 10^{-8}W\,m^{-2}\,K^{-4}\), known as the _Stefan-Boltzmann constant_. Estimation of temperature can proceed by taking the fourth root of Eq. (3) as. \[T=\sqrt[4]{\frac{I(T)}{\varepsilon\sigma}}. \tag{4}\] We can measure \(I(T)\), using for instance, a thermal camera and \(\sigma\) is the fixed Stefan-Boltzmann constant, leaving us with only the emissivity to plug in. Emissivity determines how efficient a surface is at emitting thermal energy and varies across objects: \(\varepsilon=1.0\) describes a perfect emitter of thermal radiation (blackbody) and \(\varepsilon=0\) describes an object that does not emit thermal radiation (e.g. polished metals). The emissivity value of human skin is known in literature as \(\varepsilon=0.98\).1 To obtain the temperature of the human, we can take the fourth root of received radiation _vis a vis_ Eq. (4) by setting \(\varepsilon=0.98\) to obtain temperature of the human skin surface. This assumes that the human is the only object in the scene. Footnote 1: In the context of equitable sensing, one must be careful about using a single material constant for all humans. However, bias has not been observed in thermal cameras [32], and so this value is considered safe to use. 
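As a numerical sanity check on Eqs. (1)-(3), the sketch below evaluates Planck's law for a blackbody at a skin-like temperature and integrates it over wavelength to recover the Stefan-Boltzmann value; the wavelength grid and temperature are illustrative choices, not values from any particular sensor.

```python
import numpy as np

H = 6.63e-34     # Planck's constant, J s
K = 1.38e-23     # Boltzmann constant, J / K
C = 3e8          # speed of light, m / s
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def spectral_exitance(wavelength, temperature, emissivity=1.0):
    """Planck's law, Eq. (1): spectral radiant exitance per meter of wavelength."""
    return (2 * np.pi * emissivity * H * C**2 / wavelength**5
            / (np.exp(H * C / (wavelength * K * temperature)) - 1.0))

T = 305.0                                  # roughly skin-surface temperature, in kelvin
wl = np.linspace(1e-7, 1e-4, 200_000)      # 0.1 um to 100 um
m = spectral_exitance(wl, T)
total = np.sum(0.5 * (m[1:] + m[:-1]) * np.diff(wl))  # trapezoidal integral, Eq. (2)
print(total, SIGMA * T**4)                 # both come out near ~490 W/m^2
```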
\begin{table} \begin{tabular}{l l} \hline \hline **Notation** & **Meaning** \\ \hline \(\overline{T}_{\text{skin}}\) & Baseline skin temperature \\ \(\overline{T}_{\text{skin}}^{*}\) & Estimate baseline skin temperature \\ \(T_{\text{skin}}(t)\) & Time varying skin temperature \\ \(T_{\text{skin}}^{*}(t)\) & Estimated time varying skin temperature \\ \(\beta_{\text{solar}}(\mathbf{x},t)\) & Solar loading bias \\ \(\beta_{\text{solarpeak}}\) & Maximum solar loading bias \\ \(\overline{T}_{\text{core}}\) & Core temperature \\ \(T_{\text{NCIT}}\) & NCIT estimated core temperature \\ \(T_{\text{oral}}\) & Oral estimated core temperature (ground truth) \\ \hline \hline \end{tabular} \end{table} Table 2: Summary of math notation symbols. ### Multipath Temperature Estimation Real world measurement of human body temperature must account for multiple thermal emitters in a scene. In any practical setting, the human body is in a _multipath environment_. A sensor measures thermal radiation from all the objects in a scene. Consider a person sitting in a room. Their skin emits thermal radiation proportional to Eq. (3) and reflects radiation from the wall to the camera. This is a form of global illumination reminiscent of the Cornell Room in computer graphics, except that this setting is at thermal wavelengths. Thermal radiation arriving at the camera is compounded with radiation from the atmosphere. The object, background and atmosphere contribute to the thermal multipath. To address this multipath problem, thermal sensor ISPs used in clinical medicine have a _radiometric chain_ that abstracts multipath into a model with three steady state terms. A radiometric chain relates radiation intensity \(I_{\text{sensor}}\) at a single scene point as: \[\boxed{\text{\emph{Steady State Radiometric Chain (Used in Commercial NCIT)}}}\] \[I_{\text{sensor}}=I_{\text{skin}}+I_{\text{amb}}+I_{\text{ atm}}. \tag{5}\] Here, \(I_{\text{skin}}\) is the intensity of thermal emission from the human body, and more precisely, the skin surface. The radiation from objects in the scene that reaches the measured spot is referred to as \(I_{\text{amb}}\) and is the multipath of objects in the background environment. The ambient term is difficult to write out explicitly as multipath reflections depend on the ambient surroundings, which are generally unknown. Finally \(I_{\text{atm}}\) describes the intensity from atmospheric air particles. The goal of the radiometric chain in commercial devices is to rewrite Eq. (5) into a form that extracts \(\overline{T}_{\text{skin}}\). Consider the illustration of the three radiation components in Fig. 4. In literature, the human body is assumed to be _opaque_ to thermal radiation, such that thermal radiation does not pass through the body. The emissivity of the human body can then be described by Kirchhoff's law: \(\varepsilon=\alpha\), where \(\alpha\) is absorptivity or the fraction of incident radiation that is absorbed by the object. In analogy to visible light optics, conservation of energy on absorbed, reflected, and transmitted radiation is expressed as: \[1 =\alpha+r+\tau, \tag{6}\] \[1 =\varepsilon+r+\tau, \tag{7}\] where \(r,\tau\) are the fraction of reflected and transmitted radiation respectively. For opaque, human skin, \(\tau=0\) and consequently \(r=1-\varepsilon\). When measuring a human body target in the setting of Fig. 4, multiple radiative phenomena are recorded by a thermal sensor. Due to opacity, the human only emits radiation (Eq. 
(3)) and reflects a portion of background radiation. The emitted and reflected components then pass through an atmospheric volume. The atmosphere both attenuates the incoming radiation by \(\tau_{\rm atm}\), the transmission factor, and emits its own radiation. Putting this together, we can use the emissivity of the human skin as well as human temperature (\(\overline{T}_{\rm skin}\)), ambient temperature (\(\overline{T}_{\rm amb}\)) and atmospheric temperature (\(\overline{T}_{\rm atm}\)) to write Eq. (5) as \[I_{\rm sensor}=\tau_{\rm atm}\left(I_{\rm skin}+(1-\varepsilon)I_{\rm amb}\right)+(1-\tau_{\rm atm})I_{\rm atm}\] \[=\sigma\tau_{\rm atm}\left(\varepsilon\overline{T}_{\rm skin}^{4}+(1-\varepsilon)\overline{T}_{\rm amb}^{4}\right)+\sigma(1-\tau_{\rm atm})\overline{T}_{\rm atm}^{4}. \tag{8}\] Note the explicit use of a bar on top of the temperature symbols. This emphasizes that the temperature is at steady state and not fluctuating in time. A summary of notation is shown in Table 2. Eq. (8) can be algebraically rearranged to solve for the human skin temperature as \[\overline{T}_{\rm skin}=\sqrt[4]{\frac{\frac{I_{\rm sensor}}{\sigma}-\tau_{\rm atm}(1-\varepsilon)\overline{T}_{\rm amb}^{4}-(1-\tau_{\rm atm})\overline{T}_{\rm atm}^{4}}{\tau_{\rm atm}\varepsilon}}. \tag{9}\] Estimation of skin temperature from Eq. (9) requires values for all parameters on the right hand side of the equation. This is problematic because \(\tau_{\rm atm}\) is not known. It is possible to hard-code a value, as some existing thermal monitors do, by estimating the working distance and priors on air concentration. However, since our derivations in this paper will not depend on the specific value of \(\tau_{\rm atm}\), we will without loss of generality set \(\tau_{\rm atm}=1\). At a close range of less than 3 meters, \(\tau_{\rm atm}=0.98-0.99\), so this value is also a good approximation when the camera is sufficiently close to the human subject. Eq. (9) now simplifies to \[\overline{T}_{\rm skin}=\sqrt[4]{\frac{\frac{I_{\rm sensor}}{\sigma}-(1-\varepsilon)\overline{T}_{\rm amb}^{4}}{\varepsilon}}. \tag{10}\] Here all terms on the right hand side are measured or estimated in commercial devices for healthcare. As discussed, \(I_{\rm sensor}\) is the intensity measured by the instrument and \(\varepsilon=0.98\) for the human body. Clinical devices include an additional calibration step to measure \(\overline{T}_{\rm amb}\), the ambient temperature. While the temperature of human skin can now be estimated in multipath conditions, there is another problem: skin temperature is not the same as the clinically useful core temperature.

Figure 4: **Thermal temperature estimation accounts for multipath.** Shown here is a three path model, where the sensor receives thermal radiation from the atmosphere, from the background's reflection on an object, and from the object itself.

### Core Temperature Estimation

We now discuss how existing devices estimate core temperature from skin temperature. The core temperature, denoted as \(\overline{T}_{\rm core}\), is the temperature of our internal organs and differs from peripheral skin temperature by a few degrees Celsius. Clinicians seek the core temperature, not the skin temperature. Existing instruments estimate \(\overline{T}_{\rm core}\) from \(\overline{T}_{\rm skin}\) by finding a map: \[\mathcal{C}:\overline{T}_{\rm skin}\rightarrow\overline{T}_{\rm core}, \tag{11}\] where the form of \(\mathcal{C}\) varies across manufacturers and use configurations.
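As a numerical check of Eq. (10), the radiometric-chain inversion can be evaluated directly, as in the sketch below; the sensor intensity and ambient temperature are example values in SI units (temperatures in kelvin), not measurements from the actual experimental setup.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def skin_temperature(i_sensor: float, t_ambient: float, emissivity: float = 0.98) -> float:
    """Eq. (10): invert the simplified radiometric chain with tau_atm = 1."""
    return ((i_sensor / SIGMA - (1.0 - emissivity) * t_ambient**4) / emissivity) ** 0.25

# Example: 500 W/m^2 at the sensor with a 295 K ambient background gives a
# skin-surface temperature of roughly 307 K (about 33.5 degrees C).
print(skin_temperature(500.0, 295.0))
```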
Existing devices use a mapping that assumes the subject is _thermoneutral_, i.e., the skin temperature is at a steady state temperature with no transient effects. Such mappings are often linear, of the form: \[\overline{T}_{\rm core}=b_{0}+b_{1}\overline{T}_{\rm skin}, \tag{12}\] where \(b_{0}\) and \(b_{1}\) are scalar coefficients used by a manufacturer. Unfortunately, the assumption of a steady state model is invalid. In the specific case of this paper, if sun is shining on a person's skin, there is a transient effect to \(\overline{T}_{\rm skin}\) but that same transient effect will not propagate to \(\overline{T}_{\rm core}\). The core mapping, \(\mathcal{C}\), is opaque to users because it is built from proprietary data collected by a manufacturer. As such, instead of directly solving for core temperature, we aim to estimate the solar loading bias in skin temperature. With a bias value, we can estimate the thermoneutral skin temperature and use existing device mappings to obtain the core temperature. ### Gap in Prior Models: Solar Loading The gap in the current radiometric chain in Eq. (5) is that heating from the sun is not modeled. As discussed from the introduction of the paper, the problem that we seek to address is _solar loading_, where the skin heats up outdoors due to sunlight. A valid indoor measurement can then only be taken many minutes later. The focus of this paper is to correct for solar loading while minimizing the waiting period. It should also be noted that while solar loading is a source of error for all human subjects, it particularly disadvantages darker skin subjects who heat up more due to melanin absorption. We model tissue in the next section with varying melanin to underscore this disparity. ### Equity of Solar Loading The magnitude of the solar loading effect depends on absorption of solar radiation by the skin. Melanin absorbs solar radiation in the visible range of the EM spectrum and conservation of energy dictates that absorbed the energy must exit the skin. It does so in the form of long wave radiation, also known as thermal radiation. Eq. (7) dictates that absorptivity is equal to emissivity (\(\alpha=\varepsilon=0.98\)), which is constant for human skin. This appears to contradict the claim that absorption of solar radiation is melanin dependent. However, \(\alpha\) in Eq. (7) is defined for thermal wavelengths, while absorption of solar radiation mainly occurs for visible wavelengths. Heat transfer literature provide models of skin temperature that must be numerically solved. Using the model posited by Wang et al. [33], we simulate skin temperature during and after solar loading while varying melanin concentration. We show the results in Fig. 5(a) and observe that the theory confirms that darker skin is more impacted by solar loading. Additionally, we simulate tissue temperature over depth in Fig. 5(b), showing that \(\overline{T}_{\text{core}}\) is indeed invariant to solar loading. ## 4 Transient Model and Correction (Solution I) Here, we present Solution I for solar loading that uses only the transient temperature profile at a single spatial point to correct for solar loading. The advantage of Solution I (compared to Solution II) is the requirement of only one spatial location. This is useful because temperature scanners are often single pixel as they are much cheaper than a thermal camera. The downside of Solution I (compared to Solution II) is that transient information alone only reduces but does not eliminate a waiting period. 
For readers who seek full elimination of the waiting period, skip to camera-based Solution II in Section 5.

### Transient Model of Solar Loading

Solar loading involves transience. The existing three component model from Eq. (5) needs to be generalized to include time as a parameter. Let us do so using \(t\) for time and writing \[\boxed{\text{\emph{Transient Radiometric Chain (used in Solution I)}}}\] \[I_{\text{sensor}}(t)=I_{\text{skin}}(t)+I_{\text{amb}}+I_{\text{atm}}. \tag{13}\]

Figure 5: **Solar loading simulations.** Heating of the skin due to solar radiation and subsequent cooling is simulated according to [33]. **(a)** Heating and cooling of the skin surface for varying melanin levels (i.e. optical depth) is shown. **(b)** The tissue profile at peak solar loading is shown for dark (left) and light (right) skin.

In contrast to the previous steady state model, \(I_{\rm skin}(t)\) depends on both steady state and time-varying components of the skin temperature. Solar loading skin temperature can be written as: \[T_{\rm skin}(t)=\overline{T}_{\rm skin}+\overline{\beta}_{\rm solarpeak}f(t), \tag{14}\] where \(\overline{\beta}_{\rm solarpeak}\) is the maximum solar loaded bias at steady state in units of degrees. Note that the maximum solar loading is a steady state quantity, but transience arises via multiplication by the time-varying function \(f(t)\), whose range from 0 to 1 represents the fraction of maximum solar loading. For example, a person who has been outdoors for a while might have a maximum temperature increase of \(\overline{\beta}_{\rm solarpeak}=5^{\circ}C\). When the person returns indoors they will start cooling down as \(f(t)\) starts to decay from a maximum value of 1 (fully loaded) to 0 (no solar loading bias). It is useful to also think of a transient parametrization of solar loading bias, \(\beta_{\rm solar}(t)\). This can be written by rearranging Eq. (14) as \[\beta_{\rm solar}(t)=T_{\rm skin}(t)-\overline{T}_{\rm skin}=\overline{\beta}_{\rm solarpeak}f(t). \tag{15}\] Heat transfer literature studies how human body tissue heats up with radiation and offers differential equations for bio-heat. One such model, from Wang et al. [33], can be approximated such that skin temperature follows an exponential model during solar loading and cooling. This is visualized in Fig. 6. Using this principle, an estimate of temperature \(T_{\rm skin}^{*}(t)\) can be obtained by replacing \(f(t)\) in Eq. (14) with an exponential. This returns two equations, one for heating/loading and another for cooling: \[T_{\rm skin}^{*}(t)\approx\begin{cases}\overline{\beta}_{\rm solarpeak}(1-e^{-r_{h}t})+\overline{T}_{\rm skin}&\text{heating},\\ \overline{\beta}_{\rm solarpeak}\,e^{-r_{c}t}+\overline{T}_{\rm skin}&\text{cooling},\end{cases} \tag{16}\] where \(r_{h}\), \(r_{c}\) are the rates of heating and cooling respectively and have non-negative values. The rates depend on factors such as skin tone, blood flow and environmental temperature.

### Solar Loading Correction via Transient Measurements

Here, our goal is to estimate the steady state temperature \(\overline{T}_{\rm skin}\), which is at a single scene point. From Eq. (16) we see that skin temperature is parameterized by three values: the maximum solar loading, the level of solar loading and the steady state skin temperature. With a time series of skin temperature measurements, we can extrapolate \(\overline{T}_{\rm skin}\) by fitting to the exponential model. Based on Eq.
(16), only three temperature measurements are needed to fully determine \(T_{\rm skin}^{*}(t)\); that is, we only need to find \(\overline{\beta}_{\rm solarpeak}\), \(r_{c}\) and \(\overline{T}_{\rm skin}\). Our estimated steady state skin temperature \(\overline{T}^{*}_{\text{skin}}\) can be passed through a standard IRT core estimation block to retrieve a solar loading invariant core temperature: \[\underbrace{\overline{T}_{\text{skin}}}_{ground\,truth}\rightarrow\underbrace{ Solar}_{Load}\rightarrow\underbrace{\begin{bmatrix}T_{\text{skin}}(t_{\text{min}})\\ \vdots\\ T_{\text{skin}}(t_{\text{max}})\end{bmatrix}}_{measurement}\rightarrow\underbrace{ \begin{bmatrix}Expn.\\ Fit\end{bmatrix}}_{estimate}\rightarrow\underbrace{\overline{T}^{*}_{\text{skin}}}_{ estimate}\rightarrow\underbrace{Core}_{estimate}\rightarrow\underbrace{\overline{T}^{*}_{\text{core}}}_{ estimate}, \tag{17}\] where \(t_{\text{min}}\) is the time the first temperature sample was captured and \(t_{\text{max}}\) is the time the last sample was captured. The total time window observed is then \(W_{t}=t_{\text{max}}-t_{\text{min}}\). Unfortunately, thermal sensors exhibit drift and introduce a non-Gaussian temporally distributed noise that is difficult to correct, resulting in a low signal-to-noise ratio (SNR). The distribution of noise is also non-stationary. Environmental fluctuations such as wind and other factors must also be accounted for. For this reason using transient data in the real world is difficult. Solution I requires many more than three measurements and a suitably long time window of observation for the steady state extrapolation to work well. This is illustrated on real data in Fig. 6, where one observes that multiple minutes, between 3-5 minutes of transient data are needed to extrapolate the exponential. Nonetheless, Solution I is still an advance over previous methods which do zero solar loading correction. In particular, the waiting period is on average reduced as compared to existing solutions. However, we still seek a single shot solution, enabling mass screening without a waiting period. In Solution II, we leverage the spatial temperature field of a face to demonstrate single shot correction. ## 5 Spatial Modulation and Correction (Solution II, Single Shot) Although solar loading seems like a solely transient effect, there is a spatial dependence as well. Solar loading is a variational quantity with respect to the spatial fields of geometry and material that characterize a human face. Figure 6: **Solution I requires transient data to extrapolate, which increases the waiting time.** Transient solution is sensitive to the waiting time. Fitting 1 minute of data results in a poor estimate of \(\overline{T}_{\text{skin}}\), while fitting on 5 minutes of data estimates \(\overline{T}_{\text{skin}}\) accurately. Data from one subject is shown, and this is representative of all subjects. ### Spatial Modulation in the Radiometric Chain The three path radiometric chain in Eq. (5) describes neither temporal nor spatial modulation. It can be generalized to include space as a parameter. Let us do so, using \(\mathbf{x}\) for space to write: \[\boxed{\begin{array}{c}\text{Spatial Radiometric Chain (used for Soln. II)}\\ \\ \text{I}_{\text{sensor}}(\mathbf{x})=I_{\text{skin}}(\mathbf{x})+I_{\text{amb} }+I_{\text{atm}}\end{array}} \tag{18}\] A few direct simplifications have been made: * The ambient multipath and atmospheric intensity is assumed to not be spatially dependent. 
* Temporal information is not involved because we are restricting scope to single shot correction. The first question we must answer is, how does solar loading relate to spatial modulation? We follow an analogous path to Equations 13 to 16 from Solution I by defining solar loading bias as a spatial functional: \[\beta_{\text{solar}}(\mathbf{x})=T_{\text{skin}}(\mathbf{x})-\overline{T}_{ \text{skin}}(\mathbf{x}). \tag{19}\] By itself, Eq. (19) does not rigorously show \(\beta_{\text{solar}}(\mathbf{x})\) is spatially dependent. For example, the RHS of Eq. (19) could be a constant. Therefore, we offer the following claim: _Proposition 1:__Solar loading \(\beta_{\text{solar}}(\mathbf{x})\) has variation across the spatial dimension of a human face._ _Justification:__We need to show that \(T_{\text{skin}}(\mathbf{x})\) and \(\overline{T}_{\text{skin}}(\mathbf{x})\) are both functions of space, and that they are different functions of space (so they do not cancel to a constant). In particular, the two quantities \(T_{\text{skin}}(\mathbf{x})\) and \(\overline{T}_{\text{skin}}(\mathbf{x})\) are defined by a partial differential equation known as Penne's Bio-heat equation whose mechanics are illustrated in Fig. 7. This equation is derived from the standard heat equation, where heat storage is balanced with diffusion (\(q_{diff}\)), internal heat generation and external heating sources. In our simplified setting, heat generation is from in vivo physiology which we abstract as Figure 7: **Skin heat transfer.** Skin temperature is determined by the heat transfer of endogenous and exogenous factors. Penne’s bio-heat equation describes these heat transfer processes. (\(q_{blood}\)), external heating is from the sun (\(q_{rad}\)), and the tissue properties are defined by the constants: tissue density (\(\rho\)) and specific heat of tissue (\(c\)). For brevity, we do not include the boundary conditions and full definition of all terms. For further details, we direct the reader to Wang et al. [33]. We then have:_ \[\rho c\frac{\partial T_{\text{skin}}(\mathbf{x},t)}{\partial t}=q_{diff}( \mathbf{x},t)+q_{blood}(\mathbf{x},t)+q_{rad}(\mathbf{x}) \tag{20}\] \[\rho c\frac{\partial\overline{T}_{\text{skin}}(\mathbf{x},t)}{\partial t}=q_{ diff}(\mathbf{x},t)+q_{blood}(\mathbf{x},t) \tag{21}\] _The primary difference between the two is \(q_{rad}\) that we define as:_ \[q_{rad}=\alpha(\mathbf{x},\mu_{mel})\,E_{sun}\,\max(0,\,\mathbf{l}\cdot n( \mathbf{x})) \tag{22}\] _where \(\alpha(\mathbf{x},\mu_{mel})\) is the proportion of incident solar energy absorbed given the melanin concentration, \(\mu_{mel}\), in the skin, \(E_{sun}\) is the incident solar radiation power, \(\mathbf{l}\) is the direction of solar radiation, and \(n(\mathbf{x})\) is the surface normal vector at the location \(\mathbf{x}\)._ _Note that \(T_{skin}(\mathbf{x},t)\) and \(\overline{T}_{skin}(\mathbf{x},t)\) are both modulated by spatial heterogeneity in biology of the skin (spatial variation of blood perfusion, melanin concentration, etc.). However, when sunlight is incident on the face, \(\mathbf{l}\cdot n(\mathbf{x})>0\), \(T_{skin}(\mathbf{x})\) and thereby \(\beta_{solar}(\mathbf{x})\) are spatially dependent on the interaction of the solar rays with the geometry of the face: \(\mathbf{l}\cdot n(\mathbf{x})\)._ Unfortunately, _Proposition 1_ suggests that the bias term is jointly dependent on time and space. 
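The geometric term that drives this spatial dependence, \(q_{rad}\) in Eq. (22), is straightforward to evaluate per pixel; in the sketch below the absorptivity, solar irradiance, and normal map are placeholder values rather than quantities from the simulation in [33].

```python
import numpy as np

def solar_heating_term(normals: np.ndarray, sun_dir: np.ndarray,
                       absorptivity: float, e_sun: float) -> np.ndarray:
    """Per-pixel solar heating q_rad = alpha * E_sun * max(0, l . n(x)), Eq. (22).

    `normals` has shape (H, W, 3) with unit surface normals; `sun_dir` is the
    unit vector pointing toward the sun.
    """
    cos_incidence = np.clip(normals @ sun_dir, 0.0, None)  # max(0, l . n)
    return absorptivity * e_sun * cos_incidence

# Toy example: a flat 4x4 patch tilted 60 degrees away from the sun.
normals = np.tile(np.array([0.0, np.sin(np.pi / 3), np.cos(np.pi / 3)]), (4, 4, 1))
sun_dir = np.array([0.0, 0.0, 1.0])
q_rad = solar_heating_term(normals, sun_dir, absorptivity=0.7, e_sun=1000.0)  # W/m^2
```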
In the following, we show that there exists a space-time equivalence in the bias term that allows us to write it strictly as a function of space if certain conditions are met, thus enabling one-shot methods. We show a representative example that can be generalized to more complex scenes. _Proposition 2: There exists a Space-Time Equivalence in Solar Loading Bias if \(\overline{T}_{skin}(\mathbf{x},t)\) is constrained to be spatially homogeneous and already under steady state conditions._ _Justification: The spatially varying solar loading bias \(\beta_{solar}(\mathbf{x})\) can be solved for with linear equations if \(\overline{T}_{skin}(\mathbf{x},t)\) is spatially homogeneous and in steady state such that it is written as \(\overline{T}_{skin}\). Then, the solar loading bias can be solved for given at least two regions whose temperatures and surface normals are known and different from each other. The temperature of their regions can then be modeled from Eq. (14) at a particular time instant \(t_{o}\):_ \[\begin{split} T(x_{1},t_{o})&=\overline{T}+ \overline{\beta}\ f(t_{o})\ \left(\vec{l}\cdot\vec{n}(x_{1})\right),\\ &\vdots\\ T(x_{n},t_{o})&=\overline{T}+\overline{\beta}\ f(t_{o}) \left(\vec{l}\cdot\vec{n}(x_{n})\right).\end{split} \tag{23}\] _Given the temperatures of the regions, as well as the solar vector and surface normals, it is possible to solve for \(\beta_{solar}(\mathbf{x})f(t_{o})\). Then, the solar bias can be determined at any location whose surface normal and solar vector is known at one point in time._ ### Correcting for Solar Loading in Space-Time The spatial-modulation model in Eq. (21) describes the governing equations for solar loaded skin temperature, parameterized by \(T_{\mathrm{skin}},T_{\mathrm{core}}\) and melanin. Theoretically, the equations can be inverted to solve for \(T_{\mathrm{core}}\) given a series of observations. Unfortunately Eq. (21) is difficult to solve analytically due to the non-standard spatial domain of the face. We know that skin temperature varies spatially, but the lack of an analytical solution precludes our understanding of the specific temperature patterns and gradients induced by solar loading. There is, however, no dearth of facial thermal imagery. By taking a data-driven approach, we can correct solar loading by learning a canonical pattern of heat distribution from real faces that allows us to relax the spatially homogeneous constraint in Eq. (23). While spatial temperature and normal vector distributions [34] of the face can be learned through a data-driven approach, a potential drawback of the method is if the relative orientation of the surface normals and light vector vary at a high frequency during solar loading due to motion. If this occurs, than Eq. (23) is invalid due to the light vector being a function of time, \(\mathbf{l}(t)\). ## 6 Experimental Method and Protocols We show through simulation and _in situ_ experiments that the solar loading bias affects IRT measurements and is skin tone-dependent. To do so, we acquire a unique dataset of temperature and skin color information. Our imaging prototype consists of an IR camera (FLIR Lepton 3.5 LWIR camera) mounted next to a RGB webcam (ArduCam). The RGB camera is only used for better facial landmark detection when processing the dataset. In lieu of a block-body reference, the wall in the background of the images is used as a pseudo-reference to help correct for camera flat-field correction effects. 
Environmental conditions are recorded using a solar power meter (Tenmars Solar Meter TM-206) and a handheld anemometer (BTMeter Anemometer 866A). Throughout the experiment, the study subject is measured using multiple NCITs, an oral thermometer (Boncare Digital Thermometer MT-601A) and our RGB-IR camera setup. Subject's skin tone information is recorded during the study; as there is no universal non-invasive skin color measurement, we discuss some existing methods of skin color measurement. ### Skin Color Measurement The standard for measuring skin tone in previous works has been subjective classification of skin darkness and using ethnicity as a proxy for skin tone. Classification of skin tones is done using Fitzpatrick skin tone (FST) scale, which groups skin tones into 6 categories of variable skin darkness. We record FST using the ratings of 2 or more study coordinators. The FST scale has known limitations [35] so we collect objective skin tone measurements to separate optical biases from race-related physiological biases. The FST and melanin demographics are shown in Fig. 9. We use the DSM III Colormeter (Cortex Technology) to measure melanin index \((MI)\). This device has been widely used in various clinical studies for skin tone and scar color measurements [36, 37]. The colormeter measures red light reflected by the skin \((I_{r})\), to obtain: \[MI=100\cdot\log\left(\frac{1}{I_{r}}\right). \tag{24}\] In subsequent analysis, we group subjects into dark and light skin tone groups based on the \(MI\). We select \(MI=45\) as the threshold between dark and light skin tones because it is the median value of our dataset. ### Solar Loading Correction The hardware required for our data acquisition is off-the-shelf and relatively inexpensive. The novelty of our method is in our computational imaging algorithms, which we discuss next. #### 6.2.1 Transient Solution Solving for \(\overline{T}_{\text{skin}}\) using Eq. (16) can be done in many ways. Due to the unknown steady state term, the function is not linearizable but can be solved numerically using a non-linear least squares algorithm or using a grid search as in [38, 39]. Knowledge of typical skin parameters allows use to bound the variables; in our case, we constrain skin temperature to be between \(27-43^{\circ}C\) and the solar loading bias to safely be between \(0-20^{\circ}C\). We find that fitting an exponential to the data is extremely sensitive to the camera calibration, so we additionally weight the data according to the calibration status, determined by finding temperature spikes in the background image. #### 6.2.2 Single-Shot Solution To demonstrate the utility of spatial information, we train a lightweight convolutional neural network (CNN) to learn solar loading bias \(\overline{\beta}_{\text{solarpeak}}f(t)\) from single-shot data. The network is exceedingly simple and still performs well, consisting only of two convolutional layers and three fully connected layers. We train using the ADAM optimizer for 20 epochs on cropped (\(50\times 50\)) facial thermal images. While we do our best to collect a diverse data from training, there will always be gaps in the acquired dataset. To combat this, we augment the dataset in multiple ways. First, we apply random horizontal flips to training data, since the level of solar loading bias in the image does not change due to flips. We simulate fever images by adding a constant offset (\(1.6^{\circ}C\)) to the face images [40]. 
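The two correction schemes in Sections 6.2.1 and 6.2.2 can be sketched as follows. The first part fits the cooling branch of Eq. (16) with bounded nonlinear least squares using the stated ranges (27-43 °C for skin temperature, 0-20 °C for the bias); the second part is a minimal stand-in for the described CNN, whose channel widths, kernel sizes, and learning rate are assumptions, since only the layer counts, the \(50\times 50\) crop size, and Adam training for 20 epochs are specified. The synthetic cooling curve is illustrative only.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.optimize import curve_fit

# --- Solution I: bounded exponential fit of the cooling branch of Eq. (16) ---
def cooling_model(t, beta_peak, r_c, t_skin_ss):
    """T(t) = beta_peak * exp(-r_c * t) + steady-state skin temperature."""
    return beta_peak * np.exp(-r_c * t) + t_skin_ss

t = np.linspace(0, 300, 300)                                   # 5 minutes of samples
temps = cooling_model(t, 3.0, 0.01, 34.0) + 0.1 * np.random.randn(t.size)
popt, _ = curve_fit(cooling_model, t, temps, p0=[2.0, 0.005, 33.0],
                    bounds=([0.0, 1e-4, 27.0], [20.0, 1.0, 43.0]))
beta_fit, r_c_fit, t_skin_estimate = popt                      # extrapolated steady state

# --- Solution II: lightweight CNN regressing the solar loading bias ---
class SolarLoadingCNN(nn.Module):
    """Two conv layers + three fully connected layers on a 50x50 thermal crop."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 12 * 12, 128), nn.ReLU(),
            nn.Linear(128, 32), nn.ReLU(),
            nn.Linear(32, 1),                                  # predicted bias in degC
        )

    def forward(self, x):
        return self.regressor(self.features(x))

model = SolarLoadingCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
crops = torch.randn(8, 1, 50, 50)              # batch of thermal face crops (degC, illustrative)
augmented = torch.flip(crops, dims=[3])        # horizontal-flip augmentation
fever_sim = crops + 1.6                        # simulated fever offset (degC)
bias_pred = model(crops)
```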
The linear effect of core temperature on skin temperature is shown in Wang et al. [33]. The simulated fever images help ensure that the model is not overfitting to temperature intensity, rather than spatial patterns, when correcting for solar loading. We use a statistical power analysis to determine that 14 is the minimum number of subjects needed, and we collect data from 21 subjects. The network is trained using leave-one-out cross validation; each subject is used as a test subject, 4 subjects are used for validation and remaining subjects are used for training. ### Performance and Bias Measures The American Society for Testing and Materials (ASTM) states that an acceptable accuracy for clinical infrared thermometers should be an error at or less than \(0.2^{\circ}\)C in the range of \(37-39^{\circ}\)C for research devices and at or less than \(0.5^{\circ}\)C for commercial devices. [41]. In this work, we assess our methods on mean absolute error (MAE), root mean square error (RMSE) and mean absolute percentage error (MAPE). ## 7 Results ### Experimental and Implementation Details We collect data from subjects under multiple conditions: thermoneutral, solar loading, and cooling. We record the subject's temperature, skin color and the experimental conditions. The setup is shown in Fig. 2(a). Prior to the experiment, the subject rests indoors to acclimatize to the experiment location. After resting, the subject's oral temperature, NCIT temperatures and skin color are recorded. A thermal video of their face provides steady state skin temperature. Next, the subject stands in sunlight for five minutes, after which they are immediately measured using NCITs. Back indoors, the subject cools down for five minutes and their skin temperature is recorded continuously by our thermal camera setup. Finally, NCIT temperatures are measured after five minutes of cooling. Solar loading bias depends not only on skin tone, but also on environmental conditions, such as wind, solar azimuth, and cloudiness. To control for these covariates, we only collect data under the following conditions: * Solar zenith close to 0 (12:00-2:00PM) * Minimal to no clouds (\(E_{sun}\geq 1900\,Wm^{-2}\)) * Minimal to no wind (\(v_{wind}<3\,ms^{-1}\)) Collecting data in a limited time of day also controls for diurnal variations. 21 subjects with melanin index (\(MI\)) values over the range \([35.34,80]\) were selected to ensure a skin tone diverse dataset and the dataset demographics are summarized in Fig. 9. In the following, we assess our solar loading correction methods against the thermal camera measurement of steady state skin temperature, \(\overline{T}_{\rm skin}\). We do not directly compare the camera method to the NCITs since the mapping function from \(T_{\rm skin}\) to \(T_{\rm core}\) is unknown as mentioned in Section 3.2. We compare our temporal and spatial solutions against the uncorrected thermal measurement after solar loading and cooling. ### Transient Solution (Solution I) Since most point sensors measure the forehead temperature, we choose this location for our transient experiments. The forehead is also the most accessible and tends to have a steady temperature, making it easy to continuously measure this location. An example forehead temperature curve during cooling is shown in Fig. 6, along with exponential curves fit with variable window lengths. In both cases, the exponential fitting is successful in minimizing the residual, but an accurate \(\overline{T}^{*}_{\rm skin}\) does not always accompany this. 
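For reference, the error metrics used in the following results (MAE, RMSE, MAPE), together with an accuracy check against a tolerance such as the ASTM limits quoted above, can be computed as in this generic sketch (not the evaluation code used for the reported numbers).

```python
import numpy as np

def evaluate(estimates: np.ndarray, reference: np.ndarray, tolerance: float = 0.5):
    """MAE, RMSE, and MAPE of temperature estimates against a reference, plus a
    pass/fail check against an accuracy tolerance in degC (0.5 degC commercial,
    0.2 degC research)."""
    err = estimates - reference
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    mape = 100.0 * np.mean(np.abs(err / reference))
    return {"MAE": mae, "RMSE": rmse, "MAPE": mape, "within_tolerance": mae <= tolerance}
```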
In Table 3, the performance of this method is shown as Sol. I. Assuming an exponential model causes the method to suffer on easy cases, such as the steady and cooled states, where temporal noise may present as a skin transient. However, for the solar loaded case, this model corrects a minimal amount of solar loading (MAE of 2.48 compared to 2.86 for the uncorrected skin temperature).

### Spatial Solution (Solution II)

Our learned spatial solution is a data-hungry model. Since the problem of solar loading is unexplored, we collect our own dataset to test our spatial correction method. For each subject, we acquire their steady state facial temperatures as well as a continuous capture of cooling after solar loading. The steady state is used to generate \(\beta_{\text{solar}}(t)\) for solar loaded frames for the model to regress on. Fig. 8 shows facial temperatures and results from our trained CNN on multiple subjects. We observe that the model tends to perform worse when correcting the first minute of cooling. The solar loading effect is the strongest in the early minutes of cooling, and this larger magnitude of solar loading may be more difficult to correct. Additionally, recall that skin temperature cools exponentially, so there are far fewer frames of strong solar loading compared to moderate solar loading. Overall, the accuracy does not suffer too much due to this effect. One failure case is shown in Fig. 8. The model attempts to correct solar loading but achieves a constant error in correction over time. Despite this, the model still recognizes that solar loading is present and corrects \(\approx 2.5^{\circ}C\) of solar loading. Quantitative results from the CNN are given in Table 3 as Sol. II. Sol. II significantly outperforms the other methods in the case of solar loading, estimating the level of solar loading with an MAE of 0.97. Note, too, that Solution II does not return false positive corrections, i.e., it does not correct the temperature when solar loading does not exist. As illustrated in the curves in Fig. 8, the temperature prediction from Solution II is relatively uncorrelated with the amount of solar loading.

Table 3: MAE, RMSE, and MAPE (lower is better) of the uncorrected skin temperature \(T_{\text{skin}}\), Solution I, and Solution II under the steady state, solar loaded, and cooled conditions.

Figure 8: **Solution II enables single shot correction and does not over correct when solar loading is not present.** From left to right: RGB image, steady state thermal image (ground truth), indoor cooling after solar loading (0 min, 5 min) and the results of our correction. In the plots, we compare the solar loaded skin temperature (blue, \(T_{\text{skin}}\)), the baseline temperature (dotted, \(\overline{T}_{\text{skin}}\)) and our corrected skin temperature (black, \(T^{*}_{\text{skin}}\)). Our method successfully removes solar loading (up to \(3^{\circ}\)C) on multiple subjects. The plots show frame-by-frame inference of Solution II, showing that Solution II is not overfitting to a fixed temperature offset.

## 8 Equity Analysis

In Section 3.4, we predicted melanin bias in solar loading using theory, and now we further validate this finding with experimental data. The lower-level skin bias is seen in Fig. 2(b).
Dark skin samples have, on average, a higher solar loading bias than light samples by \(+0.72^{\circ}C\). After our spatial solar loading correction, the difference between dark and light solar loading bias is \(-0.24^{\circ}C\) (the MAE for dark samples is \(0.82^{\circ}C\) and the MAE for light samples is \(1.06^{\circ}C\)). While our experimental conditions control for non-subject covariates, we can further improve our analysis by collecting data from subject pairs. Two subjects, one with dark skin and one with light skin, complete the experiment at the same time, hence undergoing the same environmental conditions. We are interested in how the NCIT error, \(\epsilon=T_{\text{NCIT}}-T_{\text{oral}}\), varies between skin tones. Paired \(\epsilon_{\text{dark}},\epsilon_{\text{light}}\) measurements were compared using paired t-tests (significance level \(0.05\)). The subsequent analysis shows only the results for device NCIT 1; we do not report specific values for the other tested NCITs since they exhibit the same behavior. Prior to solar loading, there is no statistical difference in NCIT error between dark and light samples. However, after five minutes of solar loading, NCIT temperatures are higher for dark subjects, with a mean bias of \(+1.49^{\circ}C\), \(95\%\) CI \([0.46,2.51]\). After five minutes of cooling, there is again no statistical difference in NCIT error between skin tones. We conclude that--under solar loading--NCITs overestimate temperatures more severely for dark skin subjects compared to light skin subjects. NCIT performance is summarized in Table 4. While there is no bias after 5 minutes of cooling, the NCITs are still inaccurate: NCIT values are uncorrelated with oral temperatures (\(r=0.07\)). This compares to moderate correlation before solar loading (\(r=0.65\)) and poor correlation after 5 minutes of solar loading (\(r=0.02\)). Daanen et al. [1] suggest subjects should rest for up to 30 minutes after heat exposure to ensure accurate NCIT readings. The poor accuracy of the NCIT after five minutes of cooling agrees with the time frame set forth by Daanen et al. [1]. \begin{table} \begin{tabular}{l l l l l} \hline \hline & \multicolumn{4}{c}{**MAE (\({}^{\circ}\)C)**} \\ \cline{2-5} **Method** & **Steady state** & **Solar loaded** & **Cooled (5’)** & **Claimed Acc.** \\ \hline NCIT 1 & 0.39 & 7.10 & 0.65 & 0.1 \\ NCIT 2 & 0.60 & 6.10 & 0.72 & 0.3 \\ NCIT 3 & 0.51 & 7.98 & 0.78 & 0.2 \\ \hline \hline \end{tabular} \end{table} Table 4: **Performance of commercial NCITs.** We test three NCITs which are marketed as being within the ASTM and FDA acceptable error limits. However, the devices are not within their specified ranges even during steady state. For solar loading, the devices are drastically wrong, overestimating by up to \(8^{\circ}C\) when compared to oral thermometers. ## 9 Discussion In summary, this paper presents two methods for correcting solar loading. Solution I, transient correction, was a natural method to implement and relates to NCIT devices. Unfortunately, temporal errors in the measurements from today's devices mean that transient correction (as we implemented it) requires a large window of sampling observation. We therefore consider our implementation of Solution II, spatial correction, to be desirable. This enables single shot elimination of solar loading with a thermal camera.
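The paired comparison in Section 8 can be reproduced with standard statistical tooling. The sketch below is a minimal illustration: the arrays are placeholder values rather than the study data, and the variable names are assumptions, not our analysis scripts.

```python
# Paired comparison of NCIT error (eps = T_NCIT - T_oral) between the dark-
# and light-skin members of each subject pair.  Values below are placeholders.
import numpy as np
from scipy import stats

eps_dark = np.array([1.9, 2.4, 1.1, 3.0, 2.2, 1.6])
eps_light = np.array([0.4, 0.9, 0.2, 1.5, 0.8, 0.5])

t_stat, p_value = stats.ttest_rel(eps_dark, eps_light)   # paired t-test
diff = eps_dark - eps_light
ci_low, ci_high = stats.t.interval(0.95, diff.size - 1,
                                   loc=diff.mean(), scale=stats.sem(diff))
print(f"mean bias = {diff.mean():+.2f} C, t = {t_stat:.2f}, "
      f"p = {p_value:.3f}, 95% CI = [{ci_low:.2f}, {ci_high:.2f}]")
```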
Limitations: This study characterizes light transport principles and demonstrates the ability to correct for solar loading in a single shot, but it is not a large-scale clinical study of solar loading. The scale of this study was chosen to match a statistical power calculation of the required sample size. Based on the large effect size of solar loading, only 14 samples were needed to draw significance. A larger scale study, conducted in the clinic, could directly assess the impact of solar loading, not only on temperature measurement, but also on clinical decision making. Future Work: In the future, it would be interesting to explore how this method relates to other types of thermal transients, for example fluctuations induced by cold weather. Fusing the temporal and spatial solutions may yield better results and can be explored in the future. Another area of future work would lie in the estimation of other physiological signals. Finally, we would like to mention that there are other modalities of sensing that have issues with equity, including but not limited to pulse oximeters [22], light-based blood sensors [24, 42, 43], or electrodes for neural sensing [44, 45]. It is possible that some of the metrics and study design we have proposed to achieve more equity for thermal hardware can be translated to equity in other forms of health sensing. Figure 9: **Dataset demographics.** Dataset distribution across: **(a)** Fitzpatrick scale skin tones. **(b)** Melanin Index (\(MI\)) values. _Conclusion:_ This study has discussed solar loading and offered single shot correction (33ms) using spatial information. Before spatial correction, the solar loading bias is \(3.03^{\circ}C\) and post-correction the error is \(0.97^{\circ}C\). Before spatial correction, the difference in solar loading effects between dark and light skin groups is statistically significant (one-tailed Kolmogorov-Smirnov test, \(p<0.005\)); after correction, the difference is not statistically significant (\(p>0.5\)). We hope this lays a foundation for large scale efforts to improve the ability of thermal cameras to accurately and equitably sense the human body. ## 10 Ethical Considerations The equity numbers from this paper were based on the sampling dataset used by the authors. The study was conducted on a university campus and does not match the inclusion criteria of a societal study. Although this paper analyzes skin tone, this is only one particular axis on which one seeks equitable operation. There exist several other axes of variation across humans. The study size in this paper was sufficiently large to demonstrate that solar loading is corrected (based on power analysis measures), but a large-scale clinical study that builds on the ideas in this paper could explore a broader range of inclusion criteria, axes of demographic variation, and improvements in the performance of solar loading correction. ## 11 Acknowledgements We thank members of the Visual Machines Group (VMG) at UCLA for feedback and support. We thank Bahram Jalali for helpful discussions and feedback on the physics of thermal imaging. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-2034835. E.Z. was supported by the NSF GRFP. P.C. was supported by a Cisco PhD Fellowship. A.K. was supported by an NSF CAREER award IIS-2046737, Army Young Investigator Program (YIP) Award, and DARPA Young Faculty Award (YFA).
2301.03161
Structural Equivalence in Subgraph Matching
Symmetry plays a major role in subgraph matching both in the description of the graphs in question and in how it confounds the search process. This work addresses how to quantify these effects and how to use symmetries to increase the efficiency of subgraph isomorphism algorithms. We introduce rigorous definitions of structural equivalence and establish conditions for when it can be safely used to generate more solutions. We illustrate how to adapt standard search routines to utilize these symmetries to accelerate search and compactly describe the solution space. We then adapt a state-of-the-art solver and perform a comprehensive series of tests to demonstrate these methods' efficacy on a standard benchmark set. We extend these methods to multiplex graphs and present results on large multiplex networks drawn from transportation systems, social media, adversarial attacks, and knowledge graphs.
Dominic Yang, Yurun Ge, Thien Nguyen, Jacob Moorman, Denali Molitor, Andrea Bertozzi
2023-01-09T04:01:17Z
http://arxiv.org/abs/2301.03161v1
# Structural Equivalence in Subgraph Matching ###### Abstract Symmetry plays a major role in subgraph matching both in the description of the graphs in question and in how it confounds the search process. This work addresses how to quantify these effects and how to use symmetries to increase the efficiency of subgraph isomorphism algorithms. We introduce rigorous definitions of structural equivalence and establish conditions for when it can be safely used to generate more solutions. We illustrate how to adapt standard search routines to utilize these symmetries to accelerate search and compactly describe the solution space. We then adapt a state-of-the-art solver and perform a comprehensive series of tests to demonstrate these methods' efficacy on a standard benchmark set. We extend these methods to multiplex graphs and present results on large multiplex networks drawn from transportation systems, social media, adversarial attacks, and knowledge graphs. _Index terms--_ Subgraph isomorphism, subgraph matching, multiplex network, structural equivalence, graph structure ## I Introduction The subgraph isomorphism problem (also called the subgraph matching problem) specifies a small graph (the **template**) to find as a subgraph within a larger (**world**) graph. This problem has been well-studied especially in the pattern recognition community. The surveys [20, 9], and [28] explain the broad variety of techniques used as well as applications including handwriting recognition [2], face recognition [4], biomedical uses [3], sudoku puzzles and adversarial activity [51]. More recently, subgraph matching arises as a component in motif discovery [23, 37], where frequent subgraphs are uncovered for graph analysis in domains including social networks and biochemical data. Additionally, subgraph matching is relevant in knowledge graph searches, wherein incomplete factual statements are completed by querying a knowledge database [8, 47]. Networks are present in many applications; hence, the ability to detect interesting structures, i.e., subgraphs, apparent in the networks bears great importance. We investigate subgraph matching on a wide variety of networks, simulated and real, single channel and multichannel, ranging from hundreds to millions of nodes. These data sets include biochemical reactions [21], pattern recognition [12], transportation networks [13], social networks [16], and knowledge graphs [52]. This paper addresses exact subgraph isomorphisms: given a template \(G_{T}=(V_{T},E_{T})\), and a world \(G_{W}=(V_{W},E_{W})\), find a mapping \(f:V_{T}\to V_{W}\) that is both injective and respects the structure of \(G_{T}\). For the latter property to hold, we require that if \((t_{1},t_{2})\in E_{T}\), then we must have \((f(t_{1}),f(t_{2}))\in E_{W}\). If this is true, we say that \(f\) is **edge-preserving**. We define subgraph isomorphism as follows: **Definition 1**.: _Given a template \(G_{T}=(V_{T},E_{T})\) and a world \(G_{W}=(V_{W},E_{W})\), a map \(f:V_{T}\to V_{W}\) is a **subgraph isomorphism** if and only if \(f\) is injective and edge-preserving._ Throughout this paper, we use the terms subgraph isomorphism and subgraph matching interchangeably. Related terms are **subgraph homomorphism** which relaxes the injectivity requirement, and **induced subgraph isomorphism** which also requires the map to be non-edge-preserving (if \((u,v)\notin E_{T},(f(u),f(v))\notin E_{W}\)). 
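As an illustration of Definition 1, the two conditions can be checked directly for a candidate mapping. The toy sketch below is illustrative only; the graph encoding and function name are assumptions, not the interface of any solver discussed later.

```python
# Check Definition 1: a mapping f is a subgraph isomorphism if it is injective
# and edge-preserving.  Graphs are sets of directed edge pairs; names are
# illustrative.
def is_subgraph_isomorphism(template_edges, world_edges, f):
    """f: dict mapping every template vertex to a world vertex."""
    injective = len(set(f.values())) == len(f)
    edge_preserving = all((f[u], f[v]) in world_edges for (u, v) in template_edges)
    return injective and edge_preserving

template_edges = {("a", "b"), ("b", "c")}
world_edges = {(1, 2), (2, 3), (3, 1), (2, 4)}
print(is_subgraph_isomorphism(template_edges, world_edges, {"a": 1, "b": 2, "c": 3}))  # True
print(is_subgraph_isomorphism(template_edges, world_edges, {"a": 1, "b": 3, "c": 2}))  # False: (1, 3) missing
```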
We are interested in the subgraph matching problem (SMP) [51]: **Definition 2** (**Subgraph Matching Problem**).: _Given a template graph \(G_{T}\) and a world graph \(G_{W}\), find all subgraph isomorphisms from \(G_{T}\) to \(G_{W}\)._ If there is at least one subgraph isomorphism, we call the problem **satisfiable**. Simply finding a subgraph isomorphism is NP-complete [6], suggesting that there is no algorithm that efficiently finds all subgraph isomorphisms on all graphs. In spite of this, significant progress has been made in the development of algorithms for detecting subgraph isomorphisms [18, 22, 27, 45]. As in other NP-complete problems, the literature addressing the enumeration of exact subgraph isomorphisms has focused on performing a full tree search of the solution space. Fig. 1: Graph representing a system of biochemical reactions from [21]. Non-gray nodes of the same color are structurally equivalent. The state-of-the-art algorithms generally focus on iteratively building partial matches and use heuristics for optimizing the variable ordering and pruning branches of the search tree. We are interested in the full enumeration and characterization of the solution space for the SMP. This is important for real-world applications. Consider a template representing the transactions of a crime ring, with the world being the broader transaction network. If there are thousands of matches, any given match is likely to be a false positive, suggesting the need to further narrow down the search by adding more detail to the template. Our work identifies and characterizes the structure of such redundancies due to symmetry in the matching problem. We exploit these symmetries to produce a compressed version of the solution space which saves space as well as aids understanding the problem's solutions. As an example of symmetry, observe the template graph in Figure 1, which is from a system of biochemical reactions [21], and note that each pair of colored nodes is interchangeable in any solution as they have the exact same neighbors. As there are 11 such pairs in the graph, for any found isomorphism, we can generate \(2^{11}=2048\) more solutions simply by interchanging nodes. By avoiding redundant solutions in a subgraph search, we can significantly reduce the search time (potentially by a factor of \(2048\) or more). This simple form of symmetry is known as **structural equivalence**. Broader notions of equivalence can be used to further accelerate search. In Figure 2, the yellow and blue nodes are each individually structurally equivalent. However, if we proceed by matching A to 1, then we can complete an isomorphism by matching B and C to any of 2, 3, 4, or 5. The presence of additional edges incident to 4 and 5 hides that 4 and 5 may be swapped out for 2 or 3. By identifying when these additional edges may be ignored, we can again dramatically reduce the amount of work. This second notion of equivalence we will refer to as **candidate equivalence**. In the body of this paper, we will formally define these terms and demonstrate how they can be applied in a tree search algorithm. These two notions of equivalence can be broadly classified into two categories: **static equivalence**, which describes equivalence apparent from the problem description, and **dynamic equivalence**, which describes equivalence uncovered in the search process. Structural equivalence falls into the former category while candidate equivalence belongs to the latter.
We note that these forms of equivalence are dependent on the structure of the graphs and cannot be used should we interchange one template graph for another. ### _Related Work_ The first significant subgraph isomorphism algorithm proposed by Ullmann [1] generates candidate vertices from the neighbors of already matched world vertices. The widely used VF2 algorithm [7] improves on this by choosing a match ordering that favors template vertices adjacent to already-matched template vertices and adding pruning rules based on the degrees of vertices. The authors more recently published the VF3 algorithm [31] that further extends this approach. In a different approach, Solnon [10] emphasizes the constraint propagation paradigm from artificial intelligence, and her algorithm LAD internally stores candidate lists for every template vertex. By way of a repeated application of a filter based on the vertex neighborhood structure, her algorithm can effectively prune branches of the tree search. She expands on this work in [30] to incorporate more powerful filters. Other solvers including Glasgow [24, 45], SND [19], and ILF [11] each address various ways to enhance filters based on other graph properties including number of paths, cliques or more complicated neighborhoods in order to strengthen the filters. Significant work has been done to exploit symmetry to compress graphs and count isomorphisms. Turbolso [18] exploits basic symmetry in the template graph and optimizes the matching order based on a selection of candidate regions and exploration within those regions. CFL-Match [27] proposes a match ordering based on a decomposition of the template graph into the core, a highly connected subgraph, and a forest which is further decomposed into a forest and leaves. BoostIso [25] exploits symmetry in the world graph and presents a method by which other tree-search-based approaches are accelerated by using their methodology. The ISMA algorithm [17] exploits basic bilateral and rotational symmetry of the template to boost subgraph search and this work was extended into the ISMAGS algorithm [22] to incorporate general automorphic symmetries of the template graph. One closely related problem is the inexact subgraph matching problem. This problem replaces the strict edge-preserving constraints of exact isomorphism with a penalty for edge and label mismatches which is to be minimized. The exact form of the penalty varies across applications and there are a myriad of approaches many taking inspiration from the exact matching problem. These involve a variety of techniques including filtering [40, 47], A* search [39], indexing [32], and continuous techniques [44]. We will not be considering this problem in this paper. ### _Paper Outline_ In this paper, we demonstrate how tree search-oriented approaches can be accelerated by exploiting both static forms of equivalence, apparent at the start of search, as well as dynamic forms of equivalence, which are uncovered as the search proceeds. In Section II, we formally define structural equivalence and how to incorporate it into a subgraph search. Fig. 2: Example subgraph isomorphism problem with template on the left and world on the right. Nodes of the same color are structurally equivalent. In Section III, we introduce candidate equivalence to demonstrate how to expose previously unseen equivalences during a subgraph search. 
In Section IV, we introduce node cover equivalence, an alternate form of equivalence which is easy to calculate, and unify all the notions of equivalence into a hierarchy. In Section V, we adapt the Glasgow solver [45] to incorporate equivalences and apply it to a set of benchmarks to assess the performance of each of the equivalence levels1. In Section VI, we demonstrate how to succinctly represent and visualize large classes of solutions by incorporating equivalence. In Section VII, we extend our algorithm to be able to handle multiplex multigraphs and show our algorithm's success in fully mapping out the solution space on a variety of these more structured networks. Footnote 1: Our implementation of our algorithms can be found at the following repository: [https://github.com/domyang/glasgow-subgraph-solver](https://github.com/domyang/glasgow-subgraph-solver). This paper takes inspiration for the general subgraph tree search structure from [51] and shares many of the same test cases on multichannel networks. In our previous work [42], we introduced a simpler notion of candidate equivalence and candidate structure, and tested it on a small selection of multichannel networks. From these prior works, we observed the high combinatorial complexity of the solution spaces, necessitating an approach which can exploit symmetry to compress the solution space and accelerate search. In this work, we expand on both papers by introducing several new notions of equivalence, providing a rigorous foundation for their efficacy in subgraph search, establishing a compact representation of the solution space, and empirically assessing these methods on a broad collection of both real and synthetic data sets. ## II Structural Equivalence **Structural equivalence** is an easily understood property of networks which, if present, can be exploited to greatly speed up subgraph search. Intuitively, two vertices are structurally equivalent to each other if they can be "swapped" without changing the graph structure. This type of equivalence often occurs in leaves that are both adjacent to the same vertex. **Definition 3**.: _In a graph \(G=(V,E)\), we say that two vertices \(v,w\) are **structurally equivalent** (denoted \(v\sim_{s}w\)) if:_ 1. _for all_ \(u\in V\), \(u\neq v,w\): \((u,v)\in E\Leftrightarrow(u,w)\in E\) _and_ \((v,u)\in E\Leftrightarrow(w,u)\in E\); 2. \((v,w)\in E\Leftrightarrow(w,v)\in E\). This definition implies that the neighbors of structurally equivalent vertices (not including the vertices themselves) must coincide. The following proposition verifies that this is an equivalence relation. **Proposition 4**.: \(\sim_{s}\) _is an equivalence relation._ Proof.: All proofs for propositions stated in this paper are provided in the appendix. Using this relation, we can partition the vertices of any graph into structural equivalence classes, and interchange members of each class without changing the essential structure of the graph. Checking for equivalence between two vertices simply amounts to comparing neighbors, an \(O(|V|)\) operation in the worst case, but is generally faster for sparse graphs. The classes themselves can be computed by pairwise comparison of vertices, resulting in \(O(|V|^{3})\) operations in the worst case. Algorithm 1 demonstrates how one could implement a breadth first search algorithm to take advantage of the sparsity of a graph to accelerate the computation.
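The definition can also be applied as a direct pairwise check, the cubic-time approach noted above; Algorithm 1 accelerates this in practice by only comparing vertices that share a neighbor. The sketch below is illustrative only; the adjacency representation and names are assumptions rather than the solver's implementation.

```python
# Direct pairwise computation of structural equivalence classes (Definition 3).
# Adjacency is given as dicts of sets; in_adj can simply equal out_adj for an
# undirected graph stored symmetrically.  Names are illustrative.
def structurally_equivalent(out_adj, in_adj, v, w):
    same_out = (out_adj[v] - {w}) == (out_adj[w] - {v})
    same_in = (in_adj[v] - {w}) == (in_adj[w] - {v})
    symmetric_link = (w in out_adj[v]) == (v in out_adj[w])
    return same_out and same_in and symmetric_link

def equivalence_classes(vertices, out_adj, in_adj):
    classes = []
    for v in vertices:
        for cls in classes:
            # Since ~_s is an equivalence relation (Proposition 4),
            # comparing against one representative of each class suffices.
            if structurally_equivalent(out_adj, in_adj, v, cls[0]):
                cls.append(v)
                break
        else:
            classes.append([v])
    return classes

# Two leaves hanging off vertex 0 are structurally equivalent.
out_adj = {0: {1, 2}, 1: {0}, 2: {0}}
print(equivalence_classes([0, 1, 2], out_adj, out_adj))  # [[0], [1, 2]]
```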
For each vertex \(v\) visited, it takes \(O(deg(v)^{2})\) to partition the neighbors of \(v\) into equivalence classes, and so in the worst case the algorithm takes \(O(\sum_{v}deg(v)^{2})\approx O(|V|\overline{deg(v)^{2}})\) where \(\overline{deg(v)^{2}}\) denotes the average of \(deg(v)^{2}\) over all \(v\). For sparse graphs, \(deg(v)\ll|V|\), and so this will be significantly faster than a naive pairwise check. ``` 1:function FindEqClasses(\(G=(V,E)\)) 2: Let \(Q\) be a queue 3: Pick first vertex \(v\) to put in \(Q\) 4: Let \(EQ=\{\}\) 5: Let visited = \(\{\}\) 6:while \(Q\) not empty do 7: Dequeue \(v\) from \(Q\) 8: Add \(v\) to visited 9: Partition \(N(v)\) into equivalence classes and add them to \(EQ\) 10: Add representatives from neighbor classes to \(Q\) 11: Check if first vertex \(v\) is in any class and add it if so 12: Else add it to its own class 13:return \(EQ\) ``` **Algorithm 1** Routine for computing equivalence classes ### _Interchangeability and Isomorphism Counting_ We now show that given any subgraph isomorphism, we can interchange any two structurally equivalent vertices in the template graph and still retain a subgraph isomorphism. Before we do this, we formally define what we mean by interchangeability. **Definition 5**.: _Two template graph vertices \(v,w\in V_{T}\) are **interchangeable** if for all subgraph isomorphisms \(f:V_{T}\to V_{W}\), the mapping \(g\) given by interchanging \(v\) and \(w\):_ \[g(u)=\begin{cases}f(w)&u=v\\ f(v)&u=w\\ f(u)&\text{otherwise}\end{cases}\] _is also a subgraph isomorphism._ _Two world graph vertices \(v^{\prime},w^{\prime}\in V_{W}\) are **interchangeable** if for all subgraph isomorphisms \(f\), if both \(v^{\prime},w^{\prime}\) are in the image of \(f\) with preimages \(v,w\), the mapping \(g\):_ \[g(u)=\begin{cases}w^{\prime}&u=v\\ v^{\prime}&u=w\\ f(u)&\text{otherwise}\end{cases}\] _is an isomorphism. If only one, say \(v^{\prime}\), is in the image, then \(h\) given by_ \[h(u)=\begin{cases}w^{\prime}&u=v\\ f(u)&\text{otherwise}\end{cases}\]
Suppose we partition the template graph \(V_{T}=\bigcup_{i=1}^{n}C_{i}\) and the world graph \(V_{W}=\bigcup_{j=1}^{m}D_{j}\) into structural equivalence classes. Let \(C_{i,j}=C_{i}\cap f^{-1}(D_{j})\) represent the set of template vertices in \(C_{i}\) that map to world vertices in the equivalence class \(D_{j}\). Then there are_ \[\prod_{i=1}^{n}|C_{i}|!\prod_{j=1}^{m}\prod_{i=1}^{n}\binom{|D_{j}|-\sum_{k=1} ^{i-1}|C_{k,j}|}{|C_{i,j}|}\] _isomorphisms generated by interchanging equivalent template vertices or world vertices using Propositions 6 and 8._ ### _Application to Tree Search_ We now demonstrate how to adapt any tree-search algorithm to incorporate equivalence. A tree-search algorithm proceeds by constructing a partial matching of template vertices to world vertices, at each step extending the matching by assigning the next template vertex to one of its candidate world vertices. If at any point, the match cannot be extended (due to a contradiction or finding a complete matching), the last assigned template vertex is reassigned to the next candidate vertex. Each possible assignment of template vertex to world vertex corresponds to a node in the tree, and a path from the root of the tree to a leaf corresponds to a full mapping of vertices. The tree search is a recursive routine described by Algorithm 2. In this procedure, we maintain a binary \(\left|V(T)\right|\times\left|V(W)\right|\) matrix \(cands\) where \(cands[i,j]\) is 1 if world vertex \(j\) is a candidate for template vertex \(i\) and 0 otherwise and a mapping from template vertices to world vertices describing which vertices have already been matched. In lines 2-4, we report a match after having matched all vertices. The call to ApplyFilters in line 5 eliminates candidates based on the assumptions made so far in the partial match. In line 7, we save the current state to return to after backtracking, and in lines 11-14, we iterate through all candidates for the current vertex attempting a match until we have exhausted them all. Then in line 18, we restore the prior state and backtrack. ``` 1:functionSolve(partial_match, cands) 2:if MatchComplete(partial_match)then 3: ReportMatch(partial_match) 4:return 5: ApplyFilters(partial_match, cands) 6: Let \(u\) = GetNextTemplateVertex() 7: Let cands_copy = cands.copy() 8:if Using World Equivalence then 9: RecomputeEquivalence(partial_match, cands) 10: Let ws = GenerateWorldVertices(cands, eq) 11:for\(v\) in ws do 12: partial_match.match(\(u,v\)) 13: Solve(partial_match, cands_copy) 14: partial_match.unmatch(\(u,v\)) 15:if Using Template Equivalence then 16:for unmatched \(u^{\prime}\sim u\)do 17: Set cands[\(u^{\prime},v\)] = 0 18: Let cands = cands_copy 19:if Using World Equivalence then 20: RestoreEquivalence() 21:return ``` **Algorithm 2** Generic routine for a tree search Template equivalence can significantly accelerate the tree search; from Proposition 6 we can swap the assignments of equivalent template vertices to find another isomorphism. If we have a partial match, template vertices \(u_{1}\sim u_{2}\), and we have just considered candidate \(w\) for \(u_{1}\), we can ignore branches where \(u_{2}\) is mapped to \(w\) since we can generate those isomorphisms by taking one where \(u_{1}\) is mapped to \(w\) and swapping. Lines 16 and 17 demonstrate how we can incorporate this idea into a tree search (without these, we would have a standard tree search). 
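The count in Proposition 9 can be evaluated directly from the structural equivalence classes and a single representative solution. The sketch below is a minimal illustration in Python; the input encoding and names are assumptions, not part of our solver.

```python
# Evaluate the count from Proposition 9 for one representative solution.
from math import comb, factorial

def orbit_size(template_classes, world_classes, match):
    """template_classes: the classes C_i (sets of template vertices);
    world_classes: the classes D_j (sets of world vertices);
    match: dict mapping each template vertex to its world vertex."""
    total = 1
    for C in template_classes:
        total *= factorial(len(C))                      # prod_i |C_i|!
    for D in world_classes:
        used = 0                                        # accumulates sum_{k<i} |C_{k,j}|
        for C in template_classes:
            c_ij = sum(1 for t in C if match[t] in D)   # |C_{i,j}|
            total *= comb(len(D) - used, c_ij)
            used += c_ij
    return total

# Toy check: two interchangeable template vertices whose images lie in a class
# of four interchangeable world vertices give 2! * C(4, 2) = 12 solutions.
# Only the nontrivial classes are passed here; the omitted vertices sit in
# singleton classes (and image-free world classes) that contribute factors of 1.
print(orbit_size([{"B", "C"}], [{2, 3, 4, 5}], {"B": 2, "C": 3}))  # 12
```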
To incorporate world equivalence into the search, we modify the search so that we only assign any template vertex to one representative of an equivalence class in the search. This can be done by modifying GenerateWorldVertices to pick only one representative vertex of each equivalence class out of the candidates for the current template vertex. Note that after performing the tree search, the solutions found will represent classes of solutions that can be generated by swapping. Some bookkeeping is needed to determine what assignments can be swapped to count the number of distinct solutions. We call the solutions that are actually found (and are not produced by interchanging vertices) **representative solutions**. The set of solutions that can be generated by interchanging equivalent vertices for a given representative solution is a **solution class**. The ability to represent large solution classes with a sparse set of solutions is what allows us to compactly describe massive solution spaces. ## III Candidate Equivalence The equivalence discussed in the prior section is a static form of equivalence, only taking into account information provided at the start of the subgraph search. However, as a subgraph search proceeds, we may be able to discard additional nodes and edges based on information derived from the assignments already made. For example, in Figure 2, after assigning A to 1, we may discard nodes 6 and 7 and edge (4, 5), as it is impossible for them to be included in a match if A and 1 are matched. After these nodes are discarded, we discover that with respect to the matches already made, 2, 3, 4, and 5 are effectively interchangeable. In order to make use of this dynamic form of equivalence, we need to introduce an auxiliary structure that takes into account our knowledge of the candidates of each vertex \(u\) (denoted \(C[u]\)). **Definition 10**.: _Given template graph \(G_{T}=(V_{T},E_{T})\), world graph \(G_{W}=(V_{W},E_{W})\), and candidate sets \(C[u]\subset V_{W}\) for each \(u\in V_{T}\), the **candidate structure** is the directed graph \(G_{C}=(V_{C},E_{C})\) where the vertices \(V_{C}=\{(u,c):u\in V_{T},c\in C[u]\}\) are template vertex-candidate pairs and \(((u_{1},c_{1}),(u_{2},c_{2}))\in E_{C}\) if and only if \((u_{1},u_{2})\in E_{T}\) and \((c_{1},c_{2})\in E_{W}\)._ The candidate structure represents both the knowledge of candidates for each template node and how template adjacency interplays with world adjacency. It removes extraneous information to expose equivalences not apparent when looking at the original graphs. We note that this data structure is similar to the compact path index (CPI) introduced in [27]. However, the CPI is only defined for a given rooted spanning tree of the template graph whereas our candidate structure takes into consideration all edges of the template graph. For our toy example in Figure 2, suppose the candidate sets are reduced to the minimal candidate sets, so that \(C[A]=\{1,4\}\) and \(C[B]=\{2,3,4,5,6,7\}\). Then the candidate structure for these graphs is as shown on the left in Figure 3. At this point, there is no apparent equivalence to exploit from the candidate structure. However, once we decide to map node A to 1, the candidate structure reduces to the right graph in Figure 3. It is visually clear that nodes 2, 3, 4, and 5 are structurally equivalent as candidates of B and C. Similarly, if we assigned A to 4, nodes 5, 6, and 7 would be structurally equivalent in the candidate structure.
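A minimal sketch of constructing the candidate structure of Definition 10 is given below. The edge lists and candidate sets are illustrative stand-ins (not the exact graphs of Figures 2 and 3), and the names are assumptions rather than the solver's data structures.

```python
# Minimal sketch of constructing the candidate structure of Definition 10.
# Graphs are sets of directed edge pairs; all inputs below are illustrative.
def candidate_structure(template_edges, world_edges, candidates):
    """candidates: dict template vertex -> set of candidate world vertices.
    Returns (V_C, E_C): vertices are (template vertex, candidate) pairs, and
    ((u1, c1), (u2, c2)) is an edge iff (u1, u2) is a template edge and
    (c1, c2) is a world edge."""
    V_C = {(u, c) for u, cs in candidates.items() for c in cs}
    E_C = {((u1, c1), (u2, c2))
           for (u1, u2) in template_edges
           for c1 in candidates.get(u1, ())
           for c2 in candidates.get(u2, ())
           if (c1, c2) in world_edges}
    return V_C, E_C

# Small example: template vertex A adjacent to B and C.
template_edges = {("A", "B"), ("A", "C")}
world_edges = {(1, 2), (1, 3), (1, 4), (1, 5), (4, 5), (4, 6), (4, 7)}
candidates = {"A": {1, 4}, "B": {2, 3, 4, 5, 6, 7}, "C": {2, 3, 4, 5, 6, 7}}
V_C, E_C = candidate_structure(template_edges, world_edges, candidates)
print(len(V_C), len(E_C))
```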
We want to determine under which circumstances this will ensure interchangeability. We introduce the following definition: **Definition 11**.: _Given a candidate structure \(G_{C}=(V_{C},E_{C})\), \(c_{1},c_{2}\in V_{W}\), we say that \(c_{1}\) is **candidate equivalent** to \(c_{2}\) with respect to \(u\in V_{T}\), denoted \(c_{1}\sim_{c,u}c_{2}\), if and only if \(c_{1},c_{2}\notin C[u]\) or \(c_{1},c_{2}\in C[u]\) and \((c_{1},u)\sim_{s}(c_{2},u)\)._ It is easy to show that if the candidate sets are _complete_ (for each template node \(u\), if there is a matching which maps \(u\) to world node \(v\), then \(v\in C[u]\)), then if \(c_{1}\sim_{s}c_{2}\), then \(c_{1}\sim_{c,u}c_{2}\) for all template vertices \(u\). The exact criterion for interchangeability is a little more complicated. For example, in Figure 2, 4 appears as a candidate both for A and for B and C, so we cannot simply swap 4 with nodes that are candidate equivalent to 4 with respect to B. To address a more complex notion of interchangeability, we introduce some terms. We say that a subgraph isomorphism \(f\) is derived from a candidate structure \(G_{C}\) if for any \(v\in V_{T}\), \((v,f(v))\in V_{C}\) (i.e., \(f(v)\) is a candidate of \(v\)). We say that world vertices \(w_{1},w_{2}\) are \(G_{C}\)**-interchangeable** if for all isomorphisms derived from the candidate structure \(G_{C}\), \(w_{1}\) and \(w_{2}\) can be interchanged and preserve isomorphism. A simple criterion for interchangeability is provided in the following proposition: **Proposition 12**.: _Suppose that given a specific candidate structure \(G_{C}=(V_{C},E_{C})\), we have that \(c_{1},c_{2}\in C[u]\) and \(c_{1}\sim_{c,u}c_{2}\) for some template vertex \(u\). Suppose that \(c_{1}\) and \(c_{2}\) are not candidates for any other vertex. Then \(c_{1}\) and \(c_{2}\) are \(G_{C}\)-interchangeable._ This proposition suggests a simple method for exploiting candidate equivalence. In our tree search, when we generate candidate vertices for a given vertex \(u\), we find representatives, for each candidate equivalence class, that do not appear as candidates for other vertices. If a class has a vertex appearing in other candidate sets, then we cannot exploit equivalence and must check each member of the class. Furthermore, as we continue to make matches and eliminate candidates, more world vertices will become equivalent, so it is advantageous to recompute equivalence before every match, as is done in line 9 of Algorithm 2. Upon unmatching, we need to restore the prior equivalence (line 20). If we have that \(f(v)=c_{1}\) and \(f(w)=c_{2}\), and we want to swap \(c_{1}\) for \(c_{2}\), we need a stronger condition; namely, we need that they are equivalent with respect to both \(v\) and \(w\). In the process of a tree search, we do not know exactly what each vertex will be mapped to, so instead we consider an even stronger condition: **Definition 13**.: _Given a candidate structure \(G_{C}=(V_{C},E_{C})\), we say that \(c_{1}\in V_{W}\) is **fully candidate equivalent** to \(c_{2}\in V_{W}\), denoted \(c_{1}\sim_{c}c_{2}\), if for all \(u\in V_{T}\), \(c_{1}\sim_{c,u}c_{2}\)._ Note that if \(c_{1}\sim_{c,u}c_{2}\) for some \(u\), and \(c_{1},c_{2}\) are not candidates for any other vertices, then \(c_{1}\sim_{c}c_{2}\). Fig. 3: Candidate structure for the graphs in Figure 2 before and after assigning template node A to world node 1. This condition enables us to interchange world vertices and still maintain the subgraph isomorphism conditions.
This is established by the following proposition: **Proposition 14**.: _Suppose that given a specific candidate structure \(G_{C}=(V_{C},E_{C})\), we have that \(c_{1},c_{2}\in V_{W}\) and \(c_{1}\sim_{c}c_{2}\). Then, \(c_{1}\) and \(c_{2}\) are \(G_{C}\)-interchangeable._ ## IV Node Cover Equivalence An alternate notion of equivalence, introduced in [51], involves the use of a node cover. A **node cover** is a subset of nodes whose removal, along with incident edges, results in a completely disconnected graph. The approach in [51] is to build up a partial match of all the vertices in the node cover followed by assigning all the nodes outside the node cover. After reducing the candidate sets of all the nodes outside the cover to those that have enough connections to the nodes in the cover, what remains is to ensure that they are all different. We formalize this with some definitions. A **partial match** is a subgraph isomorphism from a subgraph of the template graph to the world graph. We list out the mapping as a list of ordered pairs \(M=\{(v_{1},w_{1}),\ldots,(v_{n},w_{n})\}\). A template vertex-candidate pair \((v,c)\) is **joinable** to a partial match \(M\) if for each \((v_{i},w_{i})\in M\), if \((v_{i},v)\in E_{T}\), then \((w_{i},c)\in E_{W}\), and if \((v,v_{i})\in E_{T}\), then \((c,w_{i})\in E_{W}\). If two world vertices \(w_{1},w_{2}\) are interchangeable in any subgraph isomorphism extending a partial match \(M\), we say that \(w_{1}\) and \(w_{2}\) are \(M\)**-interchangeable**. Since the problem is significantly simpler, it is easier to obtain a form of equivalence on the vertices. **Definition 15**.: _Let \(M\) be a partial match on a node cover \(N\) of \(V_{T}\) and suppose that for all \(u\in V_{T}\setminus N\), the candidate set \(C[u]\) is comprised entirely of all world vertices joinable to \(M\). Two world vertices \(w_{1},w_{2}\) are **node cover equivalent** with respect to \(M\), denoted \(w_{1}\sim_{N,M}w_{2}\), if for all \(u\in V_{T}\setminus N\), \(w_{1}\in C[u]\) if and only if \(w_{2}\in C[u]\)._ For example, consider the template and world in Figure 4. Once nodes B and D in the node cover are mapped to 2 and 5, the remaining nodes have candidates that have the associated color in the world graph. We then simply group each of these candidates together into equivalence classes. Note that the edges depicted in red are what prevent structural equivalence, and the node cover approach effectively ignores these edges to expose the equivalence of these vertices. **Proposition 16**.: _Suppose we have a node cover of the template graph \(N\), and a partial matching \(M\) on \(N\), and two world vertices \(w_{1},w_{2}\) not already matched satisfy \(w_{1}\sim_{N,M}w_{2}\). Then \(w_{1}\) and \(w_{2}\) are \(M\)-interchangeable._ Node cover equivalence is easy to check and captures a significant portion of the equivalence exposed by the other methods. This is often due to interchangeable nodes being composed of sibling leaves which are generally outside of a node cover. We note that the methods discussed in the paper (with the exception of basic structural equivalence for template and world as in Proposition 9) cannot incorporate both template and world equivalence. The combination of allowing template and world node interchanges and having dynamic world equivalence classes significantly complicates the counting process.
One approach which can facilitate the use of both forms of equivalence involves a tree search where entire template equivalence classes are assigned at once instead of individual template nodes. This kind of approach for assigning the remaining nodes outside of the node cover is discussed in the Appendix Section D. ### _Equivalence Hierarchy_ There is a relation between node cover equivalence and full candidate equivalence, given in the following proposition: **Proposition 17**.: _Suppose that \(N\) is a node cover of \(V_{T}\), \(M\) is a partial match on \(N\), candidate sets are reduced to joinable vertices, and \(w_{1},w_{2}\in V_{T}\setminus N\). Then \(w_{1}\sim_{N,M}w_{2}\Leftrightarrow w_{1}\sim_{c}w_{2}\)._ Thus, until we have assigned a node cover, we can use candidate equivalence to prevent redundant branching; once we have matched all nodes in the node cover, we can check for node cover equivalence: a simpler condition. The agreement of node cover equivalence and fully candidate equivalence is apparent in the candidate structure presented on the right of Figure 4. From the candidate structure, the yellow nodes and the green nodes are fully candidate equivalent as they have the same neighbors, and that they are node cover equivalent as they only appear as candidates for the corresponding yellow and green nodes in the template. The various notions of equivalence form a hierarchy. Structural equivalence of world nodes has the strictest requirements and implies all other forms of equivalence. Proposition 17 asserts that under mild conditions, full candidate equivalence and node cover equivalence are one and the same. We can also include candidate equivalence with respect to a template vertex as a weaker condition implied by full candidate equivalence that does not guarantee interchangeability. The following proposition summarizes these findings: **Proposition 18**.: _Suppose the assumptions of Proposition 17 hold. Given template vertex \(t\), world vertices \(w_{1},w_{2}\), we have \(w_{1}\sim_{s}w_{2}\Rightarrow w_{1}\sim_{N,M}w_{2}\Leftrightarrow w_{1}\sim_{c}w_ {2}\Rightarrow w_{1}\sim_{c,t}w_{2}\). Under the first three equivalences, \(w_{1}\) and \(w_{2}\) are interchangeable._ Fig. 4: In order from left to right: template, world, and possible candidate structure. The boxed vertices comprise a node cover of the template and the image of the node cover in the world. Nodes of the same color in the world are node cover equivalent. The red edges are extraneous edges which once removed, expose equivalence. From this proposition, we observe that full candidate equivalence and node cover equivalence provide the most compact solution space as they require the weakest conditions while still guaranteeing interchangeability. However, the cost in determining full candidate equivalence may be prove excessive compared to simpler types of equivalence. We address these trade-offs on real and simulated data in Section V. ## V Experiments To demonstrate the utility of equivalence for the SMP, we adapt a state-of-the-art tree search subgraph isomorphism solver, Glasgow [45], using the modifications described in Algorithm 2. We consider seven levels of equivalence: no equivalence (NE) (default), template structural equivalence (TE), world structural equivalence (WE), template and world structural equivalence (TEWE), candidate equivalence as in Proposition 12 (CE), full candidate equivalence as in Proposition 14 (FE), and node cover equivalence (NC). 
Each equivalence mode is integrated into the Glasgow solver separately. For each test class involving template equivalence (TE, TEWE), we compute the template structural equivalence classes, and for each involving world equivalence (WE, TEWE, CE, FE, NC), we compute world structural equivalence classes at the start using Algorithm 1. Then we make the modifications for template and world equivalence as in Algorithm 2. For algorithms requiring recomputation of the equivalence classes at each node of the tree search, for speed purposes, we only recompute equivalence for nodes that appear as candidates for the current template node under consideration. We check equivalence between each pair of candidates using the definitions directly. We consider single-channel networks from the benchmark suite in [43]. Basic parameters for the datasets are listed in Table I. _SF_ is composed of 100 instances that are randomly generated using a power law and are designed to be scale-free networks. _LV_ is a diverse collection of randomly generated graphs satisfying various properties (connected, biconnected, triconnected, bipartite, planar, etc.). _SI_ is a collection of randomly generated instances falling into four categories: bounded valence, modified bounded valence, 4D meshes, and Erdos-Renyi graphs. The _images-cv_, _meshes-cv_, and _images-pr_ data [12, 26] sets are real instances representing segmented images and meshes of 3D objects drawn from the pattern recognition literature. The _biochemical_ dataset [21] contains matching problems taken from real systems of biochemical reactions. The _phase_ dataset [36] is comprised of randomly generated Erdos-Renyi graphs with parameters known to be very difficult for state-of-the-art solvers. We also include a problem set where the template graph is a small Erdos-Renyi graph and the world graph is composed of the webpages on the Notre Dame university website with directed edges representing links between pages [5]. In these instances, the template is randomly generated with \(n_{t}\) nodes and \(e_{t}\) edges where \(5\leq n_{t}\leq 15\) and \(n_{t}\leq e_{t}\leq 3n_{t}\). The world graph is fairly sparse and has 325,729 vertices and 1,497,135 edges. We refer to this problem set as the \(www\) dataset. We collected 50 template graphs for each value of \(n_{t}\) for a total of 550 templates. For each instance, we run the algorithm for each equivalence level with the solver configured to count all solutions. We record the number of representative solutions found, the total number of solutions that can be generated by interchanging, as well as the total run time for the instance. We terminate each run if the search is not completed after 600 seconds and record the statistics for the incomplete run. For each run, we measure the compression rate which is the number of representative solutions divided by the total number of solutions found. This quantity indicates the factor by which the form of equivalence chosen decreases the size of the solution space. All experiments were performed on an Intel Xeon Gold 6136 processor with 3 GHz, 25 MB of cache, and 125 GB of memory. In Table II, we record the proportion of satisfiable problems for which the solution space is fully enumerated by each algorithm within 600 seconds. For the _biochemical_, _LV_, and _si_ datasets, there is an increase of 5-10% of all problems fully solved when using any form of equivalence. We further note that full equivalence always has the best performance followed by node cover equivalence. 
This is not surprising given that Proposition 17 states that full equivalence is the most expansive form of equivalence. Figure 5 portrays how many instances are solved by a given time and demonstrates that full equivalence performs best in solution space enumeration for satisfiable problems, followed by node cover and the other forms of equivalence. The plot for unsatisfiable problems demonstrates drawbacks to using full candidate equivalence; the additional computation time to check equivalence is not needed if there are no solutions. Checking equivalence in the TE, WE, and NC routines is a very cheap operation, so there is little difference in the number of unsatisfiable problems solved compared to the NE routine. Figure 6 demonstrates the variation among and within the data sets by comparing run times for the NE and FE routines. We observe that it is primarily for the _biochemical_, _SI_, and _LV_ datasets that the full equivalence routine vastly outperforms the no equivalence routine, often by several orders of magnitude. Fig. 5: Number of satisfiable (top) and unsatisfiable (bottom) instances solved after a given amount of time. This is aggregated over all single channel benchmark data sets. For satisfiable instances, “solved” means having fully enumerated the solution space. TABLE I: Basic parameters of the benchmark data sets. \begin{tabular}{l|c|c c c c c c|c c c c c c} \hline \hline \multirow{3}{*}{Dataset} & \multirow{3}{*}{\# Instances} & \multicolumn{6}{c}{Template} & \multicolumn{6}{c}{World} \\ & & \multicolumn{2}{c}{\# Nodes} & \multicolumn{2}{c}{\# Edges} & \multicolumn{2}{c}{Density} & \multicolumn{2}{c}{\# Nodes} & \multicolumn{2}{c}{\# Edges} & \multicolumn{2}{c}{Density} \\ & & Min & Max & Min & Max & Min & Max & Min & Max & Min & Max & Min & Max \\ \hline SF & 100 & 180 & 900 & 478 & 5978 & 0.006 & 0.165 & 200 & 1000 & 592 & 7148 & 0.006 & 0.159 \\ LV & 6105 & 10 & 6671 & 10 & 209000 & 0.001 & 1.000 & 10 & 6671 & 10 & 209000 & 0.001 & 1.000 \\ SI & 1170 & 40 & 777 & 41 & 12410 & 0.005 & 0.209 & 200 & 1296 & 299 & 34210 & 0.004 & 0.191 \\ images-cv & 6278 & 15 & 151 & 20 & 215 & 0.019 & 0.190 & 1072 & 5972 & 1540 & 8891 & 4.89e-4 & 0.003 \\ meshes-cv & 3018 & 40 & 199 & 114 & 539 & 0.022 & 0.146 & 201 & 5873 & 252 & 15292 & 4.40e-4 & 0.022 \\ images-pr & 24 & 4 & 170 & 4 & 241 & 0.017 & 0.667 & 4838 & 4838 & 7067 & 7067 & 0.001 & 0.001 \\ biochemical & 9180 & 9 & 386 & 8 & 886 & 0.012 & 0.423 & 9 & 386 & 8 & 886 & 0.012 & 0.423 \\ phase & 200 & 30 & 30 & 128 & 387 & 0.294 & 0.890 & 150 & 150 & 4132 & 8740 & 0.370 & 0.782 \\ www & 3850 & 5 & 15 & 5 & 45 & 0.071 & 0.750 & 325729 & 325729 & 1497135 & 1497135 & 1.41e-5 & 1.41e-5 \\ \hline \hline \end{tabular} Fig. 6: Comparison of individual run times for full enumeration between no equivalence and full equivalence runs for satisfiable problems (left) and unsatisfiable problems (right). Note the _phase_, _www_, and _meshes-cv_ problems do not terminate for any instance and take the full 600 second runtime, so they can be difficult to discern as each occupies the same spot in the upper right corner of the graphs. Fig. 7: Comparison of isomorphism counts for full enumeration between no equivalence and full equivalence runs for problems with small (\(<10^{9}\)) numbers of isomorphisms (left) and problems with large (\(\geq 10^{9}\)) numbers of isomorphisms (right). Take note of the scales chosen for each graph. For 110 instances the solver with full equivalence found greater than \(10^{40}\) isomorphisms (the largest had \(\approx 10^{384}\) isomorphisms), and they are not shown on these graphs.
On the other hand, the _images-cv_, _meshes-cv_, and _images-pr_ datasets are more challenging, especially for unsatisfiable problems. This may be due to wasted equivalence checks as there is no solution space to be compressed. Figure 7 includes two plots to illustrate variation in solution counts when using the NE and FE routines. Each has different limits on the axes to emphasize different aspects. The left demonstrates that for problems with fewer than \(10^{9}\) isomorphisms, 10 minutes is often enough time to fully enumerate the solution space without using equivalence. The number \(10^{9}\) functions as an approximate upper bound for the number of solutions found in 10 minutes for any problem using the original Glasgow solver. We note that for the _www_ dataset, the base Glasgow solver finds at most approximately \(10^{6}\) solutions, a significantly smaller number. This happens because a significant portion of the time is taken to load in the large world graph. The right plot shows problems for which the FE routine finds many orders of magnitude more solutions. In these highly symmetric problems, an equivalence-based approach is essential to fully understand the solution space. We observe that for the _biochemical_, _SI_, _LV_, and _www_ datasets, we find many orders of magnitude more solutions when using full equivalence. The largest disparity found between FE and NE solution counts is not displayed - FE found about \(10^{384}\) solutions and NE found roughly \(10^{9}\) solutions: a difference of 375 orders of magnitude. Figure 8 demonstrates the different average compression rates across each dataset and equivalence level. As expected, FE, NC, and CE perform the best in terms of compression, followed by TEWE and then, depending on the dataset, either TE or WE. For the _biochemical_, _LV_, _meshes-cv_, and _www_ datasets, we observe that, on average, the solution space is compressed in size by an order of magnitude or more when using the CE, FE, or NC methods. On specific particularly symmetric problems from these datasets, we find the solution space can be compressed by tens or even hundreds of orders of magnitude when using FE or NC. For these cases, it is necessary to incorporate equivalence to come close to understanding the set of solutions to the subgraph isomorphism problem. ## VI Compact Solution Representation As noted in prior sections, the solution space for subgraph matching is often combinatorially complex. However, the notion of structural equivalence provides methods to diagram the solution space in a compact visual way that a user can understand. We explain this first for the toy example from Figure 2. We begin with the base representation of one solution, \((A\to 1,B\to 2,C\to 3)\), which pairs template vertices with world vertices, and extend it to incorporate equivalence. If we use template structural equivalence, equivalent template nodes are interchangeable. We indicate this by pairing template equivalence classes with world vertices; producing a solution amounts to picking a unique representative from each class. In our example, as B and C are equivalent, we write \((A\to 1,\{B,C\}\to 2,\{B,C\}\to 3)\). World equivalence is represented similarly: the representation \((A\to 1,B\to\{2,3\},C\to 4)\) indicates \(B\) can match with 2 or 3. Table III illustrates the different numbers of representative solutions for each different equivalence level for the toy problem in Figure 2. The candidate and node cover equivalence numbers are computed assuming \(A\) is assigned first.
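Expanding such a compressed representation back into explicit matches is straightforward for the structural equivalence representations above. The sketch below is illustrative only (names are assumptions); the additional bookkeeping required for candidate and node cover classes, where class membership can change during the search, is not shown.

```python
# Expand a compressed solution (template vertex -> set of interchangeable
# world vertices) into explicit matches by choosing distinct world vertices.
def expand_solution_class(compressed):
    order = list(compressed)
    def backtrack(i, used, partial):
        if i == len(order):
            yield dict(partial)
            return
        for w in sorted(compressed[order[i]]):
            if w not in used:
                yield from backtrack(i + 1, used | {w}, partial + [(order[i], w)])
    yield from backtrack(0, frozenset(), [])

# (A -> 1, B -> {2, 3}, C -> 4) expands to its two explicit solutions.
for sol in expand_solution_class({"A": {1}, "B": {2, 3}, "C": {4}}):
    print(sol)
```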
As the full candidate and node cover equivalence levels have the broadest notion of equivalence, they have the most compression. If we use candidate or node cover equivalence, equivalence classes are recomputed before each assignment. Hence, it may be the case that a template node is paired with a world equivalence class that has been previously assigned but has grown in size due to recomputing equivalence. For example, if we first assign template node B to the equivalence class \(\{2,3\}\), we are forced to assign \(A\) to \(1\). Finally, we recompute equivalence, and we find that \(\{2,3,4,5\}\) forms an equivalence class, to which we assign our last template node \(C\). We therefore have solution class \((B\rightarrow\{2,3\},A\to 1,C\rightarrow\{2,3,4,5\})\). We diagram this class in Figure 9, where we color each template node and its associated candidates the same color. The subgraph of all nodes and edges that participate in the representative solution is the **solution-induced world subgraph**. The final graph depicts the **compressed solution-induced world subgraph**, where we drop all nonparticipant nodes and edges, and we combine like-colored nodes into "supernodes" with a label indicating the number of nodes joined. From this last graph, we can observe the original template graph structure among the participant world nodes. Fig. 8: The average compression rate for each dataset and equivalence type. Fig. 9: The template (left) and world (center) from Figure 2 recolored to represent solution \((B\to\{2,3\},A\to 1,C\to\{2,3,4,5\})\). Each world node is colored with the same color as template nodes which can map to it, or gray if no node maps to it. The right graph compresses the world graph by dropping nonparticipant nodes and combining nodes of the same color into a node with a label indicating the amount combined. We use these graphical representations to depict various symmetric features of our datasets. As an example, we plot the template and world subgraph for an example from the _biochemical_ dataset in Figure 10. From this depiction, we observe that there are multiple different sources from which equivalence may arise. One aspect is the large number of pairs of structurally equivalent nodes that are colored the same in the template graph. A second source is leaf nodes on the template graph that can be mapped to large equivalence classes in the world graph. By using an equivalence-informed subgraph search, we can expose exactly where these complexities arise. The compressed solution-induced world graph is depicted in Figure 11 and clearly shows the role each world node plays with respect to the template graph in a solution. ## VII Application to Multiplex Networks ### _Multiplex MultiGraph Matching_ Often analysts wish to encode attributed information into the nodes and edges of a graph and allow for more than one interaction to occur between nodes. For example, a transportation network may have multiple modes of travel between hubs (e.g., trains and subways). Formally, if we have \(K\) distinct edge labels, then a **multiplex multigraph** is a \(K+1\)-tuple \((V,E^{1},E^{2},\ldots,E^{K})\) where \(V\) is the set of vertices, and \(E^{i}:V\times V\rightarrow\mathbb{Z}_{\geq 0}\) is a function dictating how many edges there are of label \(i\) between two vertices. Intuitively, a multiplex multigraph is a collection of \(K\) multigraphs which share the same set of nodes.
The index \(i\) of the edge function is the "channel" \(i\), and we refer to the edges given by edge function \(E^{i}\) as the edges in channel \(i\), and the graph \((V,E^{i})\) as the graph in channel \(i\). A **multiplex subgraph isomorphism**\(f:V_{T}\to V_{W}\) preserves the number of edges in each channel. Given template \(\big{(}V_{T},E^{1}_{T},\ldots,E^{K}_{T}\big{)}\) and world \(\big{(}V_{W},E^{1}_{W},\ldots,E^{K}_{W}\big{)}\), for any \(u,v\in V_{T}\), we require \(E^{i}_{W}(f(u),f(v))\geq E^{i}_{T}(u,v)\), i.e., there need to be enough edges between \(f(u)\) and \(f(v)\) to support the edges between \(u\) and \(v\). Definitions for equivalence also extend naturally: we say \(v\sim_{s}w\) if in each channel \(i\) for each \(u\neq v,w\), \(E^{i}(v,u)=E^{i}(w,u)\), \(E^{i}(u,v)=E^{i}(u,w)\), and \(E^{i}(v,w)=E^{i}(w,v)\). The other forms of equivalence and related theorems all generalize similarly. Recently, significant work has been done on developing algorithms for finding multiplex subgraph isomorphisms. [29] develops an indexing approach based on neighborhood structure in multichannel graphs. [46] extends the single channel package [14] to handle the multichannel case and focuses on using intelligent vertex ordering for finding isomorphisms. [38] utilizes a constraint programming approach for filtering out candidates which is extended in [51]. A similar filtering approach is taken in [41]. [44] relaxes the problem to a continuous optimization problem which is then solved and projected back onto the original space. ### _Multiplex Experiments_ We assess the performance of our equivalence enhancements on the Glasgow solver, adapted to handle multiplex subgraph isomorphism problems. The adaptations involve minimal changes to the base algorithm, to ensure that matches are only made if they preserve the edges in every channel. To eliminate more candidates, we also perform a prefilter using the statistics and topology filters from [38] as well as maintain the subgraphs in each channel as the supplemental graphs used in the Glasgow algorithm. We consider datasets including those from [51] and which represent both real world examples and synthetically generated data. The real world examples include a transportation network in Great Britain [13], an airline network [15], a social network built on interactions on Twitter related to the Higgs Boson [16], and COVID data [52]. For the transportation and twitter networks, the template is extracted from the world graph. The synthetically generated datasets are examples which represent emails, phone calls, financial transactions, among other interactions between individuals and are all generated as part of the DARPA-MAA program [33, 34, 35]. The subgraph isomorphisms to be detected may be a group of actors involved in adversarial activities including human trafficking and money laundering. The statistics regarding these different subgraph isomorphism problems are described in Table IV. For more details on these particular datasets, see [51]. The multiplex datasets are much larger than the single-channel graphs in the previous section, with the largest world graphs having hundreds of thousands of nodes and hundreds of millions of edges. The synthetic datasets are divided into three groups based on which organization generated the dataset: PNNL [34], GORDIAN [35], and IvySys Technologies [33]. 
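The per-channel requirement \(E^{i}_{W}(f(u),f(v))\geq E^{i}_{T}(u,v)\) introduced above can be checked directly for a candidate mapping. The sketch below is illustrative only; the dictionary encoding and names are assumptions, not our solver's representation.

```python
# Check a multiplex subgraph isomorphism: injectivity plus, in every channel,
# enough world edges to support each template edge multiplicity.
def is_multiplex_isomorphism(template, world, f):
    """template/world: dict channel -> {(u, v): edge count}; f: template -> world vertex."""
    if len(set(f.values())) != len(f):
        return False
    for channel, edges in template.items():
        world_edges = world.get(channel, {})
        for (u, v), count in edges.items():
            if world_edges.get((f[u], f[v]), 0) < count:
                return False
    return True

# Toy example with two channels.
template = {"email": {("a", "b"): 2}, "phone": {("b", "a"): 1}}
world = {"email": {(1, 2): 3, (2, 3): 1}, "phone": {(2, 1): 1}}
print(is_multiplex_isomorphism(template, world, {"a": 1, "b": 2}))  # True
print(is_multiplex_isomorphism(template, world, {"a": 2, "b": 3}))  # False: too few email edges
```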
For our experiments, we examine the same seven modes of equivalence used in the single channel case, but with a time limit of one hour to count as many solutions as possible. These experiments were run on the same computer using our adapted version of the Glasgow solver. The amount of time required to enumerate all the solutions is displayed in Table V, and the number of solutions found with a given method is displayed in Table VI. A quick inspection of the times illustrates that in a few cases (Airlines, GORDIAN, and Higgs Twitter), using full equivalence can enumerate the full solution space an order of magnitude faster than any other approach. This speedup is reflected in the solution count table, in which FE finds significantly more solutions; the other methods find only a fraction of the total solutions. The NC method often appears to be the second best, both in terms of solutions found and time taken to enumerate them all. This makes sense given Proposition 18. TE appears to be the third best method, which can be explained by its simplicity of implementation and its having no need to recompute equivalence. WE and CE are not competitive with the other methods. The datasets bear different qualities that illustrate why certain levels of equivalence work better than others. We discuss a few datasets in detail.

Fig. 10: A biochemical reactions [21] template graph (left) and the solution-induced world subgraph (right) for a solution class comprising \(9.18\times 10^{13}\) solutions. Dark gray nodes are nodes with a single candidate. Nodes with the same non-gray color in the world subgraph are fully candidate equivalent. Nodes with two or more colors were part of one class at an early stage of subgraph search which was later merged into another class. All solutions represented by the compressed solution can be generated by mapping template nodes of one color to world nodes with the same color.

Fig. 11: The world graph from Figure 10 with equivalent nodes joined into supernodes, with numbers indicating the size of the class.

#### VII-B1 PNNL

The PNNL template and world graphs [34] are generated to model specific communication, travel, and transaction patterns from real data, and the templates are then embedded into the world graph. For these instances, the counting problem is almost entirely solved after applying the initial filter, and the solution space is understood by equivalence in the template. For example, observe the template displayed in Figure 12. The total number of solutions equals the number of permutations of the template nodes applied to a single representative solution. We have a group of 9, a group of 4, and two groups of 3 interchangeable nodes, meaning any single solution can be permuted into \(9!\,4!\,3!\,3!\) solutions. All variants on the PNNL problems illustrate this behavior.

#### VII-B2 GORDIAN

The GORDIAN datasets [35] have much larger templates and worlds than PNNL, and they are generated separately in an agent-based fashion to match the daily routines and travel patterns of a certain population of people. Only the FE method fully enumerates the solution space, but the NC and CE methods come close to a full enumeration. Figure 13 illustrates the symmetries for one solution class; the template graph possesses a large group of leaf nodes. After mapping the central node to a candidate, the leaves need only be mapped to neighbors of this candidate.
These graphs demonstrate a trade-off between node specificity and symmetry: template nodes with fewer edges exhibit great amounts of symmetry, whereas dense subgraphs are restricted in their candidates and have minimal symmetry. The right graph in Figure 13 depicts a compressed version of the world graph induced by this solution class, from which \(3\times 10^{12}\) solutions may be generated.

#### VII-B3 IvySys

The IvySys template and world graphs [33] are separately generated to match the degree distribution and email behavior of the Enron email dataset and have the most complex solution space. None of the methods were successful at enumerating all solutions. The vastness of the solution space is in contrast to the size of the graphs, which only have thousands of nodes. The complexity emerges from the preponderance of template leaf nodes, as shown in Figure 14, which depicts one solution class from which \(7.82\times 10^{103}\) solutions may be generated. Figure 15 depicts the compressed representation of the world subgraph for this solution as well as a Venn diagram displaying candidates of certain template nodes. The TE solver finds an astonishing \(10^{47}\) solutions for IvySys v7. However, using the FE method still dramatically increases the solution count by mapping these large template equivalence classes into larger world equivalence classes. An equivalence-informed subgraph search is essential, as the NE method finds only \(1.75\times 10^{9}\) solutions, 90 orders of magnitude fewer than the FE search. Furthermore, a typical subgraph search would assign each group of leaf nodes sequentially, meaning only the candidates of the last group would be explored. Incorporating symmetry gives a fuller vision of the solution space.

#### VII-B4 COVID

We lastly apply our algorithm to the problem of querying a knowledge graph representing known causal relations between a large variety of biochemical entities. This problem arises from a desire to extract causal knowledge in an automated fashion from the research literature. In [52], a knowledge graph is assembled from multiple sources including the COVID-19 Open Research Dataset [48], the Blender Knowledge Graph [49], and the comparative toxicogenomics database [50]. The authors of [52] then create a query representing how SARS-CoV-2 might cause a pathway leading to a cytokine storm in COVID-19 patients; the query is generalized to detect other possible confounding factors in the pathway. When rephrased as a multichannel subgraph isomorphism problem, template and world nodes represent biochemical entities. Some template nodes are specified, and others are labeled as a chemical, gene, or protein. The 9 channels in this problem are various known types of interactions between entities, e.g., activation. A solution is an assignment of each template node to an entity with the desired chemical interactions. As can be seen in Tables V and VI, there is an abundance of solutions to this problem, and incorporating equivalence greatly enhances our ability to understand the solution space. Figure 17 depicts the template and Venn diagrams of candidate sets for one solution class and exposes unspecified template nodes with a large number of candidates. Such information is useful to an analyst for determining confounding factors in a pathway and suggesting label information or interactions to add to better specify the entire solution space.
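The very large solution counts quoted above (e.g., \(3\times 10^{12}\) for GORDIAN v7-2 and \(7.82\times 10^{103}\) for IvySys v7) follow from a simple counting rule: each group of \(t\) interchangeable template nodes that shares a class of \(n\) interchangeable world candidates contributes \(n!/(n-t)!\) injective assignments, and independent groups multiply. The helper below is a minimal sketch of that bookkeeping under our own naming, assuming the groups draw from disjoint candidate classes; it is not part of the solver itself.

```python
def falling_factorial(n, t):
    """Ways to injectively map t interchangeable template nodes into a class of
    n interchangeable world candidates: n * (n - 1) * ... * (n - t + 1)."""
    result = 1
    for k in range(t):
        result *= n - k
    return result

def count_solutions(groups):
    """groups: list of (template_class_size, world_class_size) pairs describing one
    compressed solution class. Returns the number of isomorphisms it represents."""
    total = 1
    for t, n in groups:
        total *= falling_factorial(n, t)
    return total

# Illustrative example: groups of 9, 4, 3, and 3 interchangeable template nodes,
# each exactly filling a candidate class of the same size, give 9! * 4! * 3! * 3!.
print(count_solutions([(9, 9), (4, 4), (3, 3), (3, 3)]))
```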
Fig. 13: Template (left), solution-induced world subgraph (center), and compressed solution-induced world subgraph (right) for a solution class which can generate about \(3\times 10^{12}\) solutions to GORDIAN v7-2 [35]. World nodes of the same color are fully candidate equivalent and are candidates of the template node of the same color. All solutions represented by this compressed solution can be generated by mapping each colored node to one of the groups of world nodes with the same color.

Fig. 14: Template (left) and solution-induced world subgraph (right) for a solution class from which \(7.82\times 10^{103}\) solutions to IvySys v7 [33] can be generated. World nodes of the same color are fully candidate equivalent and are candidates of the template node of the same color. All solutions represented by this compressed solution can be generated by mapping each colored node to one of the groups of world nodes with the same color.

Fig. 15: IvySys v7 [33] compressed solution-induced world graph (left) and the Venn diagram representation of intersecting candidate sets in the world graph (right) for a solution class from which \(7.82\times 10^{103}\) solutions can be generated. The number in each section of the Venn diagram represents the size of a node cover equivalence class in the world graph. All solutions represented by this compressed solution can be generated by mapping each colored node in the template to the set in the Venn diagram with the same color.

#### VII-B5 Higgs Twitter Erdos-Renyi Experiments

Lastly, we perform a similar experiment as we did for single channel graphs, using small Erdos-Renyi graphs as our templates and our largest graph, the Higgs Twitter dataset, as our world graph. We generate a multichannel template graph by overlaying 4 different graphs, one per channel, each generated as an Erdos-Renyi graph with \(p=\frac{\log n_{t}}{8n_{t}}\), where \(n_{t}\) is the number of template nodes. This value of \(p\) is chosen so that the graph will be connected with high probability. We generate 45 connected graphs in this way for each of \(n_{t}=5,7,9,11,13,15\) (see the sketch below). We then compute the number of isomorphisms counted for each method within 10 minutes. For these problems, we precompute the world structural equivalence classes of the Higgs Twitter graph prior to running our algorithms. The average isomorphism count for each equivalence method and template size is depicted in Figure 16. The overall averages for total runtime and isomorphism count are included in Tables V and VI under the _Twitter-ER_ dataset. From these results, we observe that the NC, FE, and CE methods find significantly more solutions than the base routine, whereas the other equivalence methods do not improve on the NE method. NC performs the best both in terms of the number of isomorphisms found and the total amount of time, which we speculate is due to its lightweight computation and its ability to capture most of the equivalence. FE and CE are a few orders of magnitude worse, and the remaining methods TE, WE, and TEWE fail to provide significant benefit over the base method; they in fact do worse when world structural equivalence is involved. These methods do not improve much because nodes in multichannel Erdos-Renyi graphs are fairly unlikely to be structurally equivalent. All in all, these experiments demonstrate that even when using randomly generated template graphs, significant improvements can be had by incorporating equivalence into the algorithm.
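The template-generation step above can be sketched as follows. This is an illustrative reconstruction, not the exact script used for the Twitter-ER experiments; it assumes networkx, uses the stated per-channel edge probability \(p=\log n_{t}/(8n_{t})\), and simply redraws until the union of the four channels is connected.

```python
import math
import networkx as nx

def multichannel_er_template(n_t, num_channels=4):
    """Overlay num_channels Erdos-Renyi graphs on the same n_t nodes, one per channel,
    each with edge probability p = log(n_t) / (8 * n_t); redraw until the union of all
    channels is connected. For small n_t this may require a number of redraws."""
    p = math.log(n_t) / (8.0 * n_t)
    while True:
        channels = [nx.gnp_random_graph(n_t, p) for _ in range(num_channels)]
        union = nx.Graph()
        union.add_nodes_from(range(n_t))
        for g in channels:
            union.add_edges_from(g.edges())
        if nx.is_connected(union):
            return channels  # one simple graph per channel, sharing a node set

# 45 connected templates per template size, as in the Twitter-ER experiments
templates = {n: [multichannel_er_template(n) for _ in range(45)]
             for n in (5, 7, 9, 11, 13, 15)}
```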
However, certain modes of equivalence may be more appropriate for certain classes of graphs, and some care must be taken to ensure that the level of equivalence chosen actually helps with solving the problem.

## VIII Conclusion

In this work, we have developed a theory for static and dynamic notions of equivalence and presented conditions under which node assignments can be interchanged while preserving isomorphisms. With minimal changes to a subgraph isomorphism routine to incorporate equivalence during a tree search, we can dramatically reduce the amount of time to solve a problem and get a compact characterization of the solution space. For instances with minimal symmetry, little is to be gained, but for problems with large symmetric structures, it is essential to exploit equivalence in order to understand the large solution space. In particular, we demonstrated that the FE and NC methods both perform well in capturing equivalence present in the problem, enabling the greatest compression of the solution space. We showed that our results apply to standard subgraph solvers by integrating our methods into the state-of-the-art solver Glasgow, and we extended our methods to the more complex problem spaces of multiplex multigraphs. Future directions for this research include adapting these notions of equivalence to inexact search as well as producing inexact forms of equivalence. We would also like to better understand how to incorporate automorphic equivalence with the different notions of equivalence discussed in this paper.

## Acknowledgements

This material is based on research sponsored by the Air Force Research Laboratory and DARPA under agreement number FA8750-18-2-0066. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Research Laboratory and DARPA or the U.S. Government. This work was also supported by NSF Grant DMS-2027277.
2307.07089
The Collisional Evolution of the Primordial Kuiper Belt, Its Destabilized Population, and the Trojan Asteroids
The tumultuous early era of outer solar system evolution culminated when Neptune migrated across the primordial Kuiper belt (PKB) and triggered a dynamical instability among the giant planets. This event led to the ejection of approximately 99.9\% of the PKB (here called the destabilized population), heavy bombardment of the giant planet satellites, and the capture of Jupiter's Trojans. While this scenario has been widely tested using dynamical models, there have been fewer investigations into how the PKB, its destabilized population, and the Trojans experienced collisional evolution. Here we examined this issue for all three populations with the code Boulder. Our constraints included the size-frequency distributions (SFDs) of the Trojan asteroids and craters on the giant planet satellites. Using this combination, we solved for the unknown disruption law affecting bodies in these populations. The weakest ones, from an impact energy per mass perspective, were 20 m in diameter. Overall, collisional evolution produces a power-law-like shape for multikilometer Trojans and a wavy-shaped SFD in the PKB and destabilized populations. The latter can explain (i) the shapes of the ancient and younger crater SFDs observed on the giant planet satellites, (ii) the shapes of the Jupiter family and long-period comet SFDs, which experienced different degrees of collision evolution, and (iii) the present-day impact frequency of superbolides on Jupiter and smaller projectiles on Saturn's rings. Our model results also indicate that many observed comets, most which are smaller than 10 km in diameter, are likely to be gravitational aggregates formed by large-scale collision events.
William Bottke, David Vokrouhlicky, Raphael Marshall, David Nesvorny, Alessandro Morbidelli, Rogerio Deienno, Simone Marchi, Luke Dones, Harold Levison
2023-07-13T23:19:06Z
http://arxiv.org/abs/2307.07089v1
**The Collisional Evolution of the Primordial Kuiper Belt, Its Destabilized Population, and the Trojan Asteroids**

William F. Bottke\({}^{1,2}\), David Vokrouhlicky\({}^{3}\), Raphael Marshall\({}^{4}\), David Nesvorny\({}^{2}\), Alessandro Morbidelli\({}^{4}\), Rogerio Deienno\({}^{2}\), Simone Marchi\({}^{2}\), Luke Dones\({}^{2}\), Harold F. Levison\({}^{2}\)

Footnote 1: Corresponding author [email protected]

Footnote 2: Southwest Research Institute, 1050 Walnut St, Suite 300, Boulder, CO, USA

Footnote 3: Institute of Astronomy, Charles University, V Holešovičkách 2, CZ-18000, Prague 8, Czech Republic

Footnote 4: Observatoire de la Côte d’Azur, CNRS, Laboratoire Lagrange, Nice, France

###### Abstract

The tumultuous early era of outer solar system evolution culminated when Neptune migrated across the primordial Kuiper belt (PKB) and triggered a dynamical instability among the giant planets. This event led to the ejection of ~99.9% of the PKB (here called the destabilized population), heavy bombardment of the giant planet satellites, and the capture of Jupiter's Trojans. While this scenario has been widely tested using dynamical models, there have been fewer investigations into how the PKB, its destabilized population, and the Trojans experienced collisional evolution. Here we examined this issue for all three populations with the code Boulder. Our constraints included the size-frequency distributions (SFDs) of the Trojan asteroids and craters on the giant planet satellites. Using this combination, we solved for the unknown disruption law affecting bodies in these populations. The weakest ones, from an impact energy per mass perspective, were diameter D ~ 20 m. Overall, collisional evolution produces a power-law-like shape for multikilometer Trojans and a wavy-shaped SFD in the PKB and destabilized populations. The latter can explain (i) the shapes of the ancient and younger crater SFDs observed on the giant planet satellites, (ii) the shapes of the Jupiter family and long-period comet SFDs, which experienced different degrees of collisional evolution, and (iii) the present-day impact frequency of superbolides on Jupiter and smaller projectiles on Saturn's rings. Our model results also indicate that many observed comets, most of which are D < 10 km, are likely to be gravitational aggregates formed by large-scale collision events.
**1. Introduction**

The early migration of the giant planets, and their influence on the objects residing in the primordial Kuiper belt (PKB) between ~24 and ~50 au, played a foundational role in sculpting our solar system (Fig. 1). Their co-dependent evolution has been extensively explored using dynamical models. They show that after a giant planet instability takes place, giant planet encounters, both with one another and with destabilized primordial Kuiper belt objects (KBOs), can potentially explain numerous small body reservoirs (e.g., the observed Kuiper belt, Oort cloud, scattered disk, irregular satellites of the giant planets, Jupiter and Neptune Trojans, Hilda asteroids, and D- and P-type asteroids captured in the main asteroid belt) (e.g., Tsiganis et al. 2005; Morbidelli et al. 2005; Levison et al. 2008a; Nesvorny 2011; Nesvorny and Morbidelli 2012; Batygin et al. 2012; Nesvorny and Vokrouhlicky 2016; 2019; Nesvorny et al. 2007; 2013; 2016; 2018; 2020; 2021; Kaib and Sheppard 2016; Clement et al. 2018; Vokrouhlicky et al. 2016; 2019; Lawler et al. 2019; see Nesvorny 2018 for a review). Moreover, the objects placed on planet-crossing orbits by the dispersal of the PKB, characterized here as the destabilized population (Fig. 1), are not only responsible for most of the larger craters formed on giant planet satellites over the last 4.5 Gyr (e.g., Zahnle et al. 2003; Wong et al. 2019; 2021), but also include the long-lived survivors that make up our ecliptic and nearly-isotropic comet populations (e.g., Nesvorny et al. 2017; Vokrouhlicky et al. 2019). They may even explain many of the craters formed on prominent worlds like Pluto-Charon and Arrokoth, all of which reside in various parts of the Kuiper belt (Singer et al. 2019; Spencer et al. 2020; Morbidelli et al. 2021; Robbins and Singer 2021).

PLACE FIGURE 1 HERE

Overall, dynamical models of the giant planet instability and its aftermath tell a compelling story of outer solar system evolution. Nevertheless, our understanding of what happened at ancient times is still incomplete because we have yet to adequately test the predictions of these models against bombardment and collisional evolution constraints. The answers to a number of intriguing questions hang in the balance. For example:

* Various planetesimal formation models have been proposed to explain the sizes of KBOs in the Kuiper belt (and PKB) (e.g., see review by Johansen and Lambrechts 2017), but our ability to test these results depends on the unknown nature of the PKB's initial size-frequency distribution (SFD). For example, did the SFDs of the PKB follow a cumulative power law slope of q ~ -2 for bodies with diameter \(D<100\) km, the same slope observed today among the Jupiter Trojans (Yoshida and Nakamura 2005; Wong and Brown 2015; Yoshida and Terai 2017), or did the SFDs start with shallower slopes (e.g., Napier et al. 2023)?
* Destabilized PKB objects that enter the giant planet zone to strike the giant planet satellites should presumably have a SFD with a shape similar to the one found among the Jupiter Trojans, since both come from the same source population. Instead, as we will show below, they appear to have different shapes. Understanding how this could have happened can help us probe the physical properties of comets, Trojans, and KBOs.

* Analysis of images from the New Horizons mission indicates there is an extreme paucity of \(D_{\rm crat}<10\) km craters on Charon (Singer et al. 2019; Robbins and Singer 2021). This crater deficit is not unique; a shallow crater SFD can also be found on Europa, Enceladus, Miranda, Hyperion, Phoebe, and other giant planet satellites whose terrains are not dominated by secondary and sesquinary craters (e.g., Zahnle et al. 2003; Porco et al. 2005; Bierhaus et al. 2012; Thomas et al. 2007; 2013; Kirchoff et al. 2022). The explanation for this deficit may be found in how KBOs respond to disruptive collisions (Morbidelli et al. 2021).

* Ground-based observations of flashes on Jupiter indicate it is being hit by a substantial population of small bolides (Hueso et al. 2013; 2018). Approximately 10-65 bodies with \(D>5\)-20 m strike Jupiter per year. Curiously, this rate is \(\sim\)1 to \(\sim\)1.5 orders of magnitude larger than expectations based on an extrapolation of \(\sim\)1 to 20 km craters (i.e., approximately \(\sim\)0.05 to 1 km projectiles) found on Europa (Zahnle et al. 2003; Schenk et al. 2004). This mismatch suggests we are missing something crucial in our understanding of the small body comet flux hitting outer solar system worlds.

* Multikilometer-sized comets, such as 67P/Churyumov-Gerasimenko, often have irregular shapes, high porosities, and structures that suggest limited tensile strength (e.g., El-Maarry et al. 2017). From these and other properties, some argue that comets this size were formed directly by planetesimal formation processes (e.g., Davidsson et al. 2016), while others argue they are mainly fragments produced by large PKB objects bashing into one another (e.g., Morbidelli and Rickman 2015; Jutzi et al. 2016). Our interpretation of comet shapes, compositions, and physical properties, as well as the nature of returned samples, depends on how well we understand the origin of these bodies.

Our ultimate goal is to address these questions, and many others, by modeling the collisional evolution of the PKB and all of its daughter populations. To date, most groups have concentrated on collisional evolution of the Kuiper belt alone (e.g., Davis and Farinella 1997; Stern and Colwell 1997; Kenyon and Bromley 2004; 2020; Pan and Sari 2005; Krivov et al. 2005; Charnoz and Morbidelli 2007; Levison et al. 2009; Schlichting et al. 2013; Morbidelli and Rickman 2015; Brunini and Zanardi 2016; Jutzi et al. 2017; Nesvorny et al. 2018; Benavidez et al. 2022). Calculating compelling solutions, however, is challenging for several reasons. For example:

1. **Dynamical evolution model of the PKB.** A prerequisite for any good collisional model of the PKB is the inclusion of the dynamical history of both the giant planets and the PKB after the dissipation of the solar nebula (e.g., Nesvorny et al. 2018). Such runs are needed in order to calculate collision probabilities, impact velocities, and depletion factors that control the collision rates between small bodies in the PKB and daughter populations.
The lack of such information can be an obstacle to modelers, who are forced to guess several critical parameters (e.g., timing and nature of the PKB's excitation; the evolution of its daughter populations).

2. **Limited observational constraints on the Kuiper Belt size distribution.** Ground-based and space-based observations have difficulties detecting KBOs smaller than a few tens of km in diameter (e.g., Bernstein et al. 2004; Fuentes et al. 2009; Fraser et al. 2014; Parker et al. 2021; Kavelaars et al. 2021). Accordingly, direct constraints on the Kuiper belt SFD at small sizes are limited. One potential way to overcome this problem is to interpret the crater SFDs found on Pluto, Charon, and Arrokoth (Singer et al. 2019; Spencer et al. 2020; Morbidelli et al. 2021). These data are new, however, and only a few groups have tried to include them as modeling constraints to date (e.g., Kenyon and Bromley 2020; Benavidez et al. 2022). The lack of knowledge of small KBOs is a major impediment for modelers; the population of small objects determines the frequency of disruptive impacts among larger bodies, yet they themselves are also susceptible to disruption (e.g., O'Brien and Greenberg 2003; Bottke et al. 2015).

3. **Unknown nature of the initial SFD of the PKB.** A critical component of collisional modeling work is an accurate starting SFD for the PKB (e.g., Johansen et al. 2015; Nesvorny and Vokrouhlicky 2016; Nesvorny et al. 2018). Dynamical evolution scenarios of the giant planet instability using such a SFD must be able to recreate the observed Kuiper belt from Neptune's outward migration as well as generate the right number of captured objects in the outer main belt, Hildas, Trojans, irregular satellite, scattered disk, and Oort cloud populations. They must also be consistent with the observed Kuiper belt SFD for \(D<100\) km bodies, the shapes of the SFDs found in various daughter populations, and the crater SFDs found on outer solar system worlds.

4. **Unknown disruption scaling law.** The scaling law controlling KBO disruption events is not well determined. Many disruption laws for KBO-like ice-rock targets have been proposed in the literature (e.g., Benz and Asphaug 1999; Leinhardt and Stewart 2009; Jutzi et al. 2010; Kenyon and Bromley 2020; Benavidez et al. 2022), but the only direct constraints that exist are those derived from high velocity shot experiments striking icy targets in the laboratory (e.g., Leinhardt and Stewart 2009). A potential problem with determining a disruption scaling law for comets and KBOs is that they likely have highly porous internal structures, a property that is challenging to simulate with existing numerical hydrocode simulations (e.g., Jutzi et al. 2015).

As mentioned above, there have been two attempts to model the collisional evolution of the PKB after the New Horizons flybys of Pluto-Charon and Arrokoth: Kenyon and Bromley (2020) and Benavidez et al. (2022). While both models have intriguing attributes, their published model SFDs can only match portions of the predicted shape of the Kuiper belt SFD as determined from various observational constraints as well as the crater SFDs found on Charon and Arrokoth (Morbidelli et al. 2021). This suggests their models may be missing something; perhaps one or more of the items from #1-#4 above. Even if these groups had identified a good match to the predictions of Morbidelli et al. (2021), however, it would not necessarily indicate uniqueness for their parameter choices. As discussed by Bottke et al.
(2005a,b), who attempted to model the collisional evolution of the main belt SFD, a single SFD used as a constraint for a collisional model will lead to an envelope of model parameter possibilities, each capable of producing a comparable fit. In such a circumstance, additional constraints are needed to rule out model possibilities. For example, Bottke et al. (2005a,b) used the number and frequency of family-forming events in the main belt to overcome degeneracies in the problem.

This takes us to the issue of how to best take on our ultimate goal of modeling the collisional evolution of the PKB and its primary daughter populations, which include the Kuiper belt, the destabilized population, and the Jupiter Trojans. It is a complex problem with many unknowns, even more than represented by #1-#4 above. After some preliminary tests, we decided we had to find a way to make the problem more manageable. Our solution was to set aside the Kuiper belt portion of the problem (for now) and instead focus on the PKB, destabilized population, and Jupiter Trojans. For the last two, we have "easy to use" constraints that cover a wide range of object sizes: crater SFDs from impacts of the destabilized population onto the giant planet satellites and the Jupiter Trojan SFD. Together, they yield enough information for us to solve for a plausible KBO disruption law (#4). For the former, we determined the shape of the destabilized population's SFD at early times using crater constraints from two satellites with ancient surfaces, Iapetus and Phoebe. Together, they provide constraints for projectile diameters between a few meters and nearly 100 km. For the latter, we synthesize a reasonable estimate of the present-day Jupiter Trojan SFD (L4 and L5 combined) using the best available ground- and space-based observations. These objects range from a few km to nearly 200 km. Our derivation of these constraints will be discussed in Sec. 3.

It is important to emphasize that this does not mean our model ignores key constraints from the current Kuiper belt. Instead, our work builds on existing collisional and dynamical models, especially those where the results have already been tested against many types of Kuiper belt constraints. For example:

* Our model of Neptune's migration through the PKB, and its interaction with a large population of Pluto-sized objects, can explain the sizes of the resonant and non-resonant populations in the Kuiper belt (Nesvorny and Vokrouhlicky 2016). In turn, this work sets the initial number of D > 100 and D > 2000 km objects needed in the PKB to explain the current Kuiper belt populations.

* Our PKB population also contains enough D > 100 km objects to explain the number captured as Jupiter Trojans, Hildas, and outer main belt asteroids (Nesvorny et al. 2013; 2018; Vokrouhlicky et al. 2016).

* Previous work has used the populations of well-separated binaries in the Jupiter Trojans and dynamically hot portion of the Kuiper belt to set the maximum amount of collisional evolution that could take place in the PKB and Kuiper belt populations (Nesvorny et al. 2018; Nesvorny and Vokrouhlicky 2019). They tell us that Neptune's migration through the PKB must start < 100 Myr after the dissipation of the solar nebula. As we will show, our model results are well below this value.

The structure of our paper is as follows.
Our work will use the results of dynamical simulations of the giant planet instability, PKB evolution, and Jupiter Trojan capture capable of reproducing the available observational constraints. A discussion of these models and their dynamical predictions will be given in Sec. 2, with their use in calculating collision probabilities and impact velocities for the PKB, destabilized population, and Jupiter Trojans discussed in Sec. 4.3. Our estimate of the shape of the PKB's initial SFD will be informed by a collection of sources, namely new observations, dynamical modeling work, collisional modeling work of the main asteroid belt, planetesimal formation models, and the size distributions of PKB daughter populations. We will also discuss potential PKB SFDs used in previous work. This will be discussed in Sec. 4.2. To deal with the unknown disruption scaling law for KBOs, our collisional evolution model will test \(\sim\)10\({}^{4}\) possibilities against constraints. The disruption laws that produce the best matches will be compared to those in the literature. A discussion of our methodology can be found in Sec. 4.4. All of these components will be used within the Boulder collisional evolution code (Morbidelli et al. 2009). Our goal is to track the collisional evolution of the PKB, destabilized population, and Jupiter Trojans, with the latter two SFDs tested against constraints. Our model results will be discussed in Sec. 5. From there, we will further verify our model results by testing them against constraints from a variety of sources (e.g., crater SFDs on the giant planet satellites, superbolide impacts on Jupiter, small impacts on Saturn's rings, the debiased SFDs of Jupiter family and long period comets). We will also discuss how our results relate to the predictions of Morbidelli et al. (2021). Comparisons between our best fit model and these data will be given in Sec. 6. Finally, in Sec. 7, we will briefly discuss some of the more provocative implications of our results for the nature of comets, interplanetary dust particles, and interstellar objects. A longer discussion of these issues is reserved for Appendix A.

**2. The giant planet instability and Neptune's migration across the primordial Kuiper belt**

In order to explore the collisional evolution of the PKB, we first need to characterize how it experienced dynamical evolution. Here we use a model of outer solar system evolution that has been developed over the last decade by co-author D. Nesvorny and several colleagues (Nesvorny and Morbidelli 2012; Nesvorny et al. 2013; Nesvorny 2015a; 2015b; Nesvorny and Vokrouhlicky 2016; Nesvorny et al. 2017; Nesvorny et al. 2019a; see also Nesvorny 2018 for a review). It describes how the giant planets dynamically evolved to new orbits as a consequence of a post-nebula giant planet instability (e.g., Tsiganis et al. 2005). It also allows us to quantify how the PKB was transformed and dynamically depleted by the outward migration of Neptune through the PKB. To make things easier to understand, we defined the time periods around the giant planet instability as Stages 1, 2, and 3, respectively (Fig. 1).

**Stage 1.** Stage 1 is defined as the time interval \(\Delta t_{0}\) between the end of the solar nebula and when Neptune enters the PKB. In Stage 1, the PKB has become modestly excited by numerous Pluto-sized objects.
Here we assume the initial mass of the PKB is several tens of Earth masses, as suggested by models of Neptune's migration and the giant planet instability (e.g., see Nesvorny 2018 and references therein). This large size, combined with modest dynamical excitation from embedded Pluto-sized or larger objects, implies that considerable collisional evolution should take place within Stage 1.

**Stage 2.** Stage 2 is defined as the time interval \(\Delta t_{1}\) between Neptune's migration across the PKB and the giant planet instability, which starts when giant planet-giant planet encounters begin. Dynamical models show that such events are typically achieved when Neptune reaches ~28 au (e.g., Nesvorny 2018 and references therein). The precise duration of \(\Delta t_{1}\) depends on the spatial density of objects in the PKB. In the two dynamical models we will use in this project, \(\Delta t_{1}\) lasts 10.5 or 32.5 Myr. The PKB becomes highly excited in this stage, with nearly all KBOs pushed onto giant planet-crossing orbits. This action creates the destabilized population, which continues to undergo considerable collisional evolution.

**Stage 3.** Stage 3 comprises all the actions that take place after the giant planet instability. We define it as the time interval between the giant planet instability and the present day ~4.5 Gyr later. Stage 3 includes the capture of the Jupiter Trojans by giant planet interactions, the origin of the scattered disk, and much of the bombardment of the giant planet satellites.

We now discuss the dynamical evolution of each stage in more detail.

### Stage 1. Before the Giant Planet Instability

Stage 1 starts shortly after the giant planets form on nearly circular and coplanar orbits within the solar nebula. The likely time of solar nebular dispersion is 2-10 Myr after CAI formation (Kruijer et al. 2017; Weiss and Bottke 2021). Gas driven migration likely caused the giant planets to evolve into a resonant chain (i.e., the planets were trapped in mutual mean motion resonances with one another), but the stability of chains may have been limited after the gas disk went away (see review by Nesvorny 2018). Many different resonant chain models were considered in Nesvorny and Morbidelli (2012; hereafter NM12). Over \(\sim\)10\({}^{4}\) numerical simulations, they followed what happened to outer planet systems that contained four, five, and six giant planets (i.e., Jupiter, Saturn, and 2-4 Neptune-sized bodies), with the bodies usually stretched between ~5 and ~17 au. Model success in NM12 was defined by four criteria: there had to be four planets at the end, the planets had to have plausible orbits, Jupiter's eccentricity had to be close to its current value, and the planets had to migrate fast enough to satisfy terrestrial planet and asteroid belt constraints (see also Nesvorny 2018).

The putative planetesimal population in the giant planet zone that existed between ~5 and ~17 au is poorly constrained at present. Here we assume it did not play a meaningful role in any of the collisional or dynamical set pieces relevant to our modeling work. The main portion of the PKB is located just beyond the giant planet zone between ~24 and 30 au, with the latter value set by the current location of Neptune (e.g., Tsiganis et al. 2005). Most of the KBO-like objects in various comet and small body reservoirs captured in Stage 3 come from this part of the PKB population (e.g., Dones et al. 2015).
In NM12, the net mass of this part of the PKB was set to ~15-20 Earth masses, though this value does not account for losses by collisional evolution within the PKB. As we will show below, by including collisions, we can achieve reasonable results by starting with ~30 Earth masses of material. Numerical simulations show a less massive portion of the PKB may extend beyond 30 au and could continue to ~47 au, the location of the outer edge of the cold classical KB (Nesvorny et al. 2019a). To prevent Neptune from migrating beyond ~30 au by planetesimal-driven migration (Gomes et al. 2004), however, the spatial density of KBOs in the PKB must decrease as a function of distance from the Sun (Fig. 1). Modeling work shows that Neptune stops its migration when the spatial density of bodies drops below 1-1.5 Earth masses per au (Nesvorny 2018). Here we assume the extended disk contained about 1/400th of the population that existed between 24-30 au (Nesvorny et al. 2020). So, if the initial PKB population was ~30 Earth masses between 24-30 au, the extended disk would have 0.075 Earth masses. This kind of distribution implies that the cold classical Kuiper belt, located between the 3:2 and 2:1 mean motion resonances with Neptune (i.e., in the semimajor axis range of 42 to 47 au), could form in situ with a more limited population (e.g., Fraser et al. 2014). It also provides a rationale for why the cold classicals have a large fraction of well-separated binaries; they never encountered Neptune during its migration (Nesvorny et al. 2019b; 2020).

The PKB in Stage 1 becomes dynamically excited in two ways. First, Neptune's perturbations steadily erode its inner edge, allowing some KBOs to escape into the giant planet zone. Most of these bodies are passed down to Jupiter, where they are thrown out of the solar system. The net change in angular momentum causes the planets to undergo slow but steady migration (Tsiganis et al. 2005). Second, the majority of KBOs become excited by gravitational perturbations from the largest objects in the population, namely ~1000-4000 Pluto-sized objects (Nesvorny and Vokrouhlicky 2016). This self-stirring mechanism increases impact speeds between KBOs and produces collisional evolution. In fact, if \(\Delta t_{0}\) lasts long enough, most KBO collisions could occur in this interval.

A limit on the number of collisions taking place in the PKB is set by the observed population of well-separated KBO binaries with components that are \(D\) > 100 km (Petit and Mousis 2004; Nesvorny et al. 2018; Nesvorny and Vokrouhlicky 2019). These binaries were presumably created by the planetesimal formation process called the streaming instability (Youdin and Goodman 2005; Nesvorny et al. 2019b; 2020; see review of the mechanism in Johansen et al. 2015). Collisions can disrupt KBO binaries, so too many collisions in Stage 1 will lead to too few binaries compared to observations. Modeling work shows that \(\Delta t_{0}\) must be less than 100 Myr to explain the capture of the Patroclus-Menoetius binary in the Jupiter Trojans in the aftermath of the giant planet instability (Nesvorny et al. 2018). As we will show, our best fits for \(\Delta t_{0}\) are considerably lower than this threshold.

### Stage 2. Neptune's Migration Across the PKB

Stage 2 begins when Neptune enters the PKB and begins to migrate outward by ejecting KBOs into the giant planet zone.
We refer to those objects that reach planet-crossing orbits as the destabilized population, and they play a major role in deciphering the nature of collisional evolution in the PKB. As Neptune migrates, a roughly 50-50 mix of objects are thrown inward and outward. The inward ejected objects are eventually handed down to Jupiter, which commonly throws them out of the solar system, while the outward ejected bodies come back for more Neptune encounters and the opportunity to be thrown inward. This continues until Neptune stops migrating at 30 au, with objects in the newly formed scattered disk continuing to have Neptune encounters over the following 4.5 Gyr. Many of these bodies will be lost to inward scattering events over that time (see bottom of Fig. 1). The long-lived survivors of the destabilized population now make up the scattered disk of Neptune, or scattered disk for short (Duncan and Levison 1997).

During migration, gravitational interactions between KBOs and Neptune produce dynamical friction, damping Neptune's eccentricity enough for it to reach its current value of ~0.01 (Nesvorny 2020). Concurrently, many KBOs and Pluto itself become trapped in mean motion resonances with Neptune as it moves outward. Neptune encounters with Pluto-sized objects in the PKB, however, produce gravitational jolts that can shake some KBOs out of resonance. The net effect of erratic/grainy outward migration for Neptune and dynamical friction plausibly explains the fraction of resonant and non-resonant objects observed in the Kuiper belt (Nesvorny and Vokrouhlicky 2016).

The depletion of the PKB over time generally follows exponential decay, with an e-folding time comparable to the timescale of Neptune's crossing the disk. So, if Neptune takes 10 Myr to get across the disk, the e-folding time of the PKB's depletion is 10 Myr. As mentioned above, numerical simulations of the survival of KBO binaries indicate that Stages 1 and 2 lasted \(\lesssim\)100 Myr (Nesvorny et al. 2018), but the exact time of the giant planet instability (i.e., \(\Delta t_{0}+\Delta t_{1}\)) is unknown. Shorter transit timescales for Neptune (i.e., shorter \(\Delta t_{1}\)) mean less collisional evolution takes place among objects that will eventually become part of the destabilized population. The results in Nesvorny et al. (2018) indicate that shorter Stage 1 and 2 intervals also yield much better odds that the Patroclus-Menoetius binary would be captured intact within the Trojans. That favors faster passages and lower \(\Delta t_{1}\) times. In our work below, we will test a range of \(\Delta t_{0}+\Delta t_{1}\) values to determine which ones lead to successful outcomes.

### Stage 3. The Giant Planet Instability and its Aftermath

In our dynamical models of the giant planet instability, the giant planets break out of their resonant chain as Neptune approaches 28 au or so. The reason is that the planets have been forced to interact with ejected mass from the PKB, which in turn has caused them to migrate (i.e., Jupiter migrates inward, the other planets migrate outward). The breakout allows the giant planets to have encounters with one another while surrounded by a plethora of objects from the destabilized population. Numerical results from NM12 indicate that five-planet systems most often produce four giant planets with orbits that resemble the present ones. The missing ice giant is ejected after multiple encounters with Jupiter (Nesvorny 2011; Batygin et al.
2012), but not before causing Jupiter to migrate inward rapidly over a timescale of 1 Myr. The dynamical runs used in this paper come from the most successful of the ~10\({}^{4}\) runs, where success is defined by the ability to reproduce additional small body constraints when tested in detail (e.g., see also Nesvorny 2018 and references therein).

As the PKB was eviscerated by Neptune's outward migration, KBOs were driven onto orbits that were unstable with respect to the giant planets. Most of this destabilized population was ejected from the Solar System after being passed down to a close encounter with Jupiter, but some hit the planets, and others were captured in small body reservoirs such as the main belt (Levison et al. 2009, Vokrouhlicky et al. 2016), Jupiter Trojans (Morbidelli et al. 2005, Nesvorny et al. 2013), irregular satellites (Nesvorny et al. 2007, 2014), Kuiper belt (Malhotra 1993, Gomes 2003, Hahn and Malhotra 2005; Levison et al. 2008a; Dawson and Murray-Clay 2012; Nesvorny 2015a,b; Nesvorny and Vokrouhlicky 2016), scattered disk (Brasser and Morbidelli 2013, Kaib and Sheppard 2016, Nesvorny et al. 2016), and Oort cloud (Brasser et al. 2006, 2007, 2008; Levison et al. 2010; Kaib et al. 2011; Brasser and Morbidelli 2013; Vokrouhlicky et al. 2019). A key focus of this paper will be to examine what happens collisionally to the unstable ejected bodies and to those captured as Jupiter's Trojans.

Overall, the PKB dynamically lost a factor of ~1000 in population as Neptune migrated across it. These objects were driven onto Neptune-crossing orbits, where they became the destabilized population. Accordingly, by assuming a starting population with a few thousand Pluto-sized bodies, it is possible to end up with two Pluto-sized bodies (i.e., Pluto, Eris) in the present-day Kuiper belt and scattered disk. The two numerical simulations of the giant planet instability used in this paper (Nesvorny et al. 2013; 2017) show that ~1% of the destabilized population will eventually hit Jupiter, while about a third as many will end their existence by striking Saturn, Uranus, and Neptune. A much smaller net fraction will hit the icy satellites; the work of Zahnle et al. (2003) suggests ~10\({}^{-4}\) hit Io, a large satellite orbiting deep within Jupiter's gravitational well, ~10\({}^{-6}\) for Iapetus, a satellite about half the Moon's size that is far from Saturn, and ~10\({}^{-8}\) for Phoebe, a ~200 km diameter irregular satellite of Saturn. The cratering history of the latter two objects will play a key role in defining our constraints for the destabilized population's SFD.

**3. Model Constraints**

### 3.1 The Size Distribution of the Destabilized Population Derived from Craters

#### 3.1.1 Craters on Pluto, Charon, and Arrokoth

One of the key clues telling us about the unusual nature of the PKB SFD comes from the crater histories of Pluto/Charon (resonant objects in the Kuiper belt) and Arrokoth (located in the cold classical Kuiper belt). All of these objects were once residents of the PKB. Crater counts of Charon show that its cumulative SFD follows a modestly steep power law slope of \(q=-2.8\pm 0.6\) for \(D_{\rm crat}>10-15\) km, while \(\sim\)1 < \(D_{\rm crat}<10-15\) km craters follow a slope of \(q=-0.7\pm 0.2\) (Singer et al. 2019; Robbins and Singer 2021). The latter value is surprising, in that it predicts a severe paucity of craters, and the projectiles that made them, over a large size range. When Arrokoth craters are folded in, the mystery extends to even smaller sizes.
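To see concretely why the shallow slope implies such a paucity, note that for a cumulative power law \(N(>D)\propto D^{q}\) the expected ratio of craters larger than 1 km to craters larger than 10 km is (a back-of-the-envelope illustration of ours, assuming each quoted slope holds across that full decade)

\[\frac{N(>1\ {\rm km})}{N(>10\ {\rm km})}=10^{-q}\approx\begin{cases}5&(q=-0.7)\\ 630&(q=-2.8),\end{cases}\]

i.e., the shallow branch supplies roughly two orders of magnitude fewer \(D_{\rm crat}>1\) km craters per \(D_{\rm crat}>10\) km crater than an extrapolation of the steep branch would predict.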
Arrokoth has five 0.55-1.15 km craters and one \(\sim\)7 km crater (Spencer et al. 2020). Morbidelli et al. (2021) examined the likelihood that different crater production populations could produce this unusual distribution. They found that their most probable production population had \(q\sim-1.25\), and that shallow slopes of \(q=-0.7\pm 0.2\) were statistically unlikely. By combining crater constraints from both Charon and Arrokoth, and solving for the likely projectile SFD, they showed that the slope of small impactors on both worlds was probably \(q=-1.1\pm 0.1\), and that this slope extended from 0.03 < \(D\) < 1 km. For \(D>1\) km projectiles, the steeper slope seen on Pluto-Charon for \(D_{\rm crat}>10-15\) km comes into play.

One more intriguing issue raised by Morbidelli et al. (2021) is the abundance of Kuiper belt dust estimated by the New Horizons dust counter, presumably produced by collisional processes (Poppe et al. 2019). Calculations show that the mass of particles between 0.5 and 500 \(\upmu\)m derived from the Kuiper belt is 3.5 x 10\({}^{18}\) kg. In order to achieve such a high value, the power law slope of the Kuiper belt SFD must become relatively steep at some size below \(D\sim 0.03\) km. Morbidelli et al. (2021) argued that a change in slope to q \(\sim-3\) starting near 0.02 km would explain the dust constraint (see their Fig. 5). We concur with this prediction, for reasons we will discuss below.

These works provide a fascinating snapshot into the history of the PKB. They tell us that Pluto-Charon-Arrokoth were rarely hit by projectiles between \(D\sim 30\) m and 1 km. For the moment, we will ignore the modest slope differences above between \(q=-0.7\pm 0.2\) and \(q=-1.1\pm 0.1\). It has been postulated that these shallow slopes might mean that few primordial bodies ever formed in this size range, or that PKB impacts produce very few fragments in this size range (e.g., Singer et al. 2019). In this work, however, we favor a different interpretation in agreement with Morbidelli et al. (2021).

The impactor SFD identified above is reminiscent of the main belt SFD, which has a fairly steep slope of \(q=-2.7\) for \(D>2-3\) km bodies and a shallow slope of \(q=-1.2\) for \(\sim\)0.2 < \(D<2-3\) km (e.g., Bottke et al. 2005a,b; 2015; 2020). Numerical modeling work indicates these slopes are a byproduct of collisional evolution, with the main belt SFD taking on a wavy shape that is controlled by the asteroid disruption scaling law (see review in Bottke et al. 2015). Given that the PKB was initially massive and became excited, it was inevitable that it and its daughter populations would experience extensive collisional evolution. The difference is that the PKB SFD is governed by a disruption scaling law applicable to KBOs, which are likely to be porous ice-rock bodies (Morbidelli et al. 2021).

In order to model the collisional evolution of the PKB SFD, we need to identify the most useful constraints. While we considered using craters on Pluto, Charon, and Arrokoth for this purpose a la Morbidelli et al. (2021), we ultimately decided it may not be the best place to start. One issue is that the definitive crater counts for Pluto have yet to be reported in the literature (S. Robbins, personal communication). A second issue is that Pluto's craters might have experienced relaxation and erosion in its topography, given that they do not precisely match the crater SFD on Charon (Singer et al. 2019).
A third issue is the one mentioned above; there is a difference between the slope for \(D\) < 10 km craters inferred for Charon alone (_q_ = -0.7 \(\pm\) 0.2; Robbins and Singer 2021) and that derived from Arrokoth (_q_ = -1.1 \(\pm\) 0.1; Spencer et al. 2020; Morbidelli et al. 2021). Morbidelli et al. (2021) gave reasons why the inferred slope for both should be \(q\) ~ -1.2, but the difference in slopes is still an open question. A fourth issue is that while Morbidelli et al. (2021) inferred a change in slope for projectile sizes near 20 m, the crater SFDs on Pluto, Charon, and Arrokoth do not go to small enough sizes to prove it is true. All these issues warrant careful study, which we intend to pursue in a follow-up paper.

#### 3.1.2 Craters on Iapetus and Phoebe

As an alternative, we decided to consider the bombardment histories of the giant planet satellites, which were hit by the destabilized population. Some of these worlds have a larger range of observed crater sizes than the combination of Pluto, Charon, and Arrokoth, leading to valuable constraints. The topic of cratering on the giant planet satellites is extensive, though, so much of our analysis of certain issues, like their relative and absolute surface ages, will be reserved for a follow-up paper. After examining the available crater SFDs for Jupiter's satellites (e.g., Zahnle et al. 2003; Schenk et al. 2004), Saturn's satellites (Porco et al. 2005; Kirchoff and Schenk 2010; Bierhaus et al. 2013; Thomas et al. 2007; 2013), and Uranus's satellites (Kirchoff et al. 2022), we have decided to focus on the crater histories of two worlds: Iapetus and Phoebe. Each one will be discussed below.

**Iapetus.** Iapetus is the most distant large satellite of Saturn. It has a mean diameter of 1469.0 \(\pm\) 6 km, roughly 1.5 times the size of Ceres, and a bulk density of 1.088 \(\pm\) 0.013 g cm\({}^{-3}\). It has a semimajor axis of about 59 Saturn radii, considerably more distant than Titan and Hyperion, which are at 20 and 24 Saturn radii, respectively. This distance is important because it effectively rules out the possibility that Iapetus was meaningfully affected by a putative satellite instability/disruption event occurring among the inner Saturn satellites (e.g., Cuk et al. 2016; see also Ida 2019). Our main interest in Iapetus for this paper comes from its most ancient surfaces. Despite being far from Saturn, which minimizes the effects of gravitational focusing, Iapetus has twenty basins that are nearly 100 km in diameter and three that are larger than 400 km (Kirchoff and Schenk 2010) (see Fig. 14). Accordingly, Iapetus's largest craters give us a sense of the impactor SFD's shape for \(D>10\) km projectiles hitting the Saturn system from the destabilized population.

**Phoebe.** Phoebe is the largest remaining irregular satellite of Saturn. It has a shape that is close to an oblate spheroid, (218.8 \(\pm\) 2.8) x (217.0 \(\pm\) 1.2) x (203.6 \(\pm\) 0.6) km, with a mean diameter of 213 \(\pm\) 1.4 km (Castillo-Rogez et al. 2012). It has been classified as a C-type body, giving it similar spectroscopic signatures to carbonaceous chondrite meteorites (Porco et al. 2005). Its bulk density of 1.634 g cm\({}^{-3}\) is modestly higher than that of many icy satellites in the Saturn system (Jacobson et al. 2006). Phoebe orbits Saturn with a semimajor axis of 215 Saturn radii, an eccentricity of 0.164, and an inclination relative to the ecliptic of 173.04\({}^{\circ}\). This places it nearly 3.6 times as far from Saturn as Iapetus.
We focus on Phoebe because Cassini images provide us with an incredibly wide range of crater sizes. The only comparable crater set among outer solar system worlds to date comes from Hyperion (Thomas et al. 2007), but we find its SFD more complex to interpret at smaller sizes. At the larger end of Phoebe's crater SFD, there are at least seven craters that are 50-100 km in diameter. At the smaller end, many thousands of \(D_{\rm crat}<0.2\) km craters have been recorded on a collective surface area of 3000 km\({}^{2}\) (Porco et al. 2005; Kirchoff and Schenk 2010). This means Phoebe's net crater SFD has a large enough dynamical range that it can potentially test the Morbidelli et al. (2021) prediction that the projectile SFD becomes steeper for D \(<20\) m projectiles. If true, Phoebe is a Rosetta Stone for setting the nature of the KBO disruption law.

A potential issue with using Phoebe to understand the destabilized population's impact history is its unusual history. Phoebe is likely an escaped PKB object that was captured onto a retrograde orbit around Saturn during the giant planet instability (Nesvorny et al. 2003a; 2007; 2014). That means the dominant bombardment populations striking Phoebe have changed with time. For example, Bottke et al. (2010) showed that the initial irregular satellite population at Saturn was devastated by intense collisional evolution. Their work predicts that Phoebe's surface was shattered and reset multiple times by large early impacts. The timescale for irregular satellite collisional evolution is rapid; most occurs on the order of tens of Myr after capture. This interval is short enough that any craters formed after that time were likely dominated by heliocentric projectiles from the destabilized population. This would explain why the shape of Phoebe's crater SFD for \(D_{\rm crat}>10\)-15 km craters is comparable to that of other Saturnian satellites (Kirchoff and Schenk 2010).

An additional factor to consider is that irregular satellites, being captured KBOs, should follow the same disruption laws as KBOs. Collisional algorithms suggest they should grind themselves into an evolved SFD whose shape at small sizes is similar to the destabilized population's SFD (e.g., O'Brien and Greenberg 2003; Bottke et al. 2015). This assumes, of course, that the ejecta produced by collisions between prograde and retrograde satellites remain within the stable irregular satellite zone rather than take on a small enough orbital velocity that they would fall onto the planet (e.g., Levison et al. 2008b). If collisional evolution of the irregular satellites works as we suggest, it may be challenging to distinguish the source of Phoebe's craters; they could be from a combination of irregular satellites, which dominate early impacts, and heliocentric projectiles, which dominate late impacts, with both producing a similarly shaped crater SFD at small sizes. For the purposes of this paper, however, we only need the shape of Phoebe's crater SFD at small sizes, which constrains what happens to a KBO-like SFD undergoing collisional evolution.

The Cassini spacecraft obtained high resolution images of Phoebe's surface when it entered the Saturn system (Kirchoff and Schenk 2010). Both Porco et al. (2005) (counts by P. Thomas) and Kirchoff and Schenk (2010) have measured the spatial densities of craters on Phoebe terrains, and both groups have graciously provided us with spreadsheets of their data (M. Kirchoff, P. Thomas, personal communication).
Each group found comparable SFDs for small craters, but we focus here on the craters provided by P. Thomas, who examined three regions observed at different resolutions. His crater sizes span 0.003 km < \(D_{\rm crat}\) < \(\sim\)100 km, though incompleteness at small sizes means the range where a useful crater SFD can be identified is \(\sim\)0.01 km < \(D_{\rm crat}\) < \(\sim\)100 km. The data are shown as cumulative SFDs in Fig. 14, with the data placed into root-2 size bins. By removing crater counts where incompleteness may be playing a role, we find that the crater SFD between 0.5 km < \(D_{\rm crat}\) < 10 km has a power law slope of \(q\sim-\)1.2. This slope is similar to that calculated on Charon and Arrokoth over a comparable size range by Morbidelli et al. (2021). Similarly shallow slopes over comparable crater sizes have also been identified on several major satellites (e.g., Europa, Enceladus, Hyperion, Miranda) and many minor ones (Prometheus, Pandora, Epimetheus, Janus, Telesto, Calypso) (Zahnle et al. 2003; Thomas et al. 2007; 2013; Bierhaus et al. 2013; Singer et al. 2019; Kirchoff et al. 2022). This shallow slope is also the basis for the CASE A crater production SFD suggested by Zahnle et al. (2003). Accordingly, we argue that an impactor SFD with this shape has bombarded all of the giant planet satellites.

The most intriguing feature in the small crater SFD for Phoebe is that the slope substantially steepens for 0.1 < \(D_{\rm crat}\) < 0.5 km craters. As discussed in Sec. 3.1.1, this slope change was not observed on Arrokoth, possibly because of limited resolution, but it was deduced by Morbidelli et al. (2021) on the premise that an increase in slope was needed to explain the abundance of Kuiper belt dust detected by New Horizons's dust counter. We cannot rule out the possibility that this change was produced by secondary or sesquinary cratering, but both bodies are small enough that returning ejecta should hit at low impact velocities (i.e., < 70 and < 100 m/s for Phoebe and Hyperion, respectively). It seems unlikely this debris would produce craters with shapes identical to those formed at much higher velocities. We would also point out that secondary or sesquinary craters have yet to be identified on asteroids comparable in size to Phoebe (Bottke et al. 2020). Our interpretation is that this SFD represents the crater production population, and we will treat it as such below.

#### 3.1.3 Synthesis

Put together, Iapetus and Phoebe can be used to assemble a crater SFD stretching from 0.1 km to \(\sim\)1000 km. While these surfaces represent an integrated history of past bombardment, the inferred shapes of their SFDs are likely dominated by that of the destabilized population at early times. As such, they are a treasure trove of information on the destabilized population's early collisional history as well as the shape of the disruption law controlling KBO behavior.

To use this information in our model, we created a synthesis projectile SFD from these data. We started by first obtaining a synthesis crater SFD, using 0.1 < \(D_{\rm crat}<\) 100 km craters from Phoebe and \(D_{\rm crat}>\) 100 km craters from Iapetus. This involved making certain assumptions about the P. Thomas datasets, which provide crater SFDs for three terrains observed at different resolutions. When two different crater SFDs from Phoebe were found to overlap, we adopted the SFD with the higher spatial density of craters that could still be considered part of the continuous SFD. We also excluded craters at sizes that were clearly suffering from incompleteness (i.e., those trending toward horizontal lines on a cumulative plot). From here, we grafted the shapes of the Iapetus and Phoebe crater SFDs together, assuming they were continuous.
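To make the grafting and slope-fitting steps concrete, the minimal sketch below (Python; the array names and the join diameter are illustrative placeholders, not data products of this work) rescales one cumulative crater SFD to agree with another at an assumed joint and then estimates the cumulative power-law slope over a chosen diameter range, mirroring the \(q\sim-1.2\) fit quoted above.

```python
import numpy as np

def graft_cumulative_sfd(d_small, n_small, d_large, n_large, d_join=100.0):
    """Rescale the small-crater branch so both cumulative SFDs agree at d_join (km),
    then splice them together; NumPy arrays, diameters sorted in ascending order."""
    scale = np.interp(d_join, d_large, n_large) / np.interp(d_join, d_small, n_small)
    d = np.concatenate([d_small[d_small < d_join], d_large[d_large >= d_join]])
    n = np.concatenate([scale * n_small[d_small < d_join], n_large[d_large >= d_join]])
    return d, n

def cumulative_slope(d_km, n_cum, d_lo=0.5, d_hi=10.0):
    """Least-squares slope q of log10 N(>D) versus log10 D over [d_lo, d_hi]."""
    m = (d_km >= d_lo) & (d_km <= d_hi) & (n_cum > 0)
    q, _ = np.polyfit(np.log10(d_km[m]), np.log10(n_cum[m]), 1)
    return q
```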
Next, we converted the synthesis crater SFD into a projectile SFD using a crater scaling law, specifically the Holsapple and Housen (2007) formulation of the Pi-group scaling law (e.g., used by Marchi et al. 2015 and Bottke et al. 2020; see also Tatsumi and Sugita 2018):

\[D_{t}=kd\left[\frac{gd}{2V_{P}^{2}}\left(\frac{\rho}{\delta}\right)^{2\nu/\mu}+\left(\frac{Y}{\rho V_{P}^{2}}\right)^{(2+\mu)/2}\left(\frac{\rho}{\delta}\right)^{\nu(2+\mu)/\mu}\right]^{-\mu/(2+\mu)} \tag{1}\]

Here the transient crater diameter, defined as \(D_{t}\), can be found using the impactor properties (impactor diameter \(d\), velocity perpendicular to the surface \(V_{P}\), bulk density \(\delta\)) together with the target properties (density of target material \(\rho\), strength of target material \(Y\), surface gravity \(g\)). Additional parameters (\(k\), \(\nu\), \(\mu\)) account for the nature of the target terrain (i.e., whether it is hard rock, cohesive soil, or porous material), while the yield strength \(Y\) corresponds to the nature of the target materials. Finally, we account for the collapse of the transient crater, such that the final crater size is \(D_{\rm crat}=\) 1.2 \(D_{t}\) (e.g., as used for asteroids in Bottke et al. 2020).

A key issue for our work is determining reasonable values for these scaling law parameters. In the literature, many different choices have been made to model crater formation on the giant planet satellites and Charon/Arrokoth (e.g., see Zahnle et al. 2003; Dones et al. 2009; Singer et al. 2019; Morbidelli et al. 2021), but testing these results is challenging. For example, we have yet to observe an actual impact event onto an icy satellite, no large-scale impact experiments have been performed into low temperature ice, and basic information on probable target properties for the giant planet satellites is minimal (e.g., we do not know the strength and porosity of near-surface ice). It is easy to show with Eq. (1) that different assumptions can lead to an enormous range of values for a parameter we call \(f\), defined as the ratio of crater-to-projectile sizes.

Our path through this thicket is to select parameters for Eq. (1) based on what has been learned about cratering from potentially analogous bodies, such as the carbonaceous chondrite-like asteroid (253) Mathilde, which is 53 km in diameter, and the dwarf planet (1) Ceres, which is 940 km in diameter. These worlds may share many commonalities with the giant planet satellites. For example:

* Ceres is comparable in diameter, bulk density, and gravitational acceleration to the mid-sized satellites of Saturn and Uranus. They are all spheroidal and have diameters between 400 and 1500 km. Similarly, Mathilde and Ceres bracket the sizes of Phoebe and other smaller giant planet satellites.
* The depth and diameter of craters on Ceres are comparable to those found on the mid-sized satellites, suggesting their surfaces undergo similar rheologic behavior when hit by projectiles (Schenk et al. 2021).
* Ceres may have ice as a dominant crustal component as well as an ocean world-like internal structure (e.g., Fu et al. 2017; De Sanctis et al. 2020; Park et al. 2020; Raymond et al. 2020; Schmidt et al. 2020).
With this basis, we lean on the results of Bottke et al. (2020), who reproduced the crater SFDs on Mathilde and Ceres using the following parameters for Eq. (1): cohesive-soil parameters \(k=1.03\), \(\nu=0.4\), and \(\mu=0.41\), yield strength \(Y\) = 2 \(\times\) 10\({}^{7}\) dynes cm\({}^{-2}\), and identical projectile and near-surface bulk density values (i.e., both 1.3 g cm\({}^{-3}\)). The advantage of their work is that the impactors hitting Mathilde and Ceres come from the main belt SFD, whose shape has been substantiated down to small sizes by observational data. If one has excellent knowledge of both the projectile and crater SFDs, the nature of the crater scaling law can be derived empirically. Bottke et al. (2020) used this method to test their Eq. (1) parameters for both worlds (and on many other asteroids observed by spacecraft).

In this paper, we will use the Eq. (1) values above for all of the giant planet satellites, except we will assume the projectile and near-surface bulk densities are 1 g cm\({}^{-3}\). This approximation is arguably basic, but as we will show below, it does allow us to reasonably reproduce the crater SFDs on most giant planet satellites. For Iapetus and Phoebe, we will use impact velocities of ~8 km/s, values derived from our dynamical model using the formalism discussed in Zahnle et al. (2003). Other values, such as surface gravity \(g\), were taken from Table 1 of Zahnle et al. (2003).

One last issue concerns the statistical errors on a power law slope derived from twenty Iapetus basins. A projectile SFD derived from these data could potentially have a wider range of values, especially given uncertainties in Eq. (1). After considerable testing, we settled on a shallow SFD for projectiles that are \(D>10\) km (i.e., cumulative \(q=-1.1\)). As we will discuss below, this value matches the expected shape coming from the destabilized population's SFD.

The conversion of our target crater function into a target projectile function is shown in Fig. 2a. It was normalized to the predicted number of \(D=50\) km bodies in our starting SFD, which we will discuss in Sec. 4.2. We call this shape hereafter the Iapetus-Phoebe SFD, or IP SFD.

PLACE FIGURE 2 HERE
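As a concreteness check on Eq. (1), the short sketch below evaluates the Pi-group scaling in CGS units with the parameter choices described above; the surface gravity used in the example call is an approximate, illustrative number rather than the adopted Table 1 value from Zahnle et al. (2003).

```python
def final_crater_diameter_km(d_proj_km, v_perp_km_s, g_cm_s2,
                             rho=1.0, delta=1.0, Y=2.0e7,
                             k=1.03, nu=0.4, mu=0.41, collapse=1.2):
    """Eq. (1) in CGS units: transient crater size from projectile and target
    properties, followed by the transient-to-final collapse factor of 1.2."""
    d = d_proj_km * 1.0e5                      # projectile diameter [cm]
    v = v_perp_km_s * 1.0e5                    # normal impact velocity [cm/s]
    gravity_term = (g_cm_s2 * d) / (2.0 * v**2) * (rho / delta) ** (2.0 * nu / mu)
    strength_term = (Y / (rho * v**2)) ** ((2.0 + mu) / 2.0) \
        * (rho / delta) ** (nu * (2.0 + mu) / mu)
    d_transient = k * d * (gravity_term + strength_term) ** (-mu / (2.0 + mu))
    return collapse * d_transient / 1.0e5      # final crater diameter [km]

# Illustrative call: a 1 km projectile at 8 km/s on an Iapetus-like surface
# (g taken here as ~22 cm/s^2 for demonstration) gives a final crater near 10 km,
# i.e., a crater-to-projectile ratio f of roughly 10.
```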
### 3.2 The Size Distribution of Jupiter's Trojan Asteroids

A key constraint for our collisional model is the combined SFD of Jupiter's Trojans. Jupiter's Trojans are small bodies that orbit around the L4 and L5 Lagrange points of Jupiter, which are located 60\({}^{\circ}\) in front of and behind Jupiter, respectively. They are contained within Jupiter's 1:1 mean motion resonance, with Jupiter having a semimajor axis of 5.2 au. Reviews of the Trojan populations and their properties can be found in Marzari et al. (2002) and Emery et al. (2015).

Numerical simulations show that Jupiter's Trojans are likely to be a daughter population of the PKB (e.g., Morbidelli et al. 2005; Nesvorny et al. 2013). Their origin is linked to giant planet encounters taking place during the giant planet instability. At these times, giant planet encounters frequently occur when the giant planets themselves are surrounded by objects from the destabilized population. In an encounter between Jupiter and a Neptune-sized body, Jupiter "jumps" to a new orbit, which means its L4/L5 zones also jump on top of wandering KBOs that happen to be at the right place at the right time. Numerical simulations show that the capture probability of PKB objects into these stable zones is (5 \(\pm\) 3) \(\times\) 10\({}^{-7}\) (Nesvorny et al. 2013).

This Trojan provenance model can reproduce the observed orbital distribution of Trojans, including their high inclinations, and the observed number of very large Trojans. It also explains why the Trojans are dominated by D- and P-type bodies, a taxonomic class that is consistent with KBOs. The stochastic nature of Trojan capture events can even produce a modest asymmetry in the populations within L4 and L5, though it is not clear these populations are statistically different from one another when it comes to the largest bodies (e.g., see Fig. 1 of Marschall et al. 2022).

The capture process described above is size independent, so it is often assumed that the SFD of the combined L4 and L5 Trojans should be a scaled version of the Kuiper belt SFD, the scattered disk SFD, and so on (e.g., Morbidelli et al. 2021). The accuracy of this statement depends on the degree of collisional evolution taking place among the bodies trapped in the various PKB sub-populations. Any assessment must consider not only that the Trojans undergo collisions en route to their final destination, but also that they experience billions of years of collisional evolution after capture. This raises the possibility of some divergence between the various SFDs. As an example, consider the irregular satellites, which are a captured subpopulation of the destabilized population (Nesvorny et al. 2007; 2014). Simulations show they were decimated by collisional evolution to such a degree that their current SFDs have no obvious connection to their source SFD, the PKB SFD (Bottke et al. 2010).

Accordingly, in order to test our collisional evolution model, we need the best available estimate of the combined L4/L5 Trojan SFD as a constraint. Our investigation into this issue shows there is no one-size-fits-all solution, partly because there are differences in the literature between estimates but also because the Trojans are observationally incomplete at smaller sizes.

For the largest Trojans, complications arise from the unusual shapes of some bodies. Consider the Trojan (1437) Diomedes. A combination of occultations, lightcurve modeling work, and photometry suggests that this Trojan has a three-dimensional shape of (284 \(\pm\) 61 km) \(\times\) (126 \(\pm\) 35 km) \(\times\) (65 \(\pm\) 24 km) and a mean diameter of 132.5 km (Sato et al. 2000). Infrared observations of the same body from the IRAS, WISE, and AKARI space-based telescopes, however, suggest mean diameters of 164.31 \(\pm\) 4.1 km, 172.60 \(\pm\) 3.42 km, and 117.79 \(\pm\) 1.18 km (Tedesco et al. 2004; Usui et al. 2011; Grav et al. 2012). Given the unusual shape of Diomedes, it could be argued that all of these values have some validity. A look through the literature indicates several other large Trojans have modestly divergent diameters when the same sources of observational data are considered.

This led us to move toward simplicity, so we adopt a Trojan SFD for larger objects based on the formulation made by Nesvorny et al. (2018). They used IRAS diameters for \(D>80\) km Trojans and then pivoted to another method for the remaining objects. They converted Trojan absolute magnitude \(H\) values from the Minor Planet Center to diameters using the following relationship (Fowler and Chillemi 1992; Appendix in Pravec and Harris 2007):

\[D_{ast}({\rm km})=1329\times 10^{-H/5}p_{v}^{-1/2} \tag{2}\]

A representative value of 0.07 was chosen for the visual geometric albedo \(p_{v}\) of the bodies based on the results of WISE data from Grav et al. (2011). Our SFD for these bodies is shown in Fig. 2b.
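For reference, Eq. (2) is simple enough to express directly; the snippet below converts an absolute magnitude to a diameter under the adopted albedo (the \(H=15\) example corresponds to the ~5 km knee discussed below).

```python
def h_to_diameter_km(h_mag, p_v=0.07):
    """Eq. (2): absolute magnitude H to diameter (km) for an assumed geometric albedo."""
    return 1329.0 * 10.0 ** (-h_mag / 5.0) * p_v ** -0.5

# h_to_diameter_km(15.0) -> ~5 km for p_v = 0.07
```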
It was assumed that the Trojan population for \(D>10\) km in the Minor Planet Database was observationally complete. Our resultant SFD in Fig. 2b has a small dip near 60 km. We suspect this feature is an artifact of the Trojan shape effects discussed above. Estimates of the shape of the Trojan magnitude distribution from Grav et al. (2011), Wong and Brown (2015), Yoshida and Terai (2017), and Uehata et al. (2022) show no evidence for such a feature. Whether artifact or real, however, this feature does not meaningfully affect our fits between model and data, as we will discuss below.

Our resultant SFD also left off a few of the largest Trojans whose reported sizes had considerable variation in the literature. These bodies may or may not be far from the continuum SFD for \(D>100\) km bodies, and we wanted to avoid having our fitting procedure chase objects with unusual sizes. Our motivation for this strategy comes from our Monte Carlo code experience, where the largest bodies drawn randomly from a SFD can occasionally be much bigger or smaller than expected. If the Trojans were indeed captured as suggested, their largest bodies could easily be anomalously big or small for the same reason.

Finally, for \(D<10\) km Trojans, we looked to results from observational surveys of small Trojans obtained from the powerful Subaru telescope teamed with the Suprime-Cam/Hyper Suprime-Cam instruments (Yoshida and Nakamura 2005; Wong and Brown 2015; Yoshida and Terai 2017; Uehata et al. 2022). We caution that these results can be interpreted in different ways. It cannot be ruled out that small Trojans follow a single power law from larger to smaller sizes. Our favored interpretation, however, is that the small Trojans follow a broken power law, with the power law slope of the cumulative Trojan SFD changing from \(q=-2.25\) for \(D>5\) km bodies to \(q=-1.8\) for \(D<5\) km bodies. Note that the aforementioned knee in the SFD was observed to occur at \(H\sim 15\), with the diameter calculated from Eq. 2 with \(p_{v}=0.07\).

We find that the Trojan SFD in Nesvorny et al. (2018) for \(D>10\) km is consistent with these results, with the power law slope between \(10<D<40\) km found to be \(q=-2.26\). This led us to extrapolate that slope down to objects that were between \(5<D<10\) km. From here, we grafted on the cumulative SFD for \(D<5\) km bodies discussed above. This task was made easier by F. Yoshida (personal communication), who graciously provided us with a list of Trojan diameters from her paper. Our synthesis is shown in Fig. 2b.

In Figs. 2a and 2b, we also make direct comparisons of the IP SFD with the Trojan SFD, with each shown as dashed lines. They show that the two shapes are dissimilar to one another. The origin of these shapes will be explored in Sec. 5.
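A minimal sketch of the broken power law we adopt is given below; the normalization `n_ref` (the number of Trojans larger than a reference diameter) is left as a user-supplied placeholder rather than a value taken from this work.

```python
import numpy as np

def trojan_cumulative_sfd(d_km, n_ref, d_ref=10.0, d_break=5.0,
                          q_big=-2.25, q_small=-1.8):
    """Cumulative broken power law: slope q_big for D > d_break and q_small below it,
    forced to be continuous at the break; n_ref = N(>d_ref) sets the normalization."""
    d = np.asarray(d_km, dtype=float)
    n_break = n_ref * (d_break / d_ref) ** q_big
    return np.where(d >= d_break,
                    n_ref * (d / d_ref) ** q_big,
                    n_break * (d / d_break) ** q_small)
```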
## 4 Modeling Collisional Evolution in the Primordial Kuiper Belt and Trojan Populations

### 4.1 Boulder Collision Evolution Code

Our collisional modeling runs use the code Boulder, which is described in Morbidelli et al. (2009). Note that there are two versions of Boulder: one that computes the evolution of the relative velocity of the population of bodies, and one where the velocity is input from a dynamical excitation model (e.g., Nesvorny et al. 2018). We use the latter here, but only the former was described in Morbidelli et al. (2009). Boulder is capable of simulating the collisional fragmentation of multiple planetesimal populations using a statistical particle-in-the-box approach.

It is a well-tested code that has been used to model the collisional evolution of planetesimals in the terrestrial planet region, the early asteroid belt, asteroid families, Hildas, Trojans and Trojan families, irregular satellites, and the survival of binary objects in the primordial Kuiper belt (e.g., Morbidelli et al. 2009; Levison et al. 2010; Bottke et al. 2010; Broz et al. 2013; Cibulkova et al. 2014; Rozehnal et al. 2016; Nesvorny et al. 2018; Zain et al. 2020; Nesvorny and Vokrouhlicky 2019; Marschall et al. 2022).

In Boulder, the results of smoothed particle hydrodynamics (SPH)/N-body impact experiments were used to estimate the fragment SFD produced in disruption events (Durda et al. 2004; 2007). For a given impact between a projectile and a target object, the code computes the impact energy \(Q\), defined as the kinetic energy of the projectile per unit mass of the target. The kinetic energy per unit mass needed to disrupt the target and eject 50% of the material to escape velocity is defined as \(Q_{D}^{*}(D)\) (see Sec. 4.4). The value of \(Q_{D}^{*}(D)\), hereafter \(Q_{D}^{*}\), is a function input into the code. The fragment SFD ejected from the collision is chosen from a look-up table of fragment SFDs previously computed according to \(Q\) / \(Q_{D}^{*}\) (a schematic sketch of this bookkeeping is given after the list below). This allows the code to simulate cratering and supercatastrophic disruption events in a relatively realistic manner. With that said, the reader should be aware of analytical results suggesting that the shape of the fragment SFD for smaller breakup events is not an important factor in generating the shape of a larger population's SFD via collisional evolution (O'Brien and Greenberg 2003; see also Bottke et al. 2015). This means our choice for Boulder's fragment SFD for small bodies does not have to perfectly reflect reality to obtain reasonable model SFDs for the PKB, destabilized population, and Trojans.

For each of our PKB runs, Boulder requires the following input components, each of which will be discussed in some detail below.

1. The initial SFD of the PKB (Sec. 4.2).
2. The collision probabilities and impact velocities for (i) the destabilized population striking one another and the stable remnants of the PKB, and (ii) the population that will become the Jupiter Trojans (Sec. 4.3).
3. A \(Q_{D}^{*}\) disruption scaling law suitable for KBOs, which are likely porous ice-rock mixtures (Sec. 4.4).
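As a schematic of the bookkeeping referenced above (not the actual Boulder implementation), the sketch below computes the specific impact energy \(Q\) for a projectile-target pair and compares it with \(Q_{D}^{*}\) to decide whether an event falls in the cratering or disruption regime.

```python
def specific_impact_energy(d_proj_km, d_targ_km, v_imp_km_s,
                           rho_proj=1.0, rho_targ=1.0):
    """Q: projectile kinetic energy per unit target mass (erg/g), assuming spheres."""
    v = v_imp_km_s * 1.0e5                                   # cm/s
    mass_ratio = (rho_proj / rho_targ) * (d_proj_km / d_targ_km) ** 3
    return 0.5 * mass_ratio * v**2

def collision_regime(q, q_star_d):
    """Crude outcome classification keyed to the ratio Q / Q*_D."""
    return "cratering" if q < q_star_d else "catastrophic disruption"
```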
### 4.2 Choosing the Initial Size Distribution of the Primordial Kuiper Belt

One of the most sought-after components of the PKB is its initial SFD. With it in hand, modelers can constrain the kinds of planetesimal formation mechanisms that were active early in the most distant reaches of the outer solar system. It is also needed as a starting point for our collisional evolution modeling work. The problem is that it is difficult to know what to choose; as discussed above, observations of smaller KBOs are limited, and the PKB/Kuiper belt experienced an unknown degree of collisional evolution.

#### 4.2.1 Insights from the main asteroid belt

A limiting factor on the initial SFD chosen for use in our model is that it must have the kind of shape that allows it to fulfill existing dynamical modeling constraints (e.g., it must be able to deliver reasonable numbers of KBOs to various sub-populations during the giant planet instability and reproduce the nature of the observed Kuiper belt during Neptune's migration; Nesvorny and Vokrouhlicky 2016). As a result, we do not attempt to test certain SFDs suggested in the literature (e.g., the initial SFD suggested by Schlichting et al. 2013).

The challenge in choosing an initial SFD is similar to the one faced by Bottke et al. (2005a,b), who modeled the collisional evolution of the primordial asteroid belt. To make progress, one needs to deduce the initial SFD of the primordial main belt and be confident that choice works in concert with dynamical evolution models. Given the uncertainties involved with such efforts, Bottke et al. (2005a) decided to consider what has been learned from numerical SPH simulations of asteroid disruption events (e.g., Benz and Asphaug 1999). They showed that asteroids in the gravity regime, defined as \(D>0.2\) km bodies, are increasingly difficult to disrupt as they become larger. For enormous asteroids like Vesta (530 km) or Ceres (930 km), the self-gravity is so high that they cannot be disrupted short of being hit by a projectile that rivals the size of the target body itself. This implies that the shape of the main belt SFD is very difficult to change for objects that are several hundreds of km in diameter; in effect, it can be considered a primordial shape, though dynamical mechanisms have decreased the population at all sizes over time (Bottke et al. 2005a,b). This same argument was adopted for the PKB by Nesvorny et al. (2018), among others, and we will use it here.

The main belt, KBO, and Trojan SFDs all have a "bump" in their cumulative size frequency distribution (SFD) near \(D\) \(\sim\) 100 km (i.e., the SFD is substantially steeper for \(D\) \(>\) 100 km bodies and substantially shallower for \(D\) \(<\) 100 km bodies). That means most of the mass contained within the connected slopes of these SFDs is in \(\sim\)100 km bodies. Accordingly, the bump is telling us something important about the nature of the initial SFD and planetesimal formation processes.

Aspects of the origin of the main belt bump were explored in Bottke et al. (2005a). Using a collisional model, they examined what happened with a variety of initial SFDs. They focused on those where the observed cumulative power law slope of \(q\) = -4.5, found between objects that were roughly 100 to 120 km and those that were many hundreds of km, was extended to smaller sizes. They showed that collisions could not reproduce the observed shape of the main belt SFD, which has a fairly shallow slope for \(D\) \(<\) 100 km bodies. The reason is that \(D\) \(\sim\) 50 to \(\sim\)100 km objects are exceedingly difficult to disrupt, so if you start with too many of them, you cannot get rid of them without producing noticeable collisional damage elsewhere in the main belt SFD. Bottke et al. (2005a) concluded from this that the bump near \(D\) \(\sim\) 100 km had to be a "fossil" from planetesimal formation processes, and that relatively few \(D\) \(<\) 100 km bodies were produced by the same mechanism.

These results are consistent with modeling work on planetesimal formation processes suggesting that asteroids and KBOs were "born big" (Morbidelli et al. 2009). Models and analytical work show that turbulent gas motion in protoplanetary disks can create regions where small particles, or pebbles, can become aerodynamically concentrated. When their spatial densities become high enough, they undergo collapse by their mutual gravity and form a planetesimal. The process is likely to produce 100-km-class bodies (e.g., Youdin and Goodman 2005; Cuzzi et al. 2008; 2010; Nesvorny et al.
2019b; 2021a; Klahr and Schreiber 2020; 2021). It has also been demonstrated that this scenario can reproduce the sizes, orbits, and inclination distributions of well separated binaries in the Kuiper belt region.

#### 4.2.2 Candidate SFDs for the Primordial Kuiper Belt

The next question is what to expect from planetesimal formation processes for \(D\) \(<\) 100 km bodies. Given that constraints are limited, the literature offers different options. For example, numerical results from Nesvorny et al. (2020) indicate that the same gravitational collapse processes that can make a 100-km-class well-separated binary can also, at the same time, eject numerous smaller clumps. These refugees might explain the origin of Arrokoth, the unusual two-lobed object observed by the New Horizons spacecraft. Alternatively, perhaps Arrokoth was formed directly by the streaming instability (McKinnon et al. 2020) or some other turbulent concentration mechanism (e.g., Cuzzi et al. 2008; 2010). Here we consider two candidate shapes for the initial SFD for \(D\) \(<\) 100 km bodies.

**Candidate A. The initial PKB had a shallow SFD for \(D\) \(<\) 100 km objects.** Collisional evolution models of the main belt indicate that the best fits come from initial SFDs where the \(D\) \(<\) 100 km objects follow a shallow slope to small sizes (i.e., Bottke et al. 2005a,b used a cumulative slope of \(q\) = -1.2). This starting shape implies that most objects in the main belt smaller than a few tens of km are likely to be fragments from the disruption of larger bodies (Bottke et al. 2005a,b). Given that most main belt asteroids are akin to carbonaceous chondrites (e.g., DeMeo and Carry 2014), and such bodies are most likely planetesimals formed in the giant planet zone (e.g., Kleine et al. 2020), we can infer that the initial shape of the main belt SFD should be close to that of giant planet zone planetesimals. In other words, in the region immediately adjacent to the PKB, the initial SFD was shallow for \(D<100\) km planetesimals. A reasonable inference would be that the PKB should start with a similar shape.

Additional justification for a shallow-shaped SFD comes from recent observational studies of the cold classical Kuiper belt (CCK) by the DECam Ecliptic Exploration Project (DEEP) (Napier et al. 2023). Using the 4-meter Blanco telescope at Cerro Tololo Inter-American Observatory in Chile with the Dark Energy Camera (DECam), the goal of DEEP was to probe the faint end of the Kuiper belt by discovering thousands of objects down to magnitude \(r\sim 26.5\) (roughly 30 to 40 km) and measure the SFD. The CCK is considered the least collisionally evolved component of the PKB (e.g., Morbidelli et al. 2021), so its shape is arguably a good proxy for the initial PKB SFD. DEEP's analysis of 20 half-nights of data, covering 60 sq. deg. of sky, allowed them to go deeper than previous CCK studies, while their methodology allowed them to carefully characterize the efficiency and false-positive rates for their moving-object detection pipeline. They determined that the CCK had a faint end slope of \(q\sim-1.35\pm 0.2\). We note that their slope predictions yield results that agree with estimates of the small KBO population faintward of magnitude \(r\sim 24.5\) from Bernstein et al. (2004), who used Hubble Space Telescope observations to make their assessment. Their shallow slope does disagree with predictions from the literature (e.g., Adams et al. 2014; Fraser et al. 2014; Kavelaars et al.
2021), who argued for \(q\sim-2\) by assessing the shape of the SFD for relatively large KBOs and then extrapolating that slope to smaller KBOs. Given that direct measurements are preferable to extrapolations, and that the Napier et al. (2023) results are consistent with expectations from Bottke et al. (2005a,b), we argue it is reasonable to use the Candidate A SFD in our modeling work.

**Candidate B. The initial PKB had a Trojan-like SFD for smaller objects.** A second possibility is that the PKB had an initial SFD for \(D<100\) km bodies that was similar to that of the Trojan asteroids. This school of thought was proposed by Fraser et al. (2014) and was recently summarized in Morbidelli et al. (2021) (see also Bernstein et al. 2004; Fuentes et al. 2009; Adams et al. 2014; Emery et al. 2015). It argues that the initial SFD for \(D>100\) km was steep, with \(q\) = -4.5 to -7.5 for the hot and cold sub-populations, respectively (Fraser et al. 2014), and was Trojan-like for \(D<100\) km bodies, with \(q\sim-2\) for bodies that were many tens of km. Given that dynamical models show the Jupiter Trojans were captured from the PKB (e.g., Morbidelli et al. 2005; Nesvorny et al. 2013), and that small Trojans are more easily observable than small KBOs, it arguably makes sense to connect the observed KB and Trojan SFDs together to deduce the shape of the PKB SFD. Accordingly, the Candidate B SFD is also plausible for our modeling work.

#### 4.2.3 Choosing a Candidate SFD

A key aspect of our work is that our model SFDs have to be able to reproduce the shapes of the two SFDs discussed in Sec. 3, namely the ancient crater SFDs found on Iapetus/Phoebe, presumably created by the destabilized PKB population shortly after the giant planet instability, and the present-day Trojan SFD. Assuming the differences are a byproduct of collisional evolution, our task in this section is to identify the most likely starting SFD from the available information.

For Candidate A, our constraints provide us with a straightforward time order to collisional evolution. The initially shallow SFD in the PKB evolves to a wavy SFD in the destabilized population that can reproduce the IP SFD (Fig. 2a). The captured Trojan population has to go even further, evolving from a wavy SFD to a \(q\sim-2\) power law over 4.5 Gyr (Fig. 2b).

For Candidate B, we have a more complicated situation, in that the initial PKB SFD and the modern-day Trojan SFD are assumed to have the same shape. This suggests that the Trojans preserve the original SFD (i.e., \(q\sim-2\) for \(D<100\) km bodies) and that evolution toward the IP SFD in the destabilized population occurs after the giant planet instability (i.e., it would explain the differences between the two SFDs in Fig. 2a,b). The problem is that the destabilized population undergoes rapid dynamical decay after the instability, leaving limited time for collisional evolution. In order to quantify whether the latter scenario was possible, we examined runs in Nesvorny et al. (2018). In their Supplementary Fig. 3a, an initial Trojan-like SFD was found to evolve to an IP SFD-like shape, but it required on the order of 400 Myr of collisional evolution within a massive PKB. That high degree of comminution is implausible for the destabilized population. Our additional test runs showed the Candidate B SFD does not change very much if collisional evolution is limited to < 100 Myr in a massive PKB (e.g., see Supplementary Fig. 3b in Nesvorny et al. 2018).
The other possibility for Candidate B is that the Trojans, Kuiper belt, and destabilized population all have SFDs with similar shapes, at least in a statistical sense. We argue the existing data do not favor this scenario, but that is not the same as saying it cannot be true. For example:

* As discussed above, Napier et al. (2023) find that the faint end slope of the CCK is \(q\sim-1.35\pm 0.2\), not -2.0 as observed in the Trojans. This result potentially rules out the Candidate B SFD, but the DEEP observations have yet to be verified by comparable observations. More KBO observations, perhaps with the Vera Rubin telescope, may be needed to settle this issue.
* Inspection of the crater/basin SFDs found on the giant planet satellites in Schenk et al. (2004) (Jupiter satellites) and Kirchoff et al. (2013) (Saturnian satellites) indicates their power law slopes become fairly shallow for \(D_{\rm crat}>200\) and 100 km craters, respectively, reaching values near \(q\sim-1.2\). This change is consistent with the Candidate A SFD and the inferred shallow shape of the IP SFD for \(D>10\) km projectiles (Fig. 2a). The complicating issue here is that there are relatively few craters/basins that are made by such projectiles. When error bars are included for these crater/basin SFDs, it can be shown that both the Candidate A and B SFDs fit the data, though chi-squared testing methods tell us the fits are better with Candidate A.
* The Candidate B scenario predicts that the Trojans of Jupiter and Neptune should have the same SFDs, given that they were captured from the destabilized population at approximately the same time. Given the greater distance of Neptune's Trojans from the Sun, they should have lower intrinsic collision probabilities than Jupiter's Trojans and thus should be less collisionally evolved. Observations by Sheppard and Trujillo (2010), however, indicate that Neptune's Trojans are missing intermediate-sized planetesimals (i.e., \(D<100\) km planetesimals) relative to expectations from Jupiter's Trojans. This apparent difference can be most easily explained by the Candidate A scenario, where Neptune's Trojans started with a shallow SFD for \(D<100\) km planetesimals. At this time, though, it is unclear whether the existing observational evidence for the Neptune Trojans is strong enough to rule out Candidate B.
* The modeling work by Bottke et al. (2005a,b) indicates that planetesimals in the giant planet zone started with a shallow SFD for \(D<100\) km bodies. We assert that it makes sense that bodies in the PKB, which is adjacent to the giant planet zone, would start with a similarly-shaped SFD. This argument, however, is not proof, and nature may have a mind of its own.

The preponderance of evidence given above favors the Candidate A SFD over the Candidate B SFD. Accordingly, we will focus on it in our work. Nevertheless, we cannot rule out the Candidate B SFD at this time. Further testing of Candidate B will be left for a future paper, depending on whether the evidence supporting the Candidate B SFD becomes stronger with time.

Our initial SFD for the PKB is shown in Fig. 3, with some of its characteristics described in Table 1. For \(D>100\) km bodies, it is identical to the predictions of Nesvorny et al. (2018), who found it by combining observational constraints, model results of hydrodynamical simulations of the streaming instability (e.g., Simon et al. 2017; Nesvorny et al. 2019b; 2020; see their Supplementary Fig.
4), and dynamical simulations of the capture of KBOs in mean motion resonances by an outward migrating Neptune (Nesvorny and Vokrouhlicky 2016; see their Fig. 15).

PLACE FIGURE 3 HERE

PLACE TABLE 1 HERE

The characteristic size of planetesimals formed by the streaming instability in their simulations was \(D\sim 100\) km, and the initial disk mass (\(M_{\rm disk}\)) was set to 30 Earth masses. Here we assume there were \(\sim\)2000 Pluto-sized objects in the PKB (Nesvorny and Vokrouhlicky 2016). It is possible even larger objects once existed in the PKB, but additional work is needed to confirm this possibility. This SFD follows \(q=-2.5\) between \(D\sim 300\) km and Pluto-sized bodies (\(D=2370\) km), as suggested by observations of large KBOs (Brown 2008). Our power law slope for \(D<70\) km objects in Fig. 3 does not follow that of Nesvorny and Vokrouhlicky (2016), who assumed it followed a Trojan-like SFD power law slope of \(q=-2.1\). It was instead given a value of \(q=-1.1\), similar to that assumed for the primordial main belt by Bottke et al. (2005a,b). Using Boulder, we track collisions over the full range of objects found in Fig. 3, which goes from nearly meter-sized bodies to objects larger than Pluto. The largest bodies, however, are almost impossible to disrupt, so they do not play a meaningful role in the story presented here.

### 4.3 Selecting Collision Probabilities and Impact Velocities for Collisional Evolution

Our next task is to choose the appropriate collision probabilities and impact velocities for objects striking one another as they move from the PKB to the destabilized population and to Jupiter's Trojans. As discussed in Sec. 3, these values are time dependent, with the PKB undergoing excitation and dynamical depletion, so characterizing how they change is critical to obtaining an accurate collisional evolution solution.

#### 4.3.1 Example collision probabilities for test bodies between 0.1 and 50 au

In order to make our results easier to understand, especially given the wide-ranging orbits of our bodies across the outer solar system, we performed the following test calculation. We created a random population of 10,000 test bodies spread in semimajor axis \(a\) between 0.1 and 50 au. The eccentricity (\(e\)) and inclination (\(i\)) values of the bodies were chosen to follow a Rayleigh distribution with mean \(e\) = 0.2 and mean \(i\) = 0.1 rad. From this, we calculated the intrinsic collision probabilities \(P_{\rm i}\) and mean impact velocities \(V_{\rm imp}\) for each test body against all other test bodies that crossed its orbit using the formalism of Bottke et al. (1994). Our results are shown in Fig. 4.

PLACE FIGURE 4 HERE

We find that the intrinsic collision probabilities fall off as \(P_{\rm i}\propto a^{-3.5}\), while impact velocities decrease as \(V_{\rm imp}\propto a^{-0.5}\). These values can also be deduced from the collision rate equation \(P=n\,c\,V\), with the number density of bodies per volume of space decaying as \(a^{-3}\), \(c\) being the mutual cross section, and \(V\) decaying as \(a^{-0.5}\). Accordingly, in our test population, an object at 8 au is roughly 90 times more likely to be struck than an object at 30 au (i.e., (30 / 8)\({}^{3.4}\) \(\approx\) 90), with the impact velocities at 8 au twice as high as those at 30 au (i.e., (30 / 8)\({}^{0.5}\) \(\approx\) 1.9). The reason for this change comes from two factors: (i) the volume of space available for orbits increases away from the Sun, and (ii) the orbital velocities of bodies orbiting the Sun decrease away from the Sun.
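The scalings above can be packaged into a small helper for back-of-the-envelope comparisons; the power-law exponents below are simply the approximate fits quoted in the text, not an independent derivation.

```python
def relative_collision_environment(a_au, a_ref_au=30.0):
    """Scale intrinsic collision probability and impact velocity with semimajor axis
    using P_i ~ a^-3.5 and V_imp ~ a^-0.5, normalized to a reference distance."""
    p_scale = (a_ref_au / a_au) ** 3.5
    v_scale = (a_ref_au / a_au) ** 0.5
    return p_scale, v_scale

# relative_collision_environment(8.0) -> roughly (1e2, 1.9) relative to 30 au,
# consistent with the ~90x and ~2x contrasts quoted above.
```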
Our test case is just an example, but it allows us to draw some inferences about collisional evolution in the outer solar system. Consider that while the PKB is enormous, its distant location, with most of the mass between ~24-30 au, means it has relatively small intrinsic collision probabilities, generally ranging from ~10\({}^{-21}\) to ~10\({}^{-22}\) km\({}^{-2}\) yr\({}^{-1}\), and low impact speeds, generally ranging from ~0.5-3 km s\({}^{-1}\). Conversely, Jupiter's Trojans, located at 5 au, have values that are between ~10\({}^{-17}\) and ~10\({}^{-18}\) km\({}^{-2}\) yr\({}^{-1}\), nearly four orders of magnitude higher than the PKB, while their mutual impact velocities are also higher (~5 km s\({}^{-1}\)) (e.g., Marzari et al. 1996; Dahlgren 1998). In addition, while the ratio of the Jupiter Trojan population to the PKB is < 10\({}^{-6}\) (e.g., Vokrouhlicky et al. 2019), the ratio of their possible residence times, 4.5 Gyr compared to a few tens of Myr for objects in the destabilized population, is on the order of ~10\({}^{2}\) (Nesvorny et al. 2018). Putting these components together, the Jupiter Trojans, despite their relatively small population, may experience as much collisional evolution over 4.5 Gyr as objects residing in the PKB for tens of Myr. These calculations also tell us that the Jupiter Trojans, despite their smaller population size, are probably more collisionally evolved than the scattered disk objects, which are the long-lived survivors of the destabilized population. This outcome is more consistent with the Candidate A SFD than the Candidate B SFD.

Inferences can also be made about collisions among bodies in the destabilized population. For example, after the giant planet instability, the vast majority of destabilized objects eventually enter into deep giant planet-crossing orbits with Jupiter, Saturn, etc. Their \(P_{\rm i}\) values will increase while they reside in that zone, with individual objects lasting a few Myr to tens of Myr on average (Nesvorny et al. 2018). When put together, it means that substantial collisional evolution can occur for KBOs just prior to them being trapped within stable zones in the main belt, Hilda, Jupiter Trojan, and Jupiter irregular satellite populations. Conversely, long-lived members of the destabilized population, many of which end up in the scattered disk between 30-50 au and beyond, generally have \(P_{\rm i}\) values that are lower than those for objects residing in the PKB. This limits how much collisional evolution scattered disk objects experience from other scattered disk objects. With that said, some scattered disk objects reside for billions of years on orbits that cross the stable Kuiper belt population, which is ~0.1% the size of the PKB. Accordingly, we can expect objects in the Kuiper belt and destabilized population/scattered disk to show slow but steady collisional evolution over billions of years. These changes might be revealed in the crater SFDs found on younger terrains on outer solar system worlds (e.g., Europa, Ganymede, Enceladus, Ariel).

#### 4.3.2 Collisions among the destabilized population and Jupiter Trojans

With those concepts in place, we are ready to calculate the collision probabilities and impact velocities of PKB objects in the destabilized population/scattered disk and Jupiter Trojans. Here we followed the strategies used in Bottke et al. (2005b) and Nesvorny et al. (2018) to find the parameters needed to model a population undergoing dynamical upheaval. For Stage 1, our initial PKB was made of 10\({}^{6}\) particles spread between 23.5 and 30 au.
The surface density of the particles followed 1/\(r\) from the Sun, with \(r\) being heliocentric distance. This choice means the same number of objects would exist in any chosen \(\Delta r\) bin. In previous work, co-author D. Nesvorny tested how rapidly Neptune would migrate through dynamically cold and hot disks of different masses. In general, he found faster migration came from colder and/or more massive disks and slower migration from hotter and/or less massive disks. A complicating issue is that the ~10\({}^{3}\) Pluto-sized bodies within the disk take varying times to excite disks of different masses, so the relationship is not one to one. As an approximation, we assumed here that the PKB disk was quickly excited by embedded Plutos, with the steady state orbits of the bodies following a Rayleigh distribution with mean \(e\) = 0.05 and mean \(i\) = 0.025 rad (Nesvorny et al. 2018). This behavior is reasonable when our main purpose is to calculate collision probabilities and impact velocities between PKB objects. We also assumed the PKB remained relatively unchanged for a time \(\Delta t_{0}\), the time between the dissipation of the gas disk and Neptune entering the PKB. In our model, we tested values where \(\Delta t_{0}=0\), 10, 20, and 30 Myr.

For Stage 2, we consider Neptune's interaction with the PKB. We defined \(\Delta t_{1}\) as the time from when Neptune enters the PKB (and began to migrate across it) to the start of the giant planet instability. Numerical simulations of this behavior capable of matching solar system constraints require careful work, so here we use two pre-existing giant planet instability runs from the literature.

1. Nesvorny et al. (2017) tracked the evolution of the scattered disk with a giant planet instability taking place at \(\Delta t_{1}=10.5\) Myr.
2. Nesvorny et al. (2013) studied the capture of the Jupiter Trojans with a giant planet instability taking place at \(\Delta t_{1}=32.5\) Myr.

The \(\Delta t_{1}\) values differ between the runs because each simulation made contrasting assumptions about the mass in the PKB's outer disk, with the former having more mass (a net mass of 20 Earth masses) than the latter (a net mass of about 15 Earth masses). This means that our dynamical runs are not self-consistent with the PKB's mass when Neptune enters the disk or when the instability occurs. At best, they are approximate; our PKB starts with 30 Earth masses but steadily loses debris from collisional evolution, with longer \(\Delta t_{1}\) values corresponding to more mass loss via comminution. A fully self-consistent model with collisions and dynamics that also matches giant planet instability constraints will have to wait for future efforts with faster codes.

Finally, for Stage 3, we tracked what happened to test bodies that were ejected onto planet-crossing orbits by Neptune's migration. This population defines the destabilized population. All told, the combination of four \(\Delta t_{0}\) values and two \(\Delta t_{1}\) values means we have eight different model histories to examine. A summary of these values is provided in Table 2.

PLACE TABLE 2 HERE

Our next step was to choose a representative sample of 5000 test bodies from Stages 1-2 that were destined to reach the destabilized population in Stage 3 and become long-lived enough to become part of the scattered disk. These test bodies were used to calculate collision probabilities (\(P_{\rm i}\)) and impact velocities (\(V_{\rm imp}\)) against the background test body population at every timestep in our simulations.
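A minimal sketch of the Stage 1 initial conditions described above is given below; the random seed and array sizes are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles = 1_000_000

# A surface density falling as 1/r puts equal numbers of bodies in equal-width
# annuli (dN ~ Sigma * 2*pi*r*dr ~ dr), i.e., heliocentric distance is uniform.
r_au = rng.uniform(23.5, 30.0, n_particles)

# Rayleigh-distributed e and i with the quoted means; NumPy's scale parameter is
# the mode, and the mean equals scale * sqrt(pi/2), hence the conversion factor.
e = rng.rayleigh(scale=0.05 * np.sqrt(2.0 / np.pi), size=n_particles)
inc_rad = rng.rayleigh(scale=0.025 * np.sqrt(2.0 / np.pi), size=n_particles)
```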
Before they can be included in the Boulder code, though, we must also account for dynamical depletion within the PKB and destabilized population. In Bottke et al. (2005b), we tracked \(P_{\rm i}\) and dynamical depletion separately and accounted for the latter in the model SFD at every timestep using the code CoDDEM. This method works, but it is cumbersome to implement. For our Boulder runs, we instead normalized our collision probability by the starting population at every timestep:

\[P_{pop}\ =\ P_{i}\ \left(\frac{N_{surv}}{N_{pop}}\right) \tag{3}\]

with \(N_{\rm surv}\) being the number of test bodies left in the simulation at time \(t\) and \(N_{\rm pop}\) being the starting population. Our derived values of \(P_{\rm pop}\) and \(V_{\rm imp}\) for the destabilized population are shown in Fig. 5.

PLACE FIGURE 5 HERE

The situation for the Jupiter Trojans is more challenging. Numerical simulations show that the capture rate of Jupiter Trojans from the PKB is \(\sim\)5 \(\times\) 10\({}^{-7}\) (Nesvorny et al. 2013). This means that in our standard giant planet instability simulations with 10\({}^{6}\) test bodies, almost none will be captured. To overcome this problem, we took advantage of a residence time map of orbital locations from Nesvorny et al. (2013). It shows the orbital parameters most likely to yield Jupiter Trojans (i.e., objects with low to modest eccentricities residing for some interval near 5 au). Using our giant planet instability runs, we identified destabilized population test bodies that reached those orbits at the same time Jupiter was undergoing encounters with a Neptune-sized body. Jupiter's orbit is jolted by these encounters, allowing it to serendipitously capture KBOs into L4 and L5 that are at the right place at the right time. The dynamical tracks of these specific test bodies were used to calculate \(P_{\rm pop}\) and \(V_{\rm imp}\) values for our model Trojans. This method allows us to account for collisions on model Trojans from PKB bodies, the destabilized population before and after capture, and "background" Trojans after capture. Our results are shown in Fig. 6.

PLACE FIGURE 6 HERE

These collision probability and impact velocity values were entered into the Boulder collision evolution code as look-up tables. The code then interpolates to get the appropriate parameters for every collision timestep.

### 4.4 Selecting a Disruption Law for Kuiper Belt Objects

A major challenge in modeling the collisional disruption of KBOs is that we do not know the disruption scaling law applicable to such objects. Disruption scaling laws are commonly characterized by the parameter \(Q_{D}^{*}\), defined as the critical impact specific energy, or the energy per unit target mass needed to disrupt the target and send 50% of its mass away at escape velocity. The projectile capable of disrupting an object of diameter \(D_{\rm target}\) is defined as \(d_{\rm disrupt}\):

\[d_{\rm disrupt}=\left(\frac{2Q_{D}^{*}}{V_{\rm imp}^{2}}\right)^{1/3}D_{target} \tag{4}\]

There are several potential formulations in the literature for KBO disruptions that are based on numerical hydrocode modeling work (e.g., Benz and Asphaug 1999; Leinhardt and Stewart 2009; Jutzi et al. 2010). The issue is that there is no easy way to test which one is best. KBOs also have poorly understood physical properties and substantial porosities, a combination which is already challenging to treat for carbonaceous chondrite asteroids, let alone porous ice-rich bodies (e.g., Jutzi et al. 2015).
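Equation (4) is likewise straightforward to evaluate; the sketch below returns the disrupting projectile size for a given target, critical specific energy, and impact velocity (the example values in the comment are illustrative, not results from our runs).

```python
def d_disrupt_km(d_target_km, q_star_erg_g, v_imp_km_s):
    """Eq. (4): projectile diameter (km) able to disrupt a target of diameter
    d_target_km, given Q*_D in erg/g and an impact velocity in km/s."""
    v_cm_s = v_imp_km_s * 1.0e5
    return (2.0 * q_star_erg_g / v_cm_s**2) ** (1.0 / 3.0) * d_target_km

# e.g., for Q*_D = 1e5 erg/g and V_imp = 3 km/s, a 100 km target is disrupted by
# a projectile of roughly 1.3 km.
```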
The alternative way to assess \(Q_{D}^{*}\) functions is to use collisional evolution models like Boulder, with the model SFDs and disruption predictions compared against observational constraints. Several groups have done this for the main belt (e.g., see the review by Bottke et al. 2015, with a different perspective provided by Holsapple et al. 2019). For example, spacecraft imaging of main belt asteroids and their crater SFDs provides us with information on the projectile SFD making those craters (e.g., Bottke et al. 2020). For the Kuiper belt SFD, which is more difficult to observe, modelers to date have mainly tried to constrain their simulations using the known KBOs larger than several tens of km in diameter (e.g., Davis and Farinella 1997; Kenyon and Bromley 2004; Pan and Sari 2005; Schlichting et al. 2013; Nesvorny et al. 2018; see Kenyon et al. 2008 for a review). Only recently has information on small KBOs become available via the craters found on Pluto, Charon, and Arrokoth (Singer et al. 2019; Spencer et al. 2020; Morbidelli et al. 2021). This has allowed recent studies to probe the nature of the impactor population making those craters (e.g., Kenyon and Bromley 2020; Benavidez et al. 2022).

As discussed in the introduction, the components that allow a collisional model to test different \(Q_{D}^{*}\) functions for KBOs are as follows: (a) a dynamical evolution model of the PKB that matches constraints (which allows accurate computation of \(P_{\rm pop}\) and \(V_{\rm imp}\)), (b) an accurate estimate of the initial SFD of the PKB, and (c) a set of robust constraints on the shape of the model SFDs over all sizes.

Assuming our choices for (a)-(c) are reasonable, we can use the Boulder collisional evolution model to test a variety of \(Q_{D}^{*}\) functions. We defined them using the following equation from Benz and Asphaug (1999) that was rewritten by Bottke et al. (2020):

\[Q_{D}^{*}\left(R\right)=aR^{\alpha}+bR^{\beta} \tag{5}\]

\[a=\frac{Q_{DLAB}^{*}}{R_{LAB}^{\alpha}}\frac{1}{1-\frac{\alpha}{\beta}\big{(}\frac{R_{LAB}}{R_{min}}\big{)}^{\beta-\alpha}} \tag{6}\]

\[b=-\frac{\alpha}{\beta}aR_{min}^{\alpha-\beta} \tag{7}\]

Here the \(Q_{D}^{*}\) function has the shape of a hyperbola that passes through a normalization point \((Q_{DLAB}^{*},D_{LAB})\) determined using laboratory impact experiments, with \(R=D\) / 2 and \(R_{LAB}=D_{LAB}\) / 2. The values \(\alpha\) and \(\beta\) help define the left and right slopes of the hyperbola, which is shaped like a "\(\vee\)". The minimum \(Q_{D}^{*}\) value in the hyperbola \((Q_{Dmin}^{*})\) is defined to be at the location \(R_{\rm min}=D_{\rm min}\) / 2.

In order to make our choice of \((Q_{DLAB}^{*},D_{LAB})\), we turned to the laboratory ice shot experiments described in Leinhardt and Stewart (2009) (see their Fig. 11). They show the outcomes of numerous vertical shot experiments into ice, with \(D_{LAB}\) ranging from 3 to 27 cm and \(Q^{*}_{DLAB}\) ranging from ~4 x 10\({}^{4}\) to 9 x 10\({}^{6}\) erg g\({}^{-1}\). Rather than choose a single point for our runs, we decided it was better to sample across this cluster of experimental results. Accordingly, we assumed \(D_{LAB}=10\) cm and that \(Q^{*}_{DLAB}=6.20\) x 10\({}^{4}\), 2.65 x 10\({}^{5}\), 1.13 x 10\({}^{6}\), and 4.84 x 10\({}^{6}\) erg g\({}^{-1}\). These values, as well as those provided below, are also given in Table 2.
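A compact sketch of Eqs. (5)-(7) is given below; the parameter values in the example comment are arbitrary illustrations, not one of our production \(Q_{D}^{*}\) functions.

```python
def q_star_d(r_cm, q_lab, r_lab_cm, r_min_cm, alpha, beta):
    """Eqs. (5)-(7): hyperbola-shaped Q*_D(R) passing through the laboratory
    normalization point (q_lab, r_lab_cm) with its minimum located at r_min_cm."""
    a = q_lab / (r_lab_cm**alpha
                 * (1.0 - (alpha / beta) * (r_lab_cm / r_min_cm) ** (beta - alpha)))
    b = -(alpha / beta) * a * r_min_cm ** (alpha - beta)
    return a * r_cm**alpha + b * r_cm**beta

# Example (arbitrary choices): q_star_d(r_cm=1.0e3, q_lab=2.65e5, r_lab_cm=5.0,
#                                        r_min_cm=1.0e3, alpha=-0.5, beta=1.0)
```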
For \(\alpha\), which helps control the slope of the \(Q^{*}_{D}\) function in the strength regime (i.e., where \(D<D_{\min}\)), we chose values between 0 and -1, incremented by -0.1. For \(\beta\), which helps control the slope in the gravity regime (i.e., where \(D>D_{\min}\)), we chose values between 0.1 and 2, incremented by 0.1. Finally, we varied \(D_{\min}\) between 10 and 100 m, incremented by 10 m. In preliminary work, we tried a larger \(D_{\min}\) range, but the runs with \(D_{\min}>100\) m were so unsuccessful that we discarded them for our production runs. The reason these runs failed will become clear in our discussion of our modeling results.

Taken together, this gives us 4 x 11 x 20 x 10 = 8800 \(Q^{*}_{D}\) functions to test for each choice of (\(\Delta t_{0}\), \(\Delta t_{1}\)). This is too many to put onto a single plot, or even a series of plots, so we only display a small sample of our \(Q^{*}_{D}\) functions in Fig. 7. It shows three sets of \(Q^{*}_{D}\) functions. The highest set of curves has (\(Q^{*}_{DLAB}\), \(\alpha\), \(\beta\), \(D_{\min}\)) = (4.84 x 10\({}^{6}\) erg g\({}^{-1}\), -0.5, [0.1, 2], 20 m), the middle set has (2.65 x 10\({}^{5}\) erg g\({}^{-1}\), -0.8, [0.1, 2], 50 m), and the lowest set has (4.84 x 10\({}^{6}\) erg g\({}^{-1}\), -1.0, [0.1, 2], 70 m). For reference, we have also plotted the asteroid disruption law from Benz and Asphaug (1999) in Fig. 7. Using their Eq. 6, it assumes the target's bulk density is 2.7 g cm\({}^{-3}\), \(Q_{0}=9\) x 10\({}^{7}\) erg g\({}^{-1}\), \(a\) = -0.36, \(b\) = 1.36, \(B\) = 0.5 erg cm\({}^{3}\) g\({}^{-2}\), and an impact velocity of 5 km s\({}^{-1}\).

PLACE FIGURE 7 HERE

Given all of these possible test cases, we have no choice but to employ automated testing metrics to evaluate the differences between our model results and constraints.

## 5 Model Runs and Results

Using the information from Sec. 4.1-4.4, we have everything we need for our Boulder model runs. Here we are running two distinct sets of trial runs: one for the destabilized population and one for the population that goes on to make Jupiter's Trojans. Each trial run includes as input the initial SFD of the PKB, a \(Q^{*}_{D}\) function (defined by the parameters in Table 2), and a look-up table of \(P_{\rm pop}\) and \(V_{\rm imp}\) values for collisions that are defined for each set of \(\Delta t_{0}\) and \(\Delta t_{1}\) (Figs. 5 and 6). A random seed is also entered to account for stochastic variation in the timing/nature of breakup events.

We assigned a single bulk density of \(\rho=1.0\) g cm\({}^{-3}\) for all KBOs and their fragments. This value splits the difference between smaller objects, which are likely to have \(\rho<1.0\) g cm\({}^{-3}\) (A'Hearn 2011; Blum et al. 2006; Sierks et al. 2015), and larger KBOs, many of which have \(\rho>1.0\) g cm\({}^{-3}\) (Bierson and Nimmo 2019). The Boulder code does not have the means to assign distinct bulk densities to its objects; they are instead treated statistically within size bins.

For each set of trial runs, we are combining 8800 potential \(Q_{D}^{*}\) functions with our four possible values of \(\Delta t_{0}\) and our two choices for \(\Delta t_{1}\). This means we need to sift through 70,400 trials to find our best fit cases. Recall that each set of trial runs has its own set of \(P_{\mathrm{pop}}\) and \(V_{\mathrm{imp}}\) values, so in the end, we are evaluating 70,400 x 2 = 140,800 trial runs. This quantity of model data cannot be examined by eye, so we developed an automated way to score the quality of the fits between model and observational constraints.

### 5.1 Scoring the Results

For our runs tracking the destabilized population, our constraints on the SFD come from the projectile population derived from the IP SFD (Fig. 2a).
Recall that the crater populations used here do not provide the bombardment SFD at a single moment in time but are instead an integrated history; they record impacts from the moment a surface can retain craters to the present day. Accordingly, our model SFD can only be tested by constructing a model crater SFD in the same manner. We did this as follows.

First, we tabulated the impact rate of PKB bodies on Jupiter over 4.5 Gyr from the numerical simulations of Nesvorny et al. (2013, 2017, 2019a). It is shown in Fig. 8. The values have been normalized over the total number of Jupiter impacts, so they can be readily applied to our PKB population. They show that the impact rate drops by four orders of magnitude over the course of the simulation. Accordingly, we would expect the highest impact flux (and most of the largest impacts) to occur on Jupiter relatively soon after Neptune enters the PKB. As the impact flux decreases, the odds become increasingly small that a large impact will occur, though stochastic events can and do occur from time to time.

PLACE FIGURE 8 HERE

We found that 1.1% of the PKB population strikes Jupiter over time. Accordingly, if we start with 2000 Pluto-sized objects and ~10% \(D>100\) km bodies (Fig. 6), and if collisional evolution is minimal for these large bodies, we predict that ~20 and ~10% should have struck Jupiter, respectively. From here, we divided the Jupiter impact flux curve into time segments that matched our output from the Boulder code (i.e., we printed out results every Myr between the start of the simulations and 100 Myr of elapsed time, every 50 Myr between 100-1000 Myr, and every 100 Myr between 1000 and 4500 Myr). Starting from the present day, namely 4500 Myr in simulation time, we worked backwards in time and tabulated the average SFD for the destabilized population in a given time bin (e.g., the SFD between 4400 and 4500 Myr, 4300 and 4400 Myr, etc.). These were multiplied by the appropriate normalized Jupiter impact flux for that time. By adding the results together in a cumulative sense, they could be compared with the shape of the IP SFD at every time step. In this paper, our main concern is how well our integrated model SFD can reproduce the shape of the IP SFD. The model age of the Iapetus and Phoebe surfaces, as well as those of many other satellites, will be discussed in Paper II.

To determine the quality of the fit between our model SFD at a given timestep and the IP SFD, we applied the chi-squared relationship (e.g., Bottke et al. 2020):

\[\chi^{2}=\sum_{i=1}^{M}\frac{\left(N_{model}(>D_{i})-N_{obs}\left(>D_{i}\right)\right)^{2}}{N_{obs}\left(>D_{i}\right)} \tag{8}\]

Here \(D_{i}\), \(i=1,\ldots,M\), stands for the diameters of observed and model projectiles. To obtain reduced \(\chi^{2}\) values, we divide them by the value \(M\), yielding the value we define here as \(\chi^{2}_{norm}\). These values were calculated for all output timesteps over 70,400 runs.

For the Trojan runs, we followed a similar approach. First, we took the Trojan SFD from Fig. 2b and created a uniform distribution of 30 points across the SFD. This was necessary because our observed SFD is a combination of the known bodies and estimates of the SFD from the Subaru telescope with Suprime-Cam/Hyper Suprime-Cam. Second, we scaled the results of the PKB SFD from each Trojan trial by the fraction of bodies captured into the Trojan population (5 \(\times\) 10\({}^{-7}\)). Third, we used Eq. 8 to evaluate the \(\chi^{2}\) values at 4500 Myr, the end of the simulation.
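The scoring metric of Eq. (8) reduces to a few lines; the sketch below assumes the model and observed cumulative counts have already been evaluated at a common set of diameters.

```python
import numpy as np

def chi2_norm(n_model, n_obs):
    """Eq. (8) divided by the number of evaluation points M, i.e., chi^2_norm."""
    n_model = np.asarray(n_model, dtype=float)
    n_obs = np.asarray(n_obs, dtype=float)
    chi2 = np.sum((n_model - n_obs) ** 2 / n_obs)
    return chi2 / n_obs.size
```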
Evaluating the Trojan \(\chi^{2}\) at 4500 Myr corresponds to checking whether the model SFD matches the observed one in the present day.

The next question is how to interpret these values. We effectively have two independent sets of \(\chi^{2}\) values: one set for the destabilized population, where \(\chi^{2}\) was evaluated at every output timestep, and an individual value for the Trojans, where \(\chi^{2}\) was only evaluated at the end of the simulation. This leads to our first confounding issue, namely for a given set of input parameters, the destabilized population might have low \(\chi^{2}\) values at certain timesteps in its trial run, while its counterpart Trojan value might be high, or vice versa. Only a relatively small collection of input parameters yields low \(\chi^{2}\) values for both sets.

A second confounding issue is trying to determine what kind of match between our trial runs and the two SFDs in Fig. 2 constitutes a "minimal" good fit. Recall that our \(\chi^{2}\) method tells us how the model SFD, with equidistant values in log diameter space, conforms to the full shape of the observed SFD. This means the \(\chi^{2}\) value for the destabilized population and the Trojans has effectively been turned into a relative score that cannot be evaluated using statistical methods. Given this, it is not clear how to determine the relative scores that are barely good enough to meet a goodness of fit metric. One can define threshold values for these relative scores, but the success space would have an arbitrary size. Our compromise solution was to multiply both \(\chi^{2}\) values together, giving us a single value that could tell us the relative quality of a given fit for a set of input parameters. The best fit trial runs were then evaluated by eye to see how well our automated fitting procedure had worked against the test functions in Fig. 2.

### 5.2 Results

The goal of our scoring system was to find those parameters in Table 2 that did a good job of (i) reproducing the IP SFD over a long period of time during the early stages of the run, when bombardment of the outer planet satellites would have been at its highest, and (ii) matching the Trojan SFD in the present day. In many cases, we found low \(\chi^{2}\) scores for (i), representing an excellent fit, but more moderate \(\chi^{2}\) scores for (ii), representing a modest fit to the Trojans (i.e., they were still too wavy, with the model SFD not quite reaching the \(q\sim-2\) power law slope needed for \(5<D<100\) km bodies). Less frequently we found the converse: a good fit with (ii) but a poor fit with (i). By sorting the multiplied scores from our two sets of 70,400 trials, we generated a "Top 10" list of best fit cases. Their parameters and \(\chi^{2}\) scores are reported in Table 3, and the \(Q_{D}^{*}\) functions are plotted in Fig. 9. Before discussing the trends in this list, we will first examine in detail our best fit examples for both the destabilized population and the Jupiter Trojans.

PLACE TABLE 3 HERE

PLACE FIGURE 9 HERE

#### 5.2.1 Best fit for the destabilized population of the PKB

Our best fit run for the destabilized population is shown in Fig. 10. It is identified as #2 in Table 3, and its disruption law is one of the two blue curves in Fig. 9. Here we show the evolution of the model SFD as a series of four snapshots. We find that a small fraction of the \(D>100\) to 300 km bodies in the PKB undergo disruption over 4.5 Gyr.
The fragments from these collisional events undergo further comminution, eventually creating an SFD that takes on a wave-like shape. The SFD is seen to match constraints at the 40 Myr timestep in Fig. 10 (see also Table 1).

PLACE FIGURE 10 HERE

The explanation for this wavy shape has roots that go back to Dohnanyi (1969), who analytically explored how collisional evolution would proceed among bodies in a SFD if their disruption scaling law was independent of size. He showed that such SFDs would evolve to a steady state with a cumulative power law exponent \(q=-2.5\). Using this as a starting point, O'Brien and Greenberg (2003) used analytical and numerical methods to demonstrate that if the disruptions have some dependence on size, namely the strength per unit mass decreases with size, as shown in Figs. 7 and 9, \(q\) can become steeper than -2.5. Here \(q\) depends on the slope of the \(Q_{D}^{*}\) function for \(D<D_{\min}\), which means it is controlled by the \(\alpha\) parameter in Table 3. The best fit SFD in Fig. 10, where \(D_{\min}=20\) m, has a Dohnanyi slope for \(D<D_{\min}\) of \(q\sim-2.7\).

The transition between the strength and gravity regimes occurs at \(D_{\min}=20\) m in Fig. 9. Beyond this point, the self-gravity of objects plays an increasingly important role in making massive objects more difficult to disrupt. Bodies just beyond the \(D>20\) m boundary are still quite weak, though, and they are being struck by numerous smaller objects that make up the Dohnanyi slope SFD. The outcome is that \(D>20\) m bodies are readily obliterated, and the SFD evolves to a shallow slope. Our model results indicate the slope is generally near \(q\sim-1.2\). This goes on until we reach target sizes that are close to those that can be disrupted by \(D_{\rm min}=20\) m projectiles. For the \(Q_{D}^{*}\) function controlling Fig. 10, that point is reached at \(D\sim 1\) km. The deficit or "valley" created near \(D\sim 20\) m means there are few projectiles that can disrupt \(D\sim 1\) km bodies, so the latter look like a "bump". For \(D>1\) km, the slope becomes steep again because there are few objects between \(20\) m \(<D<1\) km that can disrupt \(D>1\) km bodies. We note that a similar collisional cascade occurs in the main belt, but asteroids have \(D_{\rm min}\sim\)200 m (Bottke et al. 2005a,b; 2015; 2020). This also leads to a wavy SFD, except the shallow slope of \(q\sim-1.2\) is found between \(0.2<D<2\) km, with a steeper slope starting at \(D>2\)-3 km.

If our starting SFD was a power law \(q=-2\) for \(D<100\) km bodies (i.e., Candidate B SFD), and sufficient collisional evolution were to take place, the relative excess near \(D\sim 1\) km would create yet another valley at the sizes disrupted by \(D\sim 1\) km bodies, namely those \(10<D<30\) km in diameter. In our situation, however, how the SFD reacts is contingent on the shape of the starting SFD. As shown in Fig. 10, disruption events slowly reduce the initial bump between \(50<D<300\) km, with some fragments repopulating the limited population of \(10<D<30\) km bodies. This effect helps keep this size range fairly steady in Fig. 10. The interesting consequence is that fragments from these larger disruption events slowly build up the population of objects between several km \(<D<10\) km in Fig. 10, making it steeper. Over time, they even cause the bump near \(D\sim 1\) km to shift over to \(D\sim 2\) km.
This occurs because there are relatively few objects between several tens of meters and 1 km that can disrupt these bodies. As we will discuss in a follow-up paper, this behavior may explain the crater SFDs found on the younger terrains of Ganymede (Schenk et al. 2004).

#### 5.2.2 Best fit for Jupiter's Trojans

Our best fit run for Jupiter's Trojans, identified as entry #1 in Table 3, is shown in Fig. 11 (see also Table 1). The first two snapshots, at 1 and 10 Myr, show the collisional evolution of the PKB SFD that will eventually be captured in Jupiter's L4 and L5 zones. The results have been multiplied by the capture efficiency of the Trojans from Nesvorny et al. (2013), namely \(5\times 10^{-7}\), so they can be compared to the observed population of the Trojans.

PLACE FIGURE 11 HERE

The initial shape of the PKB SFD is different from that of the Trojans. The timestep of 1 Myr in Fig. 11 shows the model SFD has too many objects that are \(D>30\) km and far too few that are \(D<30\) km, not a surprise because the objects have yet to be captured. Comminution in the PKB gradually makes the fit better by 10 Myr, but the shape is still more reminiscent of the IP SFD than the Trojans. Major changes take place after the giant planet instability; in this run, \(\Delta t_{0}=10\) Myr and \(\Delta t_{1}=10.5\) Myr (Table 3). The pre-Trojan population is placed onto giant planet crossing orbits, where they are then passed down by giant planet encounters to nearly circular orbits at ~5 au, the orbits where they can be captured as Jupiter's Trojans. In this interval, the destabilized population is near its maximum size. The intrinsic collision probabilities between small bodies increase by orders of magnitude for those moving down to Jupiter's semimajor axis, but this is offset by the rapid depletion of the population (Fig. 5). Eventually, the Trojans are captured, but they continue to be hit for some period of time by remnants of the destabilized population. As shown at the 40 Myr timestep of Fig. 11, the combination is enough to remove many \(D>30\) km bodies and steepen the SFD of \(D<30\) km bodies.

For the remaining time, Trojan collisional evolution is dominated by mutual impacts with other Trojans within their particular cloud (i.e., within L4 or L5). While the population is much smaller than the PKB, there are billions of years to work with and the collision probabilities are relatively high (Fig. 4). This allows the Trojan SFD to take on a nearly power law shape of \(q=-2\) between 5 < \(D<100\) km at 4.5 Gyr (Fig. 11). The reader may notice that our model SFD is slightly wavy between a few km < \(D<70\) km, with an inflection point near \(D\sim 20\) km. We find it intriguing that the Trojan SFD determined from WISE infrared observations shows a similar feature (see Figs. 12 and 14 of Grav et al. 2011). While this is a potentially positive sign that our model results are reasonable, we caution that this feature must be considered tentative until more Trojans near these sizes have confirmed diameters. Unfortunately, the deduced SFD from WISE observations could only be calculated down to 10 km in diameter. In order to find the true SFD between 5 < \(D<100\) km, the Trojan population may need to be observed by a next generation instrument (e.g., ground-based observations from the Vera Rubin telescope; space-based infrared observations from the NEO Surveyor mission).
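A quick way to check whether a model Trojan SFD has reached the \(q\sim-2\) behavior described above is to fit a cumulative power-law slope over the 5-100 km range after applying the capture efficiency. The sketch below does this with a simple log-log least-squares fit; the toy SFD and its normalization are arbitrary and are included only so the snippet runs on its own.

```python
import numpy as np

def cumulative_slope(D_km, N_cum, d_lo=5.0, d_hi=100.0):
    """Least-squares cumulative power-law slope q of N(>D) ~ D^q between
    d_lo and d_hi (km); used as a quick check against the q ~ -2 behavior
    of the observed 5-100 km Jupiter Trojans."""
    mask = (D_km >= d_lo) & (D_km <= d_hi) & (N_cum > 0)
    q, _ = np.polyfit(np.log10(D_km[mask]), np.log10(N_cum[mask]), 1)
    return q

# Toy example: an SFD that follows q = -2 exactly over 5-100 km.
D = np.logspace(np.log10(1.0), np.log10(200.0), 60)   # diameters in km
N = 1.0e4 * (D / 5.0) ** -2.0                          # arbitrary normalization
print(round(cumulative_slope(D, N), 2))                # -> -2.0
```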
#### 5.2.3 Parameter trends from the runs Overall, our Top 10 runs yield several trends telling us what is needed for our model SFDs to match constraints (see Table 3 and Fig. 9). First, we find that there is a strong preference in our \(Q_{D}^{*}\) functions for \(D_{\min}\) to be near 20-40 m, with the top choices given to 20 m. This is the same size range predicted by Morbidelli et al. (2021). This value sets the nature of the Dohnanyi-type SFD in the strength-scaling regime, which in turn generates the smallest craters found in the IP SFD (Fig. 2a). We will discuss additional constraints related to this portion of the SFD in Sec. 6.1. Second, we find we need just the right amount of collisional evolution in the destabilized population and the Jupiter Trojans to match constraints. This is more easily explained by placing our 70,400 \(\times\) 2 trials into several broad groups: * **Easy.** Easy collision evolution means the \(Q_{D}^{*}\) function has relatively low values (e.g., corresponding to the lower curves on Fig. 7) * **Difficult.** Difficult collision evolution is the opposite, with the \(Q^{*}_{D}\) function having relatively high values (i.e., corresponding to the higher values on Fig. 7) * **Short.** The model uses lower values of \(\Delta t_{0}\) and \(\Delta t_{1}\), which means the PKB only exists for a relatively short time prior to the giant planet instability. * **Long.** Here the \(\Delta t_{0}\) and/or \(\Delta t_{1}\) values are on the higher side of the values given in Table 2, which means the giant planet instability occurs relatively late. We found that runs corresponding to "Easy/Long" can occasionally reproduce the Trojan SFD, but at the cost of too much collisional evolution in the destabilized population's SFD. Specifically, these model SFDs often yield shapes similar to the last timestep shown in Fig. 10, but with the steeper slope for 1 < \(D\) < 10 km occurring at earlier times. In effect, the destabilized population's SFD is trying to follow the same evolutionary path as the Trojan SFD (Fig. 11), but its collision probabilities and impact velocities in Fig. 4 are too low to achieve that level of grinding. Conversely, all "Difficult" runs, regardless of "Short" or "Long", did not produce enough collisional evolution among the Trojans to get rid of their wavy shape; they still look too much like the IP SFD. As a thought experiment, consider scenarios where the giant planet instability occurs at \(\Delta t_{0}\) + \(\Delta t_{1}\) = 100 Myr or even 700 Myr. Those models could only match the IP SFD constraints if the PKB were to undergo very slow collisional evolution, which in turn means the use of \(Q^{*}_{D}\) functions that prevent disruption events in a massive PKB. The problem is that there would be no way for that \(Q^{*}_{D}\) function to reproduce the Trojan SFD. Once the Trojans were captured, a restrictive \(Q^{*}_{D}\) function would prevent subsequent disruptions, and that would leave behind a prominent wavy shaped SFD that would be readily observable today. The best fit results in Table 3 and Fig. 9 appear to provide just the right amount of collisional evolution, in that they prevent too much from occurring in the PKB at early times while allowing the Trojans to get just the right amount at later times to match constraints. These runs correspond to the "Easy/Short" tree of outcomes, though perhaps a better moniker would be "Easy/Moderate". 
All of the runs in our Top 10 had some combination of \(\Delta t_{0}\) + \(\Delta t_{1}\) that yielded ~20 or ~30 Myr. The only run that had \(\Delta t_{1}\) = 32.5 Myr also had \(\Delta t_{0}\) = 0 Myr. These results suggest that the PKB had to experience a moderate degree of collisional evolution before the giant planet instability took place.

The differences between our top two cases are an amalgam of the above discussion. Our top case nicely fits the Trojan SFD, but its match to the IP SFD is modestly too steep for 1 < D < 10 km objects. Conversely, our second-best case has a slightly worse fit near 10 km for the Trojan SFD (i.e., it is slightly wavier than our Trojan SFD), but it matches the IP SFD nicely between 1 < D < 10 km. While the second pick is slightly worse overall from a combined \(\chi^{2}\) perspective, we favor it over the first case as the better overall choice.

The \(Q^{*}_{D}\) functions in Fig. 9 are much lower than the asteroid disruption law from Benz and Asphaug (1999). They tend to favor \(\beta\) values between 1.0 and 1.2, though their \(Q^{*}_{DLAB}\) and \(\alpha\) values are more distributed across a wide range (Table 2). The higher values of \(Q^{*}_{DLAB}\) appear to correlate with larger values of \(D_{\min}\), between 30 and 40 m, while the lower \(Q^{*}_{DLAB}\) values prefer \(D_{\min}=20\) m. It seems likely that the power law slopes generated from these values for \(D<D_{\min}\) are all similar enough that they only modestly affect the \(\chi^{2}\) score. With that said, our top two best fit cases have the lowest tested \(Q^{*}_{DLAB}\) values and \(D_{\min}=20\) m.

For reference, we have also plotted the disruption law from Leinhardt and Stewart (2009), which was based on laboratory and numerical hydrocode impact experiments into weak ice, in Fig. 9. Using Eq. 6 from Benz and Asphaug (1999), this curve assumes that bulk density is 0.93 g cm\({}^{-3}\), \(Q_{0}=2\times 10^{5}\) erg g\({}^{-1}\), \(a=-0.4\), \(b=1.3\), \(B=35\) erg cm\({}^{3}\) g\({}^{-2}\), and impact velocity is 1 km s\({}^{-1}\). Their \(Q^{*}_{D}\) function lies close to the range of \(Q^{*}_{D}\) functions shown by our best fit functions in the gravity regime. The main difference is their value of \(D_{\min}\), which is 200 m rather than 20-40 m. As discussed above, this value would produce a steep slope in the model SFD for \(D<200\) m, and thus would not be able to reproduce the IP SFD. An extrapolation of their gravity regime curve to \(D\sim 20\) m would yield results close to our best fit results. The implication is that small bodies have substantially lower \(Q^{*}_{D}\) values than previously predicted, though they would still be consistent with the lower range of the laboratory shots into ice targets reported by Leinhardt and Stewart (2009). We postulate that the differences could be caused by the porous, possibly fluffy nature of small KBOs (e.g., Nesvorny et al. 2021). Impacts into such targets are challenging to simulate in a controlled laboratory setting or within numerical codes. Ultimately, ground truth may be needed to understand how small KBOs react to impacts.

## 6 Verification of Model Results

Our best fit modeling work makes a number of predictions that can be tested against observational data. For the current era, this includes recent impacts on Jupiter detected by ground-based observers, impacts on Saturn's rings found in Cassini images, and the debiased SFDs of Jupiter family comets, long period comets, and Centaurs. For more ancient times, our results can be checked against crater and basin SFDs on the giant planet satellites.
Finally, the lifetime of the PKB must be consistent with the existence of Kuiper belt and Jupiter Trojan binaries. For example, we find that the PKB likely survives 20-30 Myr, well within the \(<100\) Myr limit derived by Nesvorny et al. (2018) and Nesvorny and Vokrouhlicky (2019). Note that some verification issues are left for future papers. For example, interpreting the bombardment history of giant planet satellites and computing their model surface ages from the spatial densities of their craters is deferred to Paper II. The predicted impact histories of the bodies in the Pluto-Charon system, and that of the cold classical Kuiper belt object Arrokoth, will be the subject of Paper III. ### Testing the Small Body Impact Flux in the Outer Solar System The most comprehensive model of the small body impact flux on outer solar system worlds is found in Zahnle et al. (2003) (hereafter Z03). Using the crater SFDs found on several younger terrains (e.g., Europa, those found on Ganymede's Gilgamesh basin), and their preferred crater scaling laws, Z03 estimated the likely projectile SFD striking the Jupiter system. From there, they calibrated the impact rate for this population on Jupiter by examining the close encounter rate of comets near Jupiter over the last several hundreds of years. These close encounter estimates were updated in Dones et al. (2009), and we will use them later in this section. From there, Z03 used numerical results from comet evolution simulations to determine the ratio of impact rates on Saturn, Uranus, and Neptune to that of Jupiter. That allowed them to use an Opik-type model to scale their Jupiter impact rates to the satellites of the giant planets and calculate the impact velocities of the projectiles striking those worlds (Zahnle et al. 1998; 2001; see Table 1 in Z03). The nominal Z03 model for impacts on outer solar system worlds, based on the projectile SFD derived from Europa and Ganymede craters, was called "Case A". A second model called "Case B" was also developed based on the crater SFD found on Triton. In a follow-up study, however, Schenk and Zahnle (2007) argued that most of Triton's craters were not primary craters and therefore could not be used as part of a production function. The chronological system of Z03 was not constructed to include non-primary craters, so the findings of Schenk and Zahnle (2007), if true, would invalidate Case B. Regardless, Case B is still commonly used in the literature, partly because its impactor SFD provides a better match than Case A for \(D_{\rm crat}<\) 10-15 km craters on certain terrains but also because there is still an on-going debate about the nature of the impactor SFD hitting Triton and other outer solar system worlds. For example, some have argued that asteroids from the inner solar system find a way to become planetocentric impactors in the giant planet systems (e.g., Denk et al. 2010). This issue will be discussed in Sec. 6.3. Overall, we find that our model SFD is a good match to the predictions of Case A for projectiles that are approximately \(D>50\) m. This is no surprise because, like Z03, our model uses craters on outer solar system bodies as constraints (i.e., Z03 used craters from Europa/Ganymede, and we used them from Iapetus/Phoebe). Like Z03, we predict that a heliocentric impactor SFD from the destabilized population/scattered disk made most of the larger craters found across the outer solar system. 
A difference between Z03 and our model, however, comes from our predictions for \(D<50\) m projectiles. This size range was not investigated by Z03. We have shown that our model SFD develops an increasingly Dohnanyi-type SFD for \(D<20\)-50 m bodies, and this signature is needed to reproduce the IP SFD. This raises the question of whether this predicted signature is seen in any other data set. The rest of Sec. 6.1 will concentrate on that issue.

#### 6.1.1 Calculating the Present-Day Impact Flux on Jupiter

In order to calculate the present-day impact flux on Jupiter from our model results, we need to multiply the following components:

1. The model SFD from the destabilized population predicted for the present day,
2. The total fraction of PKB test bodies that have struck Jupiter between the start of the simulation and the present day, and
3. The normalized fractional rate of PKB test bodies hitting Jupiter per year in the present day.

Two of these components have already been calculated. For component 1, we can apply the model SFD for the PKB at 4500 Myr from our best fit run #2 in Table 3 (see Fig. 9). For component 2, numerical models of the giant planet instability indicate that 1.1% of all test bodies from the PKB will eventually hit Jupiter (Nesvorny et al. 2013; 2017). For the last component, we took advantage of the increased impact statistics produced by Nesvorny et al. (2019a), where they created numerous clones of test bodies in the destabilized population and tracked them forward in time to better determine the population of Centaurs that would be discovered in the present-day. Taking the Jupiter impacts from these runs, we collected all test body impacts that occurred between 4400 and 4500 Myr and calculated the median fraction that strike Jupiter per year. This worked out to be 4.2 \(\times\) 10\({}^{-13}\). Multiplying this value by 1.1%, we find that the fraction of PKB bodies that strike Jupiter in the present day is \(\sim\)4.7 \(\times\) 10\({}^{-15}\) yr\({}^{-1}\).

This value is more intuitive if we multiply it by our destabilized SFD. Using the last frame shown in Fig. 10, we find there are \(\sim\)10\({}^{12}\) \(D>1\) km bodies. That value would yield an impact interval on Jupiter for \(D>1\) km bodies of 1 / [(10\({}^{12}\)) (4.7 \(\times\) 10\({}^{-15}\) yr\({}^{-1}\))] or \(\sim\)210 years. Performing the same calculation for the entire model SFD, we get the Jupiter impact rates shown in Fig. 12.

PLACE FIGURE 12 HERE

As a check on our work, we have added two calibration points for larger impacts. The first comes from Z03, who argue there have been six encounters within 4 Jupiter radii over the last 350 years that were from \(D>1.0\) [+0.5, -0.3] km objects (their equation 12; see also Schenk and Zahnle 2007; Dones et al. 2009). They also assert that the distribution of perijove distances for JFCs making close encounters with Jupiter is uniform, such that the Jupiter impact rate is at least 4 \(\times\) 10\({}^{-3}\) yr\({}^{-1}\). This value yields an impact interval of \(\sim\)250 years, close to the value provided above. We show this as the blue dot on Fig. 12. The second comes from Dones et al. (2009), who argue from observations and orbital simulations that at least 9 comets have crossed Callisto's orbit between 1950 and 1999 (Callisto's semimajor axis is 26.3 Jupiter radii). The smallest of these objects is thought to be \(D>0.5\) km (Lamy et al. 2004). Assuming these values are complete, they yield an impact rate on Jupiter of 4 \(\times\) 10\({}^{-3}\) yr\({}^{-1}\) for \(D>0.5\) km comets. This value is shown in Fig. 12 as the red dot, with error bars from Dones et al. (2009).
Both of these values are consistent with our calculated impact rate on Jupiter, shown as the solid line in Fig. 12, and that of Case A from Z03. In the next set of tests, however, we consider impactors that are \(D<50\) m.

#### 6.1.2 The impact flux of superbolides striking Jupiter

Over the last decade, Jupiter has become a compelling target for amateur ground-based observers. Modern equipment has allowed them to detect sporadic bright flashes in Jupiter's upper atmosphere. The first detection of a flash was made in 2010, and new detections have been made roughly every two years since that time (Hueso et al. 2010; 2013; 2018). The interpretation is that Jupiter is being struck by small bodies, with the so-called superbolides disrupting within the atmosphere after penetrating to some depth. The energy released in these events is comparable to the Chelyabinsk airburst, which was produced by the disruption of a ~20 m projectile about 30 km above the city of Chelyabinsk, Russia in 2013 (Popova et al. 2013; Hueso et al. 2013; 2018). In effect, the events are smaller versions of what was seen when the tidally-disrupted fragments of comet Shoemaker-Levy 9 hit Jupiter in 1994 (e.g., Harrington et al. 2004). This work inspired professional ground-based observers to watch Jupiter with dedicated telescopes, which led to the detection of a large flash in October 2021 (Arimatsu et al. 2022). The kinetic energy of the impactor in this event was on the order of two megatons of TNT, an order of magnitude more than previously observed flashes on Jupiter.

Observers have used this information to estimate the impact frequency of superbolides on Jupiter, after accounting for various observational biases. Using strengths and bulk densities associated with Jupiter-family comets, the cumulative flux of these small flashes, thought to be created by objects between 5-20 m in diameter or larger, is between ~10-65 impacts per year (Hueso et al. 2018). This range is plotted on Fig. 12 as the green box. The cumulative frequency of megaton-class impacts on Jupiter is estimated to be ~1.3 [+3.1, -1.1] impacts per year (Arimatsu et al. 2022). Depending on the bolide's assumed bulk density (0.2 to 2.0 g cm\({}^{-3}\)), its diameter could be between ~16 and ~32 m. This data range is plotted as the magenta box in Fig. 12, with the black horizontal line representing the best estimate of Arimatsu et al. (2022). Both boxes are consistent with the predicted flux from our model. We believe they provide persuasive evidence that the projectile SFD evolves to a Dohnanyi-type SFD for \(D<20\) m, as predicted by both Morbidelli et al. (2021) and our model results. Additional tests are also possible by considering impacts on another large witness plate in the outer solar system, namely Saturn's rings.

#### 6.1.3 The impact flux of small impactors on Saturn's rings

Saturn's ring system is a sizable target for small bodies. Consider that Saturn's A, B, and C rings have a collective surface area of 4.1 \(\times\) 10\({}^{10}\) km\({}^{2}\), comparable to that of Saturn itself (i.e., 4.26 \(\times\) 10\({}^{10}\) km\({}^{2}\)). During the Cassini mission to Saturn, at a few select times, the spacecraft was able to obtain particular viewing geometries of the rings that made it possible to detect dusty clouds of icy debris (i.e., at Saturn's equinox; at high phase angles to the rings) (Tiscareno et al. 2013).
These clouds are believed to be ejecta produced by small impact events on icy bodies within the rings themselves. The size of the projectiles striking the rings was calculated by Tiscareno et al. (2013) from the brightness of the dusty clouds, with assumptions made about the nature of the ejecta particle SFDs within each cloud. Their estimates of the cumulative impact flux of impactors onto the A, B, and C rings are shown in Fig. 4 of Tiscareno et al. (2013). Using different ejecta particle SFDs for the debris clouds, with cumulative power law slopes of \(q=-3\) and -4, they estimated the range of impactor sizes striking the rings was between ~40 cm and potentially ~20 meters in diameter.

To plot these data on Fig. 12, we need to convert the flux on Saturn's rings to that on Jupiter. We did this as follows. The results of the numerical giant planet instability runs from Nesvorny et al. (2013; 2017) indicate that the velocity of bodies entering Saturn's Hill sphere, often called \(V_{\infty}\), is close to ~8 km/s, while the ratio of Saturn impacts to Jupiter impacts is \(\varphi=0.33\). The gravitational focusing factor for objects encountering Saturn at a given distance \(d\) is given by:

\[F=\left(1+\frac{V_{esc}^{2}\left(d\right)}{V_{\infty}^{2}}\right) \tag{9}\]

where the escape velocity \(V_{esc}(d)\) at a given distance from Saturn is given by:

\[V_{esc}(d)=\sqrt{\frac{2GM_{s}}{d}} \tag{10}\]

Here \(GM_{s}\) is the gravitational constant multiplied by Saturn's mass, or 37931206.234 km\({}^{3}\) s\({}^{-2}\). These variables allow us to calculate the gravitational focusing factors \(F\) for projectiles hitting Saturn's A, B, and C rings (i.e., \(F_{\rm A}\), \(F_{\rm B}\), \(F_{\rm C}\), or \(F_{\rm ring}\) for short) and that for Saturn itself (\(F_{\rm Saturn}\)). For the former, we calculated the mean escape velocity of objects at each ring distance from Saturn (i.e., the C ring ranges from 74,568-92,000 km, the B ring ranges from 92,000-117,580 km, and the A ring ranges from 122,170-136,775 km). For the latter, we assumed Saturn's radius is \(R_{\rm s}=58,232\) km, which yields Saturn's escape velocity of \(V_{esc}=35.5\) km/s. Using these values, we converted the cumulative impact flux values in Fig. 4 of Tiscareno et al. (2013) to a Jupiter impact flux by multiplying them by \(\pi\,R_{\rm s}^{2}\) (\(F_{\rm Saturn}\) / \(F_{\rm ring}\)) / \(\varphi\). Our results are shown in Fig. 12 as the blue points with error bars.

The converted impact flux values match up well with an extrapolation of our model SFD; we expect the model SFD to follow a Dohnanyi-type power law slope to very small sizes (O'Brien and Greenberg 2003; Bottke et al. 2015). It suggests that the impactor hypothesis offered in Tiscareno et al. (2013), namely that the impactors come from a stream of Saturn-orbiting material produced by the previous breakup of a meteoroid, may not be needed. Instead, the dusty ejecta clouds observed by Cassini are likely to be from heliocentric impactors from the scattered disk.

### 6.2 The Ancient Crater Size Distributions on Outer Solar System Worlds

Our best fit model SFD for the destabilized population gives us the ability to interpret the cratered surfaces of many different outer solar system worlds, as well as make predictions on the largest impactors that could have hit them early in their history. There is also the fact that the destabilized population's SFD changes with time (Fig. 10), such that younger craters made by impactors that are between a few km and 10 km have a different SFD than older craters (Fig. 10).
As discussed above, these topics are extensive enough that they will be the subject of a separate paper. Still, we believe it is useful to show comparisons between model crater SFDs based on our work and observed crater SFDs found on different worlds.

#### 6.2.1 Jupiter's Satellites

We will start with crater data sets from the Galilean satellites, namely Europa, Ganymede, and Callisto (Fig. 13). The crater SFDs for Europa and Ganymede are described in Z03 (see their Fig. 1), and the Callisto counts come from Schenk et al. (2004) (see their Fig. 18.13). We chose them so we could examine the shape of different portions of our model SFD. The crater SFDs have been scaled by arbitrary factors so they can all be viewed on the same plot without the loss of any information.

PLACE FIGURE 13 HERE

A standard way to estimate the age of a solar system surface is by counting the number of impact craters found upon it. The greater the crater spatial densities, the older the surface. If one also understands the crater production rate, it is possible to use crater spatial densities to estimate the absolute age of the surface. The number of craters counted on Europa and Ganymede per square million kilometers from Z03 corresponds to relatively low crater spatial densities. For Europa, the crater SFDs come from counts taken across its surface. When compared to the CASE A crater production model from Z03, Europa's surface is between 30 and 70 Myr old. For Ganymede, the crater SFD shown comes from craters superposed on the large basin Gilgamesh, and they have a CASE A age of 800 Myr (Z03). The crater SFD for Callisto represents global counts. No age is listed in Schenk et al. (2004), but Z03 suggest Callisto's surface age is very old.

Prior to discussing our crater production model, we note that all three of these worlds are 3-5 times larger than Ceres, the largest body where our preferred scaling law in Eq. (1) has been verified against crater data (Bottke et al. 2020). They also have gravitational accelerations that are more analogous to that of the Moon. These differences could mean that different parameters should be selected for Eq. (1). For this exercise, however, we will continue to use our standard parameters, leaving an exploration of this issue to Paper II.

Our model curves in Fig. 13, shown as black lines, represent our crater production function. It was calculated as follows. First, we took the model SFDs from our best fit case in Fig. 10 and used Eq. (1) to turn them into crater SFDs. Next, we multiplied the SFDs by the Jupiter impact rates from Fig. 8 appropriate for their output time. Taking the crater SFD from the youngest output timestep (i.e., the time nearest the present day) and the next youngest timestep, we determined the mean between the two. This represents the shape of the crater SFD produced on our surface in that time interval (\(\Delta t\)). Next, we scaled the mean crater SFD by multiplying it by \(\Delta t\). Moving backward in time, we added each new scaled model SFD to the previous ones until reaching the beginning of the simulation. These integrated model crater SFDs, once properly scaled, can be compared with the observed crater SFDs. The missing component needed to calculate the model surface age is the probability factor that scales Jupiter impacts to specific satellite impacts (e.g., Zahnle et al. 2003).
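The snippet below is a minimal sketch of the integration procedure just described, written as a forward sum from the time a surface begins retaining craters (equivalent to the backward sum in the text). Two caveats: `crater_diameter` is a deliberately simplified placeholder for the crater scaling law of Eq. (1), which is not reproduced in this section, and the satellite-specific probability factor (deferred to Paper II) is omitted, so the output is only a relative, shape-preserving crater production function.

```python
import numpy as np

def crater_diameter(d_proj_km, scale=15.0):
    """Placeholder for the crater scaling law of Eq. (1): a constant
    projectile-to-crater diameter ratio, used purely for illustration."""
    return scale * d_proj_km

def integrated_crater_sfd(times_myr, proj_sfds, jup_rate, t_surface_myr):
    """Build a time-integrated (relative) crater production SFD:
    convert each projectile SFD snapshot to craters, take the mean of
    adjacent snapshots, weight by the normalized Jupiter impact rate and the
    interval length, and sum from t_surface_myr to the present.
    times_myr  : output times in Myr, increasing
    proj_sfds  : list of (D_km, N) pairs on a common diameter grid
    jup_rate   : normalized Jupiter impact rate per Myr at each output time
    t_surface_myr : time at which the surface starts retaining craters"""
    D_km, _ = proj_sfds[0]
    D_crat = crater_diameter(D_km)
    total = np.zeros_like(np.asarray(D_km, dtype=float))
    for i in range(len(times_myr) - 1):
        t0, t1 = times_myr[i], times_myr[i + 1]
        if t1 <= t_surface_myr:
            continue                          # surface not yet retaining craters
        dt = t1 - max(t0, t_surface_myr)      # partial first interval handled
        mean_N = 0.5 * (np.asarray(proj_sfds[i][1]) + np.asarray(proj_sfds[i + 1][1]))
        mean_rate = 0.5 * (jup_rate[i] + jup_rate[i + 1])
        total += mean_N * mean_rate * dt
    return D_crat, total
```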
The probability factors that scale Jupiter impacts to individual satellites depend on the dynamical model used to create the impactors. We will re-calculate them in Paper II in order to derive model surface ages for the giant planet satellites.

Using chi-squared methods to find the best fit between our crater production model and the observed crater SFDs, we find that the shapes of our model crater SFD are similar to the observed crater SFDs in Fig. 13. The lightly cratered surfaces of Europa and Gilgamesh basin on Ganymede are a better match to our results from relatively recent times, while Callisto's more heavily cratered surface is consistent with our results from ancient times. Note that our production SFD does veer away from the largest basins on Callisto. This result is a consequence of Eq. (1)'s tendency to strongly increase projectile size for large impacts. This quality was also seen on Vesta and Ceres by Bottke et al. (2020), and it may be in need of some future reassessment. Overall, however, the major components of the wavy-shaped SFD predicted by our model are confirmed.

#### 6.2.2 Saturn's Satellites

Our crater production model was also compared with ancient craters on Saturn's satellites in Fig. 14. For the satellites Mimas, Tethys, Dione, Rhea, and Iapetus, we used the crater data from Kirchoff and Schenk (2010). The terrains chosen by those authors for assessment have high spatial densities of craters; they represent some of the oldest surfaces on these worlds.

PLACE FIGURE 14 HERE

Here we plot craters and basins from a defined surface as filled circles, while the observed global distribution of \(D_{\rm crat}>100\) km craters and basins are shown as filled stars. Note that the circles and stars occasionally include the same data but may be offset from one another because they were scaled by different surface areas. Fig. 14 also includes craters from Phoebe and Hyperion that were reported in Porco et al. (2005) and Thomas et al. (2007). Their counts were placed into root-2 size bins, which explains the equal spacing between the filled circles. To make comparisons between these crater SFDs and our model SFD, we again used the scaling law from Eq. (1) with the parameters provided in Sec. 3.1.3. As in Fig. 13, we used chi-squared methods to find the best fit between our crater production model and the observed crater SFDs.

Overall, our model SFDs show general alignment with the crater SFDs, but there are some mismatches found for \(D_{\rm crat}<15\) km craters. A possible reason for this is that many of these terrains may be dominated by secondary and/or sesquinary crater populations (e.g., Zahnle et al. 2003). For reference, secondary craters are those formed from the impact of sub-orbital ejecta from a single impact, while sesquinary craters are those formed from the impact of ejecta that initially escaped the target body, orbited the central body in the circumplanetary system, and then re-impacted the target body or another body in the circumplanetary system. Secondary crater fields on the Moon, Mars, and elsewhere often have steep power law slopes (e.g., q = -3 or more). Given that our crater production model has a shallow slope of q = -1.2 for sub-10 km craters, it is relatively easy for secondaries and sesquinaries to overwhelm primary crater populations locally or even regionally (McEwen and Bierhaus 2006; Bierhaus et al. 2018). Support for the apparent dominance of secondaries and sesquinaries on many Saturn satellite terrains comes from the elongated crater populations identified on Tethys and Dione by Ferguson et al. (2020; 2022a,b).
Of the craters they measured, most of which were \(D_{\rm crat}<15\) km, they found most had East-West orientations. Such direction distributions are much more easily produced by sesquinaries than heliocentric impactors.

#### 6.2.3 Uranus's Satellites

Uranus also has a number of mid-sized satellites with extensively cratered surfaces: Miranda, Ariel, Umbriel, Titania, and Oberon. They have sizes ranging in diameter from 472 km for Miranda to 1578 km for Titania. Their crater histories were recently reanalyzed using modern methods by Kirchoff et al. (2022). In their paper, Miranda was divided into four study terrains, Ariel and Titania were each divided into two study terrains, and Umbriel and Oberon only had one study terrain apiece. Here we plot the crater SFDs from each world that were defined either as "cratered" terrains or, for Miranda, "cratered dense (or cratered D)" terrains by Kirchoff et al. (2022). Using chronology models from Z03, Kirchoff et al. (2022) showed that the crater spatial densities imply that the surfaces of Miranda, Umbriel, Titania, and Oberon are ancient, while those for Ariel are younger. They are shown in Fig. 15.

PLACE FIGURE 15 HERE

Using chi-squared methods to find a best fit between model and observed crater SFDs, we find the two are largely consistent with one another. The largest observed mismatch is with Ariel craters for \(D_{\rm crat}<15\) km. As suggested with Saturn's mid-sized moons, this could indicate the advent of secondaries or sesquinary craters, as seen on many of Saturn's satellites (Fig. 14). Overall, the crater SFDs on the Uranian satellites provide additional evidence that our model SFD in Fig. 10 is plausible.

#### 6.2.4 Pluto, Charon, and Arrokoth

One of the ongoing mysteries presented in Z03 was the unknown impactor SFD striking various giant planet satellites. Given the differences between their CASE A and CASE B crater production SFDs, the uncertainties about the contribution of secondaries and sesquinaries, etc., there was no easy way to rule out certain models and scenarios. Some clarity was brought to this issue by the New Horizons mission, which imaged the cratered surfaces of Pluto, Charon, the smaller moons Nix and Hydra, and the cold classical Kuiper belt object Arrokoth (Robbins et al. 2017; Singer et al. 2019; Spencer et al. 2020; Robbins and Singer 2021). These observations provided new constraints on the impactor SFD striking KBOs, which would be a combination of the PKB, the Kuiper belt, and the destabilized population/scattered disk (e.g., Morbidelli et al. 2021). From here, we can glean new insights into the general shape of the destabilized population's SFD striking the giant planets.

The crater SFD found on Charon by Singer et al. (2019) showed a very shallow power law slope for craters \(D_{\rm crat}<10\)-15 km, and a steeper slope for craters larger than this size. Robbins et al. (2018) found similar results, and the different crater counts were reconciled in Robbins and Singer (2021) (see also Ali-Dib 2022). The new work identified that smaller craters follow a cumulative slope \(q=-0.7\pm 0.2\), while larger craters had a slope of \(-2.8\pm 0.6\). The former is shallower than any of the crater SFDs found on the giant planet satellites so far. For example, the IP SFD, which has data in this size range, shows \(q\sim-1.2\) for \(D_{\rm crat}<10\) km craters. The authors also argue that this shallow shape was not produced by crater erasure processes.
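For readers who want to reproduce this kind of two-branch description from a raw crater count, the sketch below estimates cumulative slopes above and below a break diameter with a simple log-log least-squares fit. It is a generic illustration only, with an assumed break at 12 km; it is not the fitting machinery used by Robbins and Singer (2021) or by us.

```python
import numpy as np

def branch_slopes(crater_diams_km, d_break=12.0):
    """Cumulative power-law slopes of a crater population above and below an
    assumed break diameter (log-log least squares on cumulative counts)."""
    d = np.sort(np.asarray(crater_diams_km, dtype=float))
    n_cum = np.arange(len(d), 0, -1)          # N(>=D) at each crater diameter
    slopes = {}
    for name, mask in (("below_break", d < d_break), ("above_break", d >= d_break)):
        if mask.sum() > 2:
            q, _ = np.polyfit(np.log10(d[mask]), np.log10(n_cum[mask]), 1)
            slopes[name] = q
    return slopes
```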
A more complex picture emerges if we incorporate into the story the limited number of craters found on Arrokoth (Morbidelli et al. 2021). Given that Arrokoth is a cold classical Kuiper belt object, the populations that can strike it over time are more limited. They can come from the cold classicals themselves, portions of the excited PKB/Kuiper belt, and portions of the destabilized population/scattered disk. Arrokoth has several craters that are sub-km, no craters with 1 < \(D_{\rm{crat}}<7\) km, and one crater that is \(D_{\rm{crat}}>7\) km. Using a Monte Carlo code, Morbidelli et al. (2021) showed that the cumulative power law slope of the projectile population making these craters would likely have -1.5 < \(q<-1.2\) (see also Spencer et al. 2020, who found \(q\sim-1.3\)), and that values of \(q=-0.7\pm 0.2\) were statistically improbable (i.e., see their Fig. 3). The former values are consistent with the results of the model SFD created in Fig. 10.

In our collisional model, we have only seen one example of a \(q=-0.7\pm 0.2\) slope in the relevant size range, and that was for a limited time. When Neptune migrates across the PKB, it dynamically excites many KBOs to higher eccentricities and larger impact velocities with one another, as seen in Figs. 1 and 5. For a short interval, while the PKB and destabilized populations are still massive, the higher impact velocities cause the Dohnanyi-type SFD for \(D\) < \(D_{\rm{min}}\) to shift to larger values. This change allows impactors in the Dohnanyi-type SFD to disrupt modestly larger bodies, which in turn decreases the slope of the SFD between several tens of meters < \(D<1\) km to \(q\sim-0.9\) for a limited time. The \(q\sim-0.9\) slope does not last very long, with the system eventually re-equilibrating to \(q\sim-1.2\), but while it lasts, its value is within the error bars of \(q=-0.7\pm 0.2\) (Robbins and Singer 2021). We find this effect to be interesting, but it seems unlikely to us that such early impacts, while intense, could overshadow the impact flux affecting Charon over the subsequent 4.5 Gyr of its history. Accordingly, we are presently unsure how to explain the slope differences between Robbins and Singer (2021) and Morbidelli et al. (2021).

Morbidelli et al. (2021) predicted that the Kuiper belt should have power law slopes of q = -3 for D < 20 m, q = -1.0 to -1.2 for 20 m < D < 2 km, and q = -3 for 2 < D < 100 km. Our preferred slopes for the destabilized population at early times are largely consistent with these predictions: q = -2.7 for D < 20 m, q = -1.2 for 30 m < D < 1 km, q = -2.5 for 1 < D < 10 km, and q = -1.5 for 10 < D < 100 km. The main mismatch comes from their prediction of the Trojan-like SFD for D > 2 km; our model results suggest this size range has a wavy shape driven by collisional evolution.

The "odd constraint out" is Charon's observed slope, which is anomalous compared to the other outer solar system bodies studied to date. Until we visit additional KBOs, or large Centaurs whose crater records are still intact from their time in the Kuiper belt/scattered disk, we have no easy way to gauge the importance of this shape difference. With that said, the Kuiper belt SFD has considerable observational uncertainties, particularly at smaller sizes, as shown in Parker (2021) (e.g., their Fig. 4). Perhaps a solution can be found by treating the collisional evolution of the PKB, destabilized population, Kuiper belt, and cold classical Kuiper belt as distinct populations in Boulder, each with its own SFD.
We will explore whether this model can explain the cratered records of Pluto, Charon, Arrokoth, Nix, and Hydra in Paper III.

### 6.3 Comparison of Model SFDs for the Main Belt and Destabilized Population

One of the most unusual scenarios in the literature is that numerous craters on Saturn's satellites were produced by ejected main belt asteroids (e.g., Denk et al. 2010). This idea is based on the observation that crater SFDs on Saturn's satellites have a wavy shape broadly similar to crater SFDs found on the Moon, and so both may have been made by the same impactors. The upside of this hypothesis is that if it were true, lunar crater chronology could be used to date surfaces throughout the Saturn system. The downside is that ejected main belt asteroids need to find dynamical pathways that would allow them to be captured into planetocentric orbit around Saturn with low semimajor axis, eccentricity, and inclination values. This would allow them to hit the inner Saturn satellites at low impact velocities and create a crater SFD similar in shape to observations.

A problem with this scenario is that nearly all asteroids reaching the outer solar system do so after being ejected out of the solar system via an encounter with Jupiter (e.g., Bottke et al. 2002; Granvik et al. 2016; 2018). The only way to be captured into the Saturn system would be for the asteroids to undergo large numbers of planetary encounters, such that they achieve nearly circular, low inclination orbits near Saturn. Such behavior has yet to be noted by any numerical simulations discussed in the literature. Next, some of these bodies would need to wander into the L1 or L2 regions of Saturn, which would allow them to achieve a temporary Saturn orbit capture. The new "minimoons" would reside within Saturn's Hill Sphere for a limited time until they struck a body or wandered back out of the L1 or L2 regions. At Jupiter, this mechanism explains how a small fraction of Jupiter family comets become temporarily captured within its Hill Sphere (e.g., Kary and Dones 1996). But it is a relatively rare dynamical pathway for objects, even those on nearly circular orbits. Finally, even if a few bodies managed to beat the odds and follow all of these steps, they would need to lower their planetocentric semimajor axis, eccentricity, and inclination values enough to achieve low impact velocities with the inner satellites. At best, satellite encounters could achieve this behavior for a tiny fraction of the minimoon population. Taken together, we can readily reject the asteroid impactor hypothesis on dynamical grounds.

We argue that a more satisfying solution to this issue is that collisional evolution in the main belt and PKB/destabilized populations yield wavy SFDs that have modestly similar shapes. While the \(Q_{D}^{*}\) functions for asteroids and KBOs in Fig. 9 differ from one another, both produce steep Dohnanyi-type SFDs for objects with \(D<D_{\rm min}\), and this leads to collisional grinding among larger objects up to the sizes that can be disrupted by a projectile of size \(D_{\rm min}\). In the main belt, this means a shallow SFD of \(q\sim-1.2\) between 0.2 and 2-3 km (Bottke et al. 2020), while in the destabilized population, we get a shallow slope of \(q\sim-1.2\) between \(\sim\)0.03 and 1 km (Fig. 10; see also Morbidelli et al. 2021). A comparison between the main belt SFD from Bottke et al. (2020) and the destabilized population SFD from Fig. 10 (at the 40 Myr timestep) is shown in Fig. 16.
Both SFDs are wavy, have Dohnanyi SFDs at small sizes, and have vaguely similar shapes. The local maxima, however, are different: the main belt has a bump at \(D\sim 2-3\) km while the destabilized population has one at \(D\sim 1\) km. This variance is caused by asteroids and KBOs following different \(Q_{D}^{*}\) functions, with their \(D_{\rm min}\) values being 200 m and 20 m, respectively. The implication is that there is no longer any basis to use lunar crater chronologies in the outer solar system. Instead, new chronology models will need to be developed using the model SFD produced by the destabilized population.

PLACE FIGURE 16 HERE

### 6.4 Neptune Trojans

A potential test of our model SFD comes from the Neptune Trojans that populate the L4 and L5 orbital regions of Neptune. In our model, these worlds were captured from the destabilized population early after the giant planet instability (Nesvorny and Vokrouhlicky 2009; Parker 2015; Gomes and Nesvorny 2016). According to Lin et al. (2021), 13 of the 21 known Neptune Trojans are stable for more than 1 Gyr (see their Table 1). They suggest that the Neptune L4 population has 149 [+95, -81] objects with absolute magnitude \(H<8\), roughly equivalent to the predictions of 162 \(\pm\) 73, 250, and an upper limit of 300 by Lin et al. (2019), Sheppard and Trujillo (2010), and Gladman et al. (2012), respectively. For reference, the Lin et al. (2021) value is 2.4 times that of the Jupiter Trojans over the same magnitude range. At face value, the larger population size would suggest a similarly-shaped SFD to that seen in the Jupiter Trojans (Fig. 2b).

Insights gleaned from our collision probability simulations in Fig. 4, however, suggest that the intrinsic collision probability of Neptune Trojans striking one another is likely to be orders of magnitude lower than that of the Jupiter Trojans. Unless impacts from the destabilized population can make up the difference, which we suspect is unlikely, our prediction is that the Neptune Trojan SFD should have a shape that is comparable to an early version of the destabilized population's SFD in Fig. 10.

Estimates of the Neptune Trojan SFD from Sheppard and Trujillo (2010) show that the largest observed objects with \(H<8.5\) (\(D>90\pm 20\) km, depending on the choice of albedo; see Eq. (1)) follow a cumulative power law slope \(q=-5\pm 1\), similar to that of the Kuiper belt (e.g., Nesvorny and Vokrouhlicky 2016). The SFD then bends to a slope that is shallower than \(q\sim-1.5\) for objects with an absolute magnitude \(H>8.5\). This value is inconsistent with the Jupiter Trojan SFD, whose 5 \(<D<100\) km bodies follow \(q\sim-2\) (Fig. 2b), but it is a match with our estimate of the initial PKB (Fig. 3) and the shape of the destabilized population's SFD (Fig. 10). It also may explain why no Neptune Trojans have been detected between 8.1 < \(H_{r}\) < 8.6 (Lin et al. 2021).

Overall, the estimated shape of the Neptune Trojan SFD from the twenty or so known objects supports the hypothesis that the Neptune Trojans are less collisionally evolved than the Jupiter Trojans. While the degree of collisional evolution they have experienced needs to be modeled, it is possible that the largest Neptune Trojans still retain craters from their time in the PKB. Accordingly, these bodies may make compelling targets for spacecraft missions that fly past 30 au or those that intend to enter into orbit with Neptune.
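The absolute-magnitude cutoffs quoted here and in Sec. 6.5.3 can be translated into diameters with the standard small-body relation \(D\,{\rm(km)}=1329\,p_{V}^{-1/2}\,10^{-H/5}\). The short check below uses that standard conversion (the paper's own conversion may differ in detail) to show that \(H<8.5\) with albedos of roughly 6-15% brackets the quoted \(D\sim 90\pm 20\) km, and that \(H<13.7\) with a 6% albedo corresponds to \(D\sim 10\) km.

```python
import numpy as np

def h_to_diameter_km(H, albedo):
    """Standard absolute-magnitude-to-diameter conversion:
    D = 1329 km / sqrt(p_V) * 10^(-H/5)."""
    return 1329.0 / np.sqrt(albedo) * 10.0 ** (-H / 5.0)

# H = 8.5 at 6-15% albedo -> ~108 to ~68 km; H = 13.7 at 6% albedo -> ~10 km.
for H, pv in ((8.5, 0.06), (8.5, 0.15), (13.7, 0.06)):
    print(f"H = {H}, p_V = {pv}: D = {h_to_diameter_km(H, pv):.1f} km")
```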
### 6.5 Comet SFDs

#### 6.5.1 Jupiter family comet size distribution

The SFD of the Jupiter family comets (JFCs) has been investigated by many different groups over the last decade (e.g., a compilation of several different observational sets is shown in Lisse et al. 2020). The cumulative power law slopes of these comet datasets are between -2.5 \(<q<\) -1.6 for 4 \(<D<\) 10 km bodies. Infrared observations of 98 distant comets using Spitzer over the same size range determined an SFD with a slope of \(q=-1.9\) (Fernandez et al. 2013). It should be noted that nearly all of the comets were observed at more than 3 au from the Sun, yet a substantial fraction still showed activity. In Fig. 17, we have plotted a comparison between the Fernandez et al. (2013) data and a scaled version of our model SFD, using our results from the last timestep in Fig. 10.

PLACE FIGURE 17 HERE

The match is good between the two sets for 5 km \(<D<\) 20 km, but the SFDs diverge for smaller objects. Given that our model SFD has the same shape as fresh crater SFDs on Europa and Ganymede, we suspect this difference reflects observational incompleteness for \(D<5\) km bodies.

As a further check on this discrepancy, we have also placed on Fig. 17 an estimate of the debiased JFC SFD found using infrared observations from the Wide-field Infrared Survey Explorer (WISE) mission (Bauer et al. 2017). Using a survey simulator, they created suites of synthetic comets to determine the observational selection effects for these bodies being detected by WISE. This allowed them to debias their detected number of 108 short period comets. Here they found that the power law slope of the net SFD was steeper over a larger range, with \(q=-2.3\pm 0.2\) for 1 < \(D\) < 20 km. We plotted a scaled version of this SFD in Fig. 17 so we could compare the shape of their debiased SFD to our model SFD. Note that we do not attempt to match their predicted number of JFCs here because we would need to link our SFD with both test body dynamics and a comet activity model. We consider this work interesting but beyond the scope of this paper. Instead, we normalized our SFD to their population of \(D\) > 5 km bodies. Overall, our model SFD and the debiased SFD from Bauer et al. (2017) are very close to one another for most of the bins in the JFC dataset. The results are mutually supportive, in that they suggest both model and data are reasonable. For \(D\) < 2 km bodies, however, we find a mismatch between model and observations. The cause is either observational incompleteness or the onset of a physical loss mechanism (e.g., Jewitt 2022) that presumably is affecting JFCs that come close to the Sun.

#### 6.5.2 Long period comet size distribution

There have also been attempts to debias the long period comet (LPC) population. For example, using a survey simulator, Boe et al. (2019) debiased 150 LPCs detected by the Pan-STARRS1 near-Earth object survey. Their resultant SFD was found to be consistent with another LPC survey from Meech et al. (2004), who used the Wide Field Camera on the Hubble Space Telescope and the Low Resolution Imaging Spectrograph on the Keck II telescope to detect 21 LPC and JFC comet nuclei. These comets were observed at heliocentric distances between 20-30 au, far enough away that nuclei identification was relatively easy compared to that of comets observed closer to the Sun with sizable comae. Both SFDs are plotted in Fig. 18.
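The comparisons in this subsection are shape comparisons only: the model SFD is rescaled so that its cumulative number matches the observed one at a reference diameter (e.g., \(D>5\) km for the JFCs, \(D=2.5\) km for the LPCs below). A minimal sketch of that normalization step is given here; the log-log interpolation is an implementation choice, and absolute numbers carry no meaning after rescaling.

```python
import numpy as np

def scale_model_to_obs(D_model_km, N_model, D_ref_km, N_obs_at_ref):
    """Rescale a model cumulative SFD so that N_model(>D_ref) matches an
    observed cumulative count at the reference diameter.  Diameters are
    assumed sorted ascending; interpolation is done in log-log space."""
    logN_ref = np.interp(np.log10(D_ref_km),
                         np.log10(D_model_km), np.log10(N_model))
    return N_model * (N_obs_at_ref / 10.0 ** logN_ref)
```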
PLACE FIGURE 18 HERE

Dynamical modeling work indicates the LPCs were ejected into the Oort cloud during the giant planet instability via giant planet encounters, with their perihelion values raised by passing stars and galactic tides (e.g., Vokrouhlicky et al. 2019). Accordingly, our expectation is that these comets should have an SFD similar to the destabilized population shortly after the giant planet instability. Here we plotted our model SFD from the second timestep in Fig. 10 (10 Myr timestep), which is modestly shallower than the third timestep (40 Myr). As before, we normalized our SFD to the cumulative number of \(D>2.5\) km bodies in Boe et al. (2019) to compare the shape of the SFDs.

Overall, we found a good match between our model SFD, the Boe et al. (2019) results, and the Meech et al. (2004) results for \(D\) > 2 km. For smaller sizes, the debiased SFD quickly becomes shallow compared to the model SFD. As stated in Boe et al. (2019), we do not know whether the paucity of small LPCs means the objects are harder to detect or that they have physically disrupted. Given that the mismatch between model and debiased data approaches an order of magnitude for \(D\) > 0.5 km bodies, and that Boe et al. (2021) presumably understands the observational biases associated with Pan-STARRS1 reasonably well, we suspect that the likely culprit is comet splitting or disruption events among LPCs that approach the Sun (e.g., Jewitt 2022).

We also plotted the debiased estimate of LPCs detected by WISE in Bauer et al. (2017). WISE detected 56 LPCs, with their debiased SFD shown in Fig. 18. Here their slope is much shallower than that of the JFCs, with \(q=-1.0\pm 0.1\) for \(1<D<20\) km. We have also normalized our model SFD so it intersects the WISE SFD at \(D=4.5\) km. The match between our model and the debiased SFD of the WISE LPCs is modest. We have yet to see a model SFD in our runs that could match \(q=-1.0\pm 0.1\) over \(1<D<20\) km, so we suspect several possibilities: the detection efficiency of LPCs calculated for WISE is inaccurate, LPC comae are not being subtracted out as well as they could be, and/or there is some physical process that is being missed. Note that many of the WISE LPCs were detected inside 3-5 au, where activity is more prominent. LPCs undergo relatively frequent splitting events as they approach the Sun, and this can affect both their activity, with new regions available for sublimating ices, and the size of the nuclei.

#### 6.5.3 Centaurs

Another way to test the dynamical model used here, as well as the destabilized population's SFD, is to examine the observed population of Centaurs. These bodies come from the destabilized population/scattered disk and reside on giant planet crossing orbits. Centaurs are formally defined as objects with a semimajor axis smaller than that of Neptune, a Tisserand parameter with Jupiter larger than 3.05, and a perihelion distance \(q>7.35\) au (Gladman et al. 2008). Objects with \(q<7.35\) au are defined as JFCs. Note that the dynamical boundaries separating Centaurs from JFCs are defined for convenience, in much the same way a river is sometimes named for its upper and lower courses. Nesvorny et al. (2019a) tested their dynamical model for the giant planet instability and the evolution of the PKB/destabilized population by predicting the present-day Centaur population.
They compared their results to those of the Outer Solar System Origins Survey (OSSOS), a wide-field imaging survey that has detected nearly a thousand outer Solar System objects. Using an OSSOS survey simulator, they computed how many Centaurs OSSOS should have detected with semimajor axis \(a<30\) au, perihelion distance \(q>7.5\) au and diameter \(D>10\) km, defined as absolute magnitude \(H<13.7\) for a 6% albedo. They found it should have picked out \(11\pm 4\) Centaurs with \(H<13.7\), within the error bars of the 15 Centaurs actually detected by OSSOS. A possible problem with the Nesvorny et al. (2019a) estimate, however, is that it used a Trojan-like SFD for \(D<100\) km bodies in the PKB (Fig. 2). This means their model Centaurs with \(D<50\) km follow a cumulative power law slope of \(q=-2.1\) down to small sizes. This value is more aggressive than our model results from Fig. 10, which indicate that they should instead adopt the SFD from the last time step in Fig. 10. Examining both SFDs side by side, we find our model SFD is lower by a factor of 1.6 than that of Nesvorny et al. (2019a) for \(D>10\) km bodies, but then catches up to it again for \(D>4\) km bodies. Accordingly, the \(11\pm 4\) detections above should instead probably be \(7\pm 3\), which would put the model off by a factor of 2. With that said, there is variability among Centaur albedos; many have values larger than 10% (Johnston 2018), while Spitzer observations indicate the observed Centaurs have mean albedos similar to comets, i.e., a few percent (Lisse et al. 2020). The use of lower or higher mean albedos in this model would shrink this gap between model and observations but would not eliminate it. Given our match with the other constraints (see previous sections), we postulate that this difference could be from small number statistics or stochastic processes (i.e., the Centaur population should fluctuate over short time intervals, though it is secularly decreasing over longer timescales). Regardless, the Nesvorny et al. (2019a) analysis should be repeated as additional Centaurs are discovered by OSSOS and other surveys. ## 7 Implications Our collisional evolution model results for the PKB and destabilized population have implications for various solar system issues. Given the length of this paper, we decided it would be better to summarize some of our implications in this section and place a longer discussion of these topics into Appendix A. **Hydrated Materials in Interplanetary Dust Particles and Comets.** The conventional wisdom is that most anhydrous IDPs reaching Earth are derived from comets or KBOs, while hydrous clays among IDPs are likely from collisions taking place among hydrated asteroids in the main belt. Using the cosmic ray exposure ages of various IDPs collected in the stratosphere, Keller and Flynn (2022) report that the abundance of hydrous IDPs reaching Earth is somewhere between 20-35%. This result is arguably unexpected, in that Nesvorny et al. (2010) show that the abundance of IDPs reaching the Earth from the main belt is \(<\) 15%. This mismatch suggests the abundance of hydrous IDPs coming from Jupiter family comets and KBOs is higher than expected. One way to explain this difference is to extrapolate from our model results. We show that numerous 100-300 km bodies are disrupted in the PKB, destabilized population, and scattered disk, with many more potentially shattered and scrambled. 
We hypothesize that these events may be capable of dredging up and/or ejecting aqueously altered materials from the deep interiors of KBOs, provided such materials exist. The putative interior water creating the aqueously altered materials would come from the decay of radioactive nuclides like \({}^{26}\)Al. Alternatively, it could be that impacts produce sufficient heating on the surface of KBOs that they create a layer of near-surface hydrous clays, some of which get liberated as hydrous IDPs. One way to test these ideas may be to examine objects like the Jupiter Trojan Eurybates, the largest remnant of the Eurybates family and a target of the Lucy mission. It may show evidence for exposed hydrated materials dredged up from its parent body's interior or possibly hydrated materials created by impact heating (Sec. A.1). **Most Observed Comets are Fragments of Large KBOs.** The initial SFD of the PKB indicates that \(D\) > 10 km objects are largely primordial, while our collisional model shows that \(D\) < 10 km objects, as they become smaller, are increasingly fragments of large KBO collisions. Our predictions are in line with a considerable body of previous work. We postulate this may explain why so many comets have shapes reminiscent of near-Earth asteroids, which are themselves collisional fragments (Sec. A.2). **The Sizes of Interstellar Comets.** The interstellar comets 'Oumuamua and Borisov have diameters of 0.14-0.22 km and 0.8-1.0 km, respectively ('Oumuamua ISSI Team 2019; Jewitt and Luu 2019; Hui et al. 2020). While two objects by themselves are the definition of statistics of small numbers, this factor of five difference in their sizes is arguably unexpected. Assuming their detections were not strongly biased by observational selection effects, we used a Monte Carlo code to estimate what kind of synthesis SFD could produce this range. Our work shows that these two bodies were most likely derived from a shallow power law slope similar to the ones reported here for the destabilized population (i.e., \(q\) ~ -1 cumulative). This similarity may suggest that collisional processes strongly affect small comets formed in exoplanet systems prior to ejection. ## 8 Conclusions We developed a collisional evolution model of the primordial Kuiper belt (PKB), its destabilized population, and the Jupiter Trojan asteroids. The destabilized population consists of KBOs pushed onto giant planet crossing orbits in the aftermath of the giant planet instability and Neptune's migration across the PKB. Some of these bodies go on to strike the giant planet satellites, and their long-lived remnants make up Neptune's scattered disk, the population that replenishes the Jupiter family comet and Centaur populations. A small portion of the destabilized population is captured in Jupiter's L4 and L5 zones during the giant planet instability, and these bodies make up the Jupiter Trojan population (Sec. 2). **Model Constraints.** We tested our collisional evolution results by attempting to reproduce two key sets of constraints: the shapes of the basin/crater SFDs found on the oldest terrains of the giant planet satellites and the combined SFD of the Jupiter Trojans (Sec. 3). For the former, we created a synthesis projectile SFD using basins/craters from Iapetus and Phoebe (i.e., the IP SFD) together with crater scaling laws verified against the crater SFDs found on the largest main belt asteroids (e.g., Ceres, Vesta, Lutetia, Mathilde) (Sec. 3.1). 
For the latter, we used the latest estimates of the combined Jupiter Trojan SFD, including observations made from the Subaru telescope with Suprime-Cam and Hyper Suprime-Cam (Sec. 3.2). **Initial SFD of the PKB.** Using insights from a range of sources (e.g., planetesimal formation models that use the streaming instability to reproduce the population of well separated binary KBOs; new observations of the cold classical Kuiper belt; collisional evolution models of the main asteroid belt; dynamical simulations of the giant planet instability), we deduced a starting SFD for the PKB (Sec. 4.2). It assumes there is a bump in the cumulative SFD near \(D\) ~ 100 km, as observed in the current main belt and Kuiper belt. For smaller bodies, we assume the SFD follows a shallow power law with cumulative slope \(q\) ~ -1.1, like that inferred for the primordial asteroid belt (Bottke et al. 2005a,b). **Collisional Evolution Model.** Our work takes advantage of numerical models of the giant planet instability that are capable of reproducing dynamical constraints across the solar system. These results allowed us to calculate the collision probabilities and impact velocities of PKB bodies, destabilized population bodies, and Jupiter Trojans. They were used as input into our collisional evolution model (Sec. 4.3). Using the Boulder collisional evolution code, we tested a wide range of input parameters: the time Neptune takes to enter the PKB (\(\Delta t_{0}\)), the time it takes Neptune to cross the PKB prior to the giant planet instability (\(\Delta t_{1}\)), and the shape of the KBO disruption law, or what we call the \(Q_{D}^{*}\) function, which is defined by four parameters (Table 2; Secs. 4.3-4.4). We ran a combined number of over 140,000 trial cases for the evolution of the destabilized population and the Jupiter Trojans, with the results of each set tested against the appropriate constraints using \(\chi^{2}\) methods (Sec. 5). **Model Results for Destabilized Population and Trojans.** We found that disruption events among \(D\) ~100 to ~300 km bodies produced numerous \(D\) < 10 km fragments that continue to undergo collisional evolution. We also identified \(Q_{D}^{*}\) functions that can reproduce the IP SFD and the Trojan SFDs (Table 3). From these functions, we find the easiest KBOs to disrupt from an impact energy per mass perspective are near \(D_{\rm min}\) ~ 20 m. When put into a collisional model, we find these \(Q_{D}^{*}\) functions lead to a steep Dohnanyi-like SFD for \(D\) < \(D_{\rm min}\) ~ 20 m (i.e., \(q\) ~ -2.7). In contrast, the same value for main belt asteroids, which are dominated by carbonaceous chondrite-like bodies that presumably formed in the giant planet zone, is \(D_{\rm min}\) ~ 200 m. For the destabilized population, steep Dohnanyi-like SFDs with cumulative power law slopes of \(q\) ~ -2.7 will disrupt numerous bodies with \(D\) > 20 m, which in turn creates a shallow SFD (\(q\) ~ -1.2) for 30 m < \(D\) < 1 km and a steeper SFD for 1 < \(D\) < 10 km. The result is a wavy shaped SFD that is broadly similar to that of the main belt (Sec. 5.2.1). Despite their smaller population, the Jupiter Trojans experience more collisional evolution than typical members of the destabilized population. In part, this is because Trojan collision probabilities are orders of magnitude higher than those of the destabilized population, but also because the Trojans are stable enough to maintain their current population size (within a factor of 2 or so) for ~4.5 Gyr. 
Collisions steepen the Trojan SFD over time until it takes on the observed \(q\) ~ -2 slope for 5 < \(D\) < 100 km (Sec. 5.2.2). The remnants of the destabilized population in the scattered disk undergo a more limited degree of collisional evolution. These bodies can strike both themselves and the population of stable KBOs over many billions of years, leading to a SFD whose power law slope between 1 < \(D\) < 10 km gradually steepens with time. This effect can be seen on large, relatively young terrains on worlds like Ganymede (Sec. 5.2.1). Our best fit runs indicate that the giant planet instability probably occurred ~20-30 Myr after the dissipation of the solar nebula, which in turn probably occurred a few Myr after CAIs (e.g., Weiss and Bottke 2021). This interval gives time for the PKB to experience some collisional evolution, but not so much that it greatly steepens the slope of the SFD for 1 < \(D\) < 10 km bodies. Accordingly, our work is supportive of the idea that the giant planet instability did not occur after hundreds of Myr (e.g., Nesvorny et al. 2018) (Sec. 5.2.3). In order to further test our results, we compared our predictions to additional sets of data and found the following. **Superbolide Impacts on Jupiter.** We calculated the model impact flux of small comets striking Jupiter in the present day and found it was high enough to reproduce the frequency of "bright flashes" on Jupiter seen by amateur and professional observers. The size and frequency of superbolide impacts on Jupiter are consistent with our model's prediction that they come from the steep Dohnanyi-type portion of the scattered disk's SFD, which has a cumulative power law slope of q = -2.7. This value is consistent with the predicted shape of the projectile SFD suggested by Morbidelli et al. (2021), who argued the power law slope for small comets should be q \(\sim\) -3 (Sec. 6.1.2). **Impacts on Saturn's Rings.** A similar test was performed to determine the flux of small bodies striking Saturn's rings. We found that the impact frequency of 0.01 < \(D\) < 1 m bodies was consistent with constraints provided by specific ring images from the Cassini mission. The impactor sizes were deduced by the Cassini team from tiny clouds of debris, which were interpreted to be ejecta from small collision events on ring bodies (Sec. 6.1.3). **Crater SFDs on Giant Planet Satellites.** Using the results of crater scaling laws with input parameters verified on asteroids like Mathilde and Ceres, we compared crater SFDs produced by our model to those found on the satellites of Jupiter, Saturn, and Uranus. We found general agreement between model and observations for our larger craters. The mismatches that did exist predominantly came from smaller craters on worlds where secondary and/or sesquinary craters may be important (Sec. 6.2.1-6.2.3). **The Shallow SFD on Charon and Arrokoth.** We discussed the shallow SFD found on Charon (_q_ = -0.7 \(\pm\) 0.2; Robbins and Singer 2021) for \(D\) < 10-15 km craters. While we did not model the collisional evolution of the Kuiper belt in this paper, we find that the slope of the destabilized population's SFD (_q_ = -1.2) is steeper than this value. Conversely, our value was consistent with a prediction made from an analysis of Arrokoth craters by Morbidelli et al. (2021). Additional modeling will be needed to see if Kuiper belt collisions can explain crater observations on Charon (Sec. 6.2.4). **Neptune Trojans.** 
Observations of the Neptune Trojans indicate they have a shallower SFD for \(D\) < 100 km bodies than the Jupiter Trojans. This outcome is consistent with their capture from the early PKB/destabilized population. Collisional evolution is likely to be limited near 30 au, and thus the Neptune Trojan SFD may be relatively unchanged from early times (Sec. 6.4). **Comparisons with Comet SFDs.** Our model SFD for the destabilized population was compared to debiased observations of the Jupiter-family comet (JFC) and long period comet (LPC) SFDs, which have modestly different shapes. We found the shape of the model SFD from the present-day was a good match to the debiased population of JFCs for \(D\) > 2 km bodies. Recall that these comets come from the scattered disk, a population that has been collisionally coupled to the Kuiper belt for 4.5 Gyr. For \(D\) < 2 km objects, we found our results increasingly overestimate the observed population as we go to smaller sizes (Sec. 6.5.1). Given that our model can reproduce young crater SFDs on Europa and Ganymede, we suspect the difference is most easily explained by observational incompleteness and/or small comets disrupting as they approach the Sun. The shape of our model SFD was also able to reproduce that of the debiased \(D\) > 2 km LPC SFD found by Pan-STARRS1 (Boe et al. 2019), provided that we use collisional model results taken shortly after the giant planet instability. This implies that comets sent to the Oort cloud only experienced modest collisional evolution prior to their emplacement. For \(D\) < 2 km LPCs, we found our results overestimated observational constraints. This could mean that sub-km LPCs experienced substantial physical evolution and/or disruption en route to perihelion (Sec. 6.5.2). **Centaurs.** Using a destabilized population/scattered disk that followed a Jupiter Trojan-like SFD for \(D\) < 100 km bodies, Nesvorny et al. (2019a) used the OSSOS survey simulator to predict how many Centaurs with \(D\) > 10 km should have been detected to date. They found \(11\pm 4\) objects, whereas the observed number was 15. Here we substituted in the model SFD from our work, which has a "valley" near \(D\) ~ 10 km. This decreased the OSSOS detections to \(7\pm 3\). While still within a factor of 2 of the observed value, this does represent a mismatch. The origin of this discrepancy will require further investigation (Sec. 6.5.3). **Acknowledgements** We thank Michelle Kirchoff, Paul Schenk, and Peter Thomas for giving us access to their crater databases, which was extremely helpful for this project. We also thank our two anonymous referees for their useful and constructive comments that made this a better paper. The work in this paper was supported by NASA's Solar System Workings program through Grant 80NSSC18K0186, NASA's Lucy mission through contract NNM16AA08C, and NASA's NEO Surveyor mission through NASA Contract 80MSFC20C0045. The work of David Vokrouhlicky was partially funded by grant 21-11058S from the Czech Science Foundation. **Appendix** ### A.1 Hydrated Materials in Interplanetary Dust Particles and Comets Interplanetary dust particles (IDPs) are micron- to cm-sized objects that come from asteroids, comets, and possibly KBOs (e.g., Nesvorny et al. 2010). This makes them a potential source of information about the nature of objects that formed and evolved beyond the original orbit of Neptune. 
Most IDPs larger than a few microns are driven inward toward the Sun by the effects of Poynting-Robertson (P-R) drag, which decreases both their semimajor axes and eccentricities. This creates a disk of IDPs, called the Zodiacal cloud, whose thermal emission can be observed using space-based telescopes (e.g., the Infrared Astronomical Satellite (IRAS), Spitzer). It also means that a small fraction of IDPs evolve far enough toward the Sun that they can impact Earth. Laboratory studies of IDPs suggest the majority have bulk compositions similar to carbonaceous chondritic meteorites, specifically CIs and CMs (e.g., Bradley et al. 2003). IDPs are often defined by their physical properties and are placed into two broad categories: chondritic porous (CP) and chondritic smooth (CS) IDPs (Bradley et al. 2003). CP IDPs look like fractal aggregates and have porosities as high as 70%. They are anhydrous materials that are either from anhydrous parent bodies or from regions of hydrous parent bodies where aqueous alteration was negligible. CS IDPs have hydrated silicates (clays) and occasionally carbonates. They probably come from parent bodies where aqueous alteration has occurred. Traditionally, it has been assumed that CP IDPs, being predominantly anhydrous, come from comets, all of which started their lives in the PKB and reached the inner solar system via the scattered disk or Oort cloud. The morphology and mineralogy of some CP IDPs also hint at connections to ice-rich parent bodies (Rietmeijer 2004; Zolensky et al. 2006). In this view, their parent bodies form with limited radiogenic nuclides like \({}^{26}\)Al and are therefore too cold to produce flowing water and hydrated materials. Some CP IDPs could also come from the main asteroid belt, where comet-like bodies, in the form of D- and P-type asteroids, are found in substantial numbers. This connection also matches up with the spectral characteristics of these bodies (Bradley et al. 1996; Vernazza et al. 2015). Following this logic, the hydrous materials in CS IDPs would need to come from an alternative source where aqueous alteration is possible, such as carbonaceous asteroids that now reside in the main belt. Several lines of evidence indicate that many CM or CI-like parent bodies in the main belt experienced aqueous alteration (e.g., studies of the links between IDPs and asteroid spectra; the presence of carbonate veins on Bennu; analysis of returned samples from Ryugu by Hayabusa2; Vernazza et al. 2015; Kaplan et al. 2020; Yokoyama et al. 2022). The sources of IDP populations coming from the main belt and comets have been explored using dynamical models constrained by IRAS and Spitzer observations. Here we focus on modeling results and predictions from Nesvorny et al. (2010). They showed that IDPs from main belt objects start with relatively low inclinations, like their parent objects. As they evolve inward by P-R drag, their ecliptic latitudes remain low. IDPs from Jupiter family comets (JFCs) have a broader distribution of ecliptic latitudes, and nearly-isotropic comets (NICs) have the broadest distribution of all. These distributions, when fit to IRAS observations of the Zodiacal cloud, indicate that 85% to 95% of the observed mid-infrared emission is produced by particles from JFCs. Less than 10% of the emission comes from NICs or the main belt, and the typical size of the particles that contribute to this emission is ~100 um in diameter. 
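The drift times underlying these source assignments follow from the standard radiation-pressure parameter \(\beta\approx 0.57\,Q_{\rm pr}/(\rho s)\), with \(\rho\) in g cm\({}^{-3}\) and the grain radius \(s\) in microns, and a P-R inspiral time of roughly \(400\,r^{2}/\beta\) yr from a circular orbit at \(r\) au (cf. Burns et al. 1979). A short illustration, with grain properties chosen here for orientation only rather than taken from the modeling above, is:

```python
def pr_inspiral_time_yr(radius_um, density_g_cm3, r_start_au, q_pr=1.0):
    """Approximate P-R drag inspiral time from a circular orbit at r_start_au
    to the Sun, ignoring planetary perturbations and collisional losses."""
    beta = 0.57 * q_pr / (density_g_cm3 * radius_um)   # radiation pressure / gravity
    return 400.0 * r_start_au**2 / beta

# Orientation values only: a 20-um-diameter grain (10 um radius, density 2.5 g/cm^3)
print(pr_inspiral_time_yr(10.0, 2.5, 3.0))    # from ~3 au: ~1.6e5 yr, well under 1 Myr
print(pr_inspiral_time_yr(10.0, 2.5, 40.0))   # from ~40 au: ~3e7 yr, well over 1 Myr
```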
Further modeling reveals that > 85% of the total mass flux reaching Earth from IDPs is likely to come from disrupted Jupiter-family comets (though the relative importance of JFC and Kuiper Belt particles beyond Jupiter has yet to be quantified). The combined contribution from main belt bodies and NICs for these IDPs is therefore limited to < 15%. Accordingly, our expectation would be that > 90% of all IDPs should be anhydrous CP-type IDPs and < 10% would be hydrous CS-type IDPs. The fraction of hydrous IDPs collected from the stratosphere, however, instead varies from about 20% to 35% (e.g., Genge et al. 2020; Keller and Flynn 2022). This value is substantially higher than the < 10% limit imposed by Nesvorny et al. (2010), and it could suggest a substantial fraction of IDPs from Jupiter-family comets and/or the Kuiper belt is hydrous. Additional support for hydrous materials from comets and/or KBOs comes from the work of Keller and Flynn (2022), who investigated the solar energetic particle exposure ages of IDPs. They found that many IDPs spent > 1 Myr in space prior to reaching Earth, and that makes it difficult for them to come directly from main belt asteroids or Jupiter-family comets; assuming their typical sizes are 20 um, their P-R drift timescales are far too short. Instead, Keller and Flynn (2022) postulate these long exposure age IDPs come from the Kuiper belt. They assert that such materials comprise ~25% of all of the IDPs analyzed in NASA's stratospheric dust collections, and that nearly half of these were hydrous. While this value is not debiased, it nevertheless points in the direction that a major source of hydrous IDPs could be the Kuiper belt, either directly or indirectly through JFCs. Phyllosilicates are relatively rare among the observed comets (Davidsson et al. 2016), but they have been noticed from time to time (e.g., spectral features on 9P/Tempel 1 and C/1995 O1 Hale-Bopp; Lisse et al. 2006, 2007). Hydration features have also been detected on a few KBOs (de Bergh et al. 2004). They seem to be more noticeable on irregular satellites, which are thought to be captured KBOs (Nesvorny et al. 2008). Himalia, the largest irregular of Jupiter, appears to be hydrated and is similar to C-type asteroids (Brown and Rhoden 2014; Takir and Emery 2012). Phoebe, the largest irregular of Saturn, has phyllosilicates exposed on its surface (Clark et al. 2005), with apparently all regions showing water bands (Fraser and Brown 2018). Sycorax, the largest irregular of Uranus, has a 0.7 um spectral feature consistent with hydrated silicates (Sharkey 2023). Collectively, these bodies hint at a more complicated story for the source of hydrated IDPs. Keller and Flynn (2022) argue that most aqueously altered IDPs are derived from low-velocity collisional processes taking place on the surfaces of comets and KBOs. Here impacts would produce transient heating events that melt ice and yield aqueously altered materials. For larger KBOs, these processes would presumably make them look like present-day Himalia, Phoebe, etc. A potential issue with this scenario is whether most IDPs are indeed from surface collisions. Consider that the majority of IDPs coming from the main belt are derived from the disruption of a few modest-sized asteroids (e.g., Nesvorny et al. 2003), while many IDPs observed in the Zodiacal cloud may be coming from the disruption of Jupiter family comets (Nesvorny et al. 2010). In both cases, only a limited amount of the ejected material would have resided near the surface. 
Clearly additional modeling is needed here to explore this issue. We postulate that another way to explain these results would be to put them into the context of our collisional model results. We have argued that most of the mass of the initial PKB was in the form of ~100 km bodies (Sec. 4). Some bodies of this size or larger likely formed early enough to heat up from active radiogenic elements like \({}^{26}\)Al, which could lead to flowing water and aqueously altered materials within their deep interiors. Their exteriors, however, would presumably remain primitive to a substantial depth, with low temperatures preventing water from percolating up to the near surface. An example of such a body could be (87) Sylvia, a 280 km diameter P-type asteroid in the outer main belt that was likely captured from the PKB (Vokrouhlicky et al. 2016). While Sylvia's exterior is spectrally similar to anhydrous IDPs, studies of its gravitational interactions with its satellites point to a differentiated interior (Carry et al. 2021). In fact, thermal modeling work by Carry et al. (2021) suggests Sylvia has a three-layer structure: a central region dominated by a muddy ocean, surrounded by a porous layer free of water, and then a primordial outer layer that remains too cold for ice to melt. Our model results show that disruptive collisions among 100-300 km diameter bodies create the fragments that produce the wavy shape of the destabilized population's SFD (and that of the Kuiper belt). If some of these bodies have Sylvia-like internal structures, disruptive collisions should mix interior and exterior materials in the largest remnant of the parent body and the ejected fragments. The amount of hydrous material escaping in the form of fragments would depend on many factors: the accretion time and size of the parent body, the nature of the collision event dredging up aqueously altered materials, etc. Given that hydrated material is probably deep within large KBOs, our expectation is that most comets will be dominated by samples of the exterior, as is the case for asteroids in asteroid families (e.g., DellaGiustina et al. 2020). An interesting test of our hypothesis may come from NASA's upcoming Lucy mission. It will be interesting to see whether Jupiter Trojan (3548) Eurybates, the largest remnant of the Eurybates family and a body with C-type spectra (Marshall et al. 2022), has exposed phyllosilicates, and if so, whether the putative source can be identified. ### A.2 Comments on Comets as the Byproducts of Collisions Our results may be useful to the ongoing debate about whether comets are primordial or fragments of larger bodies. For the former, primordial means that they were formed directly by planetesimal processes in the solar nebula, such that their shapes, compositions, etc. are mainly from that provenance. For the latter, a fragment means they were formed by a cratering or disruption event taking place on a larger body. Davidsson et al. (2016) refers to the latter comets as "collisional rubble-piles" produced by the gravitational re-accumulation of debris. That characterization may not be a good fit for all bodies made in collisions. For example, as defined by Richardson et al. (2002), certain objects within asteroid families, like (243) Ida or (951) Gaspra, might be better described as fractured or shattered objects rather than rubble piles, though all are gravitational aggregates. For the moment, we will assume that fragments of larger collisions fall somewhere on this spectrum of possibilities. 
In the aftermath of studies of comet 67P/Churyumov-Gerasimenko by the Rosetta mission team, there have been several papers debating the nature of observed comets, most of which are smaller than 10 km (Lamy et al. 2004; Belton 2014). Arguments favoring the pro-fragment case have been made by many groups (e.g., Michel and Richardson 2013; Morbidelli and Rickman 2015; Rickman et al. 2015; Jutzi et al. 2016; Schwartz et al. 2018; Campo Bagatin et al. 2020; Benavidez et al. 2022), while a pro-primordial case was made in an omnibus paper by Davidsson et al. (2016). Our results favor the pro-fragment case. For example, our results indicate many \(D<10\) km bodies in the destabilized population are fragments produced by collisions (Fig. 10). Moreover, our destabilized population's SFD is consistent with the shape of the debiased Jupiter family comet SFD in the present day (Fig. 17) and the long period comet SFD shortly after the giant planet instability (Fig. 18) (see Sec. 6.5). Some pro-primordial arguments focus on the bilobed shape of comets, which could be produced in a collapsing cloud of pebbles (e.g., the hamburger-like shapes of the two lobes from the 20 km object Arrokoth are probably primordial; McKinnon et al. 2020). Many comet shapes observed by spacecraft, however, also look analogous to contact binaries in the near-Earth asteroid population; numerical modeling work has shown that collisions can create such shapes (e.g., Rickman et al. 2015; Jutzi and Benz 2017; Schwartz et al. 2018; Campo Bagatin et al. 2020). We would also argue most NEOs are likely to be collisional aggregates produced by large main belt collisions (e.g., Bottke et al. 2005a,b). Other pro-primordial arguments focus on the paucity of aqueously altered materials in comet 67P/Churyumov-Gerasimenko. As discussed in the last section, this could be explained by this comet coming from the primitive exterior of a KBO, or from a KBO that formed late enough that it experienced minimal aqueous alteration. Accordingly, the presence of exotic ices or supervolatiles on a comet does not necessarily mean that the object could not be a collisional fragment, especially since studies have shown that at smaller scales (_D_ < 10 km), collisions do not produce substantial heating of the largest fragments (e.g. Jutzi and Benz 2017; Jutzi et al. 2017; Schwartz et al. 2018). Our model results also counter additional pro-primordial arguments made by Davidsson et al. (2016). For example, our preferred starting conditions for the PKB indicate that smaller primordial comets are not a favored size of planetesimal formation mechanisms (Klahr and Schreiber 2020; 2021), though it is possible some might form this way. In addition, during planetesimal formation, as small particles gravitationally collapse to form predominantly \(D\) ~ 100 km planetesimals, it can be shown that smaller gravitational aggregates form and are sometimes ejected from the system. Either scenario might explain ~20 km diameter objects like Arrokoth (Nesvorny et al. 2021). Finally, our work indicates that collisional evolution can reproduce the observed SFD in the main belt, destabilized population, and in the Trojans within a single dynamical and collisional framework, provided most of the mass in the SFD is in the form of \(D\) ~ 100 km bodies. This removes the need for numerous \(D\) < 10 km comets created by primordial processes, and it sets up interesting predictions that can be tested by future missions. 
### A.3 Comments on the Sizes of the Interstellar Comets 'Oumuamua and Borisov The discovery of the first interstellar comet on October 19, 2017 by the Pan-STARRS survey sent shock waves through the astronomical community (Meech et al. 2017). Long anticipated but never seen, this object, named 1I/'Oumuamua, provides us with critical clues about the nature of planetesimal formation and small body evolution around other stars. The origin of 'Oumuamua has been debated with vigor, with both simple and exotic mechanisms used to explain how its trajectory was influenced by non-gravitational forces during its close approach to the Sun (e.g., Bialy and Loeb 2018; 'Oumuamua ISSI Team 2019; Seligman and Laughlin 2020; Desch and Jackson 2021). In some ways, though, 'Oumuamua can be considered a standard comet, with colors consistent with Jupiter family comet nuclei and D-type asteroids (Jewitt and Luu 2019). Its most unusual physical property may be its axis ratio, which is thought to be 6:1 ('Oumuamua ISSI Team 2019). With that said, the object is small; its absolute magnitude (\(H\)) is 22.4 \(\pm\) 0.04, which, for an albedo of 0.04 to 0.1, translates into a diameter of 140-220 m. We have little to no information on the nature of such small comets in our own solar system, so it is impossible to say whether 'Oumuamua's elongated or perhaps hamburger-like shape is unusual. A second interstellar comet, named 2I/Borisov, was found on August 30, 2019 by amateur astronomer Gennadiy Borisov. Like 'Oumuamua, it has colors that are essentially identical to the mean color measured for the dust comae of long-period comets (Jewitt and Luu 2019). Borisov does not appear to have a bizarre shape; estimates of its nucleus size suggest it is ~0.8 to ~1.0 km in diameter (Jewitt and Luu 2019; Hui et al. 2020). This value is substantially larger than that of 'Oumuamua. It is surprising that the first two detected interstellar objects have such a wide range of sizes. One factor that might explain this would be their discovery circumstances: 'Oumuamua was discovered three days after its closest approach to Earth at 0.16 au, while Borisov was discovered near 3 au. Another factor would be the activity of the bodies at discovery; Borisov had a sizeable coma, while 'Oumuamua did not show any signs of comet activity, such as a coma or tail. Debiasing these two detections is beyond the scope of this paper, but as a thought experiment, we will assume here that their observational selection effects balance each other out. That allows us to explore the implications of their sizes using a Monte Carlo code. We assumed that the SFD of interstellar comets reaching our system follows a generic power law \(P(D)=C\,D^{n}\), with \(C\) being a constant, \(D\) being diameter, and \(n\) being the differential slope, tested between -1.1 and -5.0. This makes the cumulative slope \(q\) = \(n\) + 1. In each trial, we used random deviates to extract two bodies from the power law SFD between 0.1 and 5 km and then determined whether they were between 0.14-0.22 km or 0.8-1.0 km. A successful run had one body in each size range. We tested each value of \(n\) over 10\({}^{6}\) trials and tallied the results. The relative probabilities of success are shown in Fig. A1. PLACE FIGURE A1 HERE Our results suggest that our two interstellar objects are far more likely to come from a shallow sloped SFD than a steep one. Success for \(q\) = -1 is 3.8, 28, and 200 times more likely than for \(q\) = -2, -3, and -4, respectively. 
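A minimal sketch of this experiment is given below. The size bounds, target windows, and trial count follow the description above, while the inverse-transform sampling and the random-number handling are implementation choices of the sketch and are not specified in the text, so the resulting probabilities are illustrative rather than a reproduction of the quoted values.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_powerlaw(n_diff, dmin, dmax, size, rng):
    """Inverse-transform sampling of diameters from P(D) ~ D**n_diff on [dmin, dmax]."""
    k = n_diff + 1.0                        # cumulative slope q = n_diff + 1
    u = rng.random(size)
    return (dmin**k + u * (dmax**k - dmin**k)) ** (1.0 / k)

def success_fraction(q, trials=10**6):
    """Fraction of trials with one body in 0.14-0.22 km and the other in 0.8-1.0 km."""
    d = sample_powerlaw(q - 1.0, 0.1, 5.0, (trials, 2), rng)
    oum = (d >= 0.14) & (d <= 0.22)         # 'Oumuamua-sized
    bor = (d >= 0.8) & (d <= 1.0)           # Borisov-sized
    return ((oum[:, 0] & bor[:, 1]) | (oum[:, 1] & bor[:, 0])).mean()

for q in (-1.0, -2.0, -3.0, -4.0):
    print(q, success_fraction(q))
```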
The most successful values are similar to those found in model runs for this size range, where \(q\) ~ -1.2 (Fig. 10). Accordingly, if the two interstellar objects are not enormously biased by observational selection effects relative to one another, and one can cope with statistics of small numbers, 'Oumuamua and Borisov provide us with insights into small body evolution in exoplanet systems. For example, our inferred shallow slope from Fig. A1 could be telling us that many interstellar comets are byproducts of collisional evolution, unless planetesimal formation processes naturally make a shallow SFD between 0.1 and 1 km. We argue this makes sense, since to be ejected from their home system, these comets probably had to have a giant planet encounter. Getting from their source region to the giant planet almost certainly involved some kind of dynamical excitation, which in turn implies collisional evolution within the source region. Our results also hint at the possibility that collisional processes act in similar ways in various exoplanet systems, regardless of disk mass. This makes sense, in that whether we are dealing with asteroids or comets, their SFDs should develop a steep Dohnanyi-like SFD at small sizes, a shallow SFD for sub-km bodies, and a steep SFD for \(\sim\)1 < \(D\) < 10 km bodies. Some evidence for this may be found in the inferred slope of exocomets within the Beta Pictoris system, whose comet diameters between 3 < \(D\) < 8 km follow a cumulative power law slope \(q\) = 2.6 \(\pm\) 0.8 (Lecavelier des Etangs et al. 2022). This value is consistent with our results (Fig. 10) and those of the known comets (Figs. 17 and 18). The other interesting issue concerns the highly elongated shape of 'Oumuamua. Its origin is a mystery. Some suggest it was made by tidal disruption of a comet-like object (Cuk 2018; Raymond et al. 2018; Zhang and Lin 2020), while others assert it could be made by "sandblasting" a modestly elongated progenitor with small particles over long time periods (Vavilov and Medvedev 2019). We offer no solution to this problem, but if small interstellar objects are derived from SFDs with a shallow power law slope, it could be that collisions played some role in 'Oumuamua's initial and perhaps final shape. Testing this possibility might lead to an interesting line of future research.
2310.15042
Topological Gysin Coherence for Algebraic Characteristic Classes of Singular Spaces
Brasselet, the second author and Yokura introduced Hodge-theoretic Hirzebruch-type characteristic classes $IT_{1, \ast}$, and conjectured that they are equal to the Goresky-MacPherson $L$-classes for pure-dimensional compact complex algebraic varieties. In this paper, we show that the framework of Gysin coherent characteristic classes of singular complex algebraic varieties developed by the first and third author in previous work applies to the characteristic classes $IT_{1, \ast}$. In doing so, we prove the ambient version of the above conjecture for a certain class of subvarieties in a Grassmannian, including all Schubert subvarieties. Since the homology of Schubert subvarieties injects into the homology of the ambient Grassmannian, this implies the conjecture for all Schubert varieties in a Grassmannian. We also study other algebraic characteristic classes such as Chern classes and Todd classes (or their variants for the intersection cohomology sheaves) within the framework of Gysin coherent characteristic classes.
Markus Banagl, Jörg Schürmann, Dominik J. Wrazidlo
2023-10-23T15:45:10Z
http://arxiv.org/abs/2310.15042v1
# Topological Gysin coherence for algebraic characteristic classes of singular spaces ###### Abstract. Brasselet, the second author and Yokura introduced Hodge-theoretic Hirzebruch-type characteristic classes \(IT_{1,*}\), and conjectured that they are equal to the Goresky-MacPherson \(L\)-classes for pure-dimensional compact complex algebraic varieties. In this paper, we show that the framework of Gysin coherent characteristic classes of singular complex algebraic varieties developed by the first and third author in previous work applies to the characteristic classes \(IT_{1,*}\). In doing so, we prove the ambient version of the above conjecture for a certain class of subvarieties in a Grassmannian, including all Schubert subvarieties. Since the homology of Schubert subvarieties injects into the homology of the ambient Grassmannian, this implies the conjecture for all Schubert varieties in a Grassmannian. We also study other algebraic characteristic classes such as Chern classes and Todd classes (or their variants for the intersection cohomology sheaves) within the framework of Gysin coherent characteristic classes. Key words and phrases:Gysin transfer, Characteristic Classes, Singularities, Stratified Spaces, Intersection Homology, Goresky-MacPherson \(L\)-classes, Chern classes, Todd classes, Verdier-Riemann-Roch formulae, Schubert varieties, Intersection theory, Transversality 2020 Mathematics Subject Classification: 57R20, 55R12, 55N33, 57N80, 32S60, 32S20, 14M15, 14C17, 32S50 M. Banagl and D. Wrazidlo are funded in part by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through a research grant to the first author (Projektnummer 495696766). J. Schurmann is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) Project-ID 427320536 - SFB 1442, as well as under Germany's Excellence Strategy EXC 2044 390685587, Mathematics Munster: Dynamics - Geometry - Structure ## 1. Introduction We show that various algebraic characteristic classes for singular spaces fit into the framework of Gysin coherent characteristic classes of singular complex algebraic varieties that was developed by the first and the third author in [10]. This framework formalizes Verdier-Riemann-Roch type formulae with respect to Gysin restriction arising from transverse intersection with smooth varieties. In particular, we show that the framework applies to the Hodge-theoretic intersection Hirzebruch characteristic classes \(IT_{1,*}\) introduced by the second author jointly with Brasselet and Yokura in [14]. The classes \(IT_{1,*}\) are conjectured in [14, Remark 5.4] to be equal to the topological \(L\)-classes of Goresky and MacPherson for pure-dimensional compact complex algebraic varieties. By applying the uniqueness theorem for Gysin coherent characteristic classes of [10] to both the classes \(IT_{1,*}\) and the Goresky-MacPherson \(L\)-classes, we conclude that both classes coincide on Cohen-Macaulay subvarieties of Grassmannians after push-forward into the homology of the ambient Grassmannian. Since the homology of Schubert subvarieties injects into the homology of an ambient Grassmannian, this implies the conjecture for all Schubert varieties in a Grassmannian. The notion of Gysin coherent characteristic classes is recalled in Section 6. 
This notion was introduced in [10] with respect to a family \(\mathcal{X}\) of inclusions \(i\colon X\to W\) of compact irreducible subvarieties of smooth pure-dimensional projective complex algebraic varieties \(W\) which satisfy primarily an analog of the Kleiman-Bertini transversality theorem with respect to a suitable notion of transversality. A Gysin coherent characteristic class is a pair \(c\ell=(c\ell^{*},c\ell_{*})\) consisting of a function \(c\ell^{*}\) that assigns to every inclusion \(f\colon M\to W\) of a smooth closed subvariety \(M\subset W\) in a smooth variety \(W\) a normalized element \(c\ell^{*}(f)\in H^{*}(M;\mathbb{Q})\), and a function \(c\ell_{*}\) that assigns to every inclusion \(i\colon X\to W\) of a compact possibly singular subvariety of a smooth variety an element \(c\ell_{*}(i)\in H_{*}(W;\mathbb{Q})\) whose highest non-trivial homogeneous component is the ambient fundamental class of \(X\) in \(W\) such that the Gysin restriction formula \[f^{!}c\ell_{*}(i)=c\ell^{*}(f)\cap c\ell_{*}(M\cap X\subset M)\] holds for all \(i\) contained in \(\mathcal{X}\). Here, \(f^{!}\) denotes the topological Gysin homomorphism on singular rational homology. Furthermore, we require that \(c\ell_{*}\) is multiplicative under products, that \(c\ell^{*}\) and \(c\ell_{*}\) transform naturally under isomorphisms of ambient smooth varieties, and that \(c\ell_{*}\) is natural with respect to inclusions in larger ambient smooth varieties. It was shown in [10] that the Goresky-MacPherson \(L\)-class gives rise to a Gysin coherent characteristic class. In this paper, the focus lies on application of the Gysin coherence framework to the Hodge-theoretic intersection Hirzebruch characteristic classes \(IT_{1,*}\) introduced by the second author jointly with Brasselet and Yokura in [14]. Section 2 provides a brief outline of the theory of Hodge-theoretic characteristic classes on singular algebraic varieties. For an introduction to algebraic characteristic classes of singular spaces via mixed Hodge theory in the complex algebraic context, see the second author's expository paper [52]. For an introduction to topological characteristic classes of singular spaces, see [5]. Let \(L^{*}(\nu)\) denote the Hirzebruch \(L\)-class of a topological vector bundle \(\nu\). The main result of the present paper is Theorem 7.4, which can be stated as follows. **Theorem**.: _The pair \(\mathcal{L}=(\mathcal{L}^{*},\mathcal{L}_{*})\) defined by \(\mathcal{L}^{*}(f)=L^{*}(\nu_{f})\) for every inclusion \(f\colon M\to W\) of a smooth closed subvariety \(M\subset W\) in a smooth complex algebraic variety \(W\) with normal bundle \(\nu_{f}\), and by \(\mathcal{L}_{*}(i)=i_{*}IT_{1,*}(X)\) for every inclusion \(i\colon X\to W\) of a compact possibly singular subvariety \(X\subset W\) in a smooth variety \(W\) is a Gysin coherent characteristic class with respect to the family \(\mathcal{X}_{CM}\) of inclusions \(i\colon X\to W\) such that \(X\) is Cohen-Macaulay._ In Section 4, we discuss the main ingredient for the proof, which is a Verdier-Riemann-Roch type formula derived by the first author in [7] for the behavior of the Hodge-theoretic intersection Hirzebruch characteristic classes \(IT_{1*}\) under algebraic Gysin restriction with respect to normally nonsingular embeddings of singular spaces. 
In order for this formula to fit the axioms of Gysin coherent characteristic classes, we need to apply it in a transverse setup (see Theorem 4.15), where Tor-independence turns out to be the correct notion of transversality for subvarieties of an ambient smooth variety. Since Sierra's Kleiman-Bertini transversality theorem for Tor-independence [56] is only available under additional assumptions, we need to incorporate the Cohen-Macaulay condition into our result. For the Gysin restriction formula to be valid in the transverse setup, we need to handle some technical issues regarding the behavior of Whitney stratifications under blow-up of complex manifolds along transverse submanifolds, see Section 3. Finally, to establish the topological Gysin coherence axiom for the algebraic characteristic class \(IT_{1*}\), we compare the algebraic Gysin map with the topological Gysin map in Section 5, and find that they coincide at least on algebraic cycles (see Theorem 5.10). Since the Goresky-MacPherson \(L\)-class is a Gysin coherent characteristic class, the uniqueness theorem for such classes implies that both the classes \(IT_{1,*}\) and the Goresky-MacPherson \(L\)-classes coincide on irreducible Cohen-Macaulay subvarieties of Grassmannians after pushforward into the homology of the ambient Grassmannian (see Theorem 7.6). Since Schubert varieties in a Grassmannian are Cohen-Macaulay, and their homology injects into the homology of the Grassmannian, we obtain Corollary 7.7, which states the following. **Corollary**.: _The equality \(L_{*}(X)=IT_{1,*}(X)\) holds for all Schubert varieties \(X\) in a Grassmannian._ The above equality was conjectured more generally for pure-dimensional compact complex algebraic varieties in [14, Remark 5.4]. In joint work with Cappell, Maxim, and Shaneson, the second author proved the conjecture in [17, Cor. 1.2] for orbit spaces \(X=Y/G\), with \(Y\) a projective \(G\)-manifold and \(G\) a finite group of algebraic automorphisms. They also showed the conjecture for certain complex hypersurfaces with isolated singularities which are assumed to be rational homology manifolds [16, Theorem 4.3]. The conjecture holds for simplicial projective toric varieties as shown by Maxim and the second author [41, Corollary 1.2(iii)]. In [6], the first author proved that his extension of the Goresky-MacPherson \(L\)-class to oriented pseudomanifolds that possess Lagrangian structures along strata of odd codimension, introduced in [4], transfers to the \(L\)-class of any finite covering space (see Theorem 3.14 in [6]). He deduced from this that the conjecture holds for normal connected complex projective 3-folds \(X\) that have at worst canonical singularities, trivial canonical divisor, and \(\dim H^{1}(X;\mathcal{O}_{X})>0\). If \(\xi\) is an oriented PL block bundle over a closed PL Witt space \(B\) (e.g. a pure-dimensional complex algebraic variety) with closed \(d\)-dimensional PL manifold fiber and total space \(X\), then \(\xi^{\dagger}L_{*}(B)=L^{*}(V_{\xi})\cap L_{*}(X)\) under the block bundle transfer \(\xi^{\dagger}\): \(H_{*}(B;\mathbb{Q})\to H_{*+d}(X;\mathbb{Q})\) associated to \(\xi\), see [8]. Here, \(V_{\xi}\) is the oriented stable vertical normal PL microbundle of \(\xi\). In fact, it is shown in [9] that the KO-theoretic block bundle transfer \(\xi^{\dagger}\): \(\operatorname{KO}_{*}(B)\otimes\mathbb{Z}[\frac{1}{2}]\to\operatorname{KO}_{*+d}(X)\otimes\mathbb{Z}[\frac{1}{2}]\) sends the Siegel-Sullivan orientation \(\Delta(B)\) to \(\Delta(X)\). 
The Siegel-Goresky-MacPherson \(L\)-class can be recovered from \(\Delta(-)\) by applying the Pontrjagin character. Note that transfer does not generally commute with localization of homotopy theoretic spectra. The aforementioned cases of the conjecture concern rational homology manifolds. Fernandez de Bobadilla and Pallares [23] proved the conjecture for all projective complex algebraic varieties that are rational homology manifolds. In joint work with Saito [24], they also gave a different proof for compact instead of projective complex varieties, using the theory of mixed Hodge modules. On the other hand, Schubert varieties in a Grassmannian are generally singular enough so as not to be rational homology manifolds. Furthermore, we shall clarify how other algebraic characteristic classes such as Chern classes (see Section 8) and Todd classes (see Section 9) (or their variants for the intersection cohomology sheaves) fit into the framework of Gysin coherence. In Theorem 9.1, we show that Todd classes are Gysin coherent with respect to the family \(\mathcal{X}_{CM}\) of inclusions \(X\to W\) such that \(X\) is Cohen-Macaulay. The proof is similar to that of our main result, but is based on the Verdier-Riemann-Roch formula for the Todd class transformation \(\tau_{*}\), which was conjectured by Baum-Fulton-MacPherson in [11, p. 137], and proved by Verdier [58, p. 214, Theorem 7.1]. On the other hand, Theorem 8.7 shows that Chern classes are Gysin coherent, which exploits the second author's Verdier-Riemann-Roch type theorem (see [53]) for the behavior of the Chow homology Chern class transformation \(c_{*}\colon F(X)\to A_{*}(X)\) on complex algebraically constructible functions under refined Gysin maps associated to transverse intersections in a microlocal context. We conclude by mentioning that the normally nonsingular expansions derived in [10, Theorem 7.1] provide a systematic method for the recursive computation of Gysin coherent characteristic classes in ambient Grassmannians in terms of genera of explicitly constructed characteristic subvarieties. For the classes discussed in this paper, the computational consequences e.g. in the case of Schubert varieties in a Grassmannian remain open for future study. Chern classes of Schubert varieties in Grassmannians were computed by Aluffi and Mihalcea in [1]. Moreover, an algorithm for the computation of Chern classes for Schubert cells in a generalized flag manifold is provided in [2]. For an extension to the equivariant setting, as well as applications to positivity of non-equivariant Chern classes and related classes, we refer the reader to [3]. **Notation.** Regarding singular cohomology, singular homology and Borel-Moore homology, we work with rational coefficients and will write these groups as \(H^{*}(-)\), \(H_{*}(-)\), and \(H_{*}^{\text{BM}}(-)\). ## 2. Hodge-Theoretic Characteristic Classes This section provides a brief outline of the theory of Hodge-theoretic characteristic classes on singular algebraic varieties (compare also Section 5 in [7]). After recalling the motivic Hodge Chern class transformation in Definition 2.1 and the twisted Todd transformation of Baum, Fulton, MacPherson in Definition 2.7, we define the motivic Hirzebruch class transformation in Definition 2.9, and finally the intersection Hirzebruch characteristic class \(IT_{y*}\) in Definition 2.11. For a more detailed exposition of the topic, we refer e.g. to the expository paper [52]. 
For an algebraic variety \(X\), let \(K_{0}^{\text{alg}}(X)\) denote the Grothendieck group of the abelian category of coherent sheaves of \(\mathcal{O}_{X}\)-modules. When there is no danger of confusion with other \(K\)-homology groups, we shall also write \(K_{0}(X)=K_{0}^{\text{alg}}(X)\). Let \(K^{0}(X)=K_{\text{alg}}^{0}(X)\) denote the Grothendieck group of the exact category of algebraic vector bundles over \(X\). The tensor product \(\otimes_{\mathcal{O}_{X}}\) induces a _cap product_ \[\cap:K^{0}(X)\otimes K_{0}(X)\longrightarrow K_{0}(X),\;[E]\cap[\mathcal{F}]=[ E\otimes_{\mathcal{O}_{X}}\mathcal{F}].\] Thus, \[-\cap[\mathcal{O}_{X}]:K^{0}(X)\longrightarrow K_{0}(X) \tag{1}\] sends a vector bundle \([E]\) to its associated (locally free) sheaf of germs of local sections \([E\otimes\mathcal{O}_{X}]\). If \(X\) is smooth, then \(-\cap[\mathcal{O}_{X}]\) is an isomorphism. Let \(X\) be a complex algebraic variety and \(E\) an algebraic vector bundle over \(X\). For a nonnegative integer \(p\), let \(\Lambda^{p}(E)\) denote the \(p\)-th exterior power of \(E\). The _total \(\lambda\)-class_ of \(E\) is by definition \[\lambda_{y}(E)=\sum_{p\geq 0}\Lambda^{p}(E)\cdot y^{p},\] where \(y\) is an indeterminate functioning as a bookkeeping device. This construction induces a homomorphism \(\lambda_{y}(-):K^{0}_{\mathrm{alg}}(X)\longrightarrow K^{0}_{\mathrm{alg}}(X )[[y]]\) from the additive group of \(K^{0}(X)\) to the multiplicative monoid of the power series ring \(K^{0}(X)[[y]]\). Now let \(X\) be a smooth variety, let \(TX\) denote its holomorphic tangent bundle and \(T^{*}X\) its holomorphic cotangent bundle. Then \(\Lambda^{p}(T^{*}X)\) is the vector bundle of holomorphic \(p\)-forms on \(X\). Its associated sheaf of sections is denoted by \(\Omega^{p}_{X}\). Thus \[[\Lambda^{p}(T^{*}X)]\cap[\mathcal{O}_{X}]=[\Omega^{p}_{X}]\] and hence \[\lambda_{y}(T^{*}X)\cap[\mathcal{O}_{X}]=\sum_{p=0}^{\dim X}[\Omega^{p}_{X}] y^{p}.\] Let \(X\) be a complex algebraic variety and let \(MHM(X)\) denote the abelian category of M. Saito's algebraic mixed Hodge modules on \(X\). Totaro observed in [57] that Saito's construction of a pure Hodge structure on the intersection homology of compact varieties implicitly contains a definition of certain characteristic homology classes for singular algebraic varieties. The following definition is based on this observation and due to Brasselet, Schurmann and Yokura, [14], see also the expository paper [52]. **Definition 2.1**.: The _motivic Hodge Chern class transformation_ \[MHC_{y}:K_{0}(MHM(X))\to K^{\mathrm{alg}}_{0}(X)\otimes\mathbb{Z}[y^{\pm 1}]\] is defined by \[MHC_{y}[M]=\sum_{i,p}(-1)^{i}[\mathcal{H}^{i}(Gr^{F}_{-p}DR[M])](-y)^{p}.\] Here, \(Gr^{F}_{p}DR:D^{b}MHM(X)\to D^{b}_{\mathrm{coh}}(X),\) with \(D^{b}_{\mathrm{coh}}(X)\) the bounded derived category of sheaves of \(\mathcal{O}_{X}\)-modules with coherent cohomology sheaves, denotes the functor of triangulated categories constructed by M. Saito, see [46, SS2.3], [50, SS1], [47, SS3.10], obtained by taking a suitable filtered de Rham complex of the filtered holonomic \(D\)-module underlying a mixed Hodge module. For every \(p\in\mathbb{Z}\), these functors induce functors between the associated Grothendieck groups, with \(Gr^{F}_{p}DR[M]\simeq 0\) for a given \(M\) and almost all \(p\). 
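For orientation, we note the standard normalization of \(MHC_{y}\) in the smooth case, which is not stated explicitly in this section but ties Definition 2.1 to the \(\lambda\)-class computation above (compare [14, 52]): if \(X\) is smooth of pure dimension \(n\) and \(\mathbb{Q}^{H}_{X}\in D^{b}MHM(X)\) denotes the constant Hodge module complex, then \(Gr^{F}_{-p}DR[\mathbb{Q}^{H}_{X}]\simeq\Omega^{p}_{X}[-p]\), so that \[MHC_{y}[\mathbb{Q}^{H}_{X}]=\sum_{p=0}^{\dim X}[\Omega^{p}_{X}]\,y^{p}=\lambda_{y}(T^{*}X)\cap[\mathcal{O}_{X}].\]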
A flat morphism \(f:X\to Y\) gives rise to a flat pullback \(f^{*}:\mathrm{Coh}(Y)\to\mathrm{Coh}(X)\) on coherent sheaves, which is exact and hence induces a flat pullback \(f^{*}_{K}:K^{\mathrm{alg}}_{0}(Y)\to K^{\mathrm{alg}}_{0}(X)\). This applies in particular to smooth morphisms and is then often called smooth pullback. An arbitrary algebraic morphism \(f:X\to Y\) (not necessarily flat) induces a homomorphism \[f^{*}:K_{0}(MHM(Y))\longrightarrow K_{0}(MHM(X))\] which corresponds under the forgetful functor \(\mathrm{rat}:D^{b}MHM(-)\to D^{b}_{c}(-;\mathbb{Q})\) to \(f^{-1}\) on constructible complexes of sheaves. (Additional remarks on rat are to be found further below.) We record Schurmann's [52, Cor. 5.11, p. 459]: **Proposition 2.2**.: _(Verdier-Riemann-Roch for smooth pullbacks.) For a smooth morphism \(f:X\to Y\) of complex algebraic varieties, the Verdier Riemann-Roch formula_ \[\lambda_{y}(T^{*}_{X/Y})\cap f^{*}_{K}MHC_{y}[M]=MHC_{y}(f^{*}[M])=MHC_{y}[f^ {*}M]\] _holds for \(M\in D^{b}MHM(Y),\) where \(T^{*}_{X/Y}\) denotes the relative cotangent bundle of \(f\)._ Let \(E\) be a complex vector bundle and let \(a_{i}\) denote the Chern roots of \(E\). In [32], Hirzebruch introduced a cohomological characteristic class \[T_{y}^{*}(E)=\prod_{i=1}^{\operatorname{rk}E}Q_{y}(a_{i}),\] where \(y\) is an indeterminate, coming from the power series \[Q_{y}(a)=\frac{a(1+y)}{1-e^{-a(1+y)}}-ay\in\mathbb{Q}[y][[a]].\] If \(R\) is an integral domain over \(\mathbb{Q}\), then a power series \(Q(a)\in R[[a]]\) is called _normalized_ if it starts with \(1\), i.e. \(Q(0)=1\). With \(R=\mathbb{Q}[y]\), we have \(Q_{y}(0)=1\), so \(Q_{y}(a)\) is normalized. For \(y=0\), \[T_{0}^{*}(E)=\prod_{i=1}^{\operatorname{rk}E}\frac{a_{i}}{1-e^{-a_{i}}}= \operatorname{td}^{*}(E) \tag{2}\] is the classical Todd class of \(E\), while for \(y=1\), \[T_{1}^{*}(E)=\prod_{i=1}^{\operatorname{rk}E}\frac{a_{i}}{\tanh a_{i}}=L^{*}(E) \tag{3}\] is the Hirzebruch \(L\)-class of the vector bundle \(E\). We shall also need a certain unnormalized version of \(Q_{y}(a)\): Let \[\widetilde{Q}_{y}(a)=\frac{a(1+ye^{-a})}{1-e^{-a}}\in\mathbb{Q}[y][[a]]\] and set \[\widetilde{T}_{y}^{*}(E)=\prod_{i=1}^{\operatorname{rk}E}\widetilde{Q}_{y}(a _{i}).\] Note that \(\widetilde{Q}_{y}(0)=1+y\neq 1\), whence \(\widetilde{Q}_{y}(a)\) is unnormalized. The relation \[(1+y)Q_{y}(a)=\widetilde{Q}_{y}((1+y)a)\] implies: **Proposition 2.3**.: _If \(E\) is a complex vector bundle of complex rank \(r\), then for the degree \(2i\) components:_ \[\widetilde{T}_{y}^{i}(E)=(1+y)^{r-i}T_{y}^{i}(E).\] More conceptually, we have the following formula for the unnormalized class: **Proposition 2.4**.: _For any complex vector bundle \(E\), we have_ \[\widetilde{T}_{y}^{*}(E)=\operatorname{td}^{*}(E)\cup\operatorname{ch}^{*}( \lambda_{y}(E^{*})).\] Proof.: The Chern character is given by \[\operatorname{ch}^{*}(\lambda_{y}(E^{*}))=\prod_{i=1}^{\operatorname{rk}E}(1+ ye^{-a_{i}}),\] with \(a_{i}\) the Chern roots of \(E\), see [33, p. 11]. Thus \[\widetilde{T}_{y}^{*}(E) =\prod_{i}\frac{a_{i}(1+ye^{-a_{i}})}{1-e^{-a_{i}}}=\prod_{i}\frac {a_{i}}{1-e^{-a_{i}}}\prod_{i}(1+ye^{-a_{i}})\] \[=\operatorname{td}^{*}(E)\cup\operatorname{ch}^{*}(\lambda_{y}( E^{*})).\] Let \(\tau_{*}:K_{0}(X)\longrightarrow H^{\mathrm{BM}}_{2*}(X)\otimes\mathbb{Q}\) denote the Todd class transformation of Baum, Fulton, MacPherson. We review, to some extent, construction and properties of this transformation. 
Let
\[\alpha^{*}:K^{0}_{\mathrm{alg}}(X)\longrightarrow K^{0}_{\mathrm{top}}(X)\]
be the forgetful map which takes an algebraic vector bundle to its underlying topological vector bundle. Composing with the Chern character, one obtains a transformation
\[\tau^{*}=\mathrm{ch}^{*}\circ\alpha^{*}:K^{0}_{\mathrm{alg}}(X)\longrightarrow H^{2*}(X;\mathbb{Q}),\]
see [12, p. 180]. Using Bott periodicity, Baum, Fulton and MacPherson construct a corresponding homological version
\[\alpha_{*}:K^{\mathrm{alg}}_{0}(X)\longrightarrow K^{\mathrm{top}}_{0}(X)\]
for quasi-projective varieties \(X\). Composing with the homological Chern character
\[\mathrm{ch}_{*}:K^{\mathrm{top}}_{0}(X)\longrightarrow H^{\mathrm{BM}}_{2*}(X;\mathbb{Q}),\]
where \(H^{\mathrm{BM}}_{*}\) denotes Borel-Moore homology, they obtain a transformation
\[\tau_{*}=\mathrm{ch}_{*}\circ\alpha_{*}:K^{\mathrm{alg}}_{0}(X)\longrightarrow H^{\mathrm{BM}}_{2*}(X;\mathbb{Q}).\]
An algebraic version of this transformation is in fact available for any algebraic scheme over a field and generalizes the Grothendieck Riemann-Roch theorem to singular varieties.

_Remark 2.5_.: Let \(A_{*}(V)\) denote Chow homology of a variety \(V\), i.e. algebraic cycles in \(V\) modulo rational equivalence. Then there is a transformation
\[\tau_{*}:K^{\mathrm{alg}}_{0}(X)\longrightarrow A_{*}(X)\otimes\mathbb{Q}\]
such that for a complex algebraic variety \(X\), composing with the cycle map \(\mathrm{cl}:A_{*}(X)\otimes\mathbb{Q}\to H^{\mathrm{BM}}_{2*}(X;\mathbb{Q})\) recovers the Borel-Moore transformation, \(\mathrm{cl}\circ\tau_{*}=\tau_{*}\); see the first commutative diagram on p. 106 of [11, (0.8)]. The construction of \(\tau_{*}\) with values in Chow homology is described in Fulton's book [25, p. 349]. Thus Todd classes are algebraic cycles with rational coefficients that are well-defined up to rational equivalence.

According to [12, Theorem, p. 180], \(\tau_{*}\) and \(\tau^{*}\) are compatible with respect to cap products: if \(E\) is a vector bundle and \(\mathcal{F}\) a coherent sheaf on \(X\), then
\[\tau_{*}([E]\cap[\mathcal{F}])=\mathrm{ch}^{*}(E)\cap\tau_{*}[\mathcal{F}]. \tag{4}\]
For smooth \(X\),
\[\tau_{*}[\mathcal{O}_{X}]=\mathrm{td}^{*}(TX)\cap[X]=T^{*}_{0}(TX)\cap[X].\]
So if \(E\) is a vector bundle on a smooth variety, then
\[\tau_{*}([E]\cap[\mathcal{O}_{X}])=(\operatorname{ch}^{*}(E)\cup\operatorname{td}^{*}(TX))\cap[X]. \tag{5}\]
For locally complete intersection morphisms \(f:X\to Y\), Gysin maps
\[f^{*}_{\operatorname{BM}}:H^{\operatorname{BM}}_{*}(Y)\longrightarrow H^{\operatorname{BM}}_{*-2d}(X)\]
have been defined by Verdier [58, §10], and Baum, Fulton and MacPherson [11, Ch. IV, §4], where \(d\) denotes the (complex) virtual codimension of \(f\). Thus for a regular closed embedding \(g\), there is a Gysin map \(g^{*}_{\operatorname{BM}}\) on Borel-Moore homology, which we shall also write as \(g^{!}_{\operatorname{alg}}\), and for a smooth morphism \(f\) of relative dimension \(r\), there is a smooth pullback \(f^{*}_{\operatorname{BM}}:H^{\operatorname{BM}}_{*}(Y)\to H^{\operatorname{BM}}_{*+2r}(X)\). Baum, Fulton and MacPherson conjectured and Verdier showed:

**Proposition 2.6**.: _(Verdier-Riemann-Roch for smooth pullbacks.)
For a smooth morphism \(f:X\to Y\) of complex algebraic varieties and \([\mathcal{F}]\in K^{\operatorname{alg}}_{0}(Y)\),_
\[\operatorname{td}^{*}(T_{X/Y})\cap f^{*}_{\operatorname{BM}}\tau_{*}[\mathcal{F}]=\tau_{*}(f^{*}_{K}[\mathcal{F}]).\]
Yokura [61] twisted \(\tau_{*}\) by a Hirzebruch-type variable \(y\):

**Definition 2.7**.: The _twisted Todd transformation_
\[\operatorname{td}_{1+y}:K_{0}(X)\otimes\mathbb{Z}[y^{\pm 1}]\longrightarrow H^{\operatorname{BM}}_{2*}(X)\otimes\mathbb{Q}[y^{\pm 1},(1+y)^{-1}]\]
is given by
\[\operatorname{td}_{1+y}[\mathcal{F}]:=\sum_{k\geq 0}\tau_{k}[\mathcal{F}]\cdot\frac{1}{(1+y)^{k}},\]
where the Baum-Fulton-MacPherson transformation \(\tau_{*}\) is extended linearly over \(\mathbb{Z}[y^{\pm 1}]\), and \(\tau_{k}\) denotes the degree \(2k\)-component of \(\tau_{*}\).

_Remark 2.8_.: Regarding the transformation \(\tau_{*}\) as taking values in Chow groups \(A_{*}(-)\otimes\mathbb{Q}\) (cf. Remark 2.5), the above definition yields a twisted Todd transformation
\[\operatorname{td}_{1+y}:K_{0}(X)\otimes\mathbb{Z}[y^{\pm 1}]\longrightarrow A_{*}(X)\otimes\mathbb{Q}[y^{\pm 1},(1+y)^{-1}],\]
which commutes with the Borel-Moore twisted Todd transformation under the cycle map.

The definition of the motivic Hirzebruch class transformation below is due to Brasselet, Schurmann and Yokura [14], see also Schurmann's expository paper [52].

**Definition 2.9**.: The _motivic Hirzebruch class transformation_ is
\[MHT_{y*}:=\operatorname{td}_{1+y}\circ MHC_{y}:K_{0}(MHM(X))\longrightarrow H^{\operatorname{BM}}_{2*}(X)\otimes\mathbb{Q}[y^{\pm 1},(1+y)^{-1}].\]
For the intersection Hodge module \(IC^{H}_{X}\) on a complex purely \(n\)-dimensional variety \(X\), we use the convention
\[IC^{H}_{X}:=j_{!*}(\mathbb{Q}^{H}_{U}[n]),\]
which agrees with [52, p. 444] and [43, p. 345]. Here, \(U\subset X\) is smooth, of pure dimension \(n\), Zariski-open and dense, and \(j_{!*}\) denotes the intermediate extension of mixed Hodge modules associated to the open inclusion \(j:U\hookrightarrow X\). The underlying perverse sheaf is \(\operatorname{rat}(IC^{H}_{X})=IC_{X}\), the intersection chain sheaf, where \(\operatorname{rat}:MHM(X)\to\operatorname{Per}(X)=\operatorname{Per}(X;\mathbb{Q})\) is the faithful and exact functor that sends a mixed Hodge module to its underlying perverse sheaf. Here, \(\operatorname{Per}(X)\) denotes perverse sheaves on \(X\) which are constructible with respect to _some_ algebraic stratification of \(X\). This functor extends to a functor \(\operatorname{rat}:D^{b}MHM(X)\to D^{b}_{c}(X)=D^{b}_{c}(X;\mathbb{Q})\) between bounded derived categories. For every object of \(D^{b}_{c}(X)\) there exists _some_ algebraic stratification with respect to which the object is constructible, and these stratifications will generally vary with the object. Recall that a functor \(F\) is _conservative_ if for every morphism \(\phi\) such that \(F(\phi)\) is an isomorphism, \(\phi\) is already an isomorphism. Faithful functors on balanced categories (such as abelian or triangulated categories) are conservative. According to [48, p. 218, Remark (i)], \(\operatorname{rat}:D^{b}MHM(X)\to D_{c}^{b}(X)\) is not faithful. We recall Lemma 5.10 from [7]:

**Lemma 2.10**.: _The functor \(\operatorname{rat}:D^{b}MHM(X)\to D_{c}^{b}(X)\) is conservative._

Using cones, this lemma also appears embedded in the proof of [17, Lemma 5.3, p. 1752], see also Exercise 11.2.1 in Maxim's book [39].
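Before turning to intersection Hodge modules in more detail, we record a worked example illustrating Definition 2.9; it is the standard normalization of the motivic Hirzebruch class transformation on smooth varieties (cf. [14, 52]) and only combines formulas already stated above. Let \(X\) be smooth of pure dimension \(n\). Using \(MHC_{y}[\mathbb{Q}^{H}_{X}]=\lambda_{y}(T^{*}X)\cap[\mathcal{O}_{X}]\) (see the example following Definition 2.1) together with (4), (5) and Proposition 2.4 applied to \(E=TX\), we get
\[\tau_{*}(MHC_{y}[\mathbb{Q}^{H}_{X}])=(\operatorname{ch}^{*}(\lambda_{y}(T^{*}X))\cup\operatorname{td}^{*}(TX))\cap[X]=\widetilde{T}^{*}_{y}(TX)\cap[X].\]
The degree \(2k\) homological component of this class is \(\widetilde{T}^{\,n-k}_{y}(TX)\cap[X]=(1+y)^{k}\,T^{\,n-k}_{y}(TX)\cap[X]\) by Proposition 2.3, so applying \(\operatorname{td}_{1+y}\) as in Definition 2.7 yields
\[MHT_{y*}[\mathbb{Q}^{H}_{X}]=T^{*}_{y}(TX)\cap[X].\]
In particular, by (2) and (3), the specializations \(y=0\) and \(y=1\) give the Todd class \(\operatorname{td}^{*}(TX)\cap[X]\) and the \(L\)-class \(L^{*}(TX)\cap[X]\), while \(y=-1\) gives the total Chern class \(c^{*}(TX)\cap[X]\), since \(Q_{-1}(a)=1+a\).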
The module \(IC_{X}^{H}\) is the unique simple object in the category \(MHM(X)\) which restricts to \(\mathbb{Q}_{U}[n]\) over \(U\). As \(U\) is smooth and pure \(n\)-dimensional, \(\mathbb{Q}_{U}^{H}[n]\) is pure of weight \(n\). Since the intermediate extension \(j_{!*}\) preserves weights, \(IC_{X}^{H}\) is pure of weight \(n\). There is a duality isomorphism (polarization) \(\mathbb{D}_{X}^{H}IC_{X}^{H}\cong IC_{X}^{H}(n)\). Taking \(\operatorname{rat}\), this isomorphism induces a self-duality isomorphism \[\mathbb{D}_{X}IC_{X}=\mathbb{D}_{X}\operatorname{rat}IC_{X}^{H}\cong \operatorname{rat}\mathbb{D}_{X}^{H}IC_{X}^{H}\cong\operatorname{rat}IC_{X}^{ H}(n)\cong IC_{X},\] if an isomorphism \(\mathbb{Q}_{U}(n)\cong\mathbb{Q}_{U}\) is chosen. **Definition 2.11**.: ([14].) The _intersection generalized Todd class_ (or _intersection Hirzebruch characteristic class_) is \[IT_{y*}(X):=MHT_{y*}[IC_{X}^{H}[-n]]\in H_{2*}^{\operatorname{BM}}(X)\otimes \mathbb{Q}[y^{\pm 1},(1+y)^{-1}].\] Taking up expectations formulated by Cappell and Shaneson in [19], this class started to be investigated in detail by Cappell, Maxim and Shaneson in [18]. _Remark 2.12_.: The intersection characteristic class \(IT_{y*}(X)\) is represented by an algebraic cycle by Remark 2.8. _Remark 2.13_.: Later on in the present paper, we will be interested in the specializations \(IT_{1,*}(X)\) for \(y=1\), \(IT_{0,*}(X)\) for \(y=0\) and \(IT_{-1,*}(X)\) for \(y=-1\) in Definition 2.11. Here, we note that specialization \(y=0\) is in fact possible because we actually have \(IT_{y*}(X)\in H_{2*}^{\operatorname{BM}}(X)\otimes\mathbb{Q}[y,(1+y)^{-1}]\) by [52, p. 451, Example 5.2]. Specialization \(y=-1\) is possible in Definition 2.11 because we actually have \(IT_{y*}(X)\in H_{2*}^{\operatorname{BM}}(X)\otimes\mathbb{Q}[y^{\pm 1}]\) as shown in [52, p. 465, Proposition 5.21]. _Remark 2.14_.: Let \(K_{0}(\operatorname{var}/X)\) denote the motivic relative Grothendieck group of complex algebraic varieties over \(X\). By definition, \(K_{0}(\operatorname{var}/X)\) is the free abelian group generated by isomorphism classes \([f]=[f\colon Z\to X]\) of morphisms \(f\) to \(X\) modulo the additivity relation (or scissor relation) \[[f]=[f\circ i]+[f\circ j]\] for a closed inclusion \(i\colon Y\to X\) with complement \(j\colon U=Z\setminus Y\to X\). As pointed out in [52, Remark 5.5], composition of the transformations \(MHC_{y}\) and \(MHT_{y}\) with the natural group homomorphism \[\chi_{Hdg}\colon K_{0}(\operatorname{var}/X)\to K_{0}(MHM(X)),\qquad[f\colon Z \to X]\mapsto[f;\mathbb{Q}_{Z}^{H}],\] (see [52, Corollary 4.10]) yields similar transformations \[mC_{y}:=MHC_{y}\circ\chi_{Hdg},\qquad T_{y*}:=MHT_{y}\circ\chi_{Hdg},\] defined on \(K_{0}(\operatorname{var}/X)\), which are studied in [14]. The intersection Hodge modules are multiplicative under the external product \[\boxtimes\colon MHM(X)\times MHM(Y)\to MHM(X\times Y)\] as follows (compare also [40, p. 471]). **Proposition 2.15**.: _Let \(X\) and \(Y\) be pure-dimensional complex algebraic varieties. Then there is a natural isomorphism \(IC_{X\times Y}^{H}=IC_{X}^{H}\boxtimes IC_{Y}^{H}\) in \(MHM(X\times Y)\)._ Proof.: Let \(Z\) be a pure \(n\)-dimensional complex algebraic variety. The perverse sheaf \(IC_{Z}(\mathbb{Q})\) is a simple object in the category of perverse sheaves (see [13, p. 112, Theoreme 4.3.1] or [39, p. 142, Corollary 8.4.13]). 
Since the functor \(\operatorname{rat}\colon MHM(Z)\to\operatorname{Per}(Z)\) is faithful, the module \(IC_{Z}^{H}\) is the unique simple object in the category \(MHM(Z)\) which restricts to \(\mathbb{Q}_{U}^{H}[n]\) over a dense open smooth subset \(U\) of \(Z\) (see [43, p. 345, Section 14.1.4]). Let \(a\) and \(b\) denote the dimensions of \(X\) and \(Y\), respectively. We shall show that \(IC_{X}^{H}\boxtimes IC_{Y}^{H}\) is simple in \(MHM(X\times Y)\) and restricts to \(\mathbb{Q}_{U\times V}^{H}[a+b]\), where \(U\subset X\) and \(V\subset Y\) are dense open smooth subsets. As for the former claim, let \(i\colon A\hookrightarrow IC_{X}^{H}\boxtimes IC_{Y}^{H}\) be a subobject. As \(\operatorname{rat}\) is exact, its application to \(i\) yields a subobject \(\operatorname{rat}(i)\colon\operatorname{rat}(A)\hookrightarrow\operatorname {rat}(IC_{X}^{H}\boxtimes IC_{Y}^{H})\). As \(\operatorname{rat}\) preserves external products (see [49, Eq. (1.5.2)]), there is a natural isomorphism \[\operatorname{rat}(IC_{X}^{H}\boxtimes IC_{Y}^{H})=\operatorname{rat}(IC_{X}^ {H})\boxtimes\operatorname{rat}(IC_{Y}^{H})=IC_{X}\boxtimes IC_{Y}.\] In the category of perverse sheaves, we have \(IC_{X}\boxtimes IC_{Y}=IC_{X\times Y}\), as a standard verification of the intersection sheaf axioms (see [29, p. 121, Section 6.3]). Therefore, we obtain a subobject \(\operatorname{rat}(i)\colon\operatorname{rat}(A)\hookrightarrow IC_{X\times Y}\) in \(\operatorname{Per}(X\times Y)\). Since \(IC_{X\times Y}\) is a simple object in \(\operatorname{Per}(X\times Y)\), we obtain either that \(\operatorname{rat}(A)=0\) or that \(\operatorname{rat}(i)\) is an isomorphism. Faithful functors on balanced categories (such as abelian or triangulated categories) are conservative. Thus, if \(\operatorname{rat}(i)\) is an isomorphism, then \(i\) is an isomorphism as well. On the other hand, let us suppose that \(\operatorname{rat}(A)=0\). Application of the additive functor \(\operatorname{rat}\) to the unique morphism \(t\colon A\to 0\) yields \(\operatorname{rat}(t)\colon\operatorname{rat}(A)\to\operatorname{rat}(0)=0\), which is an isomorphism since \(\operatorname{rat}(A)=0\). Using again that \(\operatorname{rat}\) is conservative, we conclude that \(A=0\). This shows that \(IC_{X}^{H}\boxtimes IC_{Y}^{H}\) is simple in \(MHM(X\times Y)\). Next, let us show that there is a natural isomorphism \[(i_{U}\times i_{V})^{*}(IC_{X}^{H}\boxtimes IC_{Y}^{H})=\mathbb{Q}_{U\times V}^ {H}[a+b]\in MHM(U\times V), \tag{6}\] where \(i_{U}\colon U\hookrightarrow X\) and \(i_{V}\colon V\hookrightarrow Y\) are inclusions of dense open smooth subsets. We will first establish the isomorphism in \(D^{b}MHM(U\times V)\), and then show that it gives rise to one in \(MHM(U\times V)\). For \(M\in D^{b}MHM(X)\) and \(N\in D^{b}MHM(Y)\), there is a natural isomorphism \[(i_{U}\times i_{V})^{*}(M\boxtimes N)=i_{U}^{*}M\boxtimes i_{V}^{*}N\in D^{b} MHM(U\times V)\] by [49, Eq. (3.1.9), p. 13]. 
Substituting \(M=IC_{X}^{H}\) and \(N=IC_{Y}^{H}\), we obtain a natural isomorphism \[(i_{U}\times i_{V})^{*}(IC_{X}^{H}\boxtimes IC_{Y}^{H})=i_{U}^{*}IC_{X}^{H} \boxtimes i_{V}^{*}IC_{Y}^{H}\in D^{b}MHM(U\times V).\] Furthermore, it follows from \(i_{U}^{*}IC_{X}^{H}=\mathbb{Q}_{U}^{H}[a]\) and \(i_{V}^{*}IC_{Y}^{H}=\mathbb{Q}_{V}^{H}[b]\) that \[i_{U}^{*}IC_{X}^{H}\boxtimes i_{V}^{*}IC_{Y}^{H}=\mathbb{Q}_{U}^{H}[a]\boxtimes \mathbb{Q}_{V}^{H}[b]=(\mathbb{Q}_{U}^{H}\boxtimes\mathbb{Q}_{V}^{H})[a+b]\in D ^{b}MHM(U\times V).\] For the projection \(p\colon U\times V\to V\) to the second factor, there are natural isomorphisms \[\mathbb{Q}_{U}^{H}\boxtimes\mathbb{Q}_{V}^{H}=p^{*}\mathbb{Q}_{V}^{H}=\mathbb{Q }_{U\times V}^{H}\in D^{b}MHM(U\times V)\] by [49, Eq. (3.6.3), p. 15] and [49, Eq. (3.9.1), p. 16], respectively. By composition, we arrive at a natural isomorphism \[(i_{U}\times i_{V})^{*}(IC_{X}^{H}\boxtimes IC_{Y}^{H})=\mathbb{Q}_{U\times V}^ {H}[a+b]\in D^{b}MHM(U\times V). \tag{7}\] Since \(MHM(-)\) is stable under external product and pullback by open embeddings, it follows from \(IC_{X}^{H}\in MHM(X)\) and \(IC_{Y}^{H}\in MHM(Y)\) that \((i_{U}\times i_{V})^{*}(IC_{X}^{H}\boxtimes IC_{Y}^{H})\in MHM(U\times V)\). Moreover, we have \(\mathbb{Q}_{U\times V}^{H}[a+b]\in MHM(X\times Y)\) because \(U\times V\) is smooth. Finally, application of the functor \(H^{0}\colon D^{b}MHM(U\times V)\to MHM(U\times V)\) to the isomorphism (7) yields the desired isomorphism (6). **Corollary 2.16**.: _The intersection generalized Todd class of the product \(X\times Y\) of complex algebraic pure-dimensional varieties \(X,Y\) satisfies_ \[IT_{y*}(X\times Y)=IT_{y*}(X)\times IT_{y*}(Y)\in H^{\mathrm{BM}}_{2*}(X\times Y) \otimes\mathbb{Q}[y^{\pm 1},(1+y)^{-1}].\] Proof.: By [52, Corollary 5.10, p. 458], the motivic Chern class transformation \[MHC_{y}:K_{0}(MHM(X))\to K_{0}^{\mathrm{alg}}(X)\otimes\mathbb{Z}[y^{\pm 1}]\] (see Definition 2.1) commutes with the external product, that is, \[MHC_{y}([M]\boxtimes[N])=MHC_{y}([M])\boxtimes MHC_{y}([N])\] for \(M\in D^{b}MHM(X)\) and \(N\in D^{b}MHM(Y)\). Furthermore, according to [52, p. 462], the Todd class transformation \[\tau_{*}\colon K_{0}^{\mathrm{alg}}(X)\longrightarrow H^{\mathrm{BM}}_{2*}(X) \otimes\mathbb{Q}\] of Baum, Fulton, MacPherson is compatible with the external product in the sense that \[\tau_{*}([\mathcal{F}]\boxtimes[\mathcal{G}])=\tau_{*}([\mathcal{F}])\times \tau_{*}([\mathcal{G}])\] for \([\mathcal{F}]\in K_{0}(X)\) and \([\mathcal{G}]\in K_{0}(Y)\) (see also [25, Example 18.3.1, p. 360, and Example 19.1.9, p. 377], as well as [27, Property (i), p. 122]). (The cross product on Borel-Moore homology agrees with the cross product on ordinary singular homology under the isomorphism \(H^{\mathrm{BM}}_{*}(X)\cong H_{*}(\overline{X},\overline{X}-X)\), where \(\overline{X}\) is a compactification of \(X\) such that \((\overline{X},\overline{X}-X)\) is a CW pair, see [21, p. 99, 2.6.19].) 
The latter implies for the twisted Todd transformation
\[\mathrm{td}_{1+y}:K_{0}^{\mathrm{alg}}(X)\otimes\mathbb{Z}[y^{\pm 1}]\longrightarrow H^{\mathrm{BM}}_{2*}(X)\otimes\mathbb{Q}[y^{\pm 1},(1+y)^{-1}]\]
of Definition 2.7 that
\[\begin{aligned}
\mathrm{td}_{1+y}([\mathcal{F}])\times\mathrm{td}_{1+y}([\mathcal{G}])&=\left[\sum_{k\geq 0}\tau_{k}([\mathcal{F}])\frac{1}{(1+y)^{k}}\right]\times\left[\sum_{l\geq 0}\tau_{l}([\mathcal{G}])\frac{1}{(1+y)^{l}}\right]\\
&=\sum_{k,l\geq 0}\tau_{k}([\mathcal{F}])\times\tau_{l}([\mathcal{G}])\frac{1}{(1+y)^{k+l}}\\
&=\sum_{m\geq 0}\left[\sum_{k+l=m}\tau_{k}([\mathcal{F}])\times\tau_{l}([\mathcal{G}])\right]\frac{1}{(1+y)^{m}}\\
&=\sum_{m\geq 0}\tau_{m}([\mathcal{F}]\boxtimes[\mathcal{G}])\frac{1}{(1+y)^{m}}\\
&=\mathrm{td}_{1+y}([\mathcal{F}]\boxtimes[\mathcal{G}])
\end{aligned}\]
for \([\mathcal{F}]\in K_{0}(X)\otimes\mathbb{Z}[y^{\pm 1}]\) and \([\mathcal{G}]\in K_{0}(Y)\otimes\mathbb{Z}[y^{\pm 1}]\). By composition, we see that the motivic Hirzebruch class transformation
\[MHT_{y*}:=\mathrm{td}_{1+y}\circ MHC_{y}:K_{0}(MHM(X))\longrightarrow H^{\mathrm{BM}}_{2*}(X)\otimes\mathbb{Q}[y^{\pm 1},(1+y)^{-1}]\]
(see Definition 2.9) satisfies
\[MHT_{y*}([M]\boxtimes[N])=MHT_{y*}([M])\times MHT_{y*}([N])\]
for \(M\in D^{b}MHM(X)\) and \(N\in D^{b}MHM(Y)\). Finally, by taking \(M=IC_{X}^{H}[-a]\) and \(N=IC_{Y}^{H}[-b]\), where \(a\) and \(b\) denote the dimensions of \(X\) and \(Y\), respectively, we can compute the intersection generalized Todd class (see Definition 2.11) of the product \(X\times Y\) to be
\[\begin{aligned}
IT_{y*}(X\times Y)&=MHT_{y*}[IC^{H}_{X\times Y}[-(a+b)]]\\
&=MHT_{y*}[IC^{H}_{X}[-a]\boxtimes IC^{H}_{Y}[-b]]\\
&=MHT_{y*}([IC^{H}_{X}[-a]]\boxtimes[IC^{H}_{Y}[-b]])\\
&=MHT_{y*}[IC^{H}_{X}[-a]]\times MHT_{y*}[IC^{H}_{Y}[-b]]\\
&=IT_{y*}(X)\times IT_{y*}(Y),
\end{aligned}\]
where we exploited that \(IC^{H}_{X\times Y}=IC^{H}_{X}\boxtimes IC^{H}_{Y}\in MHM(X\times Y)\) by Proposition 2.15.

**Proposition 2.17**.: _Let \(\Phi\colon X\stackrel{{\cong}}{{\longrightarrow}}Y\) be an isomorphism of pure-dimensional complex algebraic varieties. Then, \(\Phi_{*}IT_{y*}(X)=IT_{y*}(Y)\) in \(H^{\mathrm{BM}}_{2*}(Y)\otimes\mathbb{Q}[y^{\pm 1},(1+y)^{-1}]\)._

Proof.: This follows from \(\Phi_{*}IC^{H}_{X}=IC^{H}_{Y}\), and by naturality of \(MHT_{y*}\).

## 3. Blow-up and Transversality for Complex Manifolds

This section studies stratifications and transversality of the strict transform of Whitney stratified sets under blow-ups along smooth centers. Throughout, let \(M\) be a complex submanifold of a complex manifold \(W\).

**Lemma 3.1**.: _Let \(S\subset W\) be a complex submanifold that is transverse to \(M\subset W\) (in the sense of \(C^{\infty}\) manifolds). Then, the intersection \(S\cap M\) is a complex submanifold of \(W\). Moreover, the canonical map \(N_{S\cap M}S\to(N_{M}W)|_{S\cap M}\) of complex normal bundles is an isomorphism._

Proof.: The proof of the first claim is similar to the real smooth version, but is based on the complex counterpart of the \(C^{\infty}\) implicit function theorem (see e.g. Griffiths-Harris [31, p. 19]). The proof of the second claim uses the definition of the complex normal bundle \(N_{M}W\) as the quotient of the restricted complex tangent bundle \(TW|_{M}\) by the subbundle \(TM\) (see e.g. Griffiths-Harris [31, p. 71]).

In the following, let \(\pi:W^{\prime}=\mathrm{Bl}_{M}W\to W\) denote the blow-up of \(W\) along \(M\).
Thus, \(W^{\prime}\) is a complex manifold, \(\pi\) is a holomorphic map that is an isomorphism away from \(M\), \(\pi|\colon W^{\prime}\setminus\pi^{-1}(M)\stackrel{{\cong}}{{ \longrightarrow}}W\setminus M\), and the restriction \(\pi|\colon E\to M\) of \(\pi\) to the exceptional divisor \(E:=\pi^{-1}(M)\subset W^{\prime}\) is isomorphic over \(M\) to the projectivization \(\mathbb{P}(N_{M}W)\to M\) of the complex normal bundle \(N_{M}W\to M\) of \(M\) in \(W\). The blow-up of \(W\) along \(M\) can be characterized uniquely up to isomorphism as follows. Given a complex manifold \(W^{\prime}_{0}\) and a holomorphic map \(\pi_{0}\colon W^{\prime}_{0}\to W\) that is an isomorphism away from \(M\), and such that the fibers of \(\pi_{0}\) over points of \(M\) are isomorphic to complex projective \((k-1)\)-space \(\mathbb{P}(\mathbb{C}^{k})\), where \(k\) denotes the codimension of \(M\) in \(W\), there is an isomorphism \(\Phi\colon W^{\prime}_{0}\stackrel{{\cong}}{{\longrightarrow}}W^{\prime}\) such that \(\pi_{0}=\pi\circ\Phi\). Consequently, we have the following **Proposition 3.2**.: _For every open subset \(U\subset W\), the restriction \(\pi|\colon\pi^{-1}(U)\to U\) is the blow-up of \(U\) along the complex submanifold \(M\cap U\subset U\)._ The inverse image \(\pi^{-1}(Z)\) of a subset \(Z\subset W\) is called the _total transform_ of \(Z\) under \(\pi\). For any subset \(Z\subset W\), the _strict transform_\(\tilde{Z}\) of \(Z\) under \(\pi\) is the intersection of the total transform \(\pi^{-1}(Z)\) with the closure of the inverse image \(\pi^{-1}(Z\setminus M)\) in \(W^{\prime}\). In particular, for a closed subset \(Z\subset W\), we note that \(\tilde{Z}\) is just the closure of the inverse image \(\pi^{-1}(Z\setminus M)\) in \(W^{\prime}\). In view of Proposition 3.2, it makes sense to study the behavior of the strict transform under restriction to open subsets. **Lemma 3.3**.: _For any subset \(Z\subset W\), the strict transform of the intersection \(Z\cap U\) under the blow-up \(\pi|\colon\pi^{-1}(U)\to U\) of an open subset \(U\subset W\) along \(M\cap U\) is given by \(\tilde{Z}\cap\pi^{-1}(U)\)._ Proof.: The strict transform of the intersection \(Z\cap U\) under the blow-up \(\pi|\colon\pi^{-1}(U)\to U\) of an open subset \(U\subset W\) along \(M\cap U\) is by definition the intersection of the total transform \(\pi^{-1}(Z\cap U)=\pi^{-1}(Z)\cap\pi^{-1}(U)\) with the closure of the inverse image \(\pi^{-1}((Z\cap U)\setminus(M\cap U))=\pi^{-1}(Z\setminus M)\cap\pi^{-1}(U)\) in \(\pi^{-1}(U)\). Since \(\pi^{-1}(U)\subset W^{\prime}\) is an open subset, the closure of \(\pi^{-1}(Z\setminus M)\cap\pi^{-1}(U)\) in \(\pi^{-1}(U)\) equals the intersection of \(\pi^{-1}(U)\) with the closure of \(\pi^{-1}(Z\setminus M)\) in \(W^{\prime}\). All in all, we obtain the intersection of \(\pi^{-1}(Z)\cap\pi^{-1}(U)\) with the closure of \(\pi^{-1}(Z\setminus M)\) in \(W^{\prime}\), which is just \(\tilde{Z}\cap\pi^{-1}(U)\). **Proposition 3.4**.: _If \(S\subset W\) is a complex submanifold that is transverse to \(M\subset W\), then the strict transform and the total transform of \(S\) under \(\pi\) coincide, \(\tilde{S}=\pi^{-1}(S)\)._ Proof.: Without loss of generality, we may assume that \(S\) is a closed subset of \(W\). (In fact, \(S\) is a closed subset of an open tubular neighborhood \(U\subset W\) of \(S\), and the restriction \(\pi|\colon\pi^{-1}(U)\to U\) is the blow-up of \(U\) along the complex submanifold \(M\cap U\subset U\) by Proposition 3.2. 
Hence, the strict transform of \(S\cap U=S\) under \(\pi|\colon\pi^{-1}(U)\to U\) coincides with the total transform \(\pi^{-1}(S)\). On the other hand, this strict transform is given by \(\tilde{S}\cap\pi^{-1}(U)=\tilde{S}\) according to Lemma 3.3.) Then, according to Griffiths-Harris [31, property 5, pp. 604-605], the intersection \(\tilde{S}\cap E\) corresponds under the isomorphism \(\Psi\colon E\cong\mathbb{P}(N_{M}W)\) of bundles over \(M\) to the image in \(N_{M}W\) of the complex tangent spaces \(T_{x}S\) to the points \(x\in S\cap M\). This image is \((N_{M}W)|_{S\cap M}\), since by Lemma 3.1 the canonical map \(N_{S\cap M}S\to(N_{M}W)|_{S\cap M}\) is an isomorphism, so that \(T_{x}S\) surjects onto \((N_{M}W)_{x}\) for every \(x\in S\cap M\). Consequently,
\[\tilde{S}\cap E=\Psi^{-1}\mathbb{P}((N_{M}W)|_{S\cap M})=\pi^{-1}(S\cap M).\]
All in all, we obtain
\[\tilde{S}=(\tilde{S}\cap E)\cup(\tilde{S}\cap\pi^{-1}(S\setminus M))=\pi^{-1}(S\cap M)\cup\pi^{-1}(S\setminus M)=\pi^{-1}(S).\]
From now on, we assume familiarity with the notion of Whitney stratifications; for details, see e.g. [30, Section 1.2, p. 37].

**Corollary 3.5**.: _If \(X\subset W\) is a Whitney stratified subspace whose strata are complex submanifolds of \(W\) that are transverse to \(M\subset W\), then the strict transform and the total transform of \(X\) coincide, \(\tilde{X}=\pi^{-1}(X)\)._

Proof.: Since every stratum \(S\) of \(X\) satisfies \(\pi^{-1}(S)=\tilde{S}\) by Proposition 3.4, and \(\tilde{S}\subset\tilde{X}\) holds by definition of the strict transform, we have
\[\pi^{-1}(X)=\bigcup_{S}\pi^{-1}(S)=\bigcup_{S}\tilde{S}\subset\tilde{X}.\]
Conversely, we have \(\tilde{X}\subset\pi^{-1}(X)\) by definition of the strict transform.

We proceed to the main result of this section.

**Theorem 3.6**.: _Let \(X\subset W\) be a Whitney stratified subspace whose strata are complex submanifolds of \(W\) that are transverse to \(M\subset W\). Then, the strict transform \(\tilde{X}\subset W^{\prime}\) admits a Whitney stratification whose strata are transverse to the exceptional divisor \(E\subset W^{\prime}\) of \(\pi\)._

In view of Proposition 3.4 and Corollary 3.5, Theorem 3.6 follows from the following

**Theorem 3.7**.: _Let \(X\subset W\) be a Whitney stratified subspace whose strata are transverse to \(M\subset W\). Then, the total transforms \(\pi^{-1}(S)\) of the strata \(S\) of \(X\) form a Whitney stratification of the total transform \(\pi^{-1}(X)\subset W^{\prime}\). Furthermore, every stratum of the total transform \(\pi^{-1}(X)\) is transverse to \(E=\pi^{-1}(M)\subset W^{\prime}\)._

Proof.: The claims are local in the sense that it suffices to find an open cover of \(W\) such that for every subset \(U\subset W\) of this cover, the inverse images \(\pi^{-1}(S\cap U)\) of the strata \(S\cap U\) of \(X\cap U\) form a Whitney stratification of the inverse image \(\pi^{-1}(X\cap U)\subset\pi^{-1}(U)\), and every \(\pi^{-1}(S\cap U)\) is transverse to \(\pi^{-1}(M\cap U)\subset\pi^{-1}(U)\). Since \(\pi\colon W^{\prime}\to W\) is locally isomorphic to the blow-up of an \(n\)-dimensional open disc along a coordinate plane (see property 3 in [31, p. 604]), we may therefore assume without loss of generality that \(\pi\) is globally of this form.
Explicitly, the blow-up \(\pi:W^{\prime}\to W\) of an \(n\)-dimensional open disc \(W\subset\mathbb{C}^{n}\) with complex coordinates \(z=(z_{1},\dots,z_{n})\) along the coordinate plane \(M=W\cap\{z_{i}=0;\,i=1,\dots,k\}\) of codimension \(k\) is given by restricting the projection \(W\times\mathbb{P}(\mathbb{C}^{k})\to W\) to the first factor to \[W^{\prime}=\{(z,w=[w_{1}:\dots:w_{k}])\in W\times\mathbb{P}(\mathbb{C}^{k}); \,z_{i}w_{j}=z_{j}w_{i}\text{ for all }1\leq i,j\leq k\},\] and we have \(E=\pi^{-1}(M)=M\times\mathbb{P}(\mathbb{C}^{k})\). In this situation, we have the following **Lemma 3.8**.: _If \(S\subset W\) is a smooth submanifold that is transverse to \(M\subset W\), then \(S\times\mathbb{P}(\mathbb{C}^{k})\) is transverse to \(E=M\times\mathbb{P}(\mathbb{C}^{k})\subset W\times\mathbb{P}(\mathbb{C}^{k})\) and to \(W^{\prime}\subset W\times\mathbb{P}(\mathbb{C}^{k})\)._ Proof.: Since \(S\) is transverse to \(M\subset W\), it follows that \(S\times\mathbb{P}(\mathbb{C}^{k})\) is transverse to \(E=M\times\mathbb{P}(\mathbb{C}^{k})\subset W\times\mathbb{P}(\mathbb{C}^{k})\), which proves the first claim. To show the second claim, it remains to prove transversality of \(S\) and \(W^{\prime}\) at points in \((S\times\mathbb{P}(\mathbb{C}^{k}))\cap(W^{\prime}\setminus E)\). For this purpose, fix a point \[x=(y,z)\in(S\times\mathbb{P}(\mathbb{C}^{k}))\cap(W^{\prime}\setminus E)\;( \subset(S\setminus M)\times\mathbb{P}(\mathbb{C}^{k})),\] and let \((p,q)\in T_{x}(W\times\mathbb{P}(\mathbb{C}^{k}))=T_{y}(W\setminus M)\times T _{z}\mathbb{P}(\mathbb{C}^{k})\) be an arbitrary tangent vector. Since the differential \[d_{x}\pi:T_{x}(W^{\prime}\setminus E)\to T_{y}(W\setminus M)\] is surjective, the vector \(p\in T_{y}(W\setminus M)\) is the image under \(d_{x}\pi\) of a vector in \(T_{x}(W^{\prime}\setminus E)\) of the form \((p,q^{\prime})\in T_{x}(W^{\prime}\setminus E)\subset T_{y}(W\setminus M) \times T_{z}\mathbb{P}(\mathbb{C}^{k})\). Writing \((p,q)=(p,q^{\prime})+(0,q-q^{\prime})\) with \((p,q^{\prime})\in T_{x}(W^{\prime}\setminus E)\) and \((0,q-q^{\prime})\in T_{x}(S\times\mathbb{P}(\mathbb{C}^{k}))=T_{y}S\times T_{ z}\mathbb{P}(\mathbb{C}^{k})\), we conclude that \(S\times\mathbb{P}(\mathbb{C}^{k})\) and \(W^{\prime}\) are transverse at \(x\), which proves the second claim. Using Lemma 3.8, we can now complete the proof of Theorem 3.7. For this purpose, let \(X\subset W\) be a Whitney stratified subspace whose strata \(S\) are transverse to \(M\subset W\). We have to show that the total transforms \(\pi^{-1}(S)\) form a Whitney stratification of the total transform \(\pi^{-1}(X)\subset W^{\prime}\), and that every stratum \(\pi^{-1}(S)\) of the total transform \(\pi^{-1}(X)\) is transverse to \(E\subset W^{\prime}\). Since \(\pi^{-1}(Z)=W^{\prime}\cap(Z\times\mathbb{P}(\mathbb{C}^{k}))\) for all subsets \(Z\subset W\), we obtain the claims by applying Lemma 3.9 below to the smooth submanifolds \(E\subset W^{\prime}\) of the smooth manifold \(W\times\mathbb{P}(\mathbb{C}^{k})\), and to the Whitney stratified subset \(X\times\mathbb{P}(\mathbb{C}^{k})\subset W\times\mathbb{P}(\mathbb{C}^{k})\), whose strata \(S\times\mathbb{P}(\mathbb{C}^{k})\) are all transverse to \(E\subset W\times\mathbb{P}(\mathbb{C}^{k})\) and to \(W^{\prime}\subset W\times\mathbb{P}(\mathbb{C}^{k})\) by Lemma 3.8. **Lemma 3.9**.: _Let \(N\subset M\) be smooth submanifolds of a smooth manifold \(W\). 
Let \(X\subset W\) be a Whitney stratified subset whose strata \(S\) are transverse to \(N\subset W\) and to \(M\subset W\). Then, the intersection \(Y:=M\cap X\subset M\) is a Whitney stratified subset whose strata \(M\cap S\) are transverse to \(N\subset M\)._

Proof.: By the theory of Whitney stratifications (see e.g. Lemma 2.2.2 in [20]), it is well-known that the intersection \(Y:=M\cap X\subset M\) is a Whitney stratified subset with strata \(S\cap M\), where \(S\) runs through the strata of \(X\). It remains to show that the strata of \(Y\) are transverse to \(N\subset M\). For this purpose, we fix \(x\in N\cap Y\), and have to show that \(T_{x}M\subset T_{x}N+T_{x}S^{\prime}\), where \(S^{\prime}\) denotes the stratum of \(Y=M\cap X\) that contains \(x\). Now, since the stratum \(S\) of \(X\) that contains \(x\) is transverse to \(N\) in \(W\) and \(x\in N\cap Y\subset N\cap X\), we have \(T_{x}W\subset T_{x}N+T_{x}S\). Therefore, using \(M\subset W\), we obtain \(T_{x}M\subset T_{x}N+T_{x}S\). Thus, given \(\mu\in T_{x}M\), we find \(\nu\in T_{x}N\) and \(\xi\in T_{x}S\) such that \(\mu=\nu+\xi\). Using \(T_{x}N\subset T_{x}M\), we conclude that
\[\xi=\mu-\nu\in T_{x}M\cap T_{x}S=T_{x}(M\cap S)=T_{x}S^{\prime}.\]
All in all, we have \(\mu=\nu+\xi\in T_{x}N+T_{x}S^{\prime}\). As \(\mu\in T_{x}M\) was arbitrary, the claim follows.

This completes the proof of Theorem 3.7.

_Remark 3.10_.: Our proof of Theorem 3.7 is inspired by an argument of Cheniot [20, pp. 145-146]. There, one has \(M=A\) of codimension \(k=2\), and thus \(\mathbb{P}(\mathbb{C}^{k})=\mathbb{P}^{1}(\mathbb{C})\), where \(\phi:=\pi\colon Z:=W^{\prime}\to W\) denotes the blow-up. However, all we really need is that \(M\) is transverse to the Whitney stratification of \(X\) in \(W\).

## 4. Algebraic Gysin Restriction of \(IT_{1,*}\) in a Transverse Setup

In Section 4.1, we review from [7] a topological notion of normally nonsingular embedding that respects a stratification of the ambient space. Then, Section 4.2 discusses along the lines of [7] the first author's Verdier-Riemann-Roch type formula for the algebraic Gysin restriction of the Hodge-theoretic characteristic classes \(IT_{1,*}\) with respect to suitably normally nonsingular closed algebraic regular embeddings. Finally, in Section 4.3, we derive our algebraic Gysin restriction formula for \(IT_{1,*}\) in a transverse setup (see Theorem 4.15) by invoking Theorem 3.6.

### Topological Normally Nonsingular Embeddings

**Definition 4.1** (see Definition 3.1 in [7]).: A _topological stratification_ of a topological space \(X\) is a filtration
\[X=X_{n}\supseteq X_{n-1}\supseteq\dots\supseteq X_{1}\supseteq X_{0}\supseteq X_{-1}=\varnothing\]
by closed subsets \(X_{i}\) such that the difference sets \(X_{i}-X_{i-1}\) are topological manifolds of pure dimension \(i\), unless empty. The connected components \(X_{\alpha}\) of these difference sets are called the _strata_. We will often write stratifications as \(\mathcal{X}=\{X_{\alpha}\}\). The following definition of locally cone-like topological stratifications, which are also known as _cs-stratifications_, is due to Siebenmann [54]; see also [51, Def. 4.2.1, p. 232].
**Definition 4.2** (see Definition 3.2 in [7]).: A topological stratification \(\{X_{i}\}\) of \(X\) is called _locally cone-like_ if for all \(x\in X_{i}-X_{i-1}\) there is an open neighborhood \(U\) of \(x\) in \(X\), a compact topological space \(L\) with filtration \[L=L_{n-i-1}\supseteq L_{n-i-2}\supseteq\dots\supseteq L_{0}\supseteq L_{-1}=\varnothing,\] and a filtration preserving homeomorphism \(U\cong\mathbb{R}^{i}\times\mathrm{cone}^{\circ}(L)\), where \(\mathrm{cone}^{\circ}(L)\) denotes the open cone on \(L\). We shall employ the following notion of normally nonsingular embedding of topological spaces that respects an ambient locally cone-like topological stratification. **Definition 4.3** (see Definition 3.3 in [7]).: Let \(X\) be a topological space with locally cone-like topological stratification \(\mathcal{X}=\{X_{\alpha}\}\) and let \(Y\) be any topological space. An embedding \(g\colon Y\hookrightarrow X\) is called _normally nonsingular_ (with respect to \(\mathcal{X}\)), if 1. \(\mathcal{Y}:=\{Y_{\alpha}:=X_{\alpha}\cap Y\}\) is a locally cone-like topological stratification of \(Y\), 2. there exists a topological vector bundle \(\pi\colon E\to Y\) and 3. there exists a (topological) embedding \(j\colon E\to X\) such that 1. \(j(E)\) is open in \(X\), 2. \(j|_{Y}=g\), and 3. the homeomorphism \(j\colon E\xrightarrow{\cong}j(E)\) is stratum preserving, where the open set \(j(E)\) is endowed with the stratification \(\{X_{\alpha}\cap j(E)\}\) and \(E\) is endowed with the stratification \(\mathcal{E}=\{\pi^{-1}Y_{\alpha}\}\). Note that the above stratification \(\mathcal{E}\) of the total space \(E\) is automatically topologically locally cone-like. For example, transverse intersections give rise to normally nonsingular inclusions as follows (compare the beginning of the proof of Proposition 6.3 in [7]). **Theorem 4.4**.: _(See Theorem 1.11 in [30, p. 47].) Let \(X\subset W\) be a Whitney stratified subset of a smooth manifold \(W\). Suppose that \(M\subset W\) is a smooth submanifold of codimension \(r\) that is transverse to every stratum of \(X\), and that \(Y=M\cap X\) is compact. Then, the inclusion \(g\colon Y\hookrightarrow X\) is normally nonsingular of codimension \(r\) with respect to the normal bundle \(\nu=\nu_{M\subset W}|_{Y}\) given by restriction of the normal bundle \(\nu_{M\subset W}\) of \(M\) in \(W\)._ Proof.: In [45], Pati points out that one may wish to add precision to the argument given in [30, p. 48], as Thom's First Isotopy Lemma is applied there to a composition \(\Psi^{-1}(X)\subset E_{\epsilon}\times(-\delta,1+\delta)\to(-\delta,1+\delta)\) which is not necessarily proper. To address this, we impose the assumption that \(Y=M\cap X\) be compact, and apply Thom's First Isotopy Lemma to the proper composition \(\Psi^{-1}(X)\cap D_{\epsilon/2}\subset E_{\epsilon}\times(-\delta,1+\delta) \to(-\delta,1+\delta)\), where \(D_{\epsilon/2}\subset E_{\epsilon}\) denotes the closed disk bundle of radius \(\epsilon/2\). To assure that the intersection \(\Psi^{-1}(X)\cap D_{\epsilon/2}\) is Whitney stratified, we choose \(\epsilon>0\) small enough to achieve, using Whitney's condition B, that \(\Psi|_{\partial D_{\epsilon/2}}\) is transverse to \(X\subset W\). ### Algebraic Gysin Restriction of \(IT_{1,*}\) for Upwardly Normally Nonsingular Embeddings We review the Verdier-Riemann-Roch type formula for the algebraic Gysin restriction of \(IT_{1,*}\) obtained in [7]. **Definition 4.5** (see p. 
1279 in [7]).: An _algebraic stratification_ of a complex algebraic variety \(X\) is a locally cone-like topological stratification \(\{X_{2i}\}\) of \(X\) such that the closed subspaces \(X_{2i}\) are algebraic subsets of \(X\). **Example 4.6**.: Let \(Z\) be a closed subvariety of a smooth complex algebraic variety \(W\). A Whitney stratification of \(Z\subset W\) is called _complex algebraic_ if all its open strata are smooth complex algebraic subvarieties of \(W\). It is well-known that complex algebraic Whitney stratifications always exist (see e.g. [30, Theorem, p. 43]). Furthermore, complex algebraic Whitney stratifications are algebraic stratifications in the sense of Definition 4.5. (Here, we point out that the closures of the open strata are the same in the Zariski and the complex topology; see e.g. [42, Corollary 1, p. 60].) **Definition 4.7** (see Definition 3.4 in [7]).: If \(X\) and \(Y\) are complex algebraic varieties and \(g\colon Y\hookrightarrow X\) a closed algebraic embedding whose underlying topological embedding \(g_{\mathrm{top}}\) in the complex topology is normally nonsingular, then we will call \(g\) and \(g_{\mathrm{top}}\)_compatibly stratifiable_ if there exists an algebraic stratification \(\mathcal{X}\) of \(X\) such that \(g_{\mathrm{top}}\) is normally nonsingular with respect to \(\mathcal{X}\) and the induced stratification \(\mathcal{Y}\) is an algebraic stratification of \(Y\). The algebraic normal bundle of a regular algebraic embedding does not necessarily reflect the normal topology near the subvariety. In particular, the underlying topological embedding needs not be normally nonsingular. This observation motivates the following **Definition 4.8** (see Definition 6.1 in [7]).: A closed regular algebraic embedding \(Y\hookrightarrow X\) of complex algebraic varieties is called _tight_, if its underlying topological embedding (in the complex topology) is normally nonsingular and compatibly stratifiable, with topological normal bundle \(\pi\colon E\to Y\) as in Definition 4.3, and \(E\to Y\) is isomorphic (as a topological vector bundle) to the underlying topological vector bundle of the algebraic normal bundle \(N_{Y}X\) of \(Y\) in \(X\). For example, closed embeddings of smooth complex varieties are tight, see Example 6.2 in [7]. Next, we recall the notion of upward normal nonsingularity for tight embeddings. For a closed regular embedding \(V\hookrightarrow U\) of complex varieties, let \(\pi\colon\operatorname{Bl}_{V}U\to U\) denote the blow-up of \(U\) along \(V\). The exceptional divisor \(E=\pi^{-1}(V)\subset\operatorname{Bl}_{V}U\) is the projectivization \(\mathbb{P}(N)\) of the algebraic normal bundle \(N\) of \(V\) in \(U\). **Definition 4.9** (see Definition 6.5 in [7]).: A tight embedding \(Y\hookrightarrow X\) is called _upwardly normally nonsingular_ if the inclusion \(E\subset\operatorname{Bl}_{Y\times 0}(X\times\mathbb{C})\) of the exceptional divisor \(E\) is topologically normally nonsingular. According to Verdier [58], a regular closed algebraic embedding \(g\colon Y\hookrightarrow X\) has an associated algebraic Gysin homomorphism on Borel-Moore homology \[g^{!}_{\operatorname{alg}}\colon H^{\operatorname{BM}}_{*}(X;\mathbb{Q}) \longrightarrow H^{\operatorname{BM}}_{*-2c}(Y;\mathbb{Q}),\] where \(c\) is the complex codimension of \(Y\) in \(X\). 
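For orientation, we note a standard special case as an illustration (it is not needed in the sequel): if \(X\) and \(Y\) are smooth and \(g\colon Y\hookrightarrow X\) is a closed embedding of complex codimension \(c\), then under the Poincaré duality isomorphisms \(H^{\operatorname{BM}}_{m}(X;\mathbb{Q})\cong H^{2\dim_{\mathbb{C}}X-m}(X;\mathbb{Q})\) and \(H^{\operatorname{BM}}_{m-2c}(Y;\mathbb{Q})\cong H^{2\dim_{\mathbb{C}}X-m}(Y;\mathbb{Q})\), the algebraic Gysin homomorphism \(g^{!}_{\operatorname{alg}}\) corresponds to the restriction homomorphism \(g^{*}\) on cohomology.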
The following result is the first author's Verdier-Riemann-Roch type formula for the algebraic Gysin restriction of the Hodge-theoretic characteristic classes \(IT_{1,*}\) (see Definition 2.11) with respect to upwardly normally nonsingular embeddings.

**Theorem 4.10** (see Theorem 6.30 in [7]).: _Let \(X,Y\) be pure-dimensional compact complex algebraic varieties and let \(g\colon Y\hookrightarrow X\) be an upwardly normally nonsingular embedding. Let \(N=N_{Y}X\) be the algebraic normal bundle of \(g\) and let \(\nu\) denote the topological normal bundle of the topologically normally nonsingular inclusion underlying \(g\). Then_
\[g^{!}_{\operatorname{alg}}IT_{1,*}(X)=L^{*}(N)\cap IT_{1,*}(Y)=L^{*}(\nu)\cap IT_{1,*}(Y).\]

### Algebraic Gysin Restriction of \(IT_{1,*}\) in a Transverse Setup

Tight embeddings arise frequently from transverse intersections in ambient smooth varieties, as we shall discuss next. For this purpose, we require the notion of Tor-independence (compare [7, p. 1312]), which can be thought of as a transversality condition in algebraic settings. For indications of this viewpoint in the literature, see for instance Section 1.6 in Baum-Fulton-MacPherson [12, p. 165], Definition 1.1.1 in Levine-Morel [37, p. 1], as well as Sierra's notion of homological transversality [56].

**Definition 4.11**.: Closed subschemes \(X,Y\subset S\) of a scheme \(S\) are called _Tor-independent_ if
\[\operatorname{Tor}_{i}^{\mathcal{O}_{S}}(\mathcal{O}_{X},\mathcal{O}_{Y})=0\text{ for all }i>0.\]

**Proposition 4.12** (Proposition 6.3 in [7]).: _Let \(M\hookrightarrow W\) be a closed algebraic embedding of smooth complex algebraic varieties. Let \(X\subset W\) be a (possibly singular) algebraic subvariety, equipped with an algebraic Whitney stratification and set \(Y=X\cap M\). If_

* _each stratum of_ \(X\) _is transverse to_ \(M\)_, and_
* \(X\) _and_ \(M\) _are Tor-independent in_ \(W\)_,_

_then the embedding \(g\colon Y\hookrightarrow X\) is tight._

If the embeddings \(X\hookrightarrow W\hookleftarrow M\) in the previous result satisfy an even stronger transversality condition defined next, then the embedding \(Y=X\cap M\hookrightarrow X\) is upwardly normally nonsingular according to Proposition 4.14 below.

**Definition 4.13** (see Definition 6.4 in [7]).: Let \(X\hookrightarrow W\hookleftarrow M\) be closed algebraic embeddings of algebraic varieties with \(M,W\) smooth. We say that these embeddings are _upwardly transverse_, if \(X\) and \(M\) are Tor-independent in \(W\), there exists an algebraic Whitney stratification of \(X\) which is transverse to \(M\) in \(W\), and there exists a (possibly non-algebraic) Whitney stratification on the strict transform of \(X\times\mathbb{C}\) in \(\operatorname{Bl}_{M\times 0}(W\times\mathbb{C})\) which is transverse to the exceptional divisor.

**Proposition 4.14** (see Corollary 6.7 in [7]).: _If \(X\hookrightarrow W\hookleftarrow M\) are upwardly transverse embeddings, then the embedding \(Y=X\cap M\hookrightarrow X\) is upwardly normally nonsingular._

In the transverse situation, we obtain the following extension of Theorem 4.10 by eliminating the strict transform condition. This constitutes the main result of this section.

**Theorem 4.15**.: _Let \(X\hookrightarrow W\hookleftarrow M\) be closed algebraic embeddings of pure-dimensional complex algebraic varieties with \(M,W\) smooth, \(X,M\) irreducible, and \(X\) compact. We suppose that \(X\) and \(M\) are Tor-independent in \(W\), that \(X\) is generically transverse to \(M\) in \(W\) (see e.g.
[10, Section 3]), and that there exists a complex algebraic Whitney stratification of \(X\subset W\) which is transverse to \(M\) in \(W\). Then, the embedding \(g\colon Y\hookrightarrow X\) of the compact pure-dimensional subvariety \(Y=X\cap M\subset X\) is tight, and the algebraic normal bundle \(N=N_{Y}X\) of \(g\) and the topological normal bundle \(\nu\) of the topologically normally nonsingular inclusion underlying \(g\) satisfy_
\[g^{!}_{\operatorname{alg}}IT_{1,*}(X)=L^{*}(\nu)\cap IT_{1,*}(Y),\]
_where \(g^{!}_{\operatorname{alg}}\colon H^{\operatorname{BM}}_{*}(X)\to H^{\operatorname{BM}}_{*}(Y)\) denotes the algebraic Gysin homomorphism associated to \(g\) constructed by Verdier in [58]._

Proof.: Since \(X\) admits by assumption a complex algebraic Whitney stratification which is transverse to \(M\) in \(W\), it follows that the product stratification on \(X\times\mathbb{C}\) is a complex algebraic Whitney stratification which is transverse to \(M\times 0\) in \(W\times\mathbb{C}\). Thus, Theorem 3.6 implies that the strict transform of \(X\times\mathbb{C}\) in \(\operatorname{Bl}_{M\times 0}(W\times\mathbb{C})\) admits a Whitney stratification which is transverse to the exceptional divisor. Consequently, our transversality assumptions on \(X\) and \(M\) imply that the embeddings \(X\hookrightarrow W\hookleftarrow M\) are upwardly transverse in the sense of Definition 4.13. (Recall from Example 4.6 that the complex algebraic Whitney stratification on \(X\) is in particular an algebraic stratification.) Hence, it follows from Proposition 4.14 that the embedding \(g\colon Y\hookrightarrow X\) of the compact subvariety \(Y=X\cap M\subset X\) is upwardly normally nonsingular (and, in particular, tight). Moreover, generic transversality of \(X\) and \(M\) in \(W\) implies that \(Y\) is pure-dimensional according to [10, Corollary 3.4]. Finally, the algebraic Gysin restriction formula for \(IT_{1,*}\) follows from Theorem 4.10.

## 5. Algebraic versus Topological Gysin Restriction

A normally nonsingular inclusion \(g\colon Y\hookrightarrow X\) (see Definition 4.3) of a closed subset \(Y\subset X\) with oriented normal bundle \(\pi\colon E\to Y\) of rank \(r\) induces on singular homology groups a topological Gysin homomorphism
\[g^{!}_{\operatorname{top}}\colon H_{*}(X;\mathbb{Q})\to H_{*-r}(Y;\mathbb{Q})\]
given by the composition
\[H_{*}(X)\xrightarrow{\mathrm{incl}_{*}}H_{*}(X,X\setminus Y)\xrightarrow[\cong]{e_{*}^{-1}}H_{*}(E,E_{0})\xrightarrow{u\cap-}H_{*-r}(E)\xrightarrow[\cong]{\pi_{*}}H_{*-r}(Y),\]
where \(u\in H^{r}(E,E_{0})\) is the Thom class with \(E_{0}=E\setminus Y\) the complement of the zero section in \(E\), and \(e_{*}\) denotes the excision isomorphism induced by the open embedding \(j\colon E\to X\). The next proposition interprets the topological Gysin map on singular homology in terms of transverse intersections.

**Proposition 5.1** (see e.g. Proposition 2.5 in [10]).: _Let \(W\) be an oriented smooth manifold, \(X,K\subset W\) Whitney stratified subspaces which are oriented pseudomanifolds with \(K\subset X\) and \(K\) compact. Let \(M\subset W\) be an oriented smooth submanifold which is closed as a subset. Suppose that \(M\) is transverse to all strata of the Whitney stratified subspaces \(X\subset W\) and \(K\subset W\), and that \(M\cap X\) is compact.
Then the Gysin map_
\[g^{!}_{\operatorname{top}}:H_{*}(X;\mathbb{Q})\longrightarrow H_{*-r}(Y;\mathbb{Q})\]
_associated to the normally nonsingular embedding \(g:Y=M\cap X\hookrightarrow X\) (see Theorem 4.4), where \(r\) is the (real) codimension of \(Y\) in \(X\), sends the fundamental class \([K]_{X}\in H_{*}(X;\mathbb{Q})\) of \(K\) to the fundamental class \([K\cap Y]_{Y}\) of the intersection \(K\cap Y=M\cap K\) (which is again an oriented pseudomanifold),_
\[g^{!}_{\operatorname{top}}[K]_{X}=[K\cap Y]_{Y}.\]

In the following, we shall discuss a variant of the topological Gysin map defined on Borel-Moore homology groups. For reasonable compact spaces, both topological Gysin maps coincide under a natural identification of Borel-Moore homology with singular homology (see Proposition 5.7 below). This variant is needed here to show that the algebraic and topological Gysin maps coincide on algebraic cycles in a transverse setup (see Theorem 5.10 below). Following Fulton [25, p. 371, Eq. (1)], we employ here a construction of Borel-Moore homology defined for any topological space that can be embedded as a closed subset of some Euclidean space. Namely, if the space \(X\) is embedded as a closed subset of \(\mathbb{R}^{n}\), then Fulton sets \(H_{*}^{\operatorname{BM}}(X;\mathbb{Q})=H^{n-*}(\mathbb{R}^{n},\mathbb{R}^{n}-X;\mathbb{Q})\). When \(X\) is a compact ENR, the Alexander duality isomorphism \(H^{n-*}(\mathbb{R}^{n},\mathbb{R}^{n}-X;\mathbb{Q})\cong H_{*}(X;\mathbb{Q})\) provides a natural identification with singular homology, \(H_{*}^{\operatorname{BM}}(X;\mathbb{Q})\cong H_{*}(X;\mathbb{Q})\). For a detailed discussion of this viewpoint on Borel-Moore homology, we refer to [26, Appendix B.2, pp. 215 ff.].

For a normally nonsingular inclusion \(g\colon Y\hookrightarrow X\) of a closed subset \(Y\subset X\) as above with oriented normal bundle \(\pi\colon E\to Y\) of rank \(r\), the topological Gysin map on Borel-Moore homology,
\[g^{!!}_{\operatorname{top}}\colon H_{*}^{\operatorname{BM}}(X;\mathbb{Q})\to H_{*-r}^{\operatorname{BM}}(Y;\mathbb{Q}),\]
is given by the composition
\[H_{*}^{\operatorname{BM}}(X)\xrightarrow{\operatorname{res}}H_{*}^{\operatorname{BM}}(U)\xrightarrow{u\cap-}H_{*-r}^{\operatorname{BM}}(Y),\]
where the first map restricts Borel-Moore cycles on \(X\) to the open tubular neighborhood \(U:=j(E)\subset X\) of \(Y\subset X\), and the second map is given by cap product with the Thom class \(u\in H^{r}(U,U_{0})=H^{r}(E,E_{0})\), where \(U_{0}=U\setminus Y\). The cap product used in the definition of the topological Gysin map on Borel-Moore homology is the one mentioned by Fulton in [25, p. 371, Eq. (2)]. As explained in [25, p. 375], its construction can be found in Fulton-MacPherson [27, Eq. (2), p. 36] in a more abstract setting, and involves relative Cech cohomology groups as discussed by Dold [22, pp. 281 ff.]. For convenience, we provide here the construction of this cap product together with some of its transformational properties (see Corollary 5.5).

_Remark 5.2_.: Recall from [22, Definition 6.1, p. 281] that the relative Cech cohomology groups \(\check{H}^{i}(A,B)\) are defined for any pair \(B\subset A\) of locally compact subspaces of an ENR. Furthermore, for any locally compact subspaces \(A_{1},A_{2}\) of an ENR which are separated by \(A_{1}\cap A_{2}\) (e.g.
when \(A_{1},A_{2}\) are both open or closed in \(A_{1}\cup A_{2}\)), there is an excision isomorphism \(\check{H}^{*}(A_{1}\cup A_{2},A_{1})\cong\check{H}^{*}(A_{2},A_{1}\cap A_{2})\) by [22, p. 286, Eq. (6.16)].

**Proposition 5.3**.: _For a topological space \(X\) that can be embedded as a closed subset of some Euclidean space, the following hold:_

1. _For any closed subset_ \(Y\subset X\)_, there is a cap product_
\[\check{H}^{i}(X,X-Y)\times H^{\mathrm{BM}}_{k}(X)\stackrel{{\cap}}{{\longrightarrow}}H^{\mathrm{BM}}_{k-i}(Y).\]
2. _Every open subset_ \(X^{\prime}\subset X\) _can be embedded as a closed subset of some Euclidean space, and if_ \(X^{\prime}\) _is an open neighborhood of a closed subset_ \(Y\subset X\)_, then we have a commutative diagram_
\[\begin{array}{ccccc}\check{H}^{i}(X,X-Y)&\times&H^{\mathrm{BM}}_{k}(X)&\stackrel{{\cap}}{{\longrightarrow}}&H^{\mathrm{BM}}_{k-i}(Y)\\ {\scriptstyle\rho^{*}}\big\downarrow{\scriptstyle\cong}&&\big\downarrow{\scriptstyle r^{*}}&&\big\|\\ \check{H}^{i}(X^{\prime},X^{\prime}-Y)&\times&H^{\mathrm{BM}}_{k}(X^{\prime})&\stackrel{{\cap}}{{\longrightarrow}}&H^{\mathrm{BM}}_{k-i}(Y),\end{array}\]
_where_ \(\rho^{*}\) _is the excision isomorphism induced by inclusion in Cech cohomology, and_ \(r^{*}\) _is restriction of a Borel-Moore cycle to an open subset._
3. _Any cartesian diagram of closed embeddings_ \(Y^{\prime}\hookrightarrow X^{\prime}\) _and_ \(Y\hookrightarrow X\)_, with proper vertical maps_ \(f_{X}\colon X^{\prime}\to X\) _and_ \(f_{Y}\colon Y^{\prime}\to Y\)_, induces compatible cap products_
\[\begin{array}{ccccc}\check{H}^{i}(X^{\prime},X^{\prime}-Y^{\prime})&\times&H^{\mathrm{BM}}_{k}(X^{\prime})&\stackrel{{\cap}}{{\longrightarrow}}&H^{\mathrm{BM}}_{k-i}(Y^{\prime})\\ {\scriptstyle f^{*}_{X}}\big\uparrow&&\big\downarrow{\scriptstyle f_{X*}}&&\big\downarrow{\scriptstyle f_{Y*}}\\ \check{H}^{i}(X,X-Y)&\times&H^{\mathrm{BM}}_{k}(X)&\stackrel{{\cap}}{{\longrightarrow}}&H^{\mathrm{BM}}_{k-i}(Y),\end{array}\]
_in the sense that_ \(f_{Y*}(f^{*}_{X}(a)\cap b)=a\cap f_{X*}(b)\) _for_ \(a\in\check{H}^{i}(X,X-Y)\) _and_ \(b\in H^{\mathrm{BM}}_{k}(X^{\prime})\)_, where_ \(f^{*}_{X}\) _is the induced map on Cech cohomology, and_ \(f_{X*},f_{Y*}\) _are induced on Borel-Moore homology by the proper maps_ \(f_{X},f_{Y}\)_._

Proof.: Suppose that \(X\) is embedded as a closed subset of \(\mathbb{R}^{n}\). For the construction of the cap product in claim (1), let \((U,V)\supset(X,X-Y)\) be an open neighborhood pair in \(\mathbb{R}^{n}\). Since both \(V\) and \(U-X\) are open in \(U\), singular cohomology comes with a standard cup product
\[H^{i}(U,V)\times H^{j}(U,U-X)\stackrel{{\cup}}{{\longrightarrow}}H^{i+j}(U,V\cup(U-X)).\]
Now it follows from \(X-Y\subset V\) that \(U-Y=(X-Y)\cup(U-X)\subset V\cup(U-X)\). Hence, there is a restriction homomorphism
\[H^{i+j}(U,V\cup(U-X))\longrightarrow H^{i+j}(U,U-Y).\]
Composing with it, we obtain a cup product
\[H^{i}(U,V)\times H^{j}(U,U-X)\stackrel{{\cup}}{{\longrightarrow}}H^{i+j}(U,U-Y).\]
(In Fulton-MacPherson [27, Eq. (2), p. 36], the additional assumption \(V\cap X=X-Y\) is imposed.) Using excision, this can be rewritten as a product
\[H^{i}(U,V)\times H^{j}(\mathbb{R}^{n},\mathbb{R}^{n}-X)\stackrel{{\cup}}{{\longrightarrow}}H^{i+j}(\mathbb{R}^{n},\mathbb{R}^{n}-Y).\]
Next, we note that \(X-Y\subset X\) are locally compact subsets of the ENR \(\mathbb{R}^{n}\) by [22, Lemma 8.3, p. 80] because they are locally closed subsets by assumption. Therefore, the relative Cech cohomology groups
\[\check{H}^{i}(X,X-Y):=\operatorname{colim}\{H^{i}(U,V)\mid(U,V)\supset(X,X-Y)\}\]
are defined as in [22, Definition 6.1, p. 281].
Consequently, there is a cup product \[\check{H}^{i}(X,X-Y)\times H^{j}(\mathbb{R}^{n},\mathbb{R}^{n}-X)\stackrel{{ \cup}}{{\longrightarrow}}H^{i+j}(\mathbb{R}^{n},\mathbb{R}^{n}-Y).\] By the definition of Borel-Moore homology used by Fulton, we have \[H^{j}(\mathbb{R}^{n},\mathbb{R}^{n}-X)=H^{\mathrm{BM}}_{n-j}(X),\quad H^{i+j}( \mathbb{R}^{n},\mathbb{R}^{n}-Y)=H^{\mathrm{BM}}_{n-i-j}(Y).\] We arrive at the desired cap product \[\check{H}^{i}(X,X-Y)\times H^{\mathrm{BM}}_{k}(X)\stackrel{{ \cap}}{{\longrightarrow}}H^{\mathrm{BM}}_{k-i}(Y).\] As for claim (2), let \(X^{\prime}\subset X\) be an open neighborhood of the closed subset \(Y\subset X\). Then, there exists an open subset \(W\subset\mathbb{R}^{n}\) such that \(X^{\prime}=W\cap X\). We note that \(X^{\prime}\) is a closed subset of the manifold \(W\), which can be embedded as a closed subset of some Euclidean space. If \((U,V)\supset(X,X-Y)\) is an open neighborhood pair in \(\mathbb{R}^{n}\), then \((W\cap U,W\cap V)\supset(X^{\prime},X^{\prime}-Y)\) is an open neighborhood pair in \(W\), and the diagram of restrictions commutes by naturality of the cup product. Finally, we obtain the desired commutative diagram by passing to the colimits to obtain the induced map on Cech cohomology (see [22, Definition 6.3, p. 282]), and by definition of the restriction map in Borel-Moore homology (see [26, Eq. (30), p. 218]). To show claim (3), we observe that any open neighborhood pair \((U,V)\supset(X,X-Y)\) in \(\mathbb{R}^{n}\) induces compatible cup products where the vertical arrows are induced by inclusion. The claim now follows by passing to the colimits to obtain the induced map on Cech cohomology (see [22, Definition 6.3, p. 282]), and by definition of the pushforward for proper maps in Borel-Moore homology (see [26, p. 218]). _Remark 5.4_.: In Proposition 5.1, the assumption that \(K\) is compact can be dropped when using the topological Gysin map on Borel-Moore homology instead of singular homology. The proof is very similar, but uses base change as stated in Proposition 5.3(3), as well as the fact that the topological Gysin map in Borel-Moore homology maps the fundamental class to the fundamental class. **Corollary 5.5**.: _For any closed subset \(Y\) of an ENR \(X\), we have a cap product_ \[H^{i}(X,X-Y)\times H^{\mathrm{BM}}_{k}(X)\stackrel{{\cap}}{{ \longrightarrow}}H^{\mathrm{BM}}_{k-i}(Y)\] _with the properties (2) and (3) of Proposition 5.3 with respect to singular cohomology._ Proof.: Since \(X\) is an ENR, we can embed it as a closed subset of some Euclidean space \(\mathbb{R}^{n}\) according to [22, Proposition 8.1 and Lemma 8.2, p. 80], so that the previous proposition yields a cap product using Cech cohomology. By assumption, \(X\) is an ENR, and hence the open subset \(X-Y\subset X\) is an ENR as well. By [22, Proposition 6.12, p. 285], \(\tilde{H}^{i}(X,X-Y)\) is therefore naturally isomorphic to singular cohomology \(H^{i}(X,X-Y)\). _Remark 5.6_.: For \(Y=X\) an ENR, the cap product of Corollary 5.5 specializes to a cap product of the form \(H^{*}(X)\times H^{\mathrm{BM}}_{*}(X)\to H^{\mathrm{BM}}_{*}(X)\). This is the cap product that appears in Theorem 4.10 and Theorem 4.15. When \(X\) is a compact CW complex, this cap product corresponds to the ordinary cap product under the natural identification of Borel-Moore homology with singular homology. 
(In fact, if \(\alpha\in H^{i}(X;\mathbb{Q})=\check{H}^{i}(X;\mathbb{Q})\) is represented by \(\alpha_{U}\in H^{i}(U;\mathbb{Q})\) on some open neighborhood \(U\) of \(X\subset\mathbb{R}^{n}\), then there is a commutative diagram, where \(\mu_{X}\in H_{n}(U,U-X;\mathbb{Q})\) denotes the orientation class of \(X\subset U\). Passing to Cech cohomology of \(X\) in the lower horizontal row induces the ordinary cup product map
\[\alpha\cup-:H^{n-(i+j)}(X)\to H^{n-j}(X),\]
and the vertical maps become Alexander duality isomorphisms \(H^{*}(X)\cong H_{n-*}(U,U-X)\) (see e.g. Bredon [15, Theorem 8.3, p. 351]). Then, the claim follows by applying \(\mathrm{Hom}_{\mathbb{Q}}(-,\mathbb{Q})\) to the resulting commutative diagram, and identifying \(H^{*}(-)\cong\mathrm{Hom}_{\mathbb{Q}}(H_{*}(-),\mathbb{Q})\) via the Kronecker pairing. Note that the involved \(\mathbb{Q}\)-vector spaces are finite dimensional.)

**Proposition 5.7**.: _Let \((X,Y)\) be a compact CW pair such that \(X\) is endowed with a locally cone-like topological stratification. If the inclusion \(g:Y\hookrightarrow X\) is normally nonsingular with oriented normal bundle \(\nu\colon E\to Y\) of rank \(r\), the topological Gysin maps_
\[g^{!}_{\mathrm{top}},g^{!!}_{\mathrm{top}}:H_{*}(X)\longrightarrow H_{*-r}(Y)\]
_coincide under the natural identification of Borel-Moore homology with singular homology._

Proof.: We may identify a closed tubular neighborhood of \(Y\subset X\) with the total space of the disc bundle \(D\nu\) of the topological normal bundle \(\nu\) of the normally nonsingular inclusion \(g\colon Y\hookrightarrow X\). Let \(S\nu\subset D\nu\) denote the sphere bundle and \(D^{\circ}\nu=D\nu-S\nu\) the open disc bundle. Furthermore, \(Z=X-D^{\circ}\nu\) is a closed subset of \(X\). The ENR \(X\) can be embedded as a closed subset of some Euclidean space \(\mathbb{R}^{n}\). Then, we obtain a commutative diagram, and for any open neighborhood \(U\) of \(D\nu\subset\mathbb{R}^{n}\) a commutative diagram where the vertical maps are Alexander duality isomorphisms described by cap product with an orientation \(\vartheta\) of \(\mathbb{R}^{n}\), or its restriction to \(U\) (see e.g. Bredon [15, Theorem 8.3, p. 351], who also provides a description of the Alexander duality isomorphisms on the (co)chain level in [15, p. 349]). The Thom class \(\tau(\nu)\in H^{r}(D\nu,S\nu)=\check{H}^{r}(D\nu,S\nu)\) can be represented by an element \(t\in H^{r}(U,V)\) for some open neighborhood pair \((U,V)\) of \((D\nu,S\nu)\) in \(\mathbb{R}^{n}\). By making \(V\) smaller if necessary, we may assume without loss of generality that the inclusion \((U,V)\subset(U,U-Y)\) is a homotopy equivalence of pairs. Let \(u\in H^{r}(U,U-Y)\) be the element that restricts to \(t\). Writing \(W=U-S\nu\), we obtain the commutative diagram where the vertical maps are Alexander duality isomorphisms described by cap product with the orientation \(\vartheta|_{W}\) of \(W\) or the orientation \(\vartheta|_{U}\) of \(U\). Again, one checks commutativity of the diagram by using the description of the Alexander duality isomorphisms on the (co)chain level provided in [15, p. 349]. Finally, the claim follows by applying \(\operatorname{Hom}_{\mathbb{Q}}(-,\mathbb{Q})\) to the concatenation of the above three commutative diagrams, and identifying \(H^{*}(-)\cong\operatorname{Hom}_{\mathbb{Q}}(H_{*}(-),\mathbb{Q})\) via the Kronecker pairing. Note that the involved \(\mathbb{Q}\)-vector spaces are finite dimensional.
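For orientation, the following minimal example (not needed later) illustrates Fulton's description of Borel-Moore homology used above and the role of compactness in Proposition 5.7: take \(X=\mathbb{C}\), embedded as a closed subset of \(\mathbb{R}^{2}\). Then
\[H^{\operatorname{BM}}_{k}(\mathbb{C};\mathbb{Q})=H^{2-k}(\mathbb{R}^{2},\mathbb{R}^{2}-\mathbb{C};\mathbb{Q})=H^{2-k}(\mathbb{R}^{2};\mathbb{Q})=\begin{cases}\mathbb{Q},&k=2,\\ 0,&k\neq 2,\end{cases}\]
generated in degree \(2\) by the fundamental class \([\mathbb{C}]\), whereas the singular homology of \(\mathbb{C}\) is concentrated in degree \(0\); the identification \(H^{\operatorname{BM}}_{*}\cong H_{*}\) is thus genuinely tied to compactness.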
Let \(A_{*}(Z)\) denote the Chow groups of a complex variety \(Z\) and let \(\operatorname{cl}=\operatorname{cl}_{Z}:A_{*}(Z)\to H^{\operatorname{BM}}_{2*} (Z)\) be the cycle map from Chow homology to rational Borel-Moore homology. Let \[u_{f}\in H^{2d}(X,X-Y)\] be the orientation class of the regular closed embedding \(f\colon Y\hookrightarrow X\) as in [25, 19.2, p. 378]. The cap product of Corollary 5.5 can be used to evaluate the orientation class of a closed regular embedding on a cycle class in Borel-Moore homology as follows. **Theorem 5.8** (see Theorem 19.2 in Fulton [25]).: _Let \(f\colon Y\hookrightarrow X\) be a regular embedding of complex algebraic varieties of codimension \(d\). Then, for all \(k\)-cycles \(\alpha\) on \(X\),_ \[cl_{Y}(f^{!}_{\operatorname{alg}}\alpha)=u_{f}\cap cl_{X}(\alpha)\] _in \(H^{BM}_{2k-2d}(Y)\)._ _Remark 5.9_.: More generally, Theorem 19.2 in [25] uses the refined Gysin homomorphism [25, Section 6.2] associated to the fiber square formed by the regular embedding \(f\colon Y\hookrightarrow X\) and a morphism \(g\colon X^{\prime}\to X\). However, for \(g=\operatorname{id}_{X}\) the refined Gysin homomorphism coincides with Verdier's algebraic Gysin homomorphism \(f^{!}_{\operatorname{alg}}\) for regular embeddings (see [25, p. 117], where we note that \(f^{!}_{\operatorname{alg}}=f^{*}\) in Fulton's notation). **Theorem 5.10**.: _Let \(W\) be a smooth complex algebraic variety, \(M\subset W\) a smooth closed subvariety and \(X\subset W\) a closed subvariety such that an algebraic Whitney stratification of \(X\) in \(W\) is transverse to \(M\) and \(X\) and \(M\) are Tor-independent in \(W\). Then the algebraic and topological Gysin restriction homomorphisms associated to the inclusion \(g:Y=X\cap M\hookrightarrow X,\)_ \[g^{!}_{\operatorname{alg}},g^{!!}_{\operatorname{top}}:H^{\operatorname{BM}}_ {*}(X)\longrightarrow H^{\operatorname{BM}}_{*-2d}(Y)\] _coincide on algebraic cycles, where \(d\) is the complex codimension of the embedding \(g\)._ Proof.: By smoothness, the closed embedding \(M\subset W\) is regular with algebraic normal bundle \(N_{M}W\). The Tor-independence of \(X\) and \(M\) ensures that the closed embedding \(g:Y\hookrightarrow X\) is regular as well, and that the excess normal bundle vanishes, i.e. the canonical closed embedding \(N_{Y}X\to j^{*}N_{M}W\) is an isomorphism of algebraic vector bundles, where \(j\) is the embedding \(j:Y\hookrightarrow M\). According to Verdier [58, p. 222, 9.2.1], the algebraic Gysin map of a closed regular embedding commutes with the cycle map from Chow to Borel-Moore homology. Thus there is a commutative diagram (8) Since \(X\) and \(M\) are transverse and Tor-independent in \(W\), Proposition 4.12 implies that the embedding \(g:Y\hookrightarrow X\) is tight, i.e. the underlying topological embedding (in the analytic topology) is normally nonsingular with tubular neighborhood described by the disc bundle \(D\nu\) of a topological normal bundle \(\nu\) which is isomorphic to the underlying topological vector bundle of the algebraic normal bundle \(N_{Y}X\). Let \(S\nu\subset D\nu\) denote the sphere bundle and \(D^{\circ}\nu=D\nu-S\nu\) the open disc bundle. Then, Proposition 5.3 yields a commutative diagram (9) where the left vertical arrow is the excision isomorphism induced by inclusion, and the middle vertical arrow is restriction of a Borel-Moore cycle to an open subset. 
Let \[u_{g}\in H^{2d}(X,X-Y)\] be the orientation class of the regular closed embedding \(g:Y\hookrightarrow X\) as in [25, 19.2, p. 378]. We shall next compute this class. Consider the cartesian diagram of closed embeddings By [25, Lemma 19.2 (a), p. 379], applied to the above cartesian diagram containing the regular embeddings \(f\) and \(g\), \[i^{*}(u_{f})=u_{g}\in H^{2d}(X,X-Y),\qquad i^{*}:H^{2d}(W,W-M)\to H^{2d}(X,X-Y).\] Let \(\nu_{M}\) be the underlying topological vector bundle of the algebraic normal bundle \(N_{M}W\). By tightness of the embedding \(g:Y\hookrightarrow X\), there is an isomorphism \(\nu=j^{*}\nu_{M},\ j:Y\hookrightarrow M\) inherited from the isomorphism of algebraic vector bundles. Let \(i_{D}:D\nu\to D\nu_{M}\) denote the bundle map covering \(j\). Then by naturality of the Thom class \[t(\nu)=i_{D}^{*}t(\nu_{M}).\] Consider the following factorization of the above cartesian diagram: By [25, p. 372, bottom], \[\delta^{*}(u_{f})=t(\nu_{M}),\] since \(f\) is a closed embedding of nonsingular varieties. Therefore, \[\rho^{*}(u_{g})=\rho^{*}i^{*}(u_{f})=\mathfrak{i}_{D}^{*}\delta^{*}(u_{f})=i_ {D}^{*}t(\nu_{M})=t(\nu).\] By Theorem 5.8, \[u_{g}\cap\operatorname{cl}(\alpha)=\operatorname{cl}(g_{\operatorname{alg}}^ {!}\alpha) \tag{10}\] for any algebraic cycle \(\alpha\in A_{*}(X)\). Using Verdier's diagram (8), and the commutativity of the cap diagram (9), we obtain for \(\alpha\in A_{*}(X)\), \[g_{\operatorname{alg}}^{!}(\operatorname{cl}(\alpha)) =\operatorname{cl}(g_{\operatorname{alg}}^{!}\alpha)\] \[=u_{g}\cap\operatorname{cl}(\alpha)\] \[=\rho^{*}(u_{g})\cap r^{*}(\operatorname{cl}(\alpha))\] \[=t(\nu)\cap r^{*}(\operatorname{cl}(\alpha)).\] Thus for any class \(\beta\in H_{*}^{\operatorname{BM}}(X)\) in the image of the cycle map, \[g_{\operatorname{alg}}^{!}(\beta)=t(\nu)\cap r^{*}\beta=g_{\operatorname{ top}}^{!!}(\beta).\] ## 6. Gysin Coherent Characteristic Classes In this section, we recall the notion of Gysin coherent characteristic classes (see Definition 6.2 below), and state the uniqueness theorem, which is the main result of [10] (see Theorem 6.4 below). In the present paper, we shall discuss algebraic characteristic classes such as Todd classes (see Section 9), Chern classes (see Section 8), as well as motivic Hodge classes (see Section 7) within the framework of Gysin coherence. In the following, by a variety we mean a pure-dimensional complex quasiprojective algebraic variety. Let \(\mathcal{X}\) be a family of inclusions \(i\colon X\to W\), where \(W\) is a smooth variety, and \(X\subset W\) is a compact irreducible subvariety. We require the following properties for \(\mathcal{X}\): * For every Schubert subvariety \(X\subset G\) of a Grassmannian \(G\), the inclusion \(X\to G\) is in \(\mathcal{X}\). * If \(i\colon X\to W\) and \(i^{\prime}\colon X^{\prime}\to W^{\prime}\) are in \(\mathcal{X}\), then the product \(i\times i^{\prime}\colon X\times X^{\prime}\to W\times W^{\prime}\) is in \(\mathcal{X}\). * Given inclusions \(i\colon X\to W\) and \(i^{\prime}\colon X^{\prime}\to W^{\prime}\) of compact subvarieties in smooth varieties, and an isomorphism \(W\stackrel{{\cong}}{{\longrightarrow}}W^{\prime}\) that restricts to an isomorphism \(X\stackrel{{\cong}}{{\longrightarrow}}X^{\prime}\), it follows from \(i\in\mathcal{X}\) that \(i^{\prime}\in\mathcal{X}\). 
* For all closed subvarieties \(X\subset M\subset W\) such that \(X\) is compact and \(M\) and \(W\) are smooth, it holds that if the inclusion \(X\to M\) is in \(\mathcal{X}\), then the inclusion \(X\to W\) is in \(\mathcal{X}\). Next, for a given family \(\mathcal{X}\) of inclusions as above, by \(\mathcal{X}\)-_transversality_, we mean a symmetric relation for closed irreducible subvarieties of a smooth variety that satisfies the following properties: * The intersection \(Z\cap Z^{\prime}\) of two \(\mathcal{X}\)-transverse closed irreducible subvarieties \(Z,Z^{\prime}\subset W\) of a smooth variety \(W\) is _proper_, that is, \(Z\cap Z^{\prime}\) is pure-dimensional of codimension \(c+c^{\prime}\), where \(c\) and \(c^{\prime}\) are the codimensions of \(Z\) and \(Z^{\prime}\) in \(W\), respectively. * The following analog of Kleiman's transversality theorem holds for the action of \(GL_{n}(\mathbb{C})\) on the Grassmannians \(G=G_{k}(\mathbb{C}^{n})\). If \(i\colon X\to G\) and \(i^{\prime}\colon X^{\prime}\to G\) are inclusions in \(\mathcal{X}\), then there is a nonempty open dense subset \(U\subset GL_{n}(\mathbb{C})\) (in the complex topology) such that \(X\) is \(\mathcal{X}\)-transverse to \(g\cdot X^{\prime}\) for all \(g\in U\). * Locality: If \(Z,Z^{\prime}\subset W\) are \(\mathcal{X}\)-transverse closed irreducible subvarieties of a smooth variety \(W\) and \(U\subset W\) is a (Zariski) open subset that has nontrivial intersections with \(Z\) and \(Z^{\prime}\), then \(Z\cap U\) and \(Z^{\prime}\cap U\) are \(\mathcal{X}\)-transverse in \(U\). **Example 6.1**.: The family \(\mathcal{X}_{0}\) consisting of all inclusions of compact irreducible subvarieties in smooth varieties satisfies the above family requirements. Furthermore, a notion of \(\mathcal{X}_{0}\)-transversality is obtained by calling two closed irreducible subvarieties \(Z,Z^{\prime}\subset W\) of a smooth variety \(W\) \(\mathcal{X}_{0}\)-transverse if they admit complex algebraic Whitney stratifications (see Example 4.6) such that every stratum of \(Z\) is transverse to every stratum of \(Z^{\prime}\) (as smooth submanifolds of \(W\)). This uses Kleiman's transversality theorem [34]. Recall from Section 5 that every inclusion \(f\colon M\to W\) of a smooth closed subvariety \(M\) of (complex) codimension \(c\) in a smooth variety \(W\) induces a topological Gysin map on singular homology, \(f^{!}_{\mathrm{top}}\colon H_{*}(W;\mathbb{Q})\to H_{*-2c}(M;\mathbb{Q})\). **Definition 6.2**.: A _Gysin coherent characteristic class \(c\ell\) with respect to \(\mathcal{X}\)_ is a pair \[c\ell=(c\ell^{*},c\ell_{*})\] consisting of a function \(c\ell^{*}\) that assigns to every inclusion \(f\colon M\to W\) of a smooth closed subvariety \(M\subset W\) in a smooth variety \(W\) an element \[c\ell^{*}(f)=c\ell^{0}(f)+c\ell^{1}(f)+c\ell^{2}(f)+\cdots\in H^{*}(M;\mathbb{Q}),\quad c\ell^{p}(f)\in H^{p}(M;\mathbb{Q}),\] with \(c\ell^{0}(f)=1\), and a function \(c\ell_{*}\) that assigns to every inclusion \(i\colon X\to W\) of a compact possibly singular subvariety \(X\subset W\) of complex dimension \(d\) in a smooth variety \(W\) an element \[c\ell_{*}(i)=c\ell_{0}(i)+c\ell_{1}(i)+c\ell_{2}(i)+\cdots+c\ell_{2d}(i)\in H_{*}(W;\mathbb{Q}),\quad c\ell_{p}(i)\in H_{p}(W;\mathbb{Q}),\] with \(c\ell_{2d}(i)=[X]_{W}\), such that the following properties hold: 1. 
_(Multiplicativity)_ For every \(i\colon X\to W\) and \(i^{\prime}\colon X^{\prime}\to W^{\prime}\), we have \[c\ell_{*}(i\times i^{\prime})=c\ell_{*}(i)\times c\ell_{*}(i^{\prime}).\] 2. _(Isomorphism invariance)_ For every \(f\colon M\to W\) and \(f^{\prime}\colon M^{\prime}\to W^{\prime}\), and every isomorphism \(W\stackrel{{\cong}}{{\longrightarrow}}W^{\prime}\) that restricts to an isomorphism \(\phi\colon M\stackrel{{\cong}}{{\longrightarrow}}M^{\prime}\), we have \[\phi^{*}c\ell^{*}(f^{\prime})=c\ell^{*}(f).\] Moreover, for every \(i\colon X\to W\) and \(i^{\prime}\colon X^{\prime}\to W^{\prime}\), and every isomorphism \(\Phi\colon W\stackrel{{\cong}}{{\longrightarrow}}W^{\prime}\) that restricts to an isomorphism \(X\stackrel{{\cong}}{{\longrightarrow}}X^{\prime}\), we have \[\Phi_{*}c\ell_{*}(i)=c\ell_{*}(i^{\prime}).\] 3. _(Naturality)_ For every \(i\colon X\to W\) and \(f\colon M\to W\) such that \(X\subset M\), the inclusion \(i^{M}:=i|\colon X\to M\) satisfies \[f_{*}c\ell_{*}(i^{M})=c\ell_{*}(i).\] 4. _(Gysin restriction in a transverse setup)_ There exists a notion of \(\mathcal{X}\)-transversality such that the following holds. For every inclusion \(i\colon X\to W\) in \(\mathcal{X}\) and every inclusion \(f\colon M\to W\) such that \(M\) is irreducible, and \(M\) and \(X\) are \(\mathcal{X}\)-transverse in \(W\), the inclusion \(j\colon Y\to M\) of the pure-dimensional compact subvariety \(Y:=M\cap X\subset M\) satisfies \[f^{!}_{\mathrm{top}}c\ell_{*}(i)=c\ell^{*}(f)\cap c\ell_{*}(j).\] Such a class \(c\ell\) is called _Gysin coherent characteristic class_ if \(\mathcal{X}=\mathcal{X}_{0}\) is the family of all inclusions of compact irreducible subvarieties in smooth varieties (compare Example 6.1). The genus \(|c\ell_{*}|\) of a Gysin coherent characteristic class \(c\ell\) with respect to \(\mathcal{X}\) is defined as the composition of \(c\ell_{*}\) with the homological augmentation, \(|c\ell_{*}|=\boldsymbol{\varepsilon}_{*}c\ell_{*}\in\mathbb{Q}\). **Example 6.3**.: In [10, Section 9], the framework of Gysin coherence with respect to the family \(\mathcal{X}_{0}\) of Example 6.1 is applied to Goresky-MacPherson \(L\)-classes (see Example 6.3 below), where \(\mathcal{X}_{0}\)-transversality is chosen to mean simultaneously topological transversality with respect to appropriate Whitney stratifications and generic transversality. It is shown that the pair \((c\ell^{*},c\ell_{*})\) given by \(c\ell^{*}(f\colon M\hookrightarrow W)=L^{*}(\nu_{M\subset W})\) and \(c\ell_{*}(i\colon X\hookrightarrow W)=i_{*}L_{*}(X)\), where \(L^{*}\) is Hirzebruch's cohomological \(L\)-class of a vector bundle and \(L_{*}\) is the Goresky-MacPherson \(L\)-class, forms a Gysin coherent characteristic class. The associated genus is the signature \(\sigma(X)\) of the Goresky-MacPherson intersection form on middle-perversity intersection homology of \(X\). The main result of [10] is the following **Theorem 6.4** (Uniqueness Theorem).: _Let \(c\ell\) and \(\widetilde{c\ell}\) be Gysin coherent characteristic classes with respect to \(\mathcal{X}\). 
If \(c\ell^{*}=\widetilde{c\ell}^{*}\) and \(|c\ell_{*}|=|\widetilde{c\ell}_{*}|\) for the associated genera, then we have \(c\ell_{*}(i)=\widetilde{c\ell}_{*}(i)\) for all inclusions \(i\colon X\to G\) in \(\mathcal{X}\) of compact irreducible subvarieties in ambient Grassmannians._ _Remark 6.5_.: An inspection of the proof of the uniqueness theorem above shows that it suffices to request the assumption \(c\ell_{2d}(i)=[X]w\) in the definition of Gysin coherent classes \(c\ell\) only for irreducible \(X\). In fact, this assumption is only used in the proof of Theorem 7.1 and in the proof of Theorem 6.4 in [10], where the varieties under consideration are irreducible. ## 7. The class \(IT_{1,*}\) as a Gysin Coherent Characteristic Class The class \(IT_{1,*}\) fits into the framework of Gysin coherent characteristic classes (Definition 6.2), as we shall show in Theorem 7.4 below. Our method of proof requires moving varieties into transverse position by means of suitable generalizations of Kleiman's transversality theorem. Such generalizations are known in the Cohen-Macaulay context (see Sierra [56]). Rational and normal Gorenstein singularities are Cohen-Macaulay. For instance, Schubert varieties as well as toric varieties have rational singularities, and hence are Cohen-Macaulay. As in Section 6, by a variety we mean a pure-dimensional complex quasiprojective algebraic variety. For later use in the proof of Theorem 7.4 below, we record the following **Proposition 7.1**.: _For an irreducible projective variety \(X\) of complex dimension \(d\), the highest non-trivial homogeneous component of the class_ \[IT_{1,*}(X)=IT_{1,0}(X)+IT_{1,1}(X)+\cdots\in H_{*}(X;\mathbb{Q})\] _is the fundamental class, \(IT_{1,2d}(X)=[X]_{X}\)._ Proof.: Since \(X\) is compact and irreducible, we know that \(H_{2d}(X;\mathbb{Q})\) is generated by the fundamental class \([X]_{X}\), and that \(H_{i}(X;\mathbb{Q})=0\) for \(i>2d\). Therefore, writing \(IT_{1,2d}(X)=r\cdot[X]_{X}\) for some \(r\in\mathbb{Q}\), it remains to show that \(r=1\). For this purpose, we fix an embedding \(X\subset\mathbb{P}^{m}\) and a complex algebraic Whitney stratification on \(X\) (see Example 4.6). By applying a topological version of Kleiman's transversality theorem (see e.g. [30, p. 39, Theorem 1.3.6 and Example 1.3.7]), we find a generic linear subspace \(H\subset\mathbb{P}^{m}\) of complex codimension \(d\) that is transverse to all strata of \(X\). Then, the intersection \(Y:=H\cap X\) is a pure \(0\)-dimensional closed subvariety of a smooth Zariski open subvariety \(U\subset X\). Therefore, the closed regular embedding \(g\colon Y\hookrightarrow X\) is tight (see Definition 4.8). Moreover, the inclusion \(E\subset\operatorname{Bl}_{Y\times 0}(X\times\mathbb{C})\) of the smooth exceptional divisor \(E\) factorizes over the smooth variety \(\operatorname{Bl}_{Y\times 0}(U\times\mathbb{C})\), and is therefore topologically normally nonsingular. Consequently, the embedding \(g\colon Y\hookrightarrow X\) is upwardly normally nonsingular (see Definition 4.9). 
Thus, Theorem 4.10 states that the algebraic Gysin map \(g^{!}_{\operatorname{alg}}\colon H_{2*}(X;\mathbb{Q})\to H_{2*-2d}(Y; \mathbb{Q})\) associated to the upwardly normally nonsingular embedding \(g\colon Y\hookrightarrow X\) with underlying topological normal bundle \(\nu\) satisfies \[g^{!}_{\operatorname{alg}}IT_{1,*}(X)=L^{*}(\nu)\cap IT_{1,*}(Y).\] Since the compact subvariety \(Y\subset\mathbb{P}^{m}\) has pure dimension \(0\), it consists of a finite number \(k>0\) of points. (By construction, \(k\) is the degree of the embedding \(X\hookrightarrow\mathbb{P}^{m}\), and is hence positive.) As any vector bundle over a one-point space is trivial, we have \(L^{*}(\nu)=1\). Thus, \(g^{!}_{\operatorname{alg}}IT_{1,*}(X)=IT_{1,*}(Y)\). Furthermore, we have \(g^{!}_{\operatorname{alg}}[X]_{X}=[Y]_{Y}\) (see e.g. [25, p. 100, Example 6.2.1], and use that the algebraic Gysin map of a closed regular embedding commutes with the cycle map from Chow to Borel-Moore homology according to Verdier [58, p. 222, 9.2.1]). Altogether, we conclude that \[r\cdot[Y]_{Y}=g^{!}_{\operatorname{alg}}(r\cdot[X]_{X})=g^{!}_{\operatorname{ alg}}IT_{1,2d}(X)=IT_{1,0}(Y)\in H_{0}(Y;\mathbb{Q}).\] By applying the augmentation \(\varepsilon_{*}\colon H_{0}(Y;\mathbb{Q})\to\mathbb{Q}\) and using that \(Y\) is smooth, we obtain \[r\cdot k=r\cdot\varepsilon_{*}\cdot[Y]_{Y}=\varepsilon_{*}IT_{1,0}(Y)= \varepsilon_{*}T_{1,0}(Y)=\varepsilon_{*}L_{0}(Y)=\sigma(Y)=k.\] Since \(k>0\), we conclude that \(r=1\). _Remark 7.2_.: The statement and proof of Proposition 7.1 hold similarly for \(IT_{0,*}(X)\). In fact, Theorem 4.10 is also valid for \(IT_{0,*}\) with correction factor \(T_{0}^{*}(N_{M}W)=\operatorname{td}^{*}(N_{M}W)\) instead of \(T_{1}^{*}(\nu_{f})=L^{*}(\nu_{f})\) as an inspection of the proof of Theorem 6.30 in [7] shows. (In fact, the assumption \(y=1\) is only used in the proof of Proposition 6.26 in [7] to conclude that \(T_{1}^{*}(1_{X})=L^{*}(1_{X})=1\in H^{*}(X;\mathbb{Q})\) for the trivial line bundle \(1_{X}\) on a complex algebraic variety \(X\), but we also have \(T_{0}^{*}(1_{X})=\operatorname{td}^{*}(1_{X})=1\in H^{*}(X;\mathbb{Q})\).) _Remark 7.3_.: For an alternative proof of Proposition 7.1, we note that \(IT_{y,2d}(X)=IT_{y,2d}(X_{\operatorname{reg}})\) under the identification \(H_{2d}(X;\mathbb{Q})=H_{2d}^{\operatorname{BM}}(X_{\operatorname{reg}}; \mathbb{Q})\), where \(X_{\operatorname{reg}}\) denotes the open smooth part of \(X\). For the manifold \(M=X_{\operatorname{reg}}\), we observe \(IT_{y,*}(M)=T_{y}^{*}(TM)\cap[M]\) with \(T_{y}^{0}(TM)=1\). In Theorem 7.4 below, let \(\mathcal{X}_{CM}\) denote the family of all inclusions \(i\colon X\to W\) of compact irreducible subvarieties \(X\) in smooth varieties \(W\) such that \(X\) is Cohen-Macaulay. We note that \(\mathcal{X}_{CM}\) has all the required properties. In fact, all inclusions of Schubert subvarieties in Grassmannians are contained in \(\mathcal{X}_{CM}\) because Schubert varieties are Cohen-Macaulay. Furthermore, the family \(\mathcal{X}_{CM}\) is stable under products (because the product of two Cohen-Macaulay schemes is again Cohen-Macaulay, see e.g. the proof of the Lemma in [28, p. 108]), isomorphisms of smooth ambient spaces, and inclusions into larger smooth ambient spaces. 
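To illustrate the shape of the classes involved, consider the special case of a smooth compact subvariety \(X\subset W\): by Remark 7.3 (applied to the manifold \(M=X=X_{\mathrm{reg}}\)) together with the identity \(T_{1}^{*}=L^{*}\) used in Remark 7.2, \[IT_{1,*}(X)=T_{1}^{*}(TX)\cap[X]_{X}=L^{*}(TX)\cap[X]_{X},\] so that in this case the homological datum assigned to \(i\colon X\hookrightarrow W\) in Theorem 7.4 below is simply \(i_{*}(L^{*}(TX)\cap[X]_{X})\).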
**Theorem 7.4**.: _The pair \(\mathcal{L}=(\mathcal{L}^{*},\mathcal{L}_{*})\) defined by \(\mathcal{L}^{*}(f)=L^{*}(\nu_{f})\) for every inclusion \(f\colon M\to W\) of a smooth closed subvariety \(M\subset W\) in a smooth variety \(W\) with normal bundle \(\nu_{f}\), and by \(\mathcal{L}_{*}(i)=i_{*}IT_{1,*}(X)\) for every inclusion \(i\colon X\to W\) of a compact possibly singular subvariety \(X\subset W\) in a smooth variety \(W\) is a Gysin coherent characteristic class with respect to \(\mathcal{X}_{CM}\)._ Proof.: By the properties of the cohomological Hirzebruch \(L\)-class, the class \(\mathcal{L}^{*}(f)=L^{*}(\nu_{f})\in H^{*}(M;\mathbb{Q})\) is normalized for all \(f\). Moreover, by Proposition 7.1, the highest nontrivial homogeneous component of the class \(IT_{1,*}(X)=IT_{1,0}(X)+IT_{1,1}(X)+\cdots\in H_{*}(X;\mathbb{Q})\) is \(IT_{1,2d}(X)=[X]_{X}\), where \(d\) denotes the complex dimension of \(X\), and \(X\) may be assumed to be irreducible by Remark 6.5. Consequently, the highest nontrivial homogeneous component of \(\mathcal{L}_{*}(i)=i_{*}IT_{1,*}(X)\in H_{*}(W;\mathbb{Q})\) is the ambient fundamental class \(i_{*}IT_{1,2d}(X)=i_{*}[X]_{X}=[X]_{W}\). We proceed to check the axioms of Gysin coherent characteristic classes for the pair \(\mathcal{L}\). As for axiom (1), we have \(IT_{1,*}(X\times X^{\prime})=IT_{1,*}(X)\times IT_{1,*}(X^{\prime})\) in \(H_{*}(X\times X^{\prime};\mathbb{Q})\) for all pure-dimensional compact complex algebraic varieties \(X\) and \(X^{\prime}\) by Corollary 2.16. Hence, for every \(i\colon X\to W\) and \(i^{\prime}\colon X^{\prime}\to W^{\prime}\), the claim follows by applying \((i\times i^{\prime})_{*}\) and using naturality of the cross product: \[\mathcal{L}_{*}(i\times i^{\prime}) =(i\times i^{\prime})_{*}IT_{1,*}(X\times X^{\prime})=(i\times i^{ \prime})_{*}(IT_{1,*}(X)\times IT_{1,*}(X^{\prime}))\] \[=i_{*}IT_{1,*}(X)\times i_{*}^{\prime}IT_{1,*}(X^{\prime})= \mathcal{L}_{*}(i)\times\mathcal{L}_{*}(i^{\prime}).\] Next, let us show that the pair \(\mathcal{L}\) is compatible with ambient isomorphisms as stated in axiom (2). As for \(\mathcal{L}^{*}\), we consider \(f\colon M\to W\) and \(f^{\prime}\colon M^{\prime}\to W^{\prime}\), and an isomorphism \(W\stackrel{{\cong}}{{\longrightarrow}}W^{\prime}\) that restricts to an isomorphism \(\phi\colon M\stackrel{{\cong}}{{\longrightarrow}}M^{\prime}\). Then, we have \(\phi^{*}\nu_{f^{\prime}}=\nu_{f}\), and thus \[\phi^{*}\mathcal{L}^{*}(f^{\prime})=\phi^{*}L^{*}(\nu_{f^{\prime}})=L^{*}(\phi ^{*}\nu_{f^{\prime}})=L^{*}(\nu_{f})=\mathcal{L}^{*}(f).\] As for \(\mathcal{L}_{*}\), we consider \(i\colon X\to W\) and \(i^{\prime}\colon X^{\prime}\to W^{\prime}\), and an isomorphism \(\Phi\colon W\stackrel{{\cong}}{{\longrightarrow}}W^{\prime}\) that restricts to an isomorphism \(\Phi_{0}\colon X\stackrel{{\cong}}{{\longrightarrow}}X^{\prime}\). Then, we have \(\Phi_{0*}IT_{1,*}(X)=IT_{1,*}(X^{\prime})\) by Proposition 2.17. Hence, we obtain \[\Phi_{*}\mathcal{L}_{*}(i)=\Phi_{*}i_{*}IT_{1,*}(X)=i_{*}^{\prime}\Phi_{0*}IT_{1, *}(X)=i_{*}^{\prime}IT_{1,*}(X^{\prime})=\mathcal{L}_{*}(i^{\prime}).\] To verify axiom (3), we consider \(i\colon X\to W\) and \(f\colon M\to W\) such that \(X\subset M\). 
Then, the inclusion \(i^{M}:=i|\colon X\to M\) satisfies \(f\circ i^{M}=i\), and we obtain \[f_{*}\mathcal{L}_{*}(i^{M})=f_{*}i_{*}^{M}IT_{1,*}(X)=i_{*}IT_{1,*}(X)= \mathcal{L}_{*}(i).\] Finally, to show axiom (4), let us call closed irreducible subvarieties \(Z,Z^{\prime}\subset W\) of a smooth variety \(W\)\(\mathcal{X}_{CM}\)_-transverse_ if \(Z\) and \(Z^{\prime}\) are simultaneously complex algebraic Whitney transverse (that is, they admit complex algebraic Whitney stratifications such that every stratum of \(Z\) is transverse to every stratum of \(Z^{\prime}\) as smooth submanifolds of \(W\)), generically transverse (see e.g. [10, Section 3]), and Tor-independent (see Definition 4.11) in \(W\). This notion of \(\mathcal{X}_{CM}\)-transversality has indeed all required properties. (In fact, properness of \(\mathcal{X}_{CM}\)-transverse intersections follows from generic transversality according to [10, Corollary 3.4]. Next, to obtain the desired analog of Kleiman's transversality theorem for the action of \(GL_{n}(\mathbb{C})\) on the Grassmannians \(G=G_{k}(\mathbb{C}^{n})\), we consider inclusions \(X\to G\) and \(X^{\prime}\to G\) in \(\mathcal{X}_{CM}\). Then, we apply suitable versions of Kleiman's transversality theorem to obtain an open dense subset \(U\) of \(GL_{n}(\mathbb{C})\) (in the complex topology) such that \(X\) is \(\mathcal{X}_{CM}\)-transverse to \(g\cdot X^{\prime}\) for all \(g\in U\). Here, Kleiman's transversality theorem holds for (complex algebraic) Whitney transversality by [10, Theorem 2.2] since \(X\) and \(X^{\prime}\) are compact, for generic transversality by [10, Theorem 3.5] since \(X\) and \(X^{\prime}\) are irreducible, and for Tor-independence by Sierra's general homological Kleiman-Bertini theorem [56, Corollary 4.3, p. 608] since \(X\) and \(X^{\prime}\) are Cohen-Macaulay. We also note that Zariski dense open subsets are also dense in the complex topology by [42, Theorem 1, p. 58]. Finally, locality holds evidently for our notion of \(\mathcal{X}_{CM}\)-transversality.) Now, consider an inclusion \(i\colon X\to W\) in \(\mathcal{X}_{CM}\) and an inclusion \(f\colon M\to W\) (of a smooth closed subvariety \(M\subset W\) in a smooth variety \(W\)) such that \(M\) is irreducible, and \(M\) and \(X\) are \(\mathcal{X}_{CM}\)-transverse in \(W\). Then, Theorem 4.15 implies that the embedding \(g\colon Y\hookrightarrow X\) of the compact subvariety \(Y=X\cap M\subset X\) is tight, and the algebraic normal bundle \(N=N_{Y}X\) of \(g\) and the topological normal bundle \(\nu\) of the topologically normally nonsingular inclusion underlying \(g\) satisfy \[g^{\dagger}_{\operatorname{alg}}IT_{1,*}(X)=L^{*}(N)\cap IT_{1,*}(Y)=L^{*}( \nu)\cap IT_{1,*}(Y),\] where \(g^{\dagger}_{\operatorname{alg}}\colon H_{*}(X;\mathbb{Q})\to H_{*}(Y; \mathbb{Q})\) denotes the algebraic Gysin homomorphism associated to \(g\) as constructed in Verdier [58] (where we may use the natural identification of Borel-Moore homology and singular homology because \(X\) and \(Y\) are compact). Furthermore, since \(i\in\mathcal{X}_{CM}\) and \(M\) and \(X\) are \(\mathcal{X}_{CM}\)-transverse in \(W\), Theorem 5.10 and Proposition 5.7 imply that the algebraic Gysin map \(g^{\dagger}_{\operatorname{alg}}\colon H_{*}(X;\mathbb{Q})\to H_{*}(Y; \mathbb{Q})\) coincides with the topological Gysin map \(g^{\dagger}_{\operatorname{top}}\colon H_{*}(X;\mathbb{Q})\to H_{*}(Y; \mathbb{Q})\) on all fundamental classes \([Z]_{X}\) of closed irreducible subvarieties \(Z\subset X\). 
As \(IT_{1,*}(X)\) is an algebraic cycle according to Remark 2.12, we obtain \[g^{\dagger}_{\operatorname{top}}IT_{1,*}(X)=g^{\dagger}_{\operatorname{alg}}IT _{1,*}(X).\] Next, recall from Theorem 4.4 that the inclusion \(g\colon Y\hookrightarrow X\) is normally nonsingular with topological normal bundle \(\nu=j^{*}\nu_{f}\) given by the restriction under the inclusion \(j\colon Y\to M\) of the normal bundle \(\nu_{f}\) of \(M\) in \(W\). Using the base change \(f^{\dagger}_{\operatorname{top}}i_{*}=j_{*}g^{\dagger}_{\operatorname{top}}\) for topological Gysin maps (see [10, Proposition 2.4]), as well as \(L^{*}(\nu)=L^{*}(j^{*}\nu_{f})=j^{*}L^{*}(\nu_{f})\), we conclude that \[f^{\dagger}_{\operatorname{top}}\mathcal{L}_{*}(i) =f^{\dagger}_{\operatorname{top}}i_{*}IT_{1,*}(X)=j_{*}g^{\dagger }_{\operatorname{top}}IT_{1,*}(X)=j_{*}g^{\dagger}_{\operatorname{alg}}IT_{1,* }(X)=j_{*}(L^{*}(\nu)\cap IT_{1,*}(Y))\] \[=j_{*}(j^{*}L^{*}(\nu_{f})\cap IT_{1,*}(Y))=L^{*}(\nu_{f})\cap j_{* }IT_{1,*}(Y)=\mathcal{L}^{*}(f)\cap\mathcal{L}_{*}(j).\] _Remark 7.5_.: A similar proof shows that the pair \((c\ell^{*},c\ell_{*})\) given by \(c\ell^{*}(f\colon M\hookrightarrow W)=\operatorname{td}^{*}(N_{M}W)\) and \(c\ell_{*}(i\colon X\hookrightarrow W)=i_{*}IT_{0,*}(X)\) is also an example of a Gysin coherent characteristic class with respect to \(\mathcal{X}_{CM}\). In fact, Theorem 4.10 (and consequently, Theorem 4.15) is also valid for \(IT_{0,*}\) with correction factor \(T_{0}^{*}(N_{M}W)=\operatorname{td}^{*}(N_{M}W)\) instead of \(T_{1}^{*}(\nu_{f})=L^{*}(\nu_{f})\) as an inspection of the proof of Theorem 6.30 in [7] shows (compare Remark 7.2). See Remark 8.8 for \(IT_{-1,*}(X)\). When \(X\) is singular, the intersection Todd class \(IT_{0*}(X)\) is generally different from the Baum-Fulton-MacPherson Todd class \(\operatorname{td}_{*}(X)\) studied in Section 9. Let us consider the normalization \(\pi\colon X^{\prime}\to X\) of a singular curve \(X\) such that \(\pi\) is a homeomorphism. Let \(T_{0*}=\tau_{*}\circ mC_{0}\), where \(mC_{0}\colon K_{0}(\operatorname{var}/X)\to K_{0}^{\operatorname{alg}}(X)\) is the evaluation at \(y=0\) of the motivic Chern class transformation \(mC_{y}\) of [14], recalled here in Remark 2.14. Following [14, Example 3.1], let us show that \(T_{0*}(X)\neq\operatorname{td}_{*}(X)\). The normalization \(\pi\) restricts to an isomorphism of regular parts. It is also an isomorphism of the singular parts since those are finite sets of points and the homeomorphism \(\pi\) restricts to a bijection between them. Thus, the scissor relation in the motivic group \(K_{0}(\operatorname{var}/X)\) yields \(\pi_{*}[\operatorname{id}_{X^{\prime}}]=[\operatorname{id}_{X}]\). 
Then, as the smoothness of \(X^{\prime}\) implies \(mC_{0}[\operatorname{id}_{X^{\prime}}]=[\mathcal{O}_{X^{\prime}}]\) (Du Bois would suffice), we find \[T_{0*}(X):=T_{0*}[\operatorname{id}_{X}]=T_{0*}\pi_{*}[\operatorname{id}_{X^{ \prime}}]=\pi_{*}T_{0*}[\operatorname{id}_{X^{\prime}}]=\pi_{*}\tau_{*}mC_{0}[ \operatorname{id}_{X^{\prime}}]=\pi_{*}\tau_{*}[\mathcal{O}_{X^{\prime}}].\] As \(X\) is singular, \[\pi_{*}[\mathcal{O}_{X^{\prime}}]=[\mathcal{O}_{X}]+n\cdot[\mathcal{O}_{ \operatorname{pt}}],\qquad n>0.\] Hence, \[T_{0*}(X)=\pi_{*}\tau_{*}[\mathcal{O}_{X^{\prime}}]=\tau_{*}\pi_{*}[\mathcal{ O}_{X^{\prime}}]=\tau_{*}([\mathcal{O}_{X}]+n\cdot[\mathcal{O}_{\operatorname{pt} }])=\operatorname{td}_{*}(X)+n\cdot[\operatorname{pt}]\neq\operatorname{td}_{ *}(X).\] On the other hand, since \(\pi\) is a resolution of singularities, \(IC_{X}^{H}\) is a direct summand of \(\pi_{*}\mathbb{Q}_{X^{\prime}}^{H}[1]\) (see [52, Corollary 4.6]). As \(\pi\) is in fact a small resolution, we have \(\pi_{*}(\mathbb{Q}_{X^{\prime}}^{H})=IC_{X}^{H}[-1]\). Moreover, in view of Remark 2.14, we have \([\mathcal{O}_{X^{\prime}}]=mC_{0}[\operatorname{id}_{X^{\prime}}]=MHC_{0*} \chi_{Hdg}[\operatorname{id}_{X^{\prime}}]=MHC_{0*}[\mathbb{Q}_{X^{\prime}}^{ H}]\). All in all, we obtain \[T_{0*}(X) =\pi_{*}\tau_{*}[\mathcal{O}_{X^{\prime}}]=\pi_{*}\tau_{*}MHC_{0* }[\mathbb{Q}_{X^{\prime}}^{H}]=\pi_{*}MHT_{0*}[\mathbb{Q}_{X^{\prime}}^{H}]= MHT_{0*}[\pi_{*}\mathbb{Q}_{X^{\prime}}^{H}]\] \[=MHT_{0*}[IC_{X}^{H}[-1]]=IT_{0*}(X).\] Therefore, \(IT_{0*}(X)=T_{0*}(X)\neq\operatorname{td}_{*}(X)\). (A similar example shows that \(IT_{0*}(X)\neq T_{0*}(X)\) in general.) Recall from Example 6.3 that the pair \((c\ell^{*},c\ell_{*})\) given by \(c\ell^{*}(f\colon M\hookrightarrow W)=L^{*}(\nu_{M\subset W})\) and \(c\ell_{*}(i\colon X\hookrightarrow W)=i_{*}L_{*}(X)\), where \(L_{*}\) is the Goresky-MacPherson \(L\)-class, forms a Gysin coherent characteristic class, and hence a Gysin coherent characteristic class with respect to \(\mathcal{X}_{CM}\) as defined at the beginning of this section. Since the \(L\)-genus, i.e. the signature, agrees with the genus of \(IT_{1*}\) on complex projective algebraic varieties by Saito's intersection cohomology Hodge index theorem (see [46], [40, Section 3.6]), the uniqueness theorem for Gysin coherent characteristic classes (Theorem 6.4) implies **Theorem 7.6**.: _We have \(i_{*}L_{*}(X)=i_{*}IT_{1,*}(X)\) for all inclusions \(i\colon X\to G\) of compact irreducible Cohen-Macaulay subvarieties in ambient Grassmannians._ Since Schubert varieties in a Grassmannian are Cohen-Macaulay, and their homology injects into the homology of the ambient Grassmannian, we obtain **Corollary 7.7**.: _We have \(L_{*}(X)=IT_{1,*}(X)\) for all Schubert varieties \(X\) in a Grassmannian._ ## 8. The Chern Class as a Gysin Coherent Characteristic Class Let \(c_{*}\colon F(X)\longrightarrow H^{\mathrm{BM}}_{2*}(X)\otimes\mathbb{Q}\) denote the (rationalized) Chern class transformation of MacPherson [38] defined on the group \(F(X)\) of \(\mathbb{Z}\)-valued algebraically constructible functions on the possibly singular complex algebraic variety \(X\). (Such a constructible function is a linear combination of indicator functions \(1_{Z}\) with \(Z\subset X\) a closed irreducible subvariety.) 
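Two classical normalization facts about this transformation serve as a guide for what follows (they are standard and not specific to the present setting): for smooth \(X\) it returns the total Chern class of the tangent bundle capped with the fundamental class, and for compact \(X\) its degree is the topological Euler characteristic, \[c_{*}(1_{X})=c^{*}(TX)\cap[X]_{X}\quad(X\ \text{smooth}),\qquad\varepsilon_{*}c_{*}(1_{X})=\chi(X)\quad(X\ \text{compact});\] compare the genus computations at the end of this section.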
The Chern-Schwartz-MacPherson class \(c_{*}(X)\) of a possibly singular irreducible complex algebraic variety \(X\) is defined as \[c_{*}(X):=c_{*}(1_{X})\in H^{BM}_{2*}(X;\mathbb{Q}).\] While the Chern classes \(c_{*}(X)\) were originally defined with integral coefficients, we need to consider them with rational coefficients to be able to study them in the framework of Gysin coherence. In [53], the second author derives the following Verdier-Riemann-Roch type theorem for the behavior of the Chow homology Chern class transformation \(c_{*}\colon F(X)\to A_{*}(X)\) under refined Gysin maps associated to transverse intersections in a microlocal context. **Theorem 8.1** (see Corollary 2.7 in [53]).: _Let \(f\colon M\to W\) be a closed embedding of smooth complex varieties with algebraic normal bundle \(N=N_{M}W\). Let \(X\subset W\) be a (Zariski) closed subspace, and set \(Y:=f^{-1}(X)=M\cap X\subset M\). Assume that \(\gamma\in F(X)\) is a constructible function such that \(f\) is non-characteristic with respect to the support \(\operatorname{supp}(CC(\gamma))\) of the characteristic cycle \(CC(\gamma)\) of \(\gamma\). Then,_ \[(f,i)^{!}_{\mathrm{ref}}(c_{*}(\gamma))=c^{*}(N|_{Y})\cap c_{*}(g^{*}(\gamma) )\in A_{*}(Y),\] _where \((f,i)^{!}_{\mathrm{ref}}\colon A_{*}(X)\to A_{*}(Y)\) denotes the refined Gysin map associated to the cartesian square of closed embeddings_ _(see Fulton_[25, Section 6.2])._ _Remark 8.2_.: As pointed out in [53], the non-characteristic condition for \(\gamma\in F(X)\) in Theorem 8.1 above is for example satisfied when \(M\) is transverse to all strata of a complex algebraic Whitney stratification of \(X\subset W\), and \(\gamma\) is constructible with respect to this stratification, i.e., \(\gamma|_{S}\) is locally constant for all strata \(S\) of \(X\). A simple consequence is that the highest non-trivial homogeneous component of the Chern class is the fundamental class (compare Proposition 7.1). **Corollary 8.3**.: _For an irreducible projective variety \(X\) of complex dimension \(d\), the highest non-trivial homogeneous component of the class_ \[c_{*}(X)=c_{0}(X)+c_{1}(X)+\cdots\in A_{*}(X)\] _is the fundamental class, \(c_{d}(X)=[X]_{X}\)._ Proof.: Since \(X\) is irreducible, we know that \(A_{d}(X)\) is generated by the fundamental class \([X]_{X}\), and that \(A_{i}(X)=0\) for \(i>d\). Therefore, writing \(c_{d}(X)=r\cdot[X]_{X}\) for some \(r\in\mathbb{Z}\), it remains to show that \(r=1\). For this purpose, we fix an embedding \(X\subset\mathbb{P}^{m}=:W\) and a complex algebraic Whitney stratification on \(X\) (see Example 4.6). By applying a topological version of Kleiman's transversality theorem (see e.g. [30, p. 39, Theorem 1.3.6 and Example 1.3.7]), we find a generic linear subspace \(H\subset\mathbb{P}^{m}=W\) of complex codimension \(d\) that is transverse to all strata of \(X\). Then, the intersection \(Y:=H\cap X\) is a pure \(0\)-dimensional closed subvariety of a Zariski open subset \(U\subset X\). Hence, the closed embedding \(g\colon Y\hookrightarrow X\) is regular since it is the composition of a smooth embedding of smooth varieties and an open embedding. Therefore, the refined Gysin map \((f,i)^{!}_{\operatorname{ref}}\colon A_{*}(X)\to A_{*}(Y)\) associated to the cartesian square of closed embeddings coincides with the algebraic Gysin map \(g^{!}_{\operatorname{alg}}\colon A_{*}(X)\to A_{*}(Y)\) because \(f\) and \(g\) are both codimension \(d\) regular embeddings (see Fulton [25, p. 99, Remark 6.2.1]). 
By Remark 8.2, the constructible function \[\gamma:=1_{X}\in F(X)\] satisfies the non-characteristic property of Theorem 8.1 because \(M:=H\) is by assumption transverse to all strata of the given complex algebraic Whitney stratification on \(X\), and \(\gamma|_{S}=1_{S}\) is locally constant for all strata \(S\) of \(X\). The pull-back \(g^{*}\colon F(X)\to F(Y)\) induced by the closed embedding \(g\colon Y\hookrightarrow X\) satisfies \(g^{*}(\gamma)=g^{*}(1_{X})=1_{H\cap X}=1_{Y}\). Hence, Theorem 8.1 yields \[g^{!}_{\operatorname{alg}}(c_{*}(X))=(f,i)^{!}_{\operatorname{ref}}(c_{*}(X))=c^{*}(N|_{Y})\cap c_{*}(Y)\in A_{*}(Y).\] Since the compact subvariety \(Y\subset\mathbb{P}^{m}\) has pure dimension \(0\), it consists of a finite number \(k>0\) of points. (By construction, \(k\) is the degree of the embedding \(i\colon X\hookrightarrow\mathbb{P}^{m}\), and is hence positive.) As any vector bundle over a one-point space is trivial, we have \(c^{*}(N|_{Y})=1\). Thus, \(g^{!}_{\operatorname{alg}}(c_{*}(X))=c_{*}(Y)\). Furthermore, we have \(g^{!}_{\operatorname{alg}}([X]_{X})=[Y]_{Y}\) (see e.g. [25, p. 100, Example 6.2.1]). Altogether, we conclude that \[r\cdot[Y]_{Y}=(f,i)^{!}_{\operatorname{ref}}(r\cdot[X]_{X})=(f,i)^{!}_{\operatorname{ref}}(c_{d}(X))=c_{0}(Y)\in A_{0}(Y).\] By applying the augmentation \(\varepsilon_{*}\colon A_{0}(Y)\to\mathbb{Z}\) and using that \(Y\) is smooth, we obtain \[r\cdot k=r\cdot\varepsilon_{*}[Y]_{Y}=\varepsilon_{*}c_{0}(Y)=k.\] Since \(k>0\), we conclude that \(r=1\). _Remark 8.4_.: As in Remark 7.3, an alternative proof of Corollary 8.3 can be obtained by observing that the restriction of \(c_{d}(X)\) to the regular part of \(X\) is \(c_{d}(X_{\operatorname{reg}})\). The restriction \(A_{d}(X)\to A_{d}(X_{\operatorname{reg}})\) is an isomorphism by [25, Proposition 1.8, p. 21] (recall that \(X\) is assumed to be irreducible). Theorem 8.1 implies the following result. **Corollary 8.5**.: _Let \(X\hookrightarrow W\hookleftarrow M\) be closed algebraic embeddings of pure-dimensional complex quasiprojective algebraic varieties with \(M,W\) smooth. Let \(V\subset X\) be an irreducible closed subvariety. We suppose that \(X\subset W\) is equipped with a complex algebraic Whitney stratification such that \(M\) is transverse to all strata, and \(V\) is a union of strata. Then the refined Gysin map_ \[(f,i)^{!}_{\operatorname{ref}}\colon A_{*}(X)\longrightarrow A_{*-c}(Y)\] _associated to the cartesian square of inclusions (with \(Y=M\cap X\)) satisfies \((f,i)^{!}_{\rm ref}[V]_{X}=[V\cap Y]_{Y}\), where \(c\) denotes the complex codimension of \(M\) in \(W\)._ Proof.: By Remark 8.2, the constructible function \[\gamma:=1_{V}\in F(X)\] satisfies the non-characteristic property of Theorem 8.1 because the given complex algebraic Whitney stratification on \(X\) is such that \(M\) is transverse to all strata, and \(V\) is a union of strata. The pull-back \(g^{*}\colon F(X)\to F(Y)\) satisfies \(g^{*}(\gamma)=g^{*}(1_{V})=1_{V\cap Y}\). Hence, Theorem 8.1 yields \[(f,i)^{!}_{\rm ref}(c_{*}(1_{V}))=c^{*}(N|_{Y})\cap c_{*}(1_{V\cap Y})\in A_{*}(Y). \tag{11}\] The inclusions \(\alpha\colon V\hookrightarrow X\) and \(\beta\colon V\cap Y\hookrightarrow Y\) are closed embeddings, and hence proper. The induced maps \(\alpha_{*}\colon F(V)\to F(X)\) and \(\beta_{*}\colon F(V\cap Y)\to F(Y)\) satisfy \(\alpha_{*}(1_{V})=1_{V}\) and \(\beta_{*}(1_{V\cap Y})=1_{V\cap Y}\), respectively.
(In general, the push-forward \(\varphi_{*}\colon F(U)\to F(U^{\prime})\) of a proper morphism \(\varphi\colon U\to U^{\prime}\) is defined on a subvariety \(Z\subset U\) by \(\varphi_{*}(1_{Z})(v)=\chi(Z\cap\varphi^{-1}(v))\), where \(\chi\) denotes the topological Euler characteristic, see [25, p. 376, Example 19.1.7].) Since \(c_{*}\) commutes with proper push-forward according to [25, p. 377, Example 19.1.7], equation (11) becomes \[(f,i)^{!}_{\rm ref}(\alpha_{*}c_{*}(V))=c^{*}(N|_{Y})\cap\beta_{*}c_{*}(V\cap Y)\in A_{*}(Y). \tag{12}\] Let \(d\) denote the complex dimension of \(V\). Then, the highest non-trivial homogeneous component of \(c_{*}(V)\) is \(c_{d}(V)=[V]_{V}\) by Corollary 8.3. Similarly, since \(V\cap Y=V\cap M\) is pure \((d-c)\)-dimensional, the highest non-trivial homogeneous component of \(c_{*}(V\cap Y)\) is \(c_{d-c}(V\cap Y)=[V\cap Y]_{V\cap Y}\). Consequently, evaluation of equation (12) in degree \(d-c\) yields \[(f,i)^{!}_{\rm ref}([V]_{X})=[V\cap Y]_{Y}\in A_{d-c}(Y).\] Since \(c_{*}(X)\in A_{*}(X)\) is an algebraic cycle, there exist a finite number of irreducible closed subvarieties \(V_{1},\dots,V_{r}\subset X\) such that \[c_{*}(X)=\sum_{l=1}^{r}\lambda_{l}[V_{l}]_{X},\quad\lambda_{l}\in\mathbb{Z}. \tag{13}\] For an irreducible closed subvariety \(X\subset W\) of a smooth variety \(W\), we call a complex algebraic Whitney stratification of \(X\subset W\) \(c_{*}(X)\)-_constructible_ if there is a representation (13) in which every \(V_{l}\) is a union of strata of \(X\). It follows from [30, p. 43, Section 1.7, Theorem] that such Whitney stratifications exist on \(X\subset W\). **Theorem 8.6**.: _Let \(X\hookrightarrow W\hookleftarrow M\) be closed algebraic embeddings of pure-dimensional complex quasiprojective algebraic varieties with \(M,W\) smooth, let \(c\) denote the complex codimension of \(M\) in \(W\), and set \(Y:=M\cap X\). Suppose that \(X\subset W\) is equipped with a \(c_{*}(X)\)-constructible complex algebraic Whitney stratification such that \(M\) is transverse to all strata. Then the topological Gysin map \(g^{!!}_{\mathrm{top}}\colon H^{\mathrm{BM}}_{*}(X)\otimes\mathbb{Q}\to H^{\mathrm{BM}}_{*-2c}(Y)\otimes\mathbb{Q}\) associated to the topologically normally nonsingular inclusion \(g\colon Y\hookrightarrow X\) satisfies_ \[g^{!!}_{\mathrm{top}}(c_{*}(X))=c^{*}(N|_{Y})\cap c_{*}(Y)\in H^{\mathrm{BM}}_{2*}(Y)\otimes\mathbb{Q},\] _where \(N=N_{M}W\) denotes the normal bundle of \(M\) in \(W\)._ Proof.: By Remark 8.2, the constructible function \[\gamma:=1_{X}\in F(X)\] satisfies the non-characteristic property of Theorem 8.1 because \(M\) is by assumption transverse to all strata of the given complex algebraic Whitney stratification on \(X\), and \(\gamma|_{S}=1_{S}\) is locally constant for all strata \(S\) of \(X\). The pull-back \(g^{*}\colon F(X)\to F(Y)\) induced by the closed embedding \(g\colon Y\hookrightarrow X\) satisfies \(g^{*}(\gamma)=g^{*}(1_{X})=1_{M\cap X}=1_{Y}\). Hence, Theorem 8.1 yields \[(f,i)^{!}_{\text{ref}}(c_{*}(X))=c^{*}(N|_{Y})\cap c_{*}(Y)\in A_{*}(Y).\] By assumption, the Chern-Schwartz-MacPherson class \(c_{*}(X)\) can be written in the form (13) for some irreducible closed subvarieties \(V_{1},\ldots,V_{r}\subset X\) such that every \(V_{l}\) is a union of strata of the given complex algebraic Whitney stratification on \(X\). Therefore, Corollary 8.5 implies that \[(f,i)^{!}_{\text{ref}}(c_{*}(X))=(f,i)^{!}_{\text{ref}}\sum_{l=1}^{r}\lambda_{l}[V_{l}]_{X}=\sum_{l=1}^{r}\lambda_{l}\cdot(f,i)^{!}_{\text{ref}}([V_{l}]_{X})=\sum_{l=1}^{r}\lambda_{l}[V_{l}\cap Y]_{Y}.\] By applying the cycle map \(\operatorname{cl}:A_{*}(Y)\to H^{\text{BM}}_{2*}(Y)\otimes\mathbb{Q}\) from Chow homology to Borel-Moore homology, we conclude that \[\sum_{l=1}^{r}\lambda_{l}[V_{l}\cap Y]_{Y}=\operatorname{cl}(f,i)^{!}_{\text{ref}}(c_{*}(X))=\operatorname{cl}(c^{*}(N|_{Y})\cap c_{*}(Y))=c^{*}(N|_{Y})\cap\operatorname{cl}(c_{*}(Y))\in H^{\text{BM}}_{2*}(Y)\otimes\mathbb{Q},\] where the cycle map \(\operatorname{cl}\) and the cap product with Chern classes are compatible by [25, p. 374, Prop. 19.1.2].
Furthermore, by the analog of Proposition 5.1 for the topological Gysin map in Borel-Moore homology (see Remark 5.4), we have \[\sum_{l=1}^{r}\lambda_{l}[V_{l}\cap Y]_{Y}=\sum_{l=1}^{r}\lambda_{l}g^{!!}_{\text{top}}([V_{l}]_{X})=g^{!!}_{\text{top}}\sum_{l=1}^{r}\lambda_{l}[V_{l}]_{X}=g^{!!}_{\text{top}}(c_{*}(X))\in H^{\text{BM}}_{2*}(Y)\otimes\mathbb{Q}.\] Note that the following theorem does not require a Cohen-Macaulay assumption. **Theorem 8.7**.: _The pair \((c\ell^{*},c\ell_{*})\) given by \(c\ell^{*}(f\colon M\hookrightarrow W)=c^{*}(N_{M}W)\) and \(c\ell_{*}(i\colon X\hookrightarrow W)=i_{*}c_{*}(X)\) forms a Gysin coherent characteristic class._ Proof.: By the properties of the cohomological Chern class \(c^{*}\), the class \(c\ell^{*}(f)=c^{*}(N_{f})\in H^{*}(M;\mathbb{Q})\) is normalized for all \(f\). Moreover, by Corollary 8.3, the highest nontrivial homogeneous component of the class \(c_{*}(X)=c_{0}(X)+c_{1}(X)+\cdots\in H_{*}(X;\mathbb{Q})\) is \(c_{d}(X)=[X]_{X}\), where \(d\) denotes the complex dimension of \(X\), and \(X\) may be assumed to be irreducible by Remark 6.5. Consequently, the highest nontrivial homogeneous component of \(c\ell_{*}(i)=i_{*}c_{*}(X)\in H_{*}(W;\mathbb{Q})\) is the ambient fundamental class \(i_{*}c_{d}(X)=i_{*}[X]_{X}=[X]_{W}\). We proceed to check the axioms of Gysin coherent characteristic classes for the pair \(c\ell\). As for axiom (1), multiplicativity \(c_{*}(X\times X^{\prime})=c_{*}(X)\times c_{*}(X^{\prime})\) holds for all compact irreducible complex algebraic varieties \(X\) and \(X^{\prime}\) by Kwiecinski [35] and Kwiecinski-Yokura [36]. Hence, for every \(i\colon X\to W\) and \(i^{\prime}\colon X^{\prime}\to W^{\prime}\), the claim follows by applying \((i\times i^{\prime})_{*}\) and using naturality of the cross product: \[c\ell_{*}(i\times i^{\prime})=(i\times i^{\prime})_{*}c_{*}(X\times X^{\prime})=(i\times i^{\prime})_{*}(c_{*}(X)\times c_{*}(X^{\prime}))\] \[=i_{*}c_{*}(X)\times i^{\prime}_{*}c_{*}(X^{\prime})=c\ell_{*}(i)\times c\ell_{*}(i^{\prime}).\] Next, let us show that the pair \(c\ell\) is compatible with ambient isomorphisms as stated in axiom (2). As for \(c\ell^{*}\), we consider \(f\colon M\to W\) and \(f^{\prime}\colon M^{\prime}\to W^{\prime}\), and an isomorphism \(W\stackrel{{\cong}}{{\longrightarrow}}W^{\prime}\) that restricts to an isomorphism \(\phi\colon M\stackrel{{\cong}}{{\longrightarrow}}M^{\prime}\). Then, we have \(\phi^{*}N_{f^{\prime}}=N_{f}\), and thus \[\phi^{*}c\ell^{*}(f^{\prime})=\phi^{*}c^{*}(N_{f^{\prime}})=c^{*}(\phi^{*}N_{f^{\prime}})=c^{*}(N_{f})=c\ell^{*}(f).\] As for \(c\ell_{*}\), we consider \(i\colon X\to W\) and \(i^{\prime}\colon X^{\prime}\to W^{\prime}\), and an isomorphism \(\Phi\colon W\stackrel{{\cong}}{{\longrightarrow}}W^{\prime}\) that restricts to an isomorphism \(\Phi_{0}\colon X\stackrel{{\cong}}{{\longrightarrow}}X^{\prime}\). Then, we have \[c_{*}(X^{\prime})=c_{*}(1_{X^{\prime}})=c_{*}(\Phi_{0*}1_{X})=\Phi_{0*}c_{*}(1_{X})=\Phi_{0*}c_{*}(X)\] because \(c_{*}\colon F(X)\longrightarrow H^{\mathrm{BM}}_{2*}(X)\otimes\mathbb{Q}\) commutes with proper push-forward (see e.g. [25, p. 377, top]). Hence, we obtain \[\Phi_{*}c\ell_{*}(i)=\Phi_{*}i_{*}c_{*}(X)=i^{\prime}_{*}\Phi_{0*}c_{*}(X)=i^{\prime}_{*}c_{*}(X^{\prime})=c\ell_{*}(i^{\prime}).\] To verify axiom (3), we consider \(i\colon X\to W\) and \(f\colon M\to W\) such that \(X\subset M\). 
Then, the inclusion \(i^{M}:=i|\colon X\to M\) satisfies \(f\circ i^{M}=i\), and we obtain \[f_{*}c\ell_{*}(i^{M})=f_{*}i^{M}_{*}c_{*}(X)=i_{*}c_{*}(X)=c\ell_{*}(i).\] Finally, to show axiom (4), let us recall from Example 6.1 that \(\mathcal{X}=\mathcal{X}_{0}\) is the family of all inclusions \(i\colon X\hookrightarrow W\) of compact irreducible subvarieties \(X\) in smooth varieties \(W\). Furthermore, let us call two closed irreducible subvarieties \(Z,Z^{\prime}\subset W\) of a smooth variety \(W\)\(\mathcal{X}_{0}\)-_transverse_ if they admit complex algebraic Whitney stratifications such that every stratum of \(Z\) is transverse to every stratum of \(Z^{\prime}\), where the stratification of \(Z\) (resp. \(Z^{\prime}\)) can be chosen to be \(c_{*}(Z)\)-constructible when \(Z\) is compact (resp. \(c_{*}(Z^{\prime})\)-constructible when \(Z^{\prime}\) is compact). Then, the notion of \(\mathcal{X}_{0}\)-transversality satisfies all required properties. (In fact, the intersection \(Z\cap Z^{\prime}\) of two \(\mathcal{X}_{0}\)-transverse closed irreducible subvarieties \(Z,Z^{\prime}\subset W\) of a smooth variety \(W\) is proper, which follows from the transversality of the strata of the complex algebraic Whitney stratifications of \(Z\) and \(Z^{\prime}\). Moreover, for any inclusions \(i\colon X\to G\) and \(i^{\prime}\colon X^{\prime}\to G\) in \(\mathcal{X}_{0}\), there is a nonempty open dense subset \(U\subset GL_{n}(\mathbb{C})\) (in the complex topology) such that \(X\) is \(\mathcal{X}_{0}\)-transverse to \(g\cdot X^{\prime}\) for all \(g\in U\). This can be achieved by applying a topological version of Kleiman's transversality theorem (see e.g. [30, p. 39, Theorem 1.3.6 and Example 1.3.7]) to complex algebraic Whitney stratifications on the compact varieties \(X\) and \(X^{\prime}\) that are \(c_{*}(X)\)-constructible and \(c_{*}(X^{\prime})\)-constructible, respectively. Finally, to see that our notion of \(\mathcal{X}_{0}\)-transversality satisfies locality, we suppose that \(Z,Z^{\prime}\subset W\) are \(\mathcal{X}_{0}\)-transverse. Then, the desired transverse Whitney stratifications on \(Z\cap U\) and \(Z^{\prime}\cap U\) are obtained by intersecting those on \(Z\) and \(Z^{\prime}\) with \(U\). Moreover, if, say, \(Z\cap U\) is compact, and hence a complete variety, then it is also a closed subset of \(Z\), see e.g. [42, p. 55, property i]). But since \(Z\cap U\) is also non-empty open subset of the irreducible space \(Z\), we conclude that \(Z\cap U=Z\). Hence, \(Z\) is compact and \(c_{*}(Z)\)-constructible, that is, \(Z\cap U\) is \(c_{*}(Z\cap U)\)-constructible.) Now, consider an inclusion \(i\colon X\to W\) in \(\mathcal{X}_{0}\) and an inclusion \(f\colon M\to W\) (of a smooth closed subvariety \(M\subset W\) in a smooth variety \(W\)) such that \(M\) is irreducible, and \(M\) and \(X\) are \(\mathcal{X}_{0}\)-transverse in \(W\). By definition of \(\mathcal{X}_{0}\)-transversality, \(X\) can be equipped with a complex algebraic Whitney stratification that is \(c_{*}(X)\)-constructible, and such that \(M\) is transverse to all strata of \(X\). 
Hence, by Theorem 8.6 and Proposition 5.7 (where we may use the natural identification of Borel-Moore homology and singular homology because \(X\) and \(Y\) are compact), the topologically normally nonsingular inclusion \(g\colon Y=M\cap X\hookrightarrow X\) satisfies \[g^{!}_{\mathrm{top}}(c_{*}(X))=c^{*}(j^{*}N)\cap c_{*}(Y)\in H_{2*}(Y)\otimes \mathbb{Q},\] where \(N=N_{M}W\) denotes the complex normal bundle of \(M\) in \(W\), and \(j\colon Y\hookrightarrow M\) is the inclusion. Using the base change \(f^{!}_{\mathrm{top}}i_{*}=j_{*}g^{!}_{\mathrm{top}}\) for topological Gysin maps (see [10, Proposition 2.4]), as well as \(c^{*}(j^{*}N)=j^{*}c^{*}(N)\), we conclude that \[f^{!}_{\rm top}c\ell_{*}(i) =f^{!}_{\rm top}i_{*}c_{*}(X)=j_{*}g^{!}_{\rm top}c_{*}(X)=j_{*}(c ^{*}(j^{*}N)\cap c_{*}(Y))\] \[=j_{*}(j^{*}c^{*}(N)\cap c_{*}(Y))=c^{*}(N)\cap j_{*}c_{*}(Y)=c \ell^{*}(f)\cap c\ell_{*}(j).\] This completes the proof of Theorem 8.7. _Remark 8.8_.: A similar proof shows that the pair \((c\ell^{*},c\ell_{*})\) given by \(c\ell^{*}(f)=c^{*}(N_{f})\) and \(c\ell_{*}(i)=i_{*}Ic_{*}(X)\) is also an example of a Gysin coherent characteristic class. Here, \(Ic_{*}(X):=c_{*}\chi_{\rm stalk}(IC^{H}_{X}[-n])\), with the constructible function \(\chi_{\rm stalk}\) given by the stalkwise Euler characteristic. (For completeness, we also point out that \(Ic_{*}(X)=IT_{-1,*}(X)\), where the specialization \(y=-1\) is possible in Definition 2.11 because we actually have \(IT_{\rm y*}(X)\in H^{\rm BM}_{2*}(X)\otimes\mathbb{Q}[y^{\pm 1}]\) as shown in [52, p. 465, Proposition 5.21]. However, an alternative proof in analogy with Remark 7.5 will only yield a Gysin coherent characteristic class with respect to \(\mathcal{X}_{CM}\).) _Remark 8.9_.: The same type of argument can be used to show that the pair \((c\ell^{*},c\ell_{*})\) given by \(c\ell^{*}(f)=c^{*}(N_{f})\) and \(c\ell_{*}(i)=i_{*}c^{M}_{*}(X)\) is another example of a Gysin coherent characteristic class, where \[c^{M}_{*}(X):=c_{*}(Eu_{X})\in H^{BM}_{2*}(X;\mathbb{Q})\] is the Chern-Mather class of a possibly singular irreducible complex projective algebraic variety \(X\), and \(Eu_{X}\) denotes the Euler obstruction constructible function of \(X\) introduced by MacPherson [38]. In the proof, one exploits the following well known properties (see e.g. Parusinski and Pragacz [44, Lemma 1.1]): 1. \(Eu_{X}(x)=1\) for \(x\in X_{reg}\). 2. \(Eu_{X\times X^{\prime}}(x,x^{\prime})=E_{X}(x)\cdot Eu_{X^{\prime}}(x^{\prime})\) for \(x\in X,x^{\prime}\in X^{\prime}\). 3. \(Eu_{X}\) is constructible with respect to any complex Whitney stratification of \(X\). 4. \(f^{*}(Eu_{X})=Eu_{M\cap X}\) for a complex manifold embedding \(f:M\to W\) transversal to a complex Whitney stratification of \(X\). We have seen that the three generalizations \(c_{*}\), \(c^{M}_{*}\) and \(Ic_{*}\) of the Chern class to singular varieties give rise to Gysin coherent classes. As these classes are generally not equal, their associated genera must already be different in light of the Uniqueness Theorem 6.4 for Gysin coherent classes. Indeed, \[|c_{*}|(X)=\chi(X)\] the topological Euler characteristic, \[|c^{M}_{*}|(X)=\chi(X;Eu_{X})=\sum_{S}Eu_{X}(S)\cdot\chi(S),\] where the sum ranges over all connected strata \(S\) of a complex algebraic Whitney stratification of \(X\), and \[|Ic_{*}|(X)=|IT_{-1*}|(X)=I\chi_{-1}(X)=\chi(IH^{*}(X;\mathbb{Q})),\] the intersection cohomology Euler characteristic. ## 9. 
The Todd Class as a Gysin Coherent Characteristic Class Let \(\tau_{*}\colon K_{0}(X)\longrightarrow H^{\rm BM}_{2*}(X)\otimes\mathbb{Q}\) denote the Todd class transformation of Baum, Fulton and MacPherson [11], [27]. Recall from Remark 2.5 that this transformation is compatible with its Chow homology analog \(\tau_{*}\colon K_{0}(X)\longrightarrow A_{*}(X)\otimes\mathbb{Q}\) under the cycle map \(\operatorname{cl}\colon A_{*}(X)\otimes\mathbb{Q}\to H^{\rm BM}_{2*}(X) \otimes\mathbb{Q}\). Baum, Fulton and MacPherson [11] define the Todd class of a possibly singular complex algebraic variety \(X\) as \[\operatorname{td}_{*}(X):=\tau_{*}([\mathbb{O}_{X}])\in H^{BM}_{2*}(X;\mathbb{ Q}).\] Its genus \(|\operatorname{td}_{*}|\) (for \(X\) compact) is the arithmetic genus of a singular variety, i.e. the holomorphic Euler characteristic. Todd classes can be studied within the framework of Gysin coherent characteristic classes by a similar method to that used for the class \(IT_{1,*}\) in Section 7. In the following result, we are therefore using the same definition of the set \(\mathcal{X}_{CM}\) and the notion of \(\mathcal{X}_{CM}\)-transversality as in Theorem 7.4 (and its proof) to be able to replace the algebraic by the topological Gysin map in Verdier's Gysin restriction formula for the Todd class. **Theorem 9.1**.: _The pair \((c\ell^{*},c\ell_{*})\) defined by \(c\ell^{*}(f)=\operatorname{td}^{*}(N_{f})\) for every inclusion \(f\colon M\to W\) of a smooth closed subvariety \(M\subset W\) in a smooth variety \(W\) with complex normal bundle \(N_{f}\), and by \(c\ell_{2*}(i)=i_{*}\operatorname{td}_{*}(X)\) for every inclusion \(i\colon X\to W\) of a compact possibly singular subvariety \(X\subset W\) in a smooth variety \(W\), is a Gysin coherent characteristic class with respect to \(\mathcal{X}_{CM}\)._ Proof.: By the properties of the cohomological Todd class, the class \(c\ell^{*}(f)=\operatorname{td}^{*}(N_{f})\in H^{*}(M;\mathbb{Q})\) is normalized for all \(f\). Moreover, the highest non-vanishing homogeneous component of \(\operatorname{td}_{*}(X)\) is the fundamental class \(\operatorname{td}_{d}(X)=[X]_{X}\in H_{2d}(X;\mathbb{Q})\) according to [25, p. 353, Theorem 18.3.5(5)]. Consequently, the highest nontrivial homogeneous component of \(c\ell_{2*}(i)=i_{*}\operatorname{td}_{*}(X)\in H_{*}(W;\mathbb{Q})\) is the ambient fundamental class \(i_{*}\operatorname{td}_{d}(X)=i_{*}[X]_{X}=[X]_{W}\). We proceed to check the axioms of Gysin coherent characteristic classes for the pair \(c\ell\). As for axiom (1), the multiplicativity property \(\operatorname{td}_{*}(X\times X^{\prime})=\operatorname{td}_{*}(X)\times \operatorname{td}_{*}(X^{\prime})\) holds for all compact irreducible complex algebraic varieties \(X\) and \(X^{\prime}\) by [25, p. 360, Example 18.3.1]. Hence, for every \(i\colon X\to W\) and \(i^{\prime}\colon X^{\prime}\to W^{\prime}\), the claim follows by applying \((i\times i^{\prime})_{*}\) and using naturality of the cross product: \[c\ell_{2*}(i\times i^{\prime}) =(i\times i^{\prime})_{*}\operatorname{td}_{*}(X\times X^{\prime} )=(i\times i^{\prime})_{*}(\operatorname{td}_{*}(X)\times\operatorname{td}_{ *}(X^{\prime}))\] \[=i_{*}\operatorname{td}_{*}(X)\times i^{\prime}_{*}\operatorname{td }_{*}(X^{\prime})=c\ell_{2*}(i)\times c\ell_{2*}(i^{\prime}).\] Next, let us show that the pair \(c\ell\) is compatible with ambient isomorphisms as stated in axiom (2). 
As for \(c\ell^{*}\), we consider \(f\colon M\to W\) and \(f^{\prime}\colon M^{\prime}\to W^{\prime}\), and an isomorphism \(W\stackrel{{\cong}}{{\longrightarrow}}W^{\prime}\) that restricts to an isomorphism \(\phi\colon M\stackrel{{\cong}}{{\longrightarrow}}M^{\prime}\). Then, we have \(\phi^{*}N_{f^{\prime}}=N_{f}\), and thus \[\phi^{*}c\ell^{*}(f^{\prime})=\phi^{*}\operatorname{td}^{*}(N_{f^{\prime}})= \operatorname{td}^{*}(\phi^{*}N_{f^{\prime}})=\operatorname{td}^{*}(N_{f})=c \ell^{*}(f).\] As for \(c\ell_{*}\), we consider \(i\colon X\to W\) and \(i^{\prime}\colon X^{\prime}\to W^{\prime}\), and an isomorphism \(\Phi\colon W\stackrel{{\cong}}{{\longrightarrow}}W^{\prime}\) that restricts to an isomorphism \(\Phi_{0}\colon X\stackrel{{\cong}}{{\longrightarrow}}X^{\prime}\). Invariance \(\Phi_{0*}\operatorname{td}_{*}(X)=\operatorname{td}_{*}(X^{\prime})\) under algebraic isomorphisms \(\Phi_{0}\colon X\stackrel{{\cong}}{{\rightarrow}}X^{\prime}\) follows from [25, p. 360, Example 18.3.3]. Hence, we obtain \[\Phi_{*}c\ell_{2*}(i)=\Phi_{*}i_{*}\operatorname{td}_{*}(X)=i^{\prime}_{*} \Phi_{0*}\operatorname{td}_{*}(X)=i^{\prime}_{*}\operatorname{td}_{*}(X)=i^{ \prime}_{*}\operatorname{td}_{*}(X^{\prime})=c\ell_{2*}(i^{\prime}).\] To verify axiom (3), we consider \(i\colon X\to W\) and \(f\colon M\to W\) such that \(X\subset M\). Then, the inclusion \(i^{M}:=i|\colon X\to M\) satisfies \(f\circ i^{M}=i\), and we obtain \[f_{*}c\ell_{2*}(i^{M})=f_{*}i^{M}_{*}\operatorname{td}_{*}(X)=i_{*} \operatorname{td}_{*}(X)=c\ell_{2*}(i).\] Finally, to show axiom (4), let us proceed as in the proof of Theorem 7.4 and call closed irreducible subvarieties \(Z,Z^{\prime}\subset W\) of a smooth variety \(W\)_\(\mathcal{X}_{CM}\)-transverse_ if \(Z\) and \(Z^{\prime}\) are simultaneously complex algebraic Whitney transverse (that is, they admit complex algebraic Whitney stratifications such that every stratum of \(Z\) is transverse to every stratum of \(Z^{\prime}\) as smooth submanifolds of \(W\)), generically transverse (see e.g. [10, Section 3]), and Tor-independent (see Definition 4.11) in \(W\). Now, consider an inclusion \(i\colon X\to W\) in \(\mathcal{X}_{CM}\) and an inclusion \(f\colon M\to W\) (of a smooth closed subvariety \(M\subset W\) in a smooth variety \(W\)) such that \(M\) is irreducible, and \(M\) and \(X\) are \(\mathcal{X}_{CM}\)-transverse in \(W\). Then, Proposition 4.12 implies that the embedding \(g\colon Y\hookrightarrow X\) of the compact subvariety \(Y=X\cap M\subset X\) is tight. For the regular closed embedding \(g\colon Y\hookrightarrow X\) with algebraic normal bundle \(N=N_{Y}X\), we have the Gysin restriction formula \(g^{!}_{\operatorname{alg}}\operatorname{td}_{*}(X)=\operatorname{td}^{*}(N) \cap\operatorname{td}_{*}(Y)\) on Chow homology (see [25, p. 361, Example 18.3.5]). The latter is a direct consequence of the Verdier-Riemann-Roch formula for the Todd class transformation \(\tau_{*}\), which was conjectured by Baum-Fulton-MacPherson in [11, p. 137], and proved by Verdier [58, p. 214, Theorem 7.1] (see also [25, p. 349, Theorem 18.2(3)]). 
By invoking the cycle map \(\operatorname{cl}\colon A_{*}(X)\otimes\mathbb{Q}\to H^{\operatorname{BM}}_{2*}(X)\otimes\mathbb{Q}\), we obtain \[g^{!}_{\operatorname{alg}}\operatorname{td}_{*}(X)=g^{!}_{\operatorname{alg}}\operatorname{cl}(\operatorname{td}_{*}(X))=\operatorname{cl}g^{!}_{\operatorname{alg}}\operatorname{td}_{*}(X)\] \[=\operatorname{cl}(\operatorname{td}^{*}(N)\cap\operatorname{td}_{*}(Y))=\operatorname{td}^{*}(N)\cap\operatorname{cl}(\operatorname{td}_{*}(Y))=\operatorname{td}^{*}(N)\cap\operatorname{td}_{*}(Y),\] where we used that, according to Verdier [58, p. 222, 9.2.1], the algebraic Gysin map of a closed regular embedding commutes with the cycle map (see diagram (8)), and that the cycle map \(\operatorname{cl}\) and the cap product with Chern classes are compatible by [25, p. 374, Prop. 19.1.2] (note that the Todd class of a complex vector bundle is a rational polynomial in Chern classes, see e.g. [25, p. 56, Example 3.2.4]). Since \(i\in\mathcal{X}_{CM}\) and \(M\) and \(X\) are \(\mathcal{X}_{CM}\)-transverse in \(W\), Theorem 5.10 and Proposition 5.7 (where we may use the natural identification of Borel-Moore homology and singular homology because \(X\) and \(Y\) are compact) imply that the algebraic Gysin map \(g^{!}_{\operatorname{alg}}\colon H_{*}(X;\mathbb{Q})\to H_{*}(Y;\mathbb{Q})\) coincides with the topological Gysin map \(g^{!}_{\operatorname{top}}\colon H_{*}(X;\mathbb{Q})\to H_{*}(Y;\mathbb{Q})\) on all fundamental classes \([Z]_{X}\) of closed irreducible subvarieties \(Z\subset X\). As \(\operatorname{td}_{*}(X)\in H_{*}(X;\mathbb{Q})\) is an algebraic cycle according to Remark 2.5, we obtain \[g^{!}_{\operatorname{top}}\operatorname{td}_{*}(X)=g^{!}_{\operatorname{alg}}\operatorname{td}_{*}(X).\] Since the embedding \(g\colon Y\hookrightarrow X\) is tight, we know that the underlying inclusion \(Y\subset X\) is topologically normally nonsingular with topological normal bundle \(\nu\) isomorphic (as a topological vector bundle) to the underlying topological vector bundle of the algebraic normal bundle \(N=N_{Y}X\). Next, recall from Theorem 4.4 that the inclusion \(g\colon Y\hookrightarrow X\) is normally nonsingular with topological normal bundle \(\nu=j^{*}\nu_{f}\) given by the restriction under the inclusion \(j\colon Y\to M\) of the normal bundle \(\nu_{f}\) of \(M\) in \(W\), which is the underlying topological vector bundle of the algebraic normal bundle \(N_{f}=N_{M}W\). Using the base change \(f^{!}_{\operatorname{top}}\,i_{*}=j_{*}\,g^{!}_{\operatorname{top}}\) for topological Gysin maps (see [10, Proposition 2.4]), as well as \(\operatorname{td}^{*}(N)=\operatorname{td}^{*}(j^{*}N_{f})=j^{*}\operatorname{td}^{*}(N_{f})\), we conclude that \[f^{!}_{\operatorname{top}}c\ell_{2*}(i)=f^{!}_{\operatorname{top}}i_{*}\operatorname{td}_{*}(X)=j_{*}g^{!}_{\operatorname{top}}\operatorname{td}_{*}(X)=j_{*}g^{!}_{\operatorname{alg}}\operatorname{td}_{*}(X)=j_{*}(\operatorname{td}^{*}(N)\cap\operatorname{td}_{*}(Y))\] \[=j_{*}(j^{*}\operatorname{td}^{*}(N_{f})\cap\operatorname{td}_{*}(Y))=\operatorname{td}^{*}(N_{f})\cap j_{*}\operatorname{td}_{*}(Y)=c\ell^{*}(f)\cap c\ell_{2*}(j).\] This completes the proof of Theorem 9.1.
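As a concrete check of the two facts about \(\operatorname{td}^{*}\) used above, namely multiplicativity over Chern roots and the fact that \(\operatorname{td}^{*}\) is a rational polynomial in Chern classes, the following short symbolic sketch (illustrative only, not part of the paper; the rank-2 case and all names in the code are ours) expands the cohomological Todd class of a rank-2 complex vector bundle and recovers the familiar low-degree expression \(1+\tfrac{1}{2}c_{1}+\tfrac{1}{12}(c_{1}^{2}+c_{2})+\tfrac{1}{24}c_{1}c_{2}\).

```python
# Symbolic sketch (illustrative only): expand td^*(E) for a rank-2 bundle E
# with Chern roots a, b and compare with the closed form in c1, c2.
import sympy as sp

a, b = sp.symbols('a b')

def todd_factor(x, order=5):
    # x/(1 - e^{-x}) = 1 + x/2 + x^2/12 - x^4/720 + ... (truncated power series)
    return sp.series(x / (1 - sp.exp(-x)), x, 0, order).removeO()

# Multiplicativity over the Chern roots: td^*(E) = td(a) * td(b)
td = sp.expand(todd_factor(a) * todd_factor(b))

# Keep the terms of cohomological degree <= 3 (degree = polynomial degree in a, b)
td3 = sum(t for t in td.as_ordered_terms() if sp.Poly(t, a, b).total_degree() <= 3)

# Known expression in the Chern classes c1 = a + b, c2 = a*b
c1, c2 = a + b, a * b
closed_form = 1 + c1/2 + (c1**2 + c2)/12 + c1*c2/24

print(sp.simplify(td3 - sp.expand(closed_form)))  # prints 0
```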
2306.01909
Separation theorems and Bell inequalities in algebraic quantum mechanics
The paper discusses the concept of separation of quantum mechanical systems in the algebraic approach. We review known theorems, then establish a link between the C*-algebraic and the corresponding W*-algebraic concepts. A characterization of separation in terms of Bell inequalities, due to Raggio (1988), is given a C*-algebraic formulation. Finally, we comment on the implications for the understanding of the Bell inequalities.
Guido Bacciagaluppi
2023-06-02T20:34:45Z
http://arxiv.org/abs/2306.01909v1
# Separation theorems and Bell inequalities in algebraic quantum mechanics

###### Abstract

The paper discusses the concept of separation of quantum mechanical systems in the algebraic approach. We review known theorems, then establish a link between the \(C^{*}\)-algebraic and the corresponding \(W^{*}\)-algebraic concepts. A characterization of separation in terms of Bell inequalities, due to Raggio (1988), is given a \(C^{*}\)-algebraic formulation. Finally, we comment on the implications for the understanding of the Bell inequalities.

## 1 Introduction

The concept of separation, as we use the term, was introduced by Primas (1978), and investigated by Raggio (1981, 1983, 1988). It means the absence of EPR correlations between two subsystems of a composite system, in the strong sense that the systems are _never_ correlated. According to Primas (see for instance his (1983)), the conceptual relevance of such a concept of separation is that separated systems lack the holistic features that lead to the EPR paradox, and the subsystems of a separated system can thus be treated as individual _objects_. From the point of view of the traditional Hilbert-space formalism this is an unnatural condition. The states of a product space \({\cal H}_{1}\otimes{\cal H}_{2}\) that lack EPR-correlations are precisely the product states \(\psi_{1}\otimes\psi_{2}\). Requiring that the two systems never be EPR-correlated is thus a very strong restriction on the dynamics of the system; indeed, this requirement is satisfied only by noninteracting systems. However, if one generalises the formalism to algebraic quantum mechanics, this condition can be met also in the presence of interactions. As an example, take two concrete von Neumann algebras \({\cal A}_{1}\) and \({\cal A}_{2}\) acting on the Hilbert spaces \({\cal H}_{1}\) and \({\cal H}_{2}\), and let \({\cal A}_{1}\) be commutative. It is clear that every element \(A\) of \({\cal A}_{1}\) commutes with every element of the product algebra \({\cal A}_{1}\otimes{\cal A}_{2}\). Intuitively, this means that the possible distributions of the values of \(A\) are not constrained by the uncertainty principle, and hence that \(A\) has a dispersion-free value in every pure state \(\Psi\) of \({\cal A}_{1}\otimes{\cal A}_{2}\). But now there are superselection rules separating any two product states \(\psi_{1}\otimes\psi_{2}\) and \(\varphi_{1}\otimes\varphi_{2}\) with \(\psi_{1}\neq\varphi_{1}\). Indeed, if a state of the form \[\Psi=c_{1}\psi_{1}\otimes\psi_{2}+c_{2}\varphi_{1}\otimes\varphi_{2}\] were pure, the reduced state for subsystem 1 would be \[\rho_{1}=\mid c_{1}\mid^{2}P_{\psi_{1}}+\mid c_{2}\mid^{2}P_{\varphi_{1}}\] (where the \(P_{\psi}\) are the projectors on the corresponding rays), i. e. a mixture; but then certainly some \(A\in{\cal A}_{1}\) would have a non-zero dispersion in the state \(\Psi\), contradicting the above. Thus, if \({\cal A}_{1}\) or \({\cal A}_{2}\) is commutative, _all_ pure states are product states, so indeed it is possible that systems 1 and 2 are never EPR-correlated, _independently of the dynamics_.

Separation was investigated by Raggio (1981, 1983, 1988) in the framework of \(W^{*}\)-algebras. He derived two structure theorems (Th. 2.6 and 2.7 below) that characterise separated systems completely. In fact, the above example is generic: a \(W^{*}\)-system composed of two subsystems is separated if and only if at least one of the subsystems is _classical_ (i. e. if its algebra is commutative).
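To see the mechanism of the introductory example in the most elementary setting, here is a small numerical sketch (illustrative only, not part of the paper; the choice of \({\cal A}_{1}\) as the diagonal \(2\times 2\) matrices, \({\cal A}_{2}=M_{2}\), and all names in the code are ours). It checks that an arbitrary density matrix, once restricted to the subalgebra \({\cal A}_{1}\otimes{\cal A}_{2}\), agrees with a convex combination of product states, so the restricted state carries only classical correlations.

```python
# Illustrative sketch (not from the paper): with A1 = diagonal 2x2 matrices
# (commutative) and A2 = M_2, every state restricted to A1 (x) A2 decomposes
# as a convex combination of product states.
import numpy as np

rng = np.random.default_rng(0)

# A random density matrix on C^2 (x) C^2
G = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = G @ G.conj().T
rho /= np.trace(rho)

# Elements of A1 (x) A2 have the block-diagonal form diag(d1, d2) (x) M
def expectation(rho, d, M):
    return np.trace(rho @ np.kron(np.diag(d), M))

# Candidate decomposition: omega = p1 (delta_1 (x) sigma_1) + p2 (delta_2 (x) sigma_2),
# where delta_k is a point evaluation on A1 and sigma_k a density matrix on C^2
blocks = [rho[:2, :2], rho[2:, 2:]]
p = [np.trace(B).real for B in blocks]
sigma = [B / np.trace(B) for B in blocks]

# Check agreement on a few random elements of A1 (x) A2
for _ in range(3):
    d = rng.normal(size=2)
    M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    lhs = expectation(rho, d, M)
    rhs = sum(p[k] * d[k] * np.trace(sigma[k] @ M) for k in range(2))
    print(np.allclose(lhs, rhs))  # True
```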
As Primas (1983) puts it: quantum objects exist only in a classical environment; or better, idealized models of a quantum system in which the system is to be separated from its environment must treat the environment of the system classically. Later, Raggio seized upon a result of Baez (1987) about Bell inequalities in the algebraic setting, and established a remarkable link between Bell inequalities and separation. This is a particularly interesting aspect of Raggio (1988). In this paper we examine separation in the framework of \(C^{*}\)-algebras, and in particular the connection between separation and the Bell inequalities. In algebraic quantum mechanics, \(W^{*}\)-algebras often arise in connection with representations of a \(C^{*}\)-algebra, and the abstract \(C^{*}\)-algebra is regarded as more fundamental. The definition of a separated \(C^{*}\)-system is straightforward, and the structure theorem analogous to Raggio's theorems is already contained in Takesaki (1958). In section 2, we give the definitions and structure theorems for the \(C^{*}\)- and the \(W^{*}\)-algebraic case. We also make the connection between separation for \(C^{*}\)- and \(W^{*}\)-systems. In section 3 we turn to the relation between separation and the Bell inequalities. We prove the converse of Baez's theorem, and derive the \(C^{*}\)-analogue of Raggio's results. We conclude (section 4) with some comments on the significance of these results for the understanding of the Bell inequalities. ## 2 Separation theorems In the following, when \({\cal A}_{1}\) and \({\cal A}_{2}\) are \(C^{*}\)-algebras (which are always supposed to be unital), \({\cal A}_{1}\otimes{\cal A}_{2}\) denotes the injective tensor product of \({\cal A}_{1}\) and \({\cal A}_{2}\). When \({\cal A}_{1}\) and \({\cal A}_{2}\) are \(W^{*}\)-algebras, \({\cal A}_{1}\overline{\otimes}{\cal A}_{2}\) denotes their \(W^{*}\)-tensor product. We now give the exact definitions of separation in the \(C^{*}\)- and in the \(W^{*}\)-case. **Definition 2.1**: _Let \({\cal A}_{1}\), \({\cal A}_{2}\) be \(C^{*}\)-algebras (respectively, \(W^{*}\)-algebras). A state (a normalized positive linear functional) \(\omega\) on \({\cal A}_{1}\otimes{\cal A}_{2}\) (respectively, \({\cal A}_{1}\overline{\otimes}{\cal A}_{2}\)) is said to be a product state iff for all \((A_{1},A_{2})\) in \({\cal A}_{1}\times{\cal A}_{2}\),_ \[\omega(A_{1}\otimes A_{2})=\omega(A_{1}\otimes{\bf 1})\omega({\bf 1}\otimes A_{2}).\] In the case of \(C^{*}\)-algebras we require all pure states to be product states. **Definition 2.2**: _Let \({\cal A}_{1}\) and \({\cal A}_{2}\) be \(C^{*}\)-algebras. We say \({\cal A}_{1}\otimes{\cal A}_{2}\) is separated (in the \(C^{*}\)-sense) if and only if:_ _for all pure states \(\omega\) on \({\cal A}_{1}\otimes{\cal A}_{2}\), \(\omega\) is a product state._ Notice that if \(\omega\) is a pure product state of \({\cal A}_{1}\otimes{\cal A}_{2}\), then \(\omega_{1}:=\omega\mid_{{\cal A}_{1}}\) and \(\omega_{2}:=\omega\mid_{{\cal A}_{2}}\) are pure (where \({\cal A}_{1}\) and \({\cal A}_{2}\) are identified with \({\cal A}_{1}\otimes{\bf 1}\) and \({\bf 1}\otimes{\cal A}_{2}\), respectively). This follows easily from Prop. 2.8 below. The structure theorem for separated \(C^{*}\)-systems is the following. **Theorem 2.3** (Takesaki 1958): _Let \({\cal A}_{1}\) and \({\cal A}_{2}\) be \(C^{*}\)-algebras. 
Then:_ \({\cal A}_{1}\otimes{\cal A}_{2}\) _is separated \(\Leftrightarrow\)\({\cal A}_{1}\) or \({\cal A}_{2}\) is commutative._ The proof is immediate from Takesaki (1979), Th. IV 4.14, together with the above remark. In the case of \(W^{*}\)-algebras there are two different notions of separation. Raggio (1981) originally considered separation with respect to pure states (as we have done for \(C^{*}\)-algebras). The second definition (Raggio 1983) is more appropriate to the \(W^{*}\)-setting, since it refers only to normal states. **Definition 2.4**: _Let \({\cal A}\) be a \(W^{*}\)-algebra, and \(\omega\) a normal state on \({\cal A}\). \(\omega\) is said to be decomposable into normal product states if and only if:_ \(\omega\) _lies in the norm-closure of the convex hull of the normal product states._ Intuitively, \(\omega\) is a 'continuous convex combination' of normal product states. **Definition 2.5**: _Let \({\cal A}_{1}\) and \({\cal A}_{2}\) be \(W^{*}\)-algebras._ _i)_ \({\cal A}_{1}\overline{\otimes}{\cal A}_{2}\) _is_ separated with respect to pure states _if and only if:_ _for all pure states \(\omega\) on \({\cal A}_{1}\overline{\otimes}{\cal A}_{2}\), \(\omega\) is a product state._ _ii)_ \({\cal A}_{1}\overline{\otimes}{\cal A}_{2}\) _is_ separated with respect to normal states _if and only if:_ _for all normal states \(\omega\) on \({\cal A}_{1}\overline{\otimes}{\cal A}_{2}\), \(\omega\) is decomposable into normal product states._ The second definition needs to be formulated in terms of decomposability, because pure normal states need not exist in a general \(W^{*}\)-algebra. This makes the two definitions apparently independent of each other. In both cases, however, analogues of Takesaki's theorem hold; and the definitions turn out to be equivalent. **Theorem 2.6** (Raggio 1981): _Let \({\cal A}_{1}\) and \({\cal A}_{2}\) be \(W^{*}\)-algebras. Then the following are equivalent:_ _i)_ \({\cal A}_{1}\overline{\otimes}{\cal A}_{2}\) _is separated w.r.t. pure states;_ _ii)_ \({\cal A}_{1}\) _or_ \({\cal A}_{2}\) _is commutative._ **Theorem 2.7** (Raggio 1983, 1988): _Let \({\cal A}_{1}\) and \({\cal A}_{2}\) be \(W^{*}\)-algebras. Then the following are equivalent:_ _i)_ \({\cal A}_{1}\overline{\otimes}{\cal A}_{2}\) _is separated w.r.t. normal states;_ _ii)_ \({\cal A}_{1}\) _or_ \({\cal A}_{2}\) _is commutative._ The first of these two theorems is proved directly in Raggio (1981). One can also derive it from Th. 2.3. For the proof of the second theorem, see Raggio (1988) and the references therein. (In Raggio (1983) the theorem is proved under more restrictive assumptions). Since the two definitions of separation for \(W^{*}\)-algebras coincide, we shall simply talk of separation in the \(W^{*}\)-sense. We now establish the link between the concept of separation for \(C^{*}\)-algebras and the concept of separation for \(W^{*}\)-algebras via the representations of a \(C^{*}\)-algebra. We shall need it in the proof of Th. 3.7. **Proposition 2.8**: _Let \({\cal A}_{1}\) and \({\cal A}_{2}\) be \(C^{*}\)-algebras, and \(\pi_{1}\) and \(\pi_{2}\) two representations of \({\cal A}_{1}\) and \({\cal A}_{2}\), respectively. Then:_ \[(\pi_{1}\otimes\pi_{2}({\cal A}_{1}\otimes{\cal A}_{2}))^{\prime\prime}=\pi_{1 }({\cal A}_{1})^{\prime\prime}\,\overline{\otimes}\,\pi_{2}({\cal A}_{2})^{ \prime\prime}.\] This is Takesaki (1979), Prop. IV 4.13. 
If \(\pi_{1}\) and \(\pi_{2}\) are the representations of \({\cal A}_{1}\) and \({\cal A}_{2}\) induced by a representation \(\pi\) of \({\cal A}_{1}\otimes{\cal A}_{2}\), we have \[\pi({\cal A}_{1}\otimes{\cal A}_{2})=\pi_{1}\otimes\pi_{2}({\cal A}_{1}\otimes{\cal A}_{2}),\] and we obtain:

**Corollary 2.9**: _Let \({\cal A}_{1}\) and \({\cal A}_{2}\) be \(C^{*}\)-algebras, \(\pi\) a representation of \({\cal A}_{1}\otimes{\cal A}_{2}\), and \(\pi_{1}\) and \(\pi_{2}\) the induced representations of \({\cal A}_{1}\) and \({\cal A}_{2}\). Then_ \[\pi({\cal A}_{1}\otimes{\cal A}_{2})^{\prime\prime}=\pi_{1}({\cal A}_{1})^{\prime\prime}\,\overline{\otimes}\,\pi_{2}({\cal A}_{2})^{\prime\prime}.\] Thus, the von Neumann algebra associated with a representation of a tensor product \(C^{*}\)-algebra has a canonical factorization as a \(W^{*}\)-tensor product. This motivates the following definition:

**Definition 2.10**: _Let \({\cal A}_{1}\) and \({\cal A}_{2}\) be \(C^{*}\)-algebras, and \(\pi\) a representation of \({\cal A}_{1}\otimes{\cal A}_{2}\). We say that \({\cal A}_{1}\otimes{\cal A}_{2}\) is separated in the representation \(\pi\) if and only if:_ \[\pi({\cal A}_{1}\otimes{\bf 1})^{\prime\prime}\,\overline{\otimes}\,\pi({\bf 1}\otimes{\cal A}_{2})^{\prime\prime}\mbox{ is separated (in the $W^{*}$-sense).}\] We now prove our first theorem.

**Theorem 2.11**: _Let \({\cal A}_{1}\) and \({\cal A}_{2}\) be \(C^{*}\)-algebras. The following are equivalent:_ _i)_ \({\cal A}_{1}\otimes{\cal A}_{2}\) _is separated;_ _ii)_ \({\cal A}_{1}\otimes{\cal A}_{2}\) _is separated in all irreducible representations;_ _iii)_ \({\cal A}_{1}\otimes{\cal A}_{2}\) _is separated in the universal representation._

Proof: '_i)_ \(\Rightarrow\) _ii)_' Take any irreducible representation \(\pi\) of \({\cal A}_{1}\otimes{\cal A}_{2}\). By assumption \({\cal A}_{1}\otimes{\cal A}_{2}\) is separated, so, by Th. 2.3, \({\cal A}_{1}\) or \({\cal A}_{2}\) is commutative. It follows that \(\pi_{1}({\cal A}_{1})\) or \(\pi_{2}({\cal A}_{2})\) is commutative, and so that \(\pi_{1}({\cal A}_{1})^{\prime\prime}\) or \(\pi_{2}({\cal A}_{2})^{\prime\prime}\) is commutative. By the second half of Th. 2.7 (or by another direct argument), \({\cal A}_{1}\otimes{\cal A}_{2}\) is separated in the representation \(\pi\).

'_ii)_ \(\Rightarrow\) _i)_' Assume \({\cal A}_{1}\otimes{\cal A}_{2}\) is not separated. By Th. 2.3, \({\cal A}_{1}\) and \({\cal A}_{2}\) are both noncommutative. Then there exist irreducible representations \((\pi_{1},{\cal H}_{1})\) of \({\cal A}_{1}\) and \((\pi_{2},{\cal H}_{2})\) of \({\cal A}_{2}\), with \(\dim{\cal H}_{1},\dim{\cal H}_{2}\geq 2\). Irreducibility means that \[\pi_{1}({\cal A}_{1})^{\prime\prime}={\cal B}({\cal H}_{1}),\mbox{ and }\pi_{2}({\cal A}_{2})^{\prime\prime}={\cal B}({\cal H}_{2}).\] But \({\cal B}({\cal H}_{1})\) and \({\cal B}({\cal H}_{2})\) are both noncommutative, and it follows by the first half of Th. 2.7 that \[\pi_{1}({\cal A}_{1})^{\prime\prime}\,\overline{\otimes}\,\pi_{2}({\cal A}_{2})^{\prime\prime}\] is not separated. Consequently, \({\cal A}_{1}\otimes{\cal A}_{2}\) is not separated in the representation \(\pi:=\pi_{1}\otimes\pi_{2}\).

'_i) \(\Leftrightarrow\) iii)_' Let \(\pi\) be the universal representation of \({\cal A}_{1}\otimes{\cal A}_{2}\). By Th. 2.3, \({\cal A}_{1}\otimes{\cal A}_{2}\) is separated iff \({\cal A}_{1}\) or \({\cal A}_{2}\) is commutative.
This is the case iff \(\pi_{1}({\cal A}_{1})\) or \(\pi_{2}({\cal A}_{2})\) is commutative, iff \(\pi_{1}({\cal A}_{1})^{\prime\prime}\) or \(\pi_{2}({\cal A}_{2})^{\prime\prime}\) is commutative. By Th. 2.7, this is equivalent to separation in the representation \(\pi\).

## 3 Bell inequalities and separation

Separation means that two systems never present EPR-correlations; in other words, any possible correlations between the two can be explained in terms of classical mixtures. It is well-known that the problem of whether given sets of numbers that look like probabilities and correlations can be fitted into a classical probabilistic scheme is related to the Bell inequalities. Results of this kind are contained in particular in Pitowsky (1989); see also Fine (1982 and references therein) and Prof. Beltrametti's contribution to the present volume. It is thus natural to ask whether there is a way of characterising separation using Bell inequalities. In fact there is: Baez (1987) obtained some partial results for \(C^{*}\)-algebras, and Raggio (1988) showed that in the case of \(W^{*}\)-algebras, a composite system \({\cal A}_{1}\overline{\otimes}{\cal A}_{2}\) is separated if and only if all normal states on \({\cal A}_{1}\overline{\otimes}{\cal A}_{2}\) satisfy the Bell inequalities (in the sense of Def. 3.1 below). We prove the analogous results for \(C^{*}\)-algebras.

In 1987, Baez published a short note on Bell inequalities in the \(C^{*}\)-algebraic formalism. His main results are the following.

**Definition 3.1**: _Let \({\cal A}_{1}\) and \({\cal A}_{2}\) be \(C^{*}\)- (\(W^{*}\)-) algebras. We say a state \(\omega\) on \({\cal A}_{1}\otimes{\cal A}_{2}\) (\({\cal A}_{1}\overline{\otimes}{\cal A}_{2}\))_ satisfies the Bell inequalities _iff \(\forall\) self-adjoint \(A,A^{\prime}\in{\cal A}_{1}\), \(\forall\) self-adjoint \(B,B^{\prime}\in{\cal A}_{2}\) with norm less than or equal to \(1\),_ \[\mid\omega(A\otimes(B-B^{\prime}))\mid+\mid\omega(A^{\prime}\otimes(B+B^{\prime}))\mid\leq 2.\]

**Proposition 3.2**: _Let \({\cal A}_{1}\) and \({\cal A}_{2}\) be \(C^{*}\)-algebras, and let \(\omega\) be a product state on \({\cal A}_{1}\otimes{\cal A}_{2}\). Then \(\omega\) satisfies the Bell inequalities._

The proof is completely analogous to the standard proofs of the Bell inequalities: see Baez (1987).

**Definition 3.3**: _Let \({\cal A}\) be a \(C^{*}\)-algebra, and let \(\omega\) be a state on \({\cal A}\). \(\omega\) is said to be decomposable into product states if and only if:_ \(\omega\) _lies in the weak\({}^{*}\)-closure of the convex hull of the product states._

**Proposition 3.4**: _Let \({\cal A}_{1}\) and \({\cal A}_{2}\) be \(C^{*}\)-algebras, and let \({\cal A}_{1}\otimes{\cal A}_{2}\) be separated. Then:_ _for all states \(\omega\) on \({\cal A}_{1}\otimes{\cal A}_{2}\), \(\omega\) is decomposable into product states._

Proof: By the Krein-Milman theorem, every state on \({\cal A}_{1}\otimes{\cal A}_{2}\) lies in the weak\({}^{*}\)-closure of the convex hull of the pure states. By the assumption of separation, these are product states.

From this and Th. 2.3, Baez obtains:

**Theorem 3.5** (Baez 1987): _Let \({\cal A}_{1}\) and \({\cal A}_{2}\) be two \(C^{*}\)-algebras, and let \({\cal A}_{1}\) or \({\cal A}_{2}\) be commutative. Then:_ _for all states \(\omega\) on \({\cal A}_{1}\otimes{\cal A}_{2}\), \(\omega\) satisfies the Bell inequalities._

Proof: For a product state, the theorem reduces to Prop. 3.2. For an arbitrary state, it follows by convexity and continuity using Prop. 3.4.
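Proposition 3.2 and the role of noncommutativity on both sides can be illustrated numerically in the simplest case \({\cal A}_{1}={\cal A}_{2}=M_{2}\). The sketch below (illustrative only, not part of the paper; the particular states and observables are ours) evaluates the quantity of Definition 3.1 for self-adjoint contractions: a product state stays within the bound \(2\), while the singlet state reaches \(2\sqrt{2}\), the Bohm-Bell violation referred to in the next paragraph.

```python
# Numerical illustration (not from the paper): the Bell quantity of Def. 3.1
# for M_2 (x) M_2, on a product state (bounded by 2) and on the singlet state
# (reaching 2*sqrt(2) for suitable self-adjoint contractions).
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bell_quantity(rho, A, Ap, B, Bp):
    """|omega(A (x) (B - B'))| + |omega(A' (x) (B + B'))| with omega = Tr(rho .)."""
    w = lambda X, Y: np.real(np.trace(rho @ np.kron(X, Y)))
    return abs(w(A, B) - w(A, Bp)) + abs(w(Ap, B) + w(Ap, Bp))

# Self-adjoint contractions (operator norm 1)
A, Ap = (sz - sx) / np.sqrt(2), (sz + sx) / np.sqrt(2)
B, Bp = sz, sx

# (a) A pure product state |0><0| (x) |+><+|
ket0 = np.array([1, 0], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho_prod = np.kron(np.outer(ket0, ket0.conj()), np.outer(plus, plus.conj()))
print("product state:", bell_quantity(rho_prod, A, Ap, B, Bp))  # <= 2

# (b) The singlet state (|01> - |10>)/sqrt(2)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho_sing = np.outer(psi, psi.conj())
print("singlet state:", bell_quantity(rho_sing, A, Ap, B, Bp))  # ~ 2.828
```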
Baez states explicitly that for algebras of type I the converse also holds, because in this case one can reproduce the Bohm-Bell situation, which gives violation of the Bell inequalities. We shall now prove more generally that for arbitrary \(C^{*}\)-algebras \({\cal A}_{1}\) and \({\cal A}_{2}\), all states on \({\cal A}_{1}\otimes{\cal A}_{2}\) satisfy the Bell inequalities _if and only if_ \({\cal A}_{1}\) or \({\cal A}_{2}\) is commutative. This is the analogue of Raggio's results for \(W^{*}\)-algebras (Th. 3.6), from which in fact our theorem follows.

**Theorem 3.6** (Raggio 1988): _Let \({\cal A}_{1}\) and \({\cal A}_{2}\) be \(W^{*}\)-algebras. Then the following are equivalent:_ _i)_ \({\cal A}_{1}\) _or_ \({\cal A}_{2}\) _is commutative;_ _ii)_ _for all normal states_ \(\omega\) _on_ \({\cal A}_{1}\overline{\otimes}{\cal A}_{2}\)_,_ \(\omega\) _satisfies the Bell inequalities._

For the proof, see Raggio (1988). The trick is to construct a Bohm-Bell type counterexample not by using finite-dimensional projections, which might not exist (type III), but by constructing subalgebras isomorphic to the two-by-two matrices using pairs of generally _infinite_-dimensional noncommuting projections.

Finally, we prove the converse of Prop. 3.4 and Th. 3.5.

**Theorem 3.7**: _Let \({\cal A}_{1}\) and \({\cal A}_{2}\) be \(C^{*}\)-algebras. The following are equivalent:_ _i)_ \({\cal A}_{1}\) _or_ \({\cal A}_{2}\) _is commutative;_ _ii)_ \({\cal A}_{1}\otimes{\cal A}_{2}\) _is separated;_ _iii)_ _for all states_ \(\omega\) _on_ \({\cal A}_{1}\otimes{\cal A}_{2}\)_,_ \(\omega\) _is decomposable into product states;_ _iv)_ _for all states_ \(\omega\) _on_ \({\cal A}_{1}\otimes{\cal A}_{2}\)_,_ \(\omega\) _satisfies the Bell inequalities._

Proof: We will prove that if all \(\omega\) satisfy the Bell inequalities, \({\cal A}_{1}\otimes{\cal A}_{2}\) is separated in all \(\pi\) (the proof using the universal representation is analogous). From Th. 2.11, the theorem will follow, since the remaining implications are given by Th. 2.3, Prop. 3.4, and Prop. 3.2 together with convexity and continuity. Take an irreducible representation \((\pi,{\cal H})\) of \({\cal A}_{1}\otimes{\cal A}_{2}\), and any unit vector \(\psi\) in \({\cal H}\). The vector \(\psi\) defines a state on \(\pi({\cal A}_{1}\otimes{\cal A}_{2})^{\prime\prime}\) and a (pure) state on \({\cal A}_{1}\otimes{\cal A}_{2}\) via \[\omega_{\psi}(A):=(\psi,\pi(A)\psi).\] By assumption, \(\omega_{\psi}\) satisfies the Bell inequalities, i. e. \(\forall\) self-adjoint \(A,A^{\prime}\in{\cal A}_{1}\), \(\forall\) self-adjoint \(B,B^{\prime}\in{\cal A}_{2}\) with norm less than or equal to \(1\), \[\begin{array}{l}|\;(\psi,\pi(A\otimes(B-B^{\prime}))\psi)\;|+|\;(\psi,\pi(A^{\prime}\otimes(B+B^{\prime}))\psi)\;|=\\ =\;|\;(\psi,\pi(A)\otimes(\pi(B)-\pi(B^{\prime}))\psi)\;|+|\;(\psi,\pi(A^{\prime})\otimes(\pi(B)+\pi(B^{\prime}))\psi)\;|\leq 2.\end{array}\] By continuity (via the Kaplansky density theorem), we also have \(\forall\) self-adjoint \(A,A^{\prime}\in\pi({\cal A}_{1})^{\prime\prime}\), \(\forall\) self-adjoint \(B,B^{\prime}\in\pi({\cal A}_{2})^{\prime\prime}\) with norm less than or equal to \(1\), \[|\;(\psi,A\otimes(B-B^{\prime})\psi)\;|+|\;(\psi,A^{\prime}\otimes(B+B^{\prime})\psi)\;|\leq 2.\] Thus, \(\psi\) as a state on \(\pi({\cal A}_{1}\otimes{\cal A}_{2})^{\prime\prime}\) satisfies the Bell inequalities. But now, any normal state on \(\pi({\cal A}_{1}\otimes{\cal A}_{2})^{\prime\prime}\) lies in the norm-closure of the convex hull of states of the form \(\psi\). So, by convexity and continuity, it satisfies the Bell inequalities. By Th. 3.6, \(\pi({\cal A}_{1})^{\prime\prime}\overline{\otimes}\pi({\cal A}_{2})^{\prime\prime}\) is separated, i. e.
\({\cal A}_{1}\otimes{\cal A}_{2}\) is separated in the representation \(\pi\). Since \(\pi\) was arbitrary, \({\cal A}_{1}\otimes{\cal A}_{2}\) is separated in all irreducible representations. By Th. 2.11, \({\cal A}_{1}\otimes{\cal A}_{2}\) is separated.

## 4 For whom the Bell tolls

The original derivations of the Bell inequalities by Bell and others proceeded from a hypothesis of local hidden variables that corresponds to a classical (albeit generally indeterministic and certainly contextual) model for the probabilities to be expected in EPR-type experiments. Classical examples of composite systems also trivially satisfy the Bell inequalities. 'Classicality implies the Bell inequalities' seems to be the conclusion. The converse also appears to be true, since the violation of the Bell inequalities by quantum mechanical systems is what makes them interesting in the first place as a touchstone for testing local hidden variable theories experimentally. One sense in which the converse has indeed been shown to hold rigorously is Pitowsky's (1989) analysis of the Bell inequalities: satisfaction of the Bell inequalities is a necessary and sufficient condition for the existence of a classical probability space and probability distributions reproducing the given data. This is the same situation as we have here: Theorems 3.6 and 3.7 indeed show that, when the Bell inequalities are satisfied in all states, all possible correlations between results of measurements on the systems represented by \({\cal A}_{1}\) and \({\cal A}_{2}\) are purely classical. However, Pitowsky's results do not vindicate any correspondence between classicality and the Bell inequalities other than in the precise form he gave: as a matter of fact, _it is just the correlations that are classical_. Indeed, the catch-phrase 'Bell inequalities mean that everything is classical' is wrong. Theorems 3.6 and 3.7 show that, contrary to the case of traditional Hilbert-space quantum mechanics, in algebraic quantum mechanics there are systems that satisfy the Bell inequalities in all possible states, and nevertheless are _not_ entirely classical -- indeed, one of the subsystems may be completely quantum mechanical. Neither is it the case that the system, even though not classical, is described by 'classical' hidden variables. In a hidden variable account, one tries to explain the EPR-correlations present in traditional Hilbert-space quantum mechanics by adding 'hidden variables' to the theory. Separation in our sense requires instead that we limit ourselves to systems that are _not_ EPR-correlated, thus remaining _within_ quantum mechanics, albeit within the more general formalism of algebraic quantum mechanics. Thus classicality of behaviour, in the sense of the Bell inequalities being satisfied, is not sufficient for the system as a whole to be classical -- only a subsystem is. Further, Aerts (1991) has argued that the Bell inequalities are not even _necessary_ for a system to be classical: even a classical system can violate the Bell inequalities if it cannot be separated into individual subsystems. The relation between separation and the Bell inequalities seems to suggest that what lies physically -- or metaphysically -- behind the Bell inequalities (and the mathematical existence of classical probabilities) is the _individuality_ of the systems considered: each system has a physical state of its own, uninfluenced by the states of other systems, a special instance of this being Bell's original concept of locality for hidden variables.
## Acknowledgements I wish to thank first of all Jeremy Butterfield for the endless and immensely helpful discussions I had with him. My thanks go further to the audiences who in some form or other heard and commented on this material. In particular I wish to thank Anton Amann, Harvey Brown, Fiona Harrison, Klaas Landsman, and Constantin Piron. [I am also very grateful to Adrian Kent for alerting me to the inaccuracy of some side remarks in the published version, which have been accordingly edited out of this online version.] This work was supported by the British Academy and the Arnold Gerstenberg Fund.